From mikhailo.seledtsov at oracle.com  Fri Jun  1 01:43:48 2018
From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov)
Date: Thu, 31 May 2018 18:43:48 -0700
Subject: RFR(L) : 8202812 : [TESTBUG] Open source VM testbase compiler tests
In-Reply-To:
References:
Message-ID: <5B10A4D4.5000206@oracle.com>

Looks good to me.

One comment though:
http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/graph/js_CGT.testlist.html
  - copyright header needs to be updated
Please update the copyright statement prior to integration; no need for a new webrev.

Thank you,
Misha

On 5/18/18, 10:50 AM, Igor Ignatyev wrote:
> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/index.html
>> 56733 lines changed: 56732 ins; 0 del; 1 mod; 1
> Hi all,
>
> could you please review the patch which open sources compiler tests from vm testbase? these tests were developed in different time to cover different parts of JITs.
>
> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them.
>
> JBS: https://bugs.openjdk.java.net/browse/JDK-8202812
> webrev: http://cr.openjdk.java.net/~iignatyev/8202812/webrev.00/index.html
> testing: :vmTestbase_vm_compiler test group
>
> Thanks,
> -- Igor

From yumin.qi at gmail.com  Fri Jun  1 03:01:11 2018
From: yumin.qi at gmail.com (yumin qi)
Date: Thu, 31 May 2018 20:01:11 -0700
Subject: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832
In-Reply-To: <0ca282db-5b4f-607d-512a-a2183dbd4b73@oracle.com>
References: <0ca282db-5b4f-607d-512a-a2183dbd4b73@oracle.com>
Message-ID:

Hi, Tobias

Thanks for your review/questions. First, some background on the JWarmup use scenario and on how we implement the interaction between the application and the scheduling/dispatch system (DS). The load of each application is controlled by DS. The profiling data is collected against real input data (so it mostly matches the application run in production environments, which reduces the chance of deoptimization). When run with profiling data, the application gets a notification from DS when compilation should start; the application then calls an API to notify the JVM that the hot methods recorded in the file can be compiled. After the compilations, a message is sent out to DS so that DS will dispatch load onto this application.

Now to answer your questions:

Here are some more detailed questions:
- How is it implemented? Is it based on the replay compilation framework?
A: No, it is not based on the replay compilation framework. The data structure for recording the profile is newly designed.
- How do you handle dynamically generated code (for example, lambda forms)?
A: Dynamically generated methods are not handled, since their names are sometimes generated differently across runs.
- What information is stored/re-used and which profile is cached?
A: Class/method/init order/bci method data, etc.
- Does it work for C1 and C2?
A: C2 only; it can be made to work on C1.
- How is the tiered compilation policy affected?
A: Currently disabled.
- When do we compile (if method is first used or are there still thresholds)?
A: See the introduction above. No threshold check is done when compiling.
- How do you avoid overloading the compile queue?
A: In real application runs, we did not find the compile queue overloaded. This can also be controlled, since we know how many compiler threads are configured and the size of the recorded method set.
- Is re-profiling/re-compilation supported?
A: No. See also the answer to the question below.
- What if a method is deoptimized? Is the cached profile updated and re-used?
A: During runs with pre-compiled methods, deoptimization was only seen from null-check elimination, so null checks are not eliminated. The profile data is not updated and re-used. That is, after it is deoptimized, a method starts again in interpreter mode as if freshly loaded.

Thanks
Yumin

On Wed, May 30, 2018 at 11:38 PM, Tobias Hartmann <tobias.hartmann at oracle.com> wrote:
> Hi Yumin,
>
> This reminds me of a project we did for a student's bachelor thesis in 2015:
> https://github.com/mohlerm/hotspot/blob/master/report/profile_caching_mohlerm.pdf
>
> We also published a paper on that topic:
> https://dl.acm.org/citation.cfm?id=3132210
>
> Thanks for submitting the JEP, very interesting! Here are the things we've learned from the "cached profiles" project, maybe you can correct this from your experience with JWarmup:
> - Startup: We were seeing great improvements for some benchmarks but also large regressions for others. Problems like the overhead of reading the profiles, overloading the compile queue and increased compile time due to more optimizations affect the startup time.
> - Peak performance: Using profile information from a previous run might cause significant performance regressions in early stages of the execution. This is because a "late" profile is usually also the one with the fewest optimistic assumptions. For example, the latest profile from a previous run might have been updated right when the application was about to shut down. If this triggered class loading or has other side effects, we might not be able to inline some methods or perform other optimistic optimizations. Using this profile right from the beginning in a subsequent run limits peak performance significantly.
>
> Here are some more detailed questions:
> - How is it implemented? Is it based on the replay compilation framework?
> - How do you handle dynamically generated code (for example, lambda forms)?
> - What information is stored/re-used and which profile is cached?
> - Does it work for C1 and C2?
> - How is the tiered compilation policy affected?
> - When do we compile (if method is first used or are there still thresholds)?
> - How do you avoid overloading the compile queue?
> - Is re-profiling/re-compilation supported?
> - What if a method is deoptimized? Is the cached profile updated and re-used?
>
> Best regards,
> Tobias
>
> On 29.05.2018 06:09, yumin qi wrote:
> > Hi, Experts
> >
> > This is a newly filed JEP (JWarmup) for working on resolving a Java performance issue caused by application load peaking and JIT threads compiling hot Java methods at the same time.
> >
> > https://bugs.openjdk.java.net/browse/JDK-8203832
> >
> > For a large Java application, the load comes in a short period of time, like the 'Single Day' sale on Alibaba's e-commerce application; this massive load comes in and makes many Java methods ready for JIT compilation to convert them into native methods. The compiler threads will kick in to do the compilation work and take system resources from the mutator Java threads, which are busy processing requests, thus leading to peak-time performance degradation.
> > > > The JWarmup technique was proposed to avoid such issue by precompiling > > the hot methods at application startup and it has been successfully > applied > > to Alibaba's e-commerce applications. We would like to contribute it to > > OpenJDK and wish it can help java developers overcome the same issue. > > > > Please review and give your feedback. > > > > Thanks > > Yumin > > > > (Alibaba Group Inc) > > > From mandy.chung at oracle.com Fri Jun 1 03:36:35 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 31 May 2018 20:36:35 -0700 Subject: RFR: 8203357 Container Metrics In-Reply-To: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: <6b8f5228-03ea-9521-aa66-e74b5595722b@oracle.com> Hi Bob, On 5/30/18 12:45 PM, Bob Vandette wrote:> > RFE: Container Metrics > > https://bugs.openjdk.java.net/browse/JDK-8203357 > > WEBREV: > > http://cr.openjdk.java.net/~bobv/8203357/webrev.00 Looks fine in general. It's good to have this internal API ready for JFR and other library code to use. I skimmed through the new tests. It'd be good to add some comments to describe what it does (for example, set up a docker image etc). launcher.properties 154 \ -XshowSettings:system (Linux Only) show host system or container\n\ 155 \ configuration and continue\n\ A newline can be placed after -XshowSettings:system consistent with other suboptions. test/lib/jdk/test/lib/containers/docker/DockerTestUtils.java There are several long lines in the new test files such as: MetricsCpuTester.java MetricsMemoryTester.java MetricsTester.java It'd help future side-by-side review if they are wrapped. I think most of them are the construction of an exception. I see a pattern of a name after @test and that is not strictly needed. TestCgroupMetrics.java 25 * @test TestCgroupMetrics TestDockerCpuMetrics.java 34 * @test TestSystemMetrics TestDockerMemoryMetrics.java 30 * @test TestSystemMetrics TestSystemMetrics.java 25 * @test TestSystemMetrics This needs a CSR for the new -XshowSettings:system option. Mandy From stefan.karlsson at oracle.com Fri Jun 1 07:13:19 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 1 Jun 2018 09:13:19 +0200 Subject: RFR: 8204165: Filter out tests requiring class unloading when ClassUnloading is disabled In-Reply-To: <322cd976-04f7-eba7-e950-4b82612ed863@oracle.com> References: <6dc0988c-7324-7d87-826a-a66cdbfa2a50@oracle.com> <4a1816b7-cc65-0a37-863a-696c183df9ef@oracle.com> <70964597-4cab-7fba-a71b-9982a4990eb2@oracle.com> <4f748bd8-f958-3dba-7cb0-6c7989bfe2f2@oracle.com> <322cd976-04f7-eba7-e950-4b82612ed863@oracle.com> Message-ID: <6a3b4ab9-557c-c2c4-1efb-0db31de59658@oracle.com> Hi again, I decided to tag the stressHierarchy as Coleen suggested: http://cr.openjdk.java.net/~stefank/8204165/webrev.03.delta/ http://cr.openjdk.java.net/~stefank/8204165/webrev.03/ Thanks, StefanK On 2018-05-31 15:27, Stefan Karlsson wrote: > Thanks for reviewing! > > stefanK > > On 2018-05-31 15:12, coleen.phillimore at oracle.com wrote: >> >> >> On 5/31/18 8:57 AM, Stefan Karlsson wrote: >>> Hi Coleen, >>> >>> On 2018-05-31 14:35, coleen.phillimore at oracle.com wrote: >>>> >>>> If? you look for ClassUnloadCommon, there are a couple others. It >>>> would be good to have these tagged as well, even though the last one >>>> tests two things. >>> >>> I've been very ZGC centric and only tagged problematic tests I've >>> found, but of course I can tag more when it makes sense. >> >> Yes, I thought that.? 
I'm on the fence.? I don't think it's worth >> tagging more.? I may make a test group at some point though. >>> >>>> >>>> runtime/appcds/customLoader/UnloadUnregisteredLoaderTest.java >>>> runtime/appcds/customLoader/test-classes/UnloadUnregisteredLoader.java >>> >>> Added the tag and tested with -XX:+UseG1GC -XX:-ClassUnloading. >> >> Great, these seemed to want it. >>> >>>> runtime/logging/LoaderConstraintsTest.java >>> >>> This tests seems to spawn a new test with a ProcessBuilder and >>> therefore doesn't propagate the -vmoptions flags. Even if I try to >>> turn off class unloading or run ZGC, it will end up running G1 with >>> class unloading. I think this tests needs to be changed, before it >>> makes sense to tag it. >>> >>> If you still want me to add the tag, I can do so. >>> >> >> Ugh, no, that's fine. >>>> runtime/BadObjectClass/TestUnloadClassError.java >>> >>> This test doesn't fail if I turn off class unloading. Do you still >>> want me to add the tag? >>> >> >> It does two things so no, that's ok, leave the tag off. >> >>>> >>>> If you look for triggerUnloading there are more: e.g. >>>> vmTestbase/metaspace/stressHierarchy.? I assume they pass without >>>> class unloading though. >>> >>> Maybe we can handle those in a future RFE? >>> >> >> Yes, or an RFE in the future to mark, in some other way, the tests >> that exercise class unloading. >> >> Thanks - change looks good. >> Coleen >> >>> Thanks, >>> StefanK >>> >>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>>> On 5/31/18 8:01 AM, Stefan Karlsson wrote: >>>>> Hi all, >>>>> >>>>> Please review this patch to annotate jtreg tests that require >>>>> ClassUnloading with @requires vm.opt.final.ClassUnloading. >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8204165/webrev.01/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8204165 >>>>> >>>>> This @requires tag will filter out these tests when run with >>>>> -XX:-ClassUnloading or when run with GC that doesn't support class >>>>> unloading. >>>>> >>>>> For the discussion around the introduction of the vm.opt.final >>>>> tags, see: >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032586.html >>>>> >>>>> >>>>> Thanks, >>>>> StefanK >>>> >> From stefan.karlsson at oracle.com Fri Jun 1 07:14:51 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 1 Jun 2018 09:14:51 +0200 Subject: RFR: 8204167: Filter out tests requiring compressed oops when CompressedOops is disabled In-Reply-To: References: <2da42530-7549-dfe3-7950-a45eb258e522@oracle.com> <3944DDF8-5C4C-4C69-A3E0-A71CD75A5C16@oracle.com> <8171DA99-ECA6-4038-BA03-DA0B20BB1C9C@oracle.com> Message-ID: <270b6ee4-44d3-784c-9e01-46bc99671d67@oracle.com> Thanks, Coleen. StefanK On 2018-06-01 00:14, coleen.phillimore at oracle.com wrote: > Looks good! > Coleen > > On 5/31/18 5:05 PM, Kim Barrett wrote: >>> On May 31, 2018, at 3:59 PM, Stefan Karlsson >>> wrote: >>> >>> On 2018-05-31 21:20, Kim Barrett wrote: >>>>> On May 31, 2018, at 8:09 AM, Stefan Karlsson >>>>> wrote: >>>>> >>>>> Hi all, >>>>> >>>>> Please review this patch to add @requires >>>>> vm.opt.final.UseCompressedOops to those jtreg tests that requires >>>>> compressed oops. >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8204167/webrev.01/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8204167 >>>>> >>>>> With this patch we now filter out tests that require compressed >>>>> oops, when -XX:-UseCompressedOops is passed or ZGC is tested. 
>>>>> >>>>> Thanks, >>>>> StefanK >>>> test/hotspot/jtreg/runtime/CompressedOops/CompressedClassPointers.java >>>> test/hotspot/jtreg/runtime/CompressedOops/CompressedClassSpaceSize.java >>>> test/hotspot/jtreg/runtime/Metaspace/MaxMetaspaceSizeTest.java >>>> >>>> These seem to be testing UseCompressedClassPointers rather than >>>> UseCompressedOops.? Shouldn't they require UseCompressedClassPointers >>>> instead?? I know UCCP requires UCO, but requiring UCO here seems odd. >>> Yes, that makes sense. New webrevs: >>> >>> http://cr.openjdk.java.net/~stefank/8204167/webrev.02.delta/ >>> http://cr.openjdk.java.net/~stefank/8204167/webrev.02/ >>> >>> Thanks, >>> StefanK >> Looks good. >> > From igor.ignatyev at oracle.com Fri Jun 1 07:27:14 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Fri, 1 Jun 2018 00:27:14 -0700 Subject: RFR(L) : 8202812 : [TESTBUG] Open source VM testbase compiler tests In-Reply-To: <5B10A4D4.5000206@oracle.com> References: <5B10A4D4.5000206@oracle.com> Message-ID: Hi MIsha, thanks for spotting that, I was pretty sure I removed this file. as it isn't needed, I'll remove it instead of updating legal notice. Cheers, -- Igor > On May 31, 2018, at 6:43 PM, Mikhailo Seledtsov wrote: > > Looks good to me. > > One comment though: > http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/graph/js_CGT.testlist.html > - copyright header needs to be updated > please update the copyright statement prior to integration; no need for a new webrev > > Thank you, > Misha > > On 5/18/18, 10:50 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/index.html >>> 56733 lines changed: 56732 ins; 0 del; 1 mod; 1 >> Hi all, >> >> could you please review the patch which open sources compiler tests from vm testbase? these tests were developed in different time to cover different parts of JITs. >> >> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8202812 >> webrev: http://cr.openjdk.java.net/~iignatyev/8202812/webrev.00/index.html >> testing: :vmTestbase_vm_compiler test group >> >> Thanks, >> -- Igor From tobias.hartmann at oracle.com Fri Jun 1 08:04:54 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 1 Jun 2018 10:04:54 +0200 Subject: RFR (M) 8203837: Split nmethod unloading from nmethod cache cleaning In-Reply-To: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> References: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> Message-ID: <6f6eda28-2c4b-cf05-389c-87bde4bc9ef6@oracle.com> Hi Coleen, this looks good to me but someone from GC should have a look as well. Best regards, Tobias On 30.05.2018 14:23, coleen.phillimore at oracle.com wrote: > Summary: Refactor cleaning inline caches to after GC do_unloading. > > See CR for more information.? This patch refactors CompiledMethod::do_unloading() to unload nmethods > in case of !is_alive oop.? If the nmethod is not unloaded, cleans the inline caches, and exception > cache, for unloaded classes and unloaded nmethods.? The CodeCache walk in gc_epilogue is moved > earlier to combine with cleanup for class unloading. 
> > It doesn't add CodeCache walks to any of the GCs, and keeps the G1 parallel nmethod unloading > intact.? This patch also uses common code for CompiledMethod::clean_inline_caches which was > duplicated by the G1 functions. > > The patch also fixed a case in AOT where clear_inline_caches should be called instead of > clean_inline_caches.?? I think neither is necessary for the nmethods that are deoptimized because of > redefinition, but clear_inline_caches clears up redefined Methods* not for unloaded nmethods.? Once > the method is cleaned by the sweeper, clean_inline_caches will be called on it.? clear vs. clean ... > > The patch also converts TraceScavenge to -Xlog:gc+nmethod=trace.? I can revert this part and do it > separately; I had just converted it while looking at the output. > > open webrev at http://cr.openjdk.java.net/~coleenp/8203837.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8203837 > > Tested with mach5 hs-tier1-5, the gc-test-suite (including specjbb2015, dacapo, gcbasher), runThese > with all GCs with and without class unloading. > > This is an enhancement that we can use for making nmethod cleaning concurrent in ZGC. > > Thanks, > Coleen From stefan.johansson at oracle.com Fri Jun 1 09:04:23 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Fri, 1 Jun 2018 11:04:23 +0200 Subject: RFR: 8204173: Lower the minimum number of heap memory pools in MemoryTest.java In-Reply-To: <6280012d-35ef-5670-0429-65d6647792e4@oracle.com> References: <6280012d-35ef-5670-0429-65d6647792e4@oracle.com> Message-ID: <0b0b3959-8929-e64d-8469-cf9efed34a58@oracle.com> On 2018-05-31 15:53, Stefan Karlsson wrote: > Hi all, > > Please review this patch to lower the minimum number of heap memory > pools in MemoryTest.java. > > http://cr.openjdk.java.net/~stefank/8204173/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8204173 > > Just like the comment in the test says: > > ?* NOTE: This expected result is hardcoded in this test and this test > ?* will be affected if the heap memory layout is changed in > ?* the future implementation. > ?*/ > > we need to update this test to support ZGC, which only has one heap > memory pool. The change looks good, but I think maybe you should update the comment about the pools a bit. I'm not sure why we have the history lesson in the comment. I would much rather have a comment describing the current state and since the test live with the code this should be just fine. But leave the NOTE, it is good to be explicit when we depend on hard coded stuff :) Thanks, Stefan > > Thanks, > StefanK > From coleen.phillimore at oracle.com Fri Jun 1 10:34:47 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 1 Jun 2018 06:34:47 -0400 Subject: RFR: 8204165: Filter out tests requiring class unloading when ClassUnloading is disabled In-Reply-To: <6a3b4ab9-557c-c2c4-1efb-0db31de59658@oracle.com> References: <6dc0988c-7324-7d87-826a-a66cdbfa2a50@oracle.com> <4a1816b7-cc65-0a37-863a-696c183df9ef@oracle.com> <70964597-4cab-7fba-a71b-9982a4990eb2@oracle.com> <4f748bd8-f958-3dba-7cb0-6c7989bfe2f2@oracle.com> <322cd976-04f7-eba7-e950-4b82612ed863@oracle.com> <6a3b4ab9-557c-c2c4-1efb-0db31de59658@oracle.com> Message-ID: <434e38d6-cee0-94cf-8f9f-a682c405f3f9@oracle.com> Looks good! 
Coleen On 6/1/18 3:13 AM, Stefan Karlsson wrote: > Hi again, > > I decided to tag the stressHierarchy as Coleen suggested: > http://cr.openjdk.java.net/~stefank/8204165/webrev.03.delta/ > http://cr.openjdk.java.net/~stefank/8204165/webrev.03/ > > Thanks, > StefanK > > > On 2018-05-31 15:27, Stefan Karlsson wrote: >> Thanks for reviewing! >> >> stefanK >> >> On 2018-05-31 15:12, coleen.phillimore at oracle.com wrote: >>> >>> >>> On 5/31/18 8:57 AM, Stefan Karlsson wrote: >>>> Hi Coleen, >>>> >>>> On 2018-05-31 14:35, coleen.phillimore at oracle.com wrote: >>>>> >>>>> If? you look for ClassUnloadCommon, there are a couple others. It >>>>> would be good to have these tagged as well, even though the last >>>>> one tests two things. >>>> >>>> I've been very ZGC centric and only tagged problematic tests I've >>>> found, but of course I can tag more when it makes sense. >>> >>> Yes, I thought that.? I'm on the fence.? I don't think it's worth >>> tagging more.? I may make a test group at some point though. >>>> >>>>> >>>>> runtime/appcds/customLoader/UnloadUnregisteredLoaderTest.java >>>>> runtime/appcds/customLoader/test-classes/UnloadUnregisteredLoader.java >>>>> >>>> >>>> Added the tag and tested with -XX:+UseG1GC -XX:-ClassUnloading. >>> >>> Great, these seemed to want it. >>>> >>>>> runtime/logging/LoaderConstraintsTest.java >>>> >>>> This tests seems to spawn a new test with a ProcessBuilder and >>>> therefore doesn't propagate the -vmoptions flags. Even if I try to >>>> turn off class unloading or run ZGC, it will end up running G1 with >>>> class unloading. I think this tests needs to be changed, before it >>>> makes sense to tag it. >>>> >>>> If you still want me to add the tag, I can do so. >>>> >>> >>> Ugh, no, that's fine. >>>>> runtime/BadObjectClass/TestUnloadClassError.java >>>> >>>> This test doesn't fail if I turn off class unloading. Do you still >>>> want me to add the tag? >>>> >>> >>> It does two things so no, that's ok, leave the tag off. >>> >>>>> >>>>> If you look for triggerUnloading there are more: e.g. >>>>> vmTestbase/metaspace/stressHierarchy.? I assume they pass without >>>>> class unloading though. >>>> >>>> Maybe we can handle those in a future RFE? >>>> >>> >>> Yes, or an RFE in the future to mark, in some other way, the tests >>> that exercise class unloading. >>> >>> Thanks - change looks good. >>> Coleen >>> >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> >>>>> On 5/31/18 8:01 AM, Stefan Karlsson wrote: >>>>>> Hi all, >>>>>> >>>>>> Please review this patch to annotate jtreg tests that require >>>>>> ClassUnloading with @requires vm.opt.final.ClassUnloading. >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8204165/webrev.01/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8204165 >>>>>> >>>>>> This @requires tag will filter out these tests when run with >>>>>> -XX:-ClassUnloading or when run with GC that doesn't support >>>>>> class unloading. 
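For illustration, a minimal jtreg test description using the tag discussed above might look like the following; the summary, test name and run line are placeholders, only the @requires line comes from this thread:

    /*
     * @test
     * @summary Exercises class unloading; skipped when class unloading is unavailable
     * @requires vm.opt.final.ClassUnloading
     * @run main/othervm ExampleClassUnloadingTest
     */

With this tag, jtreg evaluates the final value of the ClassUnloading flag and filters the test out when it is false, for example under -XX:-ClassUnloading or with a GC that does not support class unloading.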
>>>>>> >>>>>> For the discussion around the introduction of the vm.opt.final >>>>>> tags, see: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032586.html >>>>>> >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>>> >>> From coleen.phillimore at oracle.com Fri Jun 1 10:35:06 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 1 Jun 2018 06:35:06 -0400 Subject: RFR (M) 8203837: Split nmethod unloading from nmethod cache cleaning In-Reply-To: <6f6eda28-2c4b-cf05-389c-87bde4bc9ef6@oracle.com> References: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> <6f6eda28-2c4b-cf05-389c-87bde4bc9ef6@oracle.com> Message-ID: <5bfeae5b-c72e-6f9d-5782-3c974d84efa7@oracle.com> Thanks Tobias! Coleen On 6/1/18 4:04 AM, Tobias Hartmann wrote: > Hi Coleen, > > this looks good to me but someone from GC should have a look as well. > > Best regards, > Tobias > > On 30.05.2018 14:23, coleen.phillimore at oracle.com wrote: >> Summary: Refactor cleaning inline caches to after GC do_unloading. >> >> See CR for more information.? This patch refactors CompiledMethod::do_unloading() to unload nmethods >> in case of !is_alive oop.? If the nmethod is not unloaded, cleans the inline caches, and exception >> cache, for unloaded classes and unloaded nmethods.? The CodeCache walk in gc_epilogue is moved >> earlier to combine with cleanup for class unloading. >> >> It doesn't add CodeCache walks to any of the GCs, and keeps the G1 parallel nmethod unloading >> intact.? This patch also uses common code for CompiledMethod::clean_inline_caches which was >> duplicated by the G1 functions. >> >> The patch also fixed a case in AOT where clear_inline_caches should be called instead of >> clean_inline_caches.?? I think neither is necessary for the nmethods that are deoptimized because of >> redefinition, but clear_inline_caches clears up redefined Methods* not for unloaded nmethods.? Once >> the method is cleaned by the sweeper, clean_inline_caches will be called on it.? clear vs. clean ... >> >> The patch also converts TraceScavenge to -Xlog:gc+nmethod=trace.? I can revert this part and do it >> separately; I had just converted it while looking at the output. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8203837.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8203837 >> >> Tested with mach5 hs-tier1-5, the gc-test-suite (including specjbb2015, dacapo, gcbasher), runThese >> with all GCs with and without class unloading. >> >> This is an enhancement that we can use for making nmethod cleaning concurrent in ZGC. >> >> Thanks, >> Coleen From coleen.phillimore at oracle.com Fri Jun 1 10:35:28 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 1 Jun 2018 06:35:28 -0400 Subject: RFR (S) 8204195: Clean up macroAssembler.inline.hpp and other inline.hpp files included in .hpp files In-Reply-To: <4A4CB4E4-7340-4798-B4DA-D3D48F9A30DA@oracle.com> References: <335ac0b6-84f4-8ae5-ad32-0ba7d7260009@oracle.com> <4A4CB4E4-7340-4798-B4DA-D3D48F9A30DA@oracle.com> Message-ID: <3df0ec9e-6174-dfc9-bb67-38f69a651dce@oracle.com> Thanks Jiangli! Coleen On 5/31/18 7:52 PM, Jiangli Zhou wrote: > The changes look good to me. > > Thanks, > Jiangli > >> On May 31, 2018, at 4:10 PM, coleen.phillimore at oracle.com wrote: >> >> Summary: Moved macroAssembler.inline.hpp out of header file and distributed to .cpp files that included them: ie. c1_MacroAssembler.hpp and interp_masm.hpp. Also freeList.inline.hpp and allocation.inline.hpp. 
>> >> open webrev at http://cr.openjdk.java.net/~coleenp/8204195.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8204195 >> >> Tested with mach5 hs-tier1,2 on Oracle platforms: linux-x64, solaris-sparcv9, macosx-x64 and windows-x64. Also tested zero and aarch64 fastdebug builds, and linux-x64 without precompiled headers. Please test other platforms, like arm32, ppc and s390! I think these are the last platform dependent inline files that are included by .hpp files. >> >> Thanks, >> Coleen From swatibits14 at gmail.com Fri Jun 1 11:10:28 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Fri, 1 Jun 2018 16:40:28 +0530 Subject: UseNUMA membind Issue in openJDK In-Reply-To: <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> Message-ID: Hi Gustavo, I will fix the thread binding issue in a separate patch. Updated the previous patch by removing the structure and using the methods provided by numa API.Here is the updated one with the changes(attached also). ========================PATCH========================= diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp --- a/src/hotspot/os/linux/os_linux.cpp +++ b/src/hotspot/os/linux/os_linux.cpp @@ -2831,9 +2831,10 @@ // Map all node ids in which is possible to allocate memory. Also nodes are // not always consecutively available, i.e. available from 0 to the highest - // node number. + // node number. If the nodes have been bound explicitly using numactl membind, + // then allocate memory from those nodes only. for (size_t node = 0; node <= highest_node_number; node++) { - if (Linux::isnode_in_configured_nodes(node)) { + if (Linux::isnode_in_bound_nodes(node)) { ids[i++] = node; } } @@ -2930,6 +2931,12 @@ libnuma_dlsym(handle, "numa_bitmask_isbitset"))); set_numa_distance(CAST_TO_FN_PTR(numa_distance_func_t, libnuma_dlsym(handle, "numa_distance"))); + set_numa_set_membind(CAST_TO_FN_PTR(numa_set_membind_func_t, + libnuma_dlsym(handle, "numa_set_membind"))); + set_numa_get_membind(CAST_TO_FN_PTR(numa_get_membind_func_t, + libnuma_v2_dlsym(handle, "numa_get_membind"))); + set_numa_bitmask_nbytes(CAST_TO_FN_PTR(numa_bitmask_nbytes_func_t, + libnuma_dlsym(handle, "numa_bitmask_nbytes"))); if (numa_available() != -1) { set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes")); @@ -3054,6 +3061,9 @@ os::Linux::numa_set_bind_policy_func_t os::Linux::_numa_set_bind_policy; os::Linux::numa_bitmask_isbitset_func_t os::Linux::_numa_bitmask_isbitset; os::Linux::numa_distance_func_t os::Linux::_numa_distance; +os::Linux::numa_set_membind_func_t os::Linux::_numa_set_membind; +os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind; +os::Linux::numa_bitmask_nbytes_func_t os::Linux::_numa_bitmask_nbytes; unsigned long* os::Linux::_numa_all_nodes; struct bitmask* os::Linux::_numa_all_nodes_ptr; struct bitmask* os::Linux::_numa_nodes_ptr; @@ -4962,8 +4972,9 @@ if (!Linux::libnuma_init()) { UseNUMA = false; } else { - if ((Linux::numa_max_node() < 1)) { - // There's only one node(they start from 0), disable NUMA. + if ((Linux::numa_max_node() < 1) || Linux::isbound_to_single_node()) { + // If there's only one node(they start from 0) or if the process + // is bound explicitly to a single node using membind, disable NUMA. 
UseNUMA = false; } } diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp --- a/src/hotspot/os/linux/os_linux.hpp +++ b/src/hotspot/os/linux/os_linux.hpp @@ -228,6 +228,9 @@ typedef int (*numa_tonode_memory_func_t)(void *start, size_t size, int node); typedef void (*numa_interleave_memory_func_t)(void *start, size_t size, unsigned long *nodemask); typedef void (*numa_interleave_memory_v2_func_t)(void *start, size_t size, struct bitmask* mask); + typedef void (*numa_set_membind_func_t)(struct bitmask *mask); + typedef struct bitmask* (*numa_get_membind_func_t)(void); + typedef unsigned int (*numa_bitmask_nbytes_func_t)(struct bitmask *mask); typedef void (*numa_set_bind_policy_func_t)(int policy); typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n); @@ -244,6 +247,9 @@ static numa_set_bind_policy_func_t _numa_set_bind_policy; static numa_bitmask_isbitset_func_t _numa_bitmask_isbitset; static numa_distance_func_t _numa_distance; + static numa_set_membind_func_t _numa_set_membind; + static numa_get_membind_func_t _numa_get_membind; + static numa_bitmask_nbytes_func_t _numa_bitmask_nbytes; static unsigned long* _numa_all_nodes; static struct bitmask* _numa_all_nodes_ptr; static struct bitmask* _numa_nodes_ptr; @@ -259,6 +265,9 @@ static void set_numa_set_bind_policy(numa_set_bind_policy_func_t func) { _numa_set_bind_policy = func; } static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t func) { _numa_bitmask_isbitset = func; } static void set_numa_distance(numa_distance_func_t func) { _numa_distance = func; } + static void set_numa_set_membind(numa_set_membind_func_t func) { _numa_set_membind = func; } + static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; } + static void set_numa_bitmask_nbytes(numa_bitmask_nbytes_func_t func) {_numa_bitmask_nbytes = func; } static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; } static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } @@ -320,6 +329,41 @@ } else return 0; } + // Check if node is in bound node set. + static bool isnode_in_bound_nodes(int node) { + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != NULL) { + return _numa_bitmask_isbitset(_numa_get_membind(), node); + } else { + return false; + } + } + // Check if bound to only one numa node. + // Returns true if bound to a single numa node, otherwise returns false. + static bool isbound_to_single_node() { + int single_node = 0; + struct bitmask* bmp = NULL; + unsigned int node = 0; + unsigned int max_number_of_nodes = 0; + if (_numa_get_membind != NULL && _numa_bitmask_nbytes != NULL) { + bmp = _numa_get_membind(); + max_number_of_nodes = _numa_bitmask_nbytes(bmp) * 8; + } else { + return false; + } + for (node = 0; node < max_number_of_nodes; node++) { + if (_numa_bitmask_isbitset(bmp, node)) { + single_node++; + if (single_node == 2) { + return false; + } + } + } + if (single_node == 1) { + return true; + } else { + return false; + } + } }; #endif // OS_LINUX_VM_OS_LINUX_HPP ======================================================= Swati On Tue, May 29, 2018 at 6:53 PM, Gustavo Romero wrote: > > Hi Swati, > > On 05/29/2018 06:12 AM, Swati Sharma wrote: >> >> I have incorporated some changes suggested by you. 
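For context, the binding scenarios this thread is concerned with can be set up with numactl invocations along these lines; the heap size and class name are placeholders:

    # memory restricted to nodes 0 and 1 via membind, CPUs left unbound
    numactl --membind=0,1 java -XX:+UseNUMA -Xmx4g MyApp

    # memory and CPUs bound to a single node; with the proposed patch the JVM
    # detects the single-node membind and turns UseNUMA off
    numactl --cpunodebind=0 --membind=0 java -XX:+UseNUMA -Xmx4g MyApp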
>> >> The use of struct bitmask's maskp for checking 64 bit in single iteration >> is more optimized compared to numa_bitmask_isbitset() as by using this we >> need to check each bit for 1024 times(SUSE case) and 64 times(Ubuntu Case). >> If its fine to iterate at initialization time then I can change. > > > Yes, I know, your version is more optimized. libnuma API should provide a > ready-made solution for that... but that's another story. I'm curious to know > what the time difference is on the worst case for both ways tho. Anyway, I > just would like to point out that, regardless performance, it's possible to > achieve the same result with current libnuma API. > > >> For the answer to your question: >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? >> It can be checked based on numa_distance instead of picking up the lgrps randomly. > > > That seems a good solution. You can do the checking very early, so > lgrp_spaces()->find() does not even fail (return -1), i.e. by changing the CPU to > node mapping on initialization (avoiding to change cas_allocate()). On that checking > both numa distance and if the node is bound (or not) would be considered to generate > the map. > > > Best regards, > Gustavo > >> Thanks, >> Swati >> >> >> >> On Fri, May 25, 2018 at 4:54 AM, Gustavo Romero < gromero at linux.vnet.ibm.com > wrote: >> >> Hi Swati, >> >> >> Thanks for CC:ing me. Sorry for the delay replying it, I had to reserve a few >> specific machines before trying your patch :-) >> >> I think that UseNUMA's original task was to figure out the best binding >> setup for the JVM automatically but I understand that it also has to be aware >> that sometimes, for some (new) particular reasons, its binding task is >> "modulated" by other external agents. Thanks for proposing a fix. >> >> I have just a question/concern on the proposal: how the JVM should behave if >> CPUs are not bound in accordance to the bound memory nodes? For instance, what >> happens if no '--cpunodebind' is passed and '--membind=0,1,16' is passed at >> the same time on this numa topology: >> >> brianh at p215n12:~$ numactl -H >> available: 4 nodes (0-1,16-17) >> node 0 cpus: 0 1 2 3 8 9 10 11 16 17 18 19 24 25 26 27 32 33 34 35 >> node 0 size: 65342 MB >> node 0 free: 56902 MB >> node 1 cpus: 40 41 42 43 48 49 50 51 56 57 58 59 64 65 66 67 72 73 74 75 >> node 1 size: 65447 MB >> node 1 free: 58322 MB >> node 16 cpus: 80 81 82 83 88 89 90 91 96 97 98 99 104 105 106 107 112 113 114 115 >> node 16 size: 65448 MB >> node 16 free: 63096 MB >> node 17 cpus: 120 121 122 123 128 129 130 131 136 137 138 139 144 145 146 147 152 153 154 155 >> node 17 size: 65175 MB >> node 17 free: 61522 MB >> node distances: >> node 0 1 16 17 >> 0: 10 20 40 40 >> 1: 20 10 40 40 >> 16: 40 40 10 20 >> 17: 40 40 20 10 >> >> >> In that case JVM will spawn threads that will run on all CPUs, including those >> CPUs in numa node 17. Then once in >> src/hotspot/share/gc/parallel/mutableNUMASpace.cpp, in cas_allocate(): >> >> 834 // This version is lock-free. >> 835 HeapWord* MutableNUMASpace::cas_allocate(size_t size) { >> 836 Thread* thr = Thread::current(); >> 837 int lgrp_id = thr->lgrp_id(); >> 838 if (lgrp_id == -1 || !os::numa_has_group_homing()) { >> 839 lgrp_id = os::numa_get_group_id(); >> 840 thr->set_lgrp_id(lgrp_id); >> 841 } >> >> a newly created thread will try to be mapped to a numa node given your CPU ID. 
>> So if that CPU is in numa node 17 it will then not find it in: >> >> 843 int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals); >> >> and will fallback to a random map, picking up a random numa node among nodes >> 0, 1, and 16: >> >> 846 if (i == -1) { >> 847 i = os::random() % lgrp_spaces()->length(); >> 848 } >> >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? >> >> I see that if one binds mem but leaves CPU unbound one has to know exactly what >> she/he is doing, because it can be likely suboptimal. On the other hand, letting >> the node being picked up randomly when there are memory nodes bound but no CPUs >> seems even more suboptimal in some scenarios. Thus, should the JVM deal with it? >> >> @Zhengyu, do you have any opinion on that? >> >> Please find a few nits / comments inline. >> >> Note that I'm not a (R)eviewer so you still need two official reviews. >> >> >> Best regards, >> Gustavo >> >> On 05/21/2018 01:44 PM, Swati Sharma wrote: >> >> ======================PATCH============================== >> diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp >> --- a/src/hotspot/os/linux/os_linux.cpp >> +++ b/src/hotspot/os/linux/os_linux.cpp >> @@ -2832,14 +2832,42 @@ >> // Map all node ids in which is possible to allocate memory. Also nodes are >> // not always consecutively available, i.e. available from 0 to the highest >> // node number. >> + // If the nodes have been bound explicitly using numactl membind, then >> + // allocate memory from those nodes only. >> >> >> I think ok to place that comment on the same existing line, like: >> >> - // node number. >> + // node number. If the nodes have been bound explicitly using numactl membind, >> + // then allocate memory from these nodes only. >> >> >> for (size_t node = 0; node <= highest_node_number; node++) { >> - if (Linux::isnode_in_configured_nodes(node)) { >> + if (Linux::isnode_in_bounded_nodes(node)) { >> >> ---------------------------------^ s/bounded/bound/ >> >> >> ids[i++] = node; >> } >> } >> return i; >> } >> +extern "C" struct bitmask { >> + unsigned long size; /* number of bits in the map */ >> + unsigned long *maskp; >> +}; >> >> >> I think it's possible to move the function below to os_linux.hpp with its >> friends and cope with the forward declaration of 'struct bitmask*` by using the >> functions from numa API, notably numa_bitmask_nbytes() and >> numa_bitmask_isbitset() only, avoiding the member dereferecing issue and the >> need to add the above struct explicitly. >> >> >> +// Check if single memory node bound. >> +// Returns true if single memory node bound. >> >> >> I suggest a minuscule improvement, something like: >> >> +// Check if bound to only one numa node. >> +// Returns true if bound to a single numa node, otherwise returns false. >> >> >> +bool os::Linux::issingle_node_bound() { >> >> >> What about s/issingle_node_bound/isbound_to_single_node/ ? >> >> >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; >> + if(!(bmp != NULL && bmp->maskp != NULL)) return false; >> >> -----^ >> Are you sure this checking is necessary? I think if numa_get_membind succeed >> bmp->maskp is always != NULL. >> >> Indentation here is odd. No space before 'if' and return on the same line. >> >> I would try to avoid lines over 80 chars. 
>> >> >> + int issingle = 0; >> + // System can have more than 64 nodes so check in all the elements of >> + // unsigned long array >> + for (unsigned long i = 0; i < (bmp->size / (8 * sizeof(unsigned long))); i++) { >> + if (bmp->maskp[i] == 0) { >> + continue; >> + } else if ((bmp->maskp[i] & (bmp->maskp[i] - 1)) == 0) { >> + issingle++; >> + } else { >> + return false; >> + } >> + } >> + if (issingle == 1) >> + return true; >> + return false; >> +} >> + >> >> >> As I mentioned, I think it could be moved to os_linux.hpp instead. Also, it >> could be something like: >> >> +bool os::Linux::isbound_to_single_node(void) { >> + struct bitmask* bmp; >> + unsigned long mask; // a mask element in the mask array >> + unsigned long max_num_masks; >> + int single_node = 0; >> + >> + if (_numa_get_membind != NULL) { >> + bmp = _numa_get_membind(); >> + } else { >> + return false; >> + } >> + >> + max_num_masks = bmp->size / (8 * sizeof(unsigned long)); >> + >> + for (mask = 0; mask < max_num_masks; mask++) { >> + if (bmp->maskp[mask] != 0) { // at least one numa node in the mask >> + if (bmp->maskp[mask] & (bmp->maskp[mask] - 1) == 0) { >> + single_node++; // a single numa node in the mask >> + } else { >> + return false; >> + } >> + } >> + } >> + >> + if (single_node == 1) { >> + return true; // only a single mask with a single numa node >> + } else { >> + return false; >> + } >> +} >> >> >> bool os::get_page_info(char *start, page_info* info) { >> return false; >> } >> @@ -2930,6 +2958,10 @@ >> libnuma_dlsym(handle, "numa_bitmask_isbitset"))); >> set_numa_distance(CAST_TO_FN_PTR(numa_distance_func_t, >> libnuma_dlsym(handle, "numa_distance"))); >> + set_numa_set_membind(CAST_TO_FN_PTR(numa_set_membind_func_t, >> + libnuma_dlsym(handle, "numa_set_membind"))); >> + set_numa_get_membind(CAST_TO_FN_PTR(numa_get_membind_func_t, >> + libnuma_v2_dlsym(handle, "numa_get_membind"))); >> if (numa_available() != -1) { >> set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes")); >> @@ -3054,6 +3086,8 @@ >> os::Linux::numa_set_bind_policy_func_t os::Linux::_numa_set_bind_policy; >> os::Linux::numa_bitmask_isbitset_func_t os::Linux::_numa_bitmask_isbitset; >> os::Linux::numa_distance_func_t os::Linux::_numa_distance; >> +os::Linux::numa_set_membind_func_t os::Linux::_numa_set_membind; >> +os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind; >> unsigned long* os::Linux::_numa_all_nodes; >> struct bitmask* os::Linux::_numa_all_nodes_ptr; >> struct bitmask* os::Linux::_numa_nodes_ptr; >> @@ -4962,8 +4996,9 @@ >> if (!Linux::libnuma_init()) { >> UseNUMA = false; >> } else { >> - if ((Linux::numa_max_node() < 1)) { >> - // There's only one node(they start from 0), disable NUMA. >> + if ((Linux::numa_max_node() < 1) || Linux::issingle_node_bound()) { >> + // If there's only one node(they start from 0) or if the process >> + // is bound explicitly to a single node using membind, disable NUMA. 
>> UseNUMA = false; >> } >> } >> diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp >> --- a/src/hotspot/os/linux/os_linux.hpp >> +++ b/src/hotspot/os/linux/os_linux.hpp >> @@ -228,6 +228,8 @@ >> typedef int (*numa_tonode_memory_func_t)(void *start, size_t size, int node); >> typedef void (*numa_interleave_memory_func_t)(void *start, size_t size, unsigned long *nodemask); >> typedef void (*numa_interleave_memory_v2_func_t)(void *start, size_t size, struct bitmask* mask); >> + typedef void (*numa_set_membind_func_t)(struct bitmask *mask); >> + typedef struct bitmask* (*numa_get_membind_func_t)(void); >> typedef void (*numa_set_bind_policy_func_t)(int policy); >> typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n); >> @@ -244,6 +246,8 @@ >> static numa_set_bind_policy_func_t _numa_set_bind_policy; >> static numa_bitmask_isbitset_func_t _numa_bitmask_isbitset; >> static numa_distance_func_t _numa_distance; >> + static numa_set_membind_func_t _numa_set_membind; >> + static numa_get_membind_func_t _numa_get_membind; >> static unsigned long* _numa_all_nodes; >> static struct bitmask* _numa_all_nodes_ptr; >> static struct bitmask* _numa_nodes_ptr; >> @@ -259,6 +263,8 @@ >> static void set_numa_set_bind_policy(numa_set_bind_policy_func_t func) { _numa_set_bind_policy = func; } >> static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t func) { _numa_bitmask_isbitset = func; } >> static void set_numa_distance(numa_distance_func_t func) { _numa_distance = func; } >> + static void set_numa_set_membind(numa_set_membind_func_t func) { _numa_set_membind = func; } >> + static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; } >> static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; } >> static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> @@ -320,6 +326,15 @@ >> } else >> return 0; >> } >> + // Check if node in bounded nodes >> >> >> + // Check if node is in bound node set. Maybe? >> >> >> + static bool isnode_in_bounded_nodes(int node) { >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; >> + if (bmp != NULL && _numa_bitmask_isbitset != NULL && _numa_bitmask_isbitset(bmp, node)) { >> + return true; >> + } else >> + return false; >> + } >> + static bool issingle_node_bound(); >> >> >> Looks like it can be re-written like: >> >> + static bool isnode_in_bound_nodes(int node) { >> + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != NULL) { >> + return _numa_bitmask_isbitset(_numa_get_membind(), node); >> + } else { >> + return false; >> + } >> + } >> >> ? >> >> >> }; >> #endif // OS_LINUX_VM_OS_LINUX_HPP >> >> >> > From coleen.phillimore at oracle.com Fri Jun 1 13:13:17 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 1 Jun 2018 09:13:17 -0400 Subject: RFR: 8204097: Simplify OopStorage::AllocateList block entry access In-Reply-To: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> References: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> Message-ID: Hi Kim,?? This change looks fine, except these names caused me a lot of confusion. 
http://cr.openjdk.java.net/~kbarrett/8204097/open.00/src/hotspot/share/gc/shared/oopStorage.cpp.udiff.html + block.allocate_entry()._next = old; + old->allocate_entry()._prev = █ This allocate_entry() call doesn't actually allocate anything but get the allocation list entry.?? Can these things be renamed in a subsequent RFE? thanks, Coleen On 5/30/18 4:19 PM, Kim Barrett wrote: > Please review this simplification of OopStorage::AllocateList, > removing the no longer used support for blocks being in multiple lists > simultaneously. There is now only one list of blocks, the > _allocate_list. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8204097 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8204097/open.00/ > > Testing: > Mach5 tier{1,2,3} > From bob.vandette at oracle.com Fri Jun 1 15:52:54 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Fri, 1 Jun 2018 11:52:54 -0400 Subject: RFR: 8203357 Container Metrics In-Reply-To: <6b8f5228-03ea-9521-aa66-e74b5595722b@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <6b8f5228-03ea-9521-aa66-e74b5595722b@oracle.com> Message-ID: <57ADBBD0-381B-4890-9943-BF3222F19839@oracle.com> > On May 31, 2018, at 11:36 PM, mandy chung wrote: > > Hi Bob, > > On 5/30/18 12:45 PM, Bob Vandette wrote:> >> RFE: Container Metrics >> https://bugs.openjdk.java.net/browse/JDK-8203357 >> WEBREV: >> http://cr.openjdk.java.net/~bobv/8203357/webrev.00 Thanks for the review, here an updated webrev: http://cr.openjdk.java.net/~bobv/8203357/webrev.01/ > > Looks fine in general. It's good to have this internal API ready > for JFR and other library code to use. > > I skimmed through the new tests. It'd be good to add some comments > to describe what it does (for example, set up a docker image etc). DockerTestUtils.java does contain some comments describing what the utility functions do. I?ll add a brief comment in TestDockerCpuMetrics.java and TestDockerMemoryMetrics.java explaining the test process. > > launcher.properties > 154 \ -XshowSettings:system (Linux Only) show host system or container\n\ > 155 \ configuration and continue\n\ > > A newline can be placed after -XshowSettings:system consistent with > other suboptions. Done. I also added a newline after the vm sub-option. > > test/lib/jdk/test/lib/containers/docker/DockerTestUtils.java > > There are several long lines in the new test files such as: > MetricsCpuTester.java > MetricsMemoryTester.java > MetricsTester.java > > It'd help future side-by-side review if they are wrapped. I think > most of them are the construction of an exception. Fixed. > > I see a pattern of a name after @test and that is not strictly needed. > > TestCgroupMetrics.java > 25 * @test TestCgroupMetrics > > TestDockerCpuMetrics.java > 34 * @test TestSystemMetrics > > TestDockerMemoryMetrics.java > 30 * @test TestSystemMetrics > > TestSystemMetrics.java > 25 * @test TestSystemMetrics Remove the names after @test. > > This needs a CSR for the new -XshowSettings:system option. I filed a CSR for this a few days ago. https://bugs.openjdk.java.net/browse/JDK-8204107 Bob. 
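For reference, the new suboption is invoked like the existing -XshowSettings forms, for example:

    java -XshowSettings:system -version

On Linux this prints the host or container configuration and then continues with the rest of the command line (here just -version); the exact output format is implementation-specific and not reproduced here.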
> > Mandy From kim.barrett at oracle.com Fri Jun 1 16:29:34 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 1 Jun 2018 12:29:34 -0400 Subject: RFR: 8204165: Filter out tests requiring class unloading when ClassUnloading is disabled In-Reply-To: <6a3b4ab9-557c-c2c4-1efb-0db31de59658@oracle.com> References: <6dc0988c-7324-7d87-826a-a66cdbfa2a50@oracle.com> <4a1816b7-cc65-0a37-863a-696c183df9ef@oracle.com> <70964597-4cab-7fba-a71b-9982a4990eb2@oracle.com> <4f748bd8-f958-3dba-7cb0-6c7989bfe2f2@oracle.com> <322cd976-04f7-eba7-e950-4b82612ed863@oracle.com> <6a3b4ab9-557c-c2c4-1efb-0db31de59658@oracle.com> Message-ID: > On Jun 1, 2018, at 3:13 AM, Stefan Karlsson wrote: > > Hi again, > > I decided to tag the stressHierarchy as Coleen suggested: > http://cr.openjdk.java.net/~stefank/8204165/webrev.03.delta/ > http://cr.openjdk.java.net/~stefank/8204165/webrev.03/ > > Thanks, > StefanK Looks good. From vladimir.kozlov at oracle.com Fri Jun 1 16:52:36 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 1 Jun 2018 09:52:36 -0700 Subject: RFR(L) : 8202812 : [TESTBUG] Open source VM testbase compiler tests In-Reply-To: References: <6abc20ff-f125-80cf-98d6-2e69f0181673@oracle.com> Message-ID: On 5/31/18 12:19 PM, Igor Ignatyev wrote: >> On May 31, 2018, at 12:06 PM, Vladimir Kozlov wrote: >> >> Looks good to me. > thanks for reviewing it! > >> I don't understand how these tests trigger JIT compilation? main() does not have any loops. > which test/tests are you asking about? I've looked at a few tests, all of them do have loops (not necessary right in main). in any case, this is how these tests were for a long time, they definitely require reevaluation and rewriting. Almost all /jit/ tests don't have loops: http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/DivTest/DivTest.java.html http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/bounds/bounds.java.html They like just JCK tests to me. Should we run them with -Xcomp? Or you have a "driver" which trigger JIT compilation for these tests? > >> Where is GoldChecker class? > GoldChecker is defined in test/hotspot/jtreg/vmTestbase/nsk/share/GoldChecker.java, which has been integrated by 8199643: [TESTBUG] Open source common VM testbase code. > >> What --java flag mean in @run command? > "--java" isn't a flag of @run, it's the flag of ExecDriver (test/hotspot/jtreg/vmTestbase/ExecDriver.java) which makes it to interpret the rest of arguments as regular arguments of java. ExecDriver is needed by some tests b/c they either run package-private classes (which regular jtreg's @run can't do) or use zero-exit code to signalize passed status (jtreg treats only 95 as passed). ExecDriver was used to reduce changes in the tests during their conversion to jtreg format and eventually it should go for good. Okay. Thanks, Vladimir > > Thanks, > -- Igor >> >> Thanks, >> Vladimir >> >> On 5/18/18 10:50 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/index.html >>>> 56733 lines changed: 56732 ins; 0 del; 1 mod; 1 >>> Hi all, >>> could you please review the patch which open sources compiler tests from vm testbase? these tests were developed in different time to cover different parts of JITs. >>> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. 
Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8202812 >>> webrev: http://cr.openjdk.java.net/~iignatyev/8202812/webrev.00/index.html >>> testing: :vmTestbase_vm_compiler test group >>> Thanks, >>> -- Igor > From vladimir.kozlov at oracle.com Fri Jun 1 17:02:10 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 1 Jun 2018 10:02:10 -0700 Subject: RFR (S) 8204195: Clean up macroAssembler.inline.hpp and other inline.hpp files included in .hpp files In-Reply-To: <4A4CB4E4-7340-4798-B4DA-D3D48F9A30DA@oracle.com> References: <335ac0b6-84f4-8ae5-ad32-0ba7d7260009@oracle.com> <4A4CB4E4-7340-4798-B4DA-D3D48F9A30DA@oracle.com> Message-ID: +1 Thanks, Vladimir K On 5/31/18 4:52 PM, Jiangli Zhou wrote: > The changes look good to me. > > Thanks, > Jiangli > >> On May 31, 2018, at 4:10 PM, coleen.phillimore at oracle.com wrote: >> >> Summary: Moved macroAssembler.inline.hpp out of header file and distributed to .cpp files that included them: ie. c1_MacroAssembler.hpp and interp_masm.hpp. Also freeList.inline.hpp and allocation.inline.hpp. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8204195.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8204195 >> >> Tested with mach5 hs-tier1,2 on Oracle platforms: linux-x64, solaris-sparcv9, macosx-x64 and windows-x64. Also tested zero and aarch64 fastdebug builds, and linux-x64 without precompiled headers. Please test other platforms, like arm32, ppc and s390! I think these are the last platform dependent inline files that are included by .hpp files. >> >> Thanks, >> Coleen > From rkennke at redhat.com Fri Jun 1 19:58:06 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 1 Jun 2018 21:58:06 +0200 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: References: Message-ID: <02361438-b787-93c8-25b1-5f13f5a49a29@redhat.com> Am 28.05.2018 um 23:11 schrieb Zhengyu Gu: > Hi, > > Please review this refactoring of G1 string deduplication into shared > directory, so that other GCs (such as Shenandoah) can advantage of > existing infrastructure and plugin their own implementation. > > This refactoring preserves G1's String Deduplication infrastructure > (please see the comments in stringDedup.hpp for details), so that there > is no change to G1 outside of string deduplication code. > > Following changes are made to support different GCs: > > 1. Allows plugin new dedup queue implementation. > ?? While it keeps G1's dedup queue static interface, queue itself now is > a pure virtual class. Different GC can provide different implementation > to fit its own enqueuing mechanism. > ?? For example, G1 enqueues deduplication candidates during STW > evacuate/mark pause, while Shenandoah implementation does it during > concurrent mark. > > 2. Abstracted out generation related statistics out of StringDedupStat > base class, cause not all GCs are generational. > ?? G1StringDedupStat simply extends the base to add generational > statistics. > > 3. Moved table and queue's parallel processing logic from closure > (StringDedupUnlinkOrOopsDoClosure) to corresponding table and queue. > This gives flexibility to construct closure to share among the workers > (as G1 does), as well as private closure for each worker (as Shenandoah > does). 
> > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203641 > Webrev: http://cr.openjdk.java.net/~zgu/8203641/webrev.00/index.html > > Test: > > ? Submit test came back clean. > This change looks good to me. Thank you! Should wait a bit for G1 engineers to comment too. Roman From mandy.chung at oracle.com Fri Jun 1 20:00:22 2018 From: mandy.chung at oracle.com (mandy chung) Date: Fri, 1 Jun 2018 13:00:22 -0700 Subject: RFR: 8203357 Container Metrics In-Reply-To: <57ADBBD0-381B-4890-9943-BF3222F19839@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <6b8f5228-03ea-9521-aa66-e74b5595722b@oracle.com> <57ADBBD0-381B-4890-9943-BF3222F19839@oracle.com> Message-ID: <548859ab-db3a-d028-0c92-d6598c722052@oracle.com> On 6/1/18 8:52 AM, Bob Vandette wrote: > I filed a CSR for this a few days ago. > > https://bugs.openjdk.java.net/browse/JDK-8204107 Typo: s/-XshowSetting/-XshowSettings In the specification section, you can include the new lines adding to java --extra-help output (that's the spec) and drop the diff. It may worth to state that the output is implementation detail. It may help to clarify the behavior when running on non-linux platform, i.e. the current launcher implementation accepts any suboption value. So java -XshowSettings:system on non-linux platform displays vm, properties, locale settings. (BTW, I file JDK-8204246 about invalid suboption). Mandy From zgu at redhat.com Fri Jun 1 20:19:47 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 1 Jun 2018 16:19:47 -0400 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: <02361438-b787-93c8-25b1-5f13f5a49a29@redhat.com> References: <02361438-b787-93c8-25b1-5f13f5a49a29@redhat.com> Message-ID: Thanks for the review, Roman. -Zhengyu On 06/01/2018 03:58 PM, Roman Kennke wrote: > Am 28.05.2018 um 23:11 schrieb Zhengyu Gu: >> Hi, >> >> Please review this refactoring of G1 string deduplication into shared >> directory, so that other GCs (such as Shenandoah) can advantage of >> existing infrastructure and plugin their own implementation. >> >> This refactoring preserves G1's String Deduplication infrastructure >> (please see the comments in stringDedup.hpp for details), so that there >> is no change to G1 outside of string deduplication code. >> >> Following changes are made to support different GCs: >> >> 1. Allows plugin new dedup queue implementation. >> ?? While it keeps G1's dedup queue static interface, queue itself now is >> a pure virtual class. Different GC can provide different implementation >> to fit its own enqueuing mechanism. >> ?? For example, G1 enqueues deduplication candidates during STW >> evacuate/mark pause, while Shenandoah implementation does it during >> concurrent mark. >> >> 2. Abstracted out generation related statistics out of StringDedupStat >> base class, cause not all GCs are generational. >> ?? G1StringDedupStat simply extends the base to add generational >> statistics. >> >> 3. Moved table and queue's parallel processing logic from closure >> (StringDedupUnlinkOrOopsDoClosure) to corresponding table and queue. >> This gives flexibility to construct closure to share among the workers >> (as G1 does), as well as private closure for each worker (as Shenandoah >> does). >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8203641 >> Webrev: http://cr.openjdk.java.net/~zgu/8203641/webrev.00/index.html >> >> Test: >> >> ? Submit test came back clean. >> > > This change looks good to me. Thank you! Should wait a bit for G1 > engineers to comment too. 
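To illustrate point 3 above: the same worker logic can run against one closure instance shared by all workers (as described for G1) or against a private closure per worker whose results are merged afterwards (as described for Shenandoah). This is only a sequential, standalone sketch with made-up names, not the webrev code:

#include <cstddef>
#include <iostream>
#include <vector>

// Made-up stand-in for the unlink/oops-do closure; here it only counts
// how many table/queue entries it has processed.
struct CleanupClosureSketch {
  std::size_t processed;
  CleanupClosureSketch() : processed(0) {}
  void do_entry() { ++processed; }
};

int main() {
  const std::size_t workers = 4, entries_per_worker = 10;

  // Shared closure: every (simulated) worker updates the same instance.
  CleanupClosureSketch shared;
  for (std::size_t w = 0; w < workers; w++)
    for (std::size_t i = 0; i < entries_per_worker; i++)
      shared.do_entry();

  // Private closures: each worker owns one, results are merged at the end.
  std::vector<CleanupClosureSketch> per_worker(workers);
  for (std::size_t w = 0; w < workers; w++)
    for (std::size_t i = 0; i < entries_per_worker; i++)
      per_worker[w].do_entry();
  std::size_t merged = 0;
  for (std::size_t w = 0; w < workers; w++)
    merged += per_worker[w].processed;

  std::cout << shared.processed << " " << merged << std::endl;  // prints "40 40"
  return 0;
}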
> > Roman > > From igor.ignatyev at oracle.com Fri Jun 1 20:28:36 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Fri, 1 Jun 2018 13:28:36 -0700 Subject: RFR(L) : 8202812 : [TESTBUG] Open source VM testbase compiler tests In-Reply-To: References: <6abc20ff-f125-80cf-98d6-2e69f0181673@oracle.com> Message-ID: <49DB8A18-71D3-4352-8351-E7AF53A6F888@oracle.com> Vladimir, I've filed 8204248[1] to reevaluate these tests from (at least) triggering JIT compilation perspective (at most overall value for testing). as this patch doesn't really introduce these tests, I don't consider their current state as blocker for open sourcing. so if you don't object, I'll push 8202812 patch and we will work on 8204248 later. [1] https://bugs.openjdk.java.net/browse/JDK-8204248 -- Igor > On Jun 1, 2018, at 9:52 AM, Vladimir Kozlov wrote: > > On 5/31/18 12:19 PM, Igor Ignatyev wrote: >>> On May 31, 2018, at 12:06 PM, Vladimir Kozlov wrote: >>> >>> Looks good to me. >> thanks for reviewing it! >>> I don't understand how these tests trigger JIT compilation? main() does not have any loops. >> which test/tests are you asking about? I've looked at a few tests, all of them do have loops (not necessary right in main). in any case, this is how these tests were for a long time, they definitely require reevaluation and rewriting. > > Almost all /jit/ tests don't have loops: > > http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/DivTest/DivTest.java.html > http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/bounds/bounds.java.html > > They like just JCK tests to me. Should we run them with -Xcomp? Or you have a "driver" which trigger JIT compilation for these tests? > >>> Where is GoldChecker class? >> GoldChecker is defined in test/hotspot/jtreg/vmTestbase/nsk/share/GoldChecker.java, which has been integrated by 8199643: [TESTBUG] Open source common VM testbase code. >>> What --java flag mean in @run command? >> "--java" isn't a flag of @run, it's the flag of ExecDriver (test/hotspot/jtreg/vmTestbase/ExecDriver.java) which makes it to interpret the rest of arguments as regular arguments of java. ExecDriver is needed by some tests b/c they either run package-private classes (which regular jtreg's @run can't do) or use zero-exit code to signalize passed status (jtreg treats only 95 as passed). ExecDriver was used to reduce changes in the tests during their conversion to jtreg format and eventually it should go for good. > > Okay. > > Thanks, > Vladimir > >> Thanks, >> -- Igor >>> >>> Thanks, >>> Vladimir >>> >>> On 5/18/18 10:50 AM, Igor Ignatyev wrote: >>>> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/index.html >>>>> 56733 lines changed: 56732 ins; 0 del; 1 mod; 1 >>>> Hi all, >>>> could you please review the patch which open sources compiler tests from vm testbase? these tests were developed in different time to cover different parts of JITs. >>>> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. 
>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8202812 >>>> webrev: http://cr.openjdk.java.net/~iignatyev/8202812/webrev.00/index.html >>>> testing: :vmTestbase_vm_compiler test group >>>> Thanks, >>>> -- Igor From erik.joelsson at oracle.com Fri Jun 1 20:53:52 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 1 Jun 2018 13:53:52 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled Message-ID: We need to add compilation flags for disabling speculative execution to our native libraries and executables. In order to allow for users not affected by problems with speculative execution to run a JVM at full speed, we need to be able to ship two JVM libraries - one that is compiled with speculative execution enabled, and one that is compiled without. Note that this applies to the build time C++ flags, not the compiler in the JVM itself. Luckily adding these flags to the rest of the native libraries did not have a significant performance impact so there is no need for making it optional there. This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies them to all binaries except libjvm when available in the compiler. It defines a new jvm feature no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a new jvm variant "altserver" which is the same as server, but with this new feature added. For Oracle builds, we are changing the default for linux-x64 and windows-x64 to build both server and altserver, giving the choice to the user which JVM they want to use. If others would prefer this default, we could make it default in configure as well. The change in GensrcJFR.gmk fixes a newly introduced race that appears when building multiple jvm variants. Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 /Erik From shade at redhat.com Fri Jun 1 21:00:48 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Fri, 1 Jun 2018 23:00:48 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: Message-ID: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> On 06/01/2018 10:53 PM, Erik Joelsson wrote: > This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies > them to all binaries except libjvm when available in the compiler. It defines a new jvm feature > no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a > new jvm variant "altserver" which is the same as server, but with this new feature added. I think the classic name for such product configuration is "hardened", no? -Aleksey From vladimir.kozlov at oracle.com Fri Jun 1 21:09:49 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 1 Jun 2018 14:09:49 -0700 Subject: RFR(L) : 8202812 : [TESTBUG] Open source VM testbase compiler tests In-Reply-To: <49DB8A18-71D3-4352-8351-E7AF53A6F888@oracle.com> References: <6abc20ff-f125-80cf-98d6-2e69f0181673@oracle.com> <49DB8A18-71D3-4352-8351-E7AF53A6F888@oracle.com> Message-ID: <808b0ff7-fdec-cc2c-b05f-4ba5dd012259@oracle.com> Good. Thank you Vladimir On 6/1/18 1:28 PM, Igor Ignatyev wrote: > Vladimir, > > I've filed 8204248[1] to reevaluate these tests from (at least) triggering JIT compilation perspective (at most overall value for testing). 
as this patch doesn't really introduce these tests, I don't consider their current state as blocker for open sourcing. so if you don't object, I'll push 8202812 patch and we will work on 8204248 later. > > [1] https://bugs.openjdk.java.net/browse/JDK-8204248 > > -- Igor > >> On Jun 1, 2018, at 9:52 AM, Vladimir Kozlov wrote: >> >> On 5/31/18 12:19 PM, Igor Ignatyev wrote: >>>> On May 31, 2018, at 12:06 PM, Vladimir Kozlov wrote: >>>> >>>> Looks good to me. >>> thanks for reviewing it! >>>> I don't understand how these tests trigger JIT compilation? main() does not have any loops. >>> which test/tests are you asking about? I've looked at a few tests, all of them do have loops (not necessary right in main). in any case, this is how these tests were for a long time, they definitely require reevaluation and rewriting. >> >> Almost all /jit/ tests don't have loops: >> >> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/DivTest/DivTest.java.html >> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/test/hotspot/jtreg/vmTestbase/jit/bounds/bounds.java.html >> >> They like just JCK tests to me. Should we run them with -Xcomp? Or you have a "driver" which trigger JIT compilation for these tests? >> >>>> Where is GoldChecker class? >>> GoldChecker is defined in test/hotspot/jtreg/vmTestbase/nsk/share/GoldChecker.java, which has been integrated by 8199643: [TESTBUG] Open source common VM testbase code. >>>> What --java flag mean in @run command? >>> "--java" isn't a flag of @run, it's the flag of ExecDriver (test/hotspot/jtreg/vmTestbase/ExecDriver.java) which makes it to interpret the rest of arguments as regular arguments of java. ExecDriver is needed by some tests b/c they either run package-private classes (which regular jtreg's @run can't do) or use zero-exit code to signalize passed status (jtreg treats only 95 as passed). ExecDriver was used to reduce changes in the tests during their conversion to jtreg format and eventually it should go for good. >> >> Okay. >> >> Thanks, >> Vladimir >> >>> Thanks, >>> -- Igor >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 5/18/18 10:50 AM, Igor Ignatyev wrote: >>>>> http://cr.openjdk.java.net/~iignatyev//8202812/webrev.00/index.html >>>>>> 56733 lines changed: 56732 ins; 0 del; 1 mod; 1 >>>>> Hi all, >>>>> could you please review the patch which open sources compiler tests from vm testbase? these tests were developed in different time to cover different parts of JITs. >>>>> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. 
>>>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8202812 >>>>> webrev: http://cr.openjdk.java.net/~iignatyev/8202812/webrev.00/index.html >>>>> testing: :vmTestbase_vm_compiler test group >>>>> Thanks, >>>>> -- Igor > From per.liden at oracle.com Fri Jun 1 21:41:04 2018 From: per.liden at oracle.com (Per Liden) Date: Fri, 1 Jun 2018 23:41:04 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) Message-ID: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Hi, Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) Please see the JEP for more information about the project. The JEP is currently in state "Proposed to Target" for JDK 11. https://bugs.openjdk.java.net/browse/JDK-8197831 Additional information in can also be found on the ZGC project wiki. https://wiki.openjdk.java.net/display/zgc/Main Webrevs ------- To make this easier to review, we've divided the change into two webrevs. * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master This patch contains the actual ZGC implementation, the new unit tests and other changes needed in HotSpot. * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing This patch contains changes to existing tests needed by ZGC. Overview of Changes ------------------- Below follows a list of the files we add/modify in the master patch, with a short summary describing each group. * Build support - Making ZGC an optional feature. make/autoconf/hotspot.m4 make/hotspot/lib/JvmFeatures.gmk src/hotspot/share/utilities/macros.hpp * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not currently offer a way to easily break this out). src/hotspot/cpu/x86/x86.ad src/hotspot/cpu/x86/x86_64.ad * C2 - Things that can't be easily abstracted out into ZGC specific code, most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) condition. There should only be two logic changes (one in idealKit.cpp and one in node.cpp) that are still active when ZGC is disabled. We believe these are low risk changes and should not introduce any real change i behavior when using other GCs. src/hotspot/share/adlc/formssel.cpp src/hotspot/share/opto/* src/hotspot/share/compiler/compilerDirectives.hpp * General GC+Runtime - Registering ZGC as a collector. src/hotspot/share/gc/shared/* src/hotspot/share/runtime/vmStructs.cpp src/hotspot/share/runtime/vm_operations.hpp src/hotspot/share/prims/whitebox.cpp * GC thread local data - Increasing the size of data area by 32 bytes. src/hotspot/share/gc/shared/gcThreadLocalData.hpp * ZGC - The collector itself. src/hotspot/share/gc/z/* src/hotspot/cpu/x86/gc/z/* src/hotspot/os_cpu/linux_x86/gc/z/* test/hotspot/gtest/gc/z/* * JFR - Adding new event types. src/hotspot/share/jfr/* src/jdk.jfr/share/conf/jfr/* * Logging - Adding new log tags. src/hotspot/share/logging/* * Metaspace - Adding a friend declaration. src/hotspot/share/memory/metaspace.hpp * InstanceRefKlass - Adjustments for concurrent reference processing. src/hotspot/share/oops/instanceRefKlass.inline.hpp * vmSymbol - Disabled clone intrinsic for ZGC. src/hotspot/share/classfile/vmSymbols.cpp * Oop Verification - In four cases we disabled oop verification because it do not makes sense or is not applicable to a GC using load barriers. 
src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp src/hotspot/cpu/x86/stubGenerator_x86_64.cpp src/hotspot/share/compiler/oopMap.cpp src/hotspot/share/runtime/jniHandles.cpp * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. However, this will go away in the future, when we have the next iteration of C2's load barriers in place (aka "C2 late barrier insertion"). src/hotspot/share/runtime/stackValue.cpp * JVMTI - Adding an assert() to catch problems if the tagmap hashing is changed in the future. src/hotspot/share/prims/jvmtiTagMap.cpp * Legal - Adding copyright/license for 3rd party hash function used in ZHash. src/java.base/share/legal/c-libutl.md * SA - Adding basic ZGC support. src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* Testing ------- * Unit testing A number of new ZGC specific gtests have been added, in test/hotspot/gtest/gc/z/ * Regression testing No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} No new failures in Mach5, with ZGC disabled, tier{1,2,3} * Stress testing We have been continuously been running a number stress tests throughout the development, these include: specjbb2000 specjbb2005 specjbb2015 specjvm98 specjvm2008 dacapo2009 test/hotspot/jtreg/gc/stress/gcold test/hotspot/jtreg/gc/stress/systemgc test/hotspot/jtreg/gc/stress/gclocker test/hotspot/jtreg/gc/stress/gcbasher test/hotspot/jtreg/gc/stress/finalizer Kitchensink Thanks! /Per, Stefan & the ZGC team From magnus.ihse.bursie at oracle.com Sat Jun 2 06:45:50 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Sat, 2 Jun 2018 09:45:50 +0300 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: Message-ID: /Magnus > 1 juni 2018 kl. 23:53 skrev Erik Joelsson : > > We need to add compilation flags for disabling speculative execution to our native libraries and executables. In order to allow for users not affected by problems with speculative execution to run a JVM at full speed, we need to be able to ship two JVM libraries - one that is compiled with speculative execution enabled, and one that is compiled without. Note that this applies to the build time C++ flags, not the compiler in the JVM itself. Luckily adding these flags to the rest of the native libraries did not have a significant performance impact so there is no need for making it optional there. > > This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies them to all binaries except libjvm when available in the compiler. It defines a new jvm feature no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a new jvm variant "altserver" which is the same as server, but with this new feature added. > > For Oracle builds, we are changing the default for linux-x64 and windows-x64 to build both server and altserver, giving the choice to the user which JVM they want to use. If others would prefer this default, we could make it default in configure as well. > > The change in GensrcJFR.gmk fixes a newly introduced race that appears when building multiple jvm variants. 
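For readers unfamiliar with what compiler options of this kind do: the hardening targets patterns like the classic bounds-check-bypass (Spectre v1) gadget, where a speculatively executed out-of-bounds load can leak data through the cache. The fragment below is purely illustrative and is not part of the patch; it uses an explicit x86 lfence as a stand-in for the barriers that such options (Visual Studio's /Qspectre, for example) insert automatically.

#include <immintrin.h>   // _mm_lfence (x86)
#include <cstddef>

// Illustrative only: a bounds-checked table load with an explicit
// speculation barrier between the check and the dependent load, i.e. the
// kind of code the compiler-based mitigations emit for you.
unsigned char load_hardened(const unsigned char* table, std::size_t idx, std::size_t len) {
  if (idx < len) {
    _mm_lfence();        // stop speculation past the bounds check before idx is used
    return table[idx];
  }
  return 0;
}

int main() {
  unsigned char t[4] = {1, 2, 3, 4};
  return load_hardened(t, 2, 4);   // in range: returns 3; out-of-range indexes return 0
}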
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 > > /Erik > From magnus.ihse.bursie at oracle.com Sat Jun 2 06:47:24 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Sat, 2 Jun 2018 09:47:24 +0300 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: Message-ID: <082B6E27-E5A4-4B5C-9ACF-081B9C79472E@oracle.com> > 1 juni 2018 kl. 23:53 skrev Erik Joelsson : > > We need to add compilation flags for disabling speculative execution to our native libraries and executables. In order to allow for users not affected by problems with speculative execution to run a JVM at full speed, we need to be able to ship two JVM libraries - one that is compiled with speculative execution enabled, and one that is compiled without. Note that this applies to the build time C++ flags, not the compiler in the JVM itself. Luckily adding these flags to the rest of the native libraries did not have a significant performance impact so there is no need for making it optional there. > > This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies them to all binaries except libjvm when available in the compiler. It defines a new jvm feature no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a new jvm variant "altserver" which is the same as server, but with this new feature added. > > For Oracle builds, we are changing the default for linux-x64 and windows-x64 to build both server and altserver, giving the choice to the user which JVM they want to use. If others would prefer this default, we could make it default in configure as well. I think we should keep the praxis of only having server as default. > > The change in GensrcJFR.gmk fixes a newly introduced race that appears when building multiple jvm variants. Thanks for fixing this! > > Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 Looks good to me. /Magnus > > /Erik > From ioi.lam at oracle.com Mon Jun 4 07:12:48 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 4 Jun 2018 00:12:48 -0700 Subject: 8204267 - Generate comments in -XX:+PrintInterpreter to link to source code Message-ID: <07d64693-9e7b-8795-5c7a-f226b6c35610@oracle.com> Hi Folks, I've found it very hard to understand the assembler code printed by -XX:+PrintInterpreter. Since the C++ source code that generates the interpreter is fairly well documented, it might be a good idea to print out the source code along with the assembler code. I've done a quick proof-of-concept by hacking the "__" macro that's used throughout the CPU-dependent sources. Please let me know if you think this is worth doing. If so, I will try to finish it up and post a real RFR. Thanks - Ioi https://bugs.openjdk.java.net/browse/JDK-8204267 http://cr.openjdk.java.net/~iklam/misc/8204267-print-interpreter-comments.v00 http://cr.openjdk.java.net/~iklam/misc/8204267-print-interpreter-comments.v00/hs_interp.S Here are some examples (if the mailer screws up the long lines, please click the above link): ifeq? 153 ifeq? [0x00007f830cc93da0, 0x00007f830cc941c0] 1056 bytes mov??? (%rsp),%eax add??? $0x8,%rsp test?? %eax,%eax?????????? ;; 2353:?? __ testl(rax, rax); jne??? 0x00007f830cc94177? ;; 2354:?? __ jcc(j_not(cc), not_taken); mov??? -0x18(%rbp),%rcx??? ;; 2120:?? __ get_method(rcx); // rcx holds method mov??? 
-0x28(%rbp),%rax??? ;; 2121:?? __ profile_taken_branch(rax, rbx); // rax holds updated MDP, rbx test?? %rax,%rax je???? 0x00007f830cc93dd8 mov??? 0x8(%rax),%rbx add??? $0x1,%rbx sbb??? $0x0,%rbx mov??? %rbx,0x8(%rax) add??? 0x10(%rax),%rax mov??? %rax,-0x28(%rbp) movswl 0x1(%r13),%edx????? ;; 2133:???? __ load_signed_short(rdx, at_bcp(1)); bswap? %edx??????????????? ;; 2135:?? __ bswapl(rdx); sar??? $0x10,%edx????????? ;; 2138:???? __ sarl(rdx, 16); movslq %edx,%rdx?????????? ;; 2140:?? LP64_ONLY(__ movl2ptr(rdx, rdx)); add??? %rdx,%r13?????????? ;; 2164:?? __ addptr(rbcp, rdx); test?? %edx,%edx?????????? ;; 2179:???? __ testl(rdx, rdx);???????????? // check if forward or backward branch jns??? 0x00007f830cc93eec? ;; 2180:???? __ jcc(Assembler::positive, dispatch); // count only if backward branch mov??? 0x18(%rcx),%rax???? ;; 2184:???? __ movptr(rax, Address(rcx, Method::method_counters_offset())); test?? %rax,%rax?????????? ;; 2185:???? __ testptr(rax, rax); jne??? 0x00007f830cc93ead? ;; 2186:???? __ jcc(Assembler::notZero, has_counters); push?? %rdx??????????????? ;; 2187:???? __ push(rdx); push?? %rcx??????????????? ;; 2188:???? __ push(rcx); callq? 0x00007f830cc93e09? ;; 2189:???? __ call_VM(noreg, CAST_FROM_FN_PTR(address, InterpreterRuntime::build_method_counters), From david.holmes at oracle.com Mon Jun 4 07:15:55 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 4 Jun 2018 17:15:55 +1000 Subject: [hs] RFR (L): 8010319: Implementation of JEP 181: Nest-Based Access Control In-Reply-To: <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> References: <06529fc3-2eba-101b-9aee-2757893cb8fb@oracle.com> <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> Message-ID: <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> This update fixes some tests that were being excluded temporarily, but which can now run under nestmates. Incremental hotspot webrev: http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/ Full hotspot webrev: http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5/ Change summary: - test/hotspot/jtreg/vmTestbase/nsk/stress/except/except004.java (see: 8203046): The test expected an IllegalAccessException using reflection to access a private field of a nested class. That's actually a reflection bug that nestmates fixes. So relocated the Abra.PRIVATE_FIELD to a top-level package-access class Ext - all other tests involve class redefinition (see: 8199450): These tests were failing because the RedefineClassHelper, if passed a string containing "class A$B { ...}" doesn't define a nested class but a top-level class called A$B (which is perfectly legal). The redefinition itself would fail as the old class called A$B was a nested class and you're not allowed to change the nest attributes in class redefinition or transformation. The fix is simply to factor out the A$B class being redefined to being a top-level package access class in the same source file, called A_B, and with all references to "B" suitable adjusted. [The alternate fix considered would be to update the RedefineClassHelper and its use of the InMemoryJavaCompiler so that the tests would pass in a string like "class A { class B { ... } }" and then read back the bytes for A$B with nest attributes intact. But that is a non-trivial task and it isn't really significant that the classes used in these tests were in fact nested.] Thanks, David On 28/05/2018 9:20 PM, David Holmes wrote: > I've added some missing JNI tests for the basic access checks. 
Given JNI > ignores access it should go without saying that JNI access to nestmates > will work fine, but it doesn't hurt to verify that. > > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v4-incr/ > > Thanks, > David > > On 24/05/2018 7:48 PM, David Holmes wrote: >> Here are the further updates based on review comments and rebasing to >> get the vmTestbase updates for which some closed test changes now have >> to be applied to the open versions. >> >> Incremental hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3-incr/ >> >> >> Full hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3/ >> >> Change summary: >> >> test/hotspot/jtreg/ProblemList.txt >> - Exclude vmTestbase/nsk/stress/except/except004.java under 8203046 >> >> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/BasicTest.java >> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/PrivateMethodsTest.java >> - updated to work with new invokeinterface rules and nestmate changes >> - misc cleanups >> >> src/hotspot/share/runtime/reflection.?pp >> - rename verify_field_access to verify_member_access (it's always been >> mis-named and I nearly forgot to do this cleanup!) and rename >> field_class to member_class >> - add TRAPS to verify_member_access to allow use with CHECK macros >> >> src/hotspot/share/ci/ciField.cpp >> src/hotspot/share/classfile/classFileParser.cpp >> src/hotspot/share/interpreter/linkResolver.cpp >> - updated to use THREAD/CHECK with verify_member_access >> - for ciField rename thread to THREAD so it can be used with >> HAS_PENDING_EXCEPTION >> >> src/hotspot/share/oops/instanceKlass.cpp >> - use CHECK_false when calling nest_host() >> - fix indent near nestmate code >> >> src/hotspot/share/oops/instanceKlass.hpp >> - make has_nest_member private >> >> Thanks, >> David >> ----- >> >> On 23/05/2018 4:57 PM, David Holmes wrote: >>> Here are the updates so far in response to all the review comments. >>> >>> Incremental webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2-incr/ >>> >>> >>> Full webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2/ >>> >>> Change summary: >>> >>> test/runtime/Nestmates/reflectionAPI/* >>> - moved to java/lang/reflect/Nestmates >>> >>> src/hotspot/cpu/arm/templateTable_arm.cpp >>> - Fixed ARM invocation logic as provided by Boris. >>> >>> src/hotspot/share/interpreter/linkResolver.cpp >>> - expanded comment regarding exceptions >>> - Removed leftover debugging code >>> >>> src/hotspot/share/oops/instanceKlass.cpp >>> - Removed FIXME comments >>> - corrected incorrect comment >>> - Fixed if/else formatting >>> >>> src/hotspot/share/oops/instanceKlass.hpp >>> - removed unused debug method >>> >>> src/hotspot/share/oops/klassVtable.cpp >>> - added comment by request of Karen >>> >>> src/hotspot/share/runtime/reflection.cpp >>> - Removed FIXME comments >>> - expanded comments in places >>> - used CHECK_false >>> >>> Thanks, >>> David >>> >>> On 15/05/2018 10:52 AM, David Holmes wrote: >>>> This review is being spread across four groups: langtools, >>>> core-libs, hotspot and serviceability. This is the specific review >>>> thread for hotspot - webrev: >>>> >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v1/ >>>> >>>> See below for full details - including annotated full webrev guiding >>>> the review. >>>> >>>> The intent is to have JEP-181 targeted and integrated by the end of >>>> this month. 
>>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> The nestmates project (JEP-181) introduces new classfile attributes >>>> to identify classes and interfaces in the same nest, so that the VM >>>> can perform access control based on those attributes and so allow >>>> direct private access between nestmates without requiring javac to >>>> generate synthetic accessor methods. These access control changes >>>> also extend to core reflection and the MethodHandle.Lookup contexts. >>>> >>>> Direct private calls between nestmates requires a more general >>>> calling context than is permitted by invokespecial, and so the JVMS >>>> is updated to allow, and javac updated to use, invokevirtual and >>>> invokeinterface for private class and interface method calls >>>> respectively. These changed semantics also extend to MethodHandle >>>> findXXX operations. >>>> >>>> At this time we are only concerned with static nest definitions, >>>> which map to a top-level class/interface as the nest-host and all >>>> its nested types as nest-members. >>>> >>>> Please see the JEP for further details. >>>> >>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8046171 >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8010319 >>>> CSR: https://bugs.openjdk.java.net/browse/JDK-8197445 >>>> >>>> All of the specification changes have been previously been worked >>>> out by the Valhalla Project Expert Group, and the implementation >>>> reviewed by the various contributors and discussed on the >>>> valhalla-dev mailing list. >>>> >>>> Acknowledgments and contributions: Alex Buckley, Maurizio >>>> Cimadamore, Mandy Chung, Tobias Hartmann, Vladimir Ivanov, Karen >>>> Kinnear, Vladimir Kozlov, John Rose, Dan Smith, Serguei Spitsyn, >>>> Kumar Srinivasan >>>> >>>> Master webrev of all changes: >>>> >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.full.v1/ >>>> >>>> Annotated master webrev index: >>>> >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/jep181-webrev.html >>>> >>>> Performance: this is expected to be performance neutral in a general >>>> sense. Benchmarking and performance runs are about to start. >>>> >>>> Testing Discussion: >>>> ------------------ >>>> >>>> The testing for nestmates can be broken into four main groups: >>>> >>>> -? New tests specifically related to nestmates and currently in the >>>> runtime/Nestmates directory >>>> >>>> - New tests to complement existing tests by adding in testcases not >>>> previously expressible. >>>> ?? -? For example java/lang/invoke/SpecialInterfaceCall.java tests >>>> use of invokespecial for private interface methods and performing >>>> receiver typechecks, so we add >>>> java/lang/invoke/PrivateInterfaceCall.java to do similar tests for >>>> invokeinterface. >>>> >>>> -? New JVM TI tests to verify the spec changes related to nest >>>> attributes. >>>> >>>> -? Existing tests significantly affected by the nestmates changes, >>>> primarily: >>>> ??? -? runtime/SelectionResolution >>>> >>>> ??? In most cases the nestmate changes makes certain invocations >>>> that were illegal, legal (e.g. not requiring invokespecial to invoke >>>> private interface methods; allowing access to private members via >>>> reflection/Methodhandles that were previously not allowed). >>>> >>>> - Existing tests incidentally affected by the nestmate changes >>>> >>>> ?? This includes tests of things utilising class >>>> redefinition/retransformation to alter nested types but which >>>> unintentionally alter nest relationships (which is not permitted). 
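A simplified, standalone illustration of the access rule described earlier in this mail: private access between two classes is permitted when they resolve to the same nest host. The types and names below are hypothetical; the real checks sit in the VM's reflection and link resolution code in the webrev.

#include <iostream>
#include <string>

// Hypothetical stand-in for a loaded class: either it is its own nest host
// (a top-level class with no NestHost attribute) or it points at the class
// named by its NestHost attribute.
struct KlassSketch {
  std::string name;
  const KlassSketch* nest_host;   // 0 means "I am my own nest host"
  KlassSketch(const std::string& n, const KlassSketch* host) : name(n), nest_host(host) {}
  const KlassSketch* resolved_nest_host() const { return nest_host ? nest_host : this; }
};

// Simplified form of the rule: a private member of 'target' is accessible
// from 'accessor' if both resolve to the same nest host. (The real check
// also validates that the nest host actually lists the accessing class in
// its NestMembers attribute, and reports failures via exceptions.)
bool nestmate_private_access_allowed(const KlassSketch& accessor, const KlassSketch& target) {
  return accessor.resolved_nest_host() == target.resolved_nest_host();
}

int main() {
  KlassSketch outer("Outer", 0);
  KlassSketch inner("Outer$Inner", &outer);
  KlassSketch other("Other", 0);
  std::cout << nestmate_private_access_allowed(inner, outer) << " "        // 1: same nest
            << nestmate_private_access_allowed(other, outer) << std::endl; // 0: not nestmates
  return 0;
}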
>>>> >>>> There are still a number of tests problem-listed with issues filed >>>> against them to have them adapted to work with nestmates. Some of >>>> these are intended to be addressed in the short-term, while some >>>> (such as the runtime/SelectionResolution test changes) may not >>>> eventuate. >>>> >>>> - https://bugs.openjdk.java.net/browse/JDK-8203033 >>>> - https://bugs.openjdk.java.net/browse/JDK-8199450 >>>> - https://bugs.openjdk.java.net/browse/JDK-8196855 >>>> - https://bugs.openjdk.java.net/browse/JDK-8194857 >>>> - https://bugs.openjdk.java.net/browse/JDK-8187655 >>>> >>>> There is also further test work still to be completed (the JNI and >>>> JDI invocation tests): >>>> - https://bugs.openjdk.java.net/browse/JDK-8191117 >>>> which will continue in parallel with the main RFR. >>>> >>>> Pre-integration Testing: >>>> ??- General: >>>> ???? - Mach5: hs/jdk tier1,2 >>>> ???? - Mach5: hs-nightly (tiers 1 -3) >>>> ??- Targetted >>>> ??? - nashorn (for asm changes) >>>> ??? - hotspot: runtime/* >>>> ?????????????? serviceability/* >>>> ?????????????? compiler/* >>>> ?????????????? vmTestbase/* >>>> ??? - jdk: java/lang/invoke/* >>>> ?????????? java/lang/reflect/* >>>> ?????????? java/lang/instrument/* >>>> ?????????? java/lang/Class/* >>>> ?????????? java/lang/management/* >>>> ?? - langtools: tools/javac >>>> ??????????????? tools/javap >>>> From tobias.hartmann at oracle.com Mon Jun 4 07:29:55 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 4 Jun 2018 09:29:55 +0200 Subject: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832 In-Reply-To: References: <0ca282db-5b4f-607d-512a-a2183dbd4b73@oracle.com> Message-ID: <03cc7b39-7e28-2149-6c12-7ee53c1c2140@oracle.com> Hi Yumin, thanks for the details! On 01.06.2018 05:01, yumin qi wrote: > Thanks for your review/questions. First I would introduce some background of JWarmup application > on use scenario? and how we implement the interaction between application and scheduling (dispatch > system, DS). > > The load of each application is controlled by DS. The profiling data is collected against real > input data (so it mostly matches the application run in production environments, thus reduce the > deoptimization chance). When run with profiling data, application gets notification from DS when > compiling should start, application then calls API to notify JVM the hot methods recorded in file > can be compiled,? after the compilations, a message sent out to DS so DS will dispatch load into > this application. Could you elaborate a bit more on how the communication between the DS and the application works? A generic user application should not be aware of the pre-compilation, right? Let's assume I run a little Hello World program, when/how is pre-compilation triggered? Do I understand correctly that the profile information is only used for a "standalone" compilations of a method or is it also used for inlining? For example, if we have profile information for method B and method A inlines method B, does it use the profile information available for B when there is no profile information available for A? > A: During run with pre-compiled methods, deoptimization is only seen with null-check elimination so > it is not eliminated. The profile data is not updated and re-used. That is, after deoptimized, it > starts from interpreter mode like freshly loaded. Why do you only see deoptimizations with null-check elimination? 
A pre-compiled method can still have uncommon traps for reasons like an out of bounds array access or some loop predicate that does not hold, right? Thanks, Tobias From thomas.schatzl at oracle.com Mon Jun 4 08:27:56 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 04 Jun 2018 10:27:56 +0200 Subject: RFR: 8202547: Move G1 runtime calls used by generated code to G1BarrierSetRuntime In-Reply-To: <5AFEEB58.3080404@oracle.com> References: <5AFEEB58.3080404@oracle.com> Message-ID: <37a3dc47ab2f4f43efec523c59a96b1df6a60177.camel@oracle.com> Hi, On Fri, 2018-05-18 at 17:03 +0200, Erik ?sterlund wrote: > Hi, > > Generated code occasionally needs to call into the runtime for G1 > barriers. Some of these slow-path runtime calls are in > SharedRuntime, some are in G1BarrierSet. It would be nice to have > them collected in the same place. This patch move these slow-path > C++ functions for G1 into a new class, g1BarrierSetRuntime. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8202547/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8202547 > > Thanks, > /Erik looks good. Thomas From serguei.spitsyn at oracle.com Mon Jun 4 09:10:39 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Mon, 4 Jun 2018 02:10:39 -0700 Subject: [hs] RFR (L): 8010319: Implementation of JEP 181: Nest-Based Access Control In-Reply-To: <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> References: <06529fc3-2eba-101b-9aee-2757893cb8fb@oracle.com> <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> Message-ID: <87a9986a-06c3-0d0b-730d-932441af65ce@oracle.com> Hi David, It looks good. Nice approach to fix these tests. Thanks, Serguei On 6/4/18 00:15, David Holmes wrote: > This update fixes some tests that were being excluded temporarily, but > which can now run under nestmates. > > Incremental hotspot webrev: > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/ > > Full hotspot webrev: > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5/ > > Change summary: > > - test/hotspot/jtreg/vmTestbase/nsk/stress/except/except004.java (see: > 8203046): > > The test expected an IllegalAccessException using reflection to access > a private field of a nested class. That's actually a reflection bug > that nestmates fixes. So relocated the Abra.PRIVATE_FIELD to a > top-level package-access class Ext > > - all other tests involve class redefinition (see: 8199450): > > These tests were failing because the RedefineClassHelper, if passed a > string containing "class A$B { ...}" doesn't define a nested class but > a top-level class called A$B (which is perfectly legal). The > redefinition itself would fail as the old class called A$B was a > nested class and you're not allowed to change the nest attributes in > class redefinition or transformation. > > The fix is simply to factor out the A$B class being redefined to being > a top-level package access class in the same source file, called A_B, > and with all references to "B" suitable adjusted. > > [The alternate fix considered would be to update the > RedefineClassHelper and its use of the InMemoryJavaCompiler so that > the tests would pass in a string like "class A { class B { ... } }" > and then read back the bytes for A$B with nest attributes intact. But > that is a non-trivial task and it isn't really significant that the > classes used in these tests were in fact nested.] 
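A small standalone sketch of that constraint, using made-up types rather than the actual VM code: redefinition is rejected unless the nest attributes of the old and new class versions are identical, which is exactly why replacing the nested A$B with a top-level class of the same name fails.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical snapshot of a class's nest attributes as a redefinition
// check might see them: the NestHost class name (empty if absent) and the
// NestMembers list (empty if absent).
struct NestAttributes {
  std::string nest_host;
  std::vector<std::string> nest_members;
};

// Simplified form of the rule quoted above: redefinition/retransformation
// must leave the nest attributes untouched, otherwise it is rejected.
bool nest_attributes_unchanged(const NestAttributes& old_version,
                               const NestAttributes& new_version) {
  return old_version.nest_host == new_version.nest_host &&
         old_version.nest_members == new_version.nest_members;
}

int main() {
  NestAttributes old_nested;          // the original nested class A$B ...
  old_nested.nest_host = "A";         // ... carries NestHost = A
  NestAttributes new_top_level;       // a class compiled from "class A$B { ... }"
                                      // is top level: no NestHost at all
  std::cout << nest_attributes_unchanged(old_nested, new_top_level) << std::endl;  // 0 -> rejected
  return 0;
}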
> > > Thanks, > David > > > On 28/05/2018 9:20 PM, David Holmes wrote: >> I've added some missing JNI tests for the basic access checks. Given >> JNI ignores access it should go without saying that JNI access to >> nestmates will work fine, but it doesn't hurt to verify that. >> >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v4-incr/ >> >> >> Thanks, >> David >> >> On 24/05/2018 7:48 PM, David Holmes wrote: >>> Here are the further updates based on review comments and rebasing >>> to get the vmTestbase updates for which some closed test changes now >>> have to be applied to the open versions. >>> >>> Incremental hotspot webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3-incr/ >>> >>> >>> Full hotspot webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3/ >>> >>> Change summary: >>> >>> test/hotspot/jtreg/ProblemList.txt >>> - Exclude vmTestbase/nsk/stress/except/except004.java under 8203046 >>> >>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/BasicTest.java >>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/PrivateMethodsTest.java >>> >>> - updated to work with new invokeinterface rules and nestmate changes >>> - misc cleanups >>> >>> src/hotspot/share/runtime/reflection.?pp >>> - rename verify_field_access to verify_member_access (it's always >>> been mis-named and I nearly forgot to do this cleanup!) and rename >>> field_class to member_class >>> - add TRAPS to verify_member_access to allow use with CHECK macros >>> >>> src/hotspot/share/ci/ciField.cpp >>> src/hotspot/share/classfile/classFileParser.cpp >>> src/hotspot/share/interpreter/linkResolver.cpp >>> - updated to use THREAD/CHECK with verify_member_access >>> - for ciField rename thread to THREAD so it can be used with >>> HAS_PENDING_EXCEPTION >>> >>> src/hotspot/share/oops/instanceKlass.cpp >>> - use CHECK_false when calling nest_host() >>> - fix indent near nestmate code >>> >>> src/hotspot/share/oops/instanceKlass.hpp >>> - make has_nest_member private >>> >>> Thanks, >>> David >>> ----- >>> >>> On 23/05/2018 4:57 PM, David Holmes wrote: >>>> Here are the updates so far in response to all the review comments. >>>> >>>> Incremental webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2-incr/ >>>> >>>> >>>> Full webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2/ >>>> >>>> Change summary: >>>> >>>> test/runtime/Nestmates/reflectionAPI/* >>>> - moved to java/lang/reflect/Nestmates >>>> >>>> src/hotspot/cpu/arm/templateTable_arm.cpp >>>> - Fixed ARM invocation logic as provided by Boris. >>>> >>>> src/hotspot/share/interpreter/linkResolver.cpp >>>> - expanded comment regarding exceptions >>>> - Removed leftover debugging code >>>> >>>> src/hotspot/share/oops/instanceKlass.cpp >>>> - Removed FIXME comments >>>> - corrected incorrect comment >>>> - Fixed if/else formatting >>>> >>>> src/hotspot/share/oops/instanceKlass.hpp >>>> - removed unused debug method >>>> >>>> src/hotspot/share/oops/klassVtable.cpp >>>> - added comment by request of Karen >>>> >>>> src/hotspot/share/runtime/reflection.cpp >>>> - Removed FIXME comments >>>> - expanded comments in places >>>> - used CHECK_false >>>> >>>> Thanks, >>>> David >>>> >>>> On 15/05/2018 10:52 AM, David Holmes wrote: >>>>> This review is being spread across four groups: langtools, >>>>> core-libs, hotspot and serviceability. 
This is the specific review >>>>> thread for hotspot - webrev: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v1/ >>>>> >>>>> See below for full details - including annotated full webrev >>>>> guiding the review. >>>>> >>>>> The intent is to have JEP-181 targeted and integrated by the end >>>>> of this month. >>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>> The nestmates project (JEP-181) introduces new classfile >>>>> attributes to identify classes and interfaces in the same nest, so >>>>> that the VM can perform access control based on those attributes >>>>> and so allow direct private access between nestmates without >>>>> requiring javac to generate synthetic accessor methods. These >>>>> access control changes also extend to core reflection and the >>>>> MethodHandle.Lookup contexts. >>>>> >>>>> Direct private calls between nestmates requires a more general >>>>> calling context than is permitted by invokespecial, and so the >>>>> JVMS is updated to allow, and javac updated to use, invokevirtual >>>>> and invokeinterface for private class and interface method calls >>>>> respectively. These changed semantics also extend to MethodHandle >>>>> findXXX operations. >>>>> >>>>> At this time we are only concerned with static nest definitions, >>>>> which map to a top-level class/interface as the nest-host and all >>>>> its nested types as nest-members. >>>>> >>>>> Please see the JEP for further details. >>>>> >>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8046171 >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8010319 >>>>> CSR: https://bugs.openjdk.java.net/browse/JDK-8197445 >>>>> >>>>> All of the specification changes have been previously been worked >>>>> out by the Valhalla Project Expert Group, and the implementation >>>>> reviewed by the various contributors and discussed on the >>>>> valhalla-dev mailing list. >>>>> >>>>> Acknowledgments and contributions: Alex Buckley, Maurizio >>>>> Cimadamore, Mandy Chung, Tobias Hartmann, Vladimir Ivanov, Karen >>>>> Kinnear, Vladimir Kozlov, John Rose, Dan Smith, Serguei Spitsyn, >>>>> Kumar Srinivasan >>>>> >>>>> Master webrev of all changes: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.full.v1/ >>>>> >>>>> Annotated master webrev index: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/jep181-webrev.html >>>>> >>>>> Performance: this is expected to be performance neutral in a >>>>> general sense. Benchmarking and performance runs are about to start. >>>>> >>>>> Testing Discussion: >>>>> ------------------ >>>>> >>>>> The testing for nestmates can be broken into four main groups: >>>>> >>>>> -? New tests specifically related to nestmates and currently in >>>>> the runtime/Nestmates directory >>>>> >>>>> - New tests to complement existing tests by adding in testcases >>>>> not previously expressible. >>>>> ?? -? For example java/lang/invoke/SpecialInterfaceCall.java tests >>>>> use of invokespecial for private interface methods and performing >>>>> receiver typechecks, so we add >>>>> java/lang/invoke/PrivateInterfaceCall.java to do similar tests for >>>>> invokeinterface. >>>>> >>>>> -? New JVM TI tests to verify the spec changes related to nest >>>>> attributes. >>>>> >>>>> -? Existing tests significantly affected by the nestmates changes, >>>>> primarily: >>>>> ??? -? runtime/SelectionResolution >>>>> >>>>> ??? In most cases the nestmate changes makes certain invocations >>>>> that were illegal, legal (e.g. 
not requiring invokespecial to >>>>> invoke private interface methods; allowing access to private >>>>> members via reflection/Methodhandles that were previously not >>>>> allowed). >>>>> >>>>> - Existing tests incidentally affected by the nestmate changes >>>>> >>>>> ?? This includes tests of things utilising class >>>>> redefinition/retransformation to alter nested types but which >>>>> unintentionally alter nest relationships (which is not permitted). >>>>> >>>>> There are still a number of tests problem-listed with issues filed >>>>> against them to have them adapted to work with nestmates. Some of >>>>> these are intended to be addressed in the short-term, while some >>>>> (such as the runtime/SelectionResolution test changes) may not >>>>> eventuate. >>>>> >>>>> - https://bugs.openjdk.java.net/browse/JDK-8203033 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8199450 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8196855 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8194857 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8187655 >>>>> >>>>> There is also further test work still to be completed (the JNI and >>>>> JDI invocation tests): >>>>> - https://bugs.openjdk.java.net/browse/JDK-8191117 >>>>> which will continue in parallel with the main RFR. >>>>> >>>>> Pre-integration Testing: >>>>> ??- General: >>>>> ???? - Mach5: hs/jdk tier1,2 >>>>> ???? - Mach5: hs-nightly (tiers 1 -3) >>>>> ??- Targetted >>>>> ??? - nashorn (for asm changes) >>>>> ??? - hotspot: runtime/* >>>>> ?????????????? serviceability/* >>>>> ?????????????? compiler/* >>>>> ?????????????? vmTestbase/* >>>>> ??? - jdk: java/lang/invoke/* >>>>> ?????????? java/lang/reflect/* >>>>> ?????????? java/lang/instrument/* >>>>> ?????????? java/lang/Class/* >>>>> ?????????? java/lang/management/* >>>>> ?? - langtools: tools/javac >>>>> ??????????????? tools/javap >>>>> From stefan.johansson at oracle.com Mon Jun 4 09:35:19 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 4 Jun 2018 11:35:19 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: Hi guys, On 2018-06-01 23:41, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency > Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > ? This patch contains the actual ZGC implementation, the new unit tests > and other changes needed in HotSpot. > I've looked at the shared GC parts and some other related code that I feel comfortable reviewing. The changes looks good in general, but a few comments: src/hotspot/share/gc/shared/specialized_oop_closures.hpp 39 #include "gc/z/zOopClosures.specialized.hpp" Any good reason for not following the naming convention used by all other collectors? Same goes for zFlags.hpp in gc_globals.hpp. 
--- src/hotspot/share/jfr/metadata/metadata.xml 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 The fields in the events doesn't match the type definitions below, Statistics vs. Stat. ---- src/hotspot/share/oops/instanceRefKlass.inline.hpp 61 if (type == REF_PHANTOM) { 62 referent = HeapAccess::oop_load(java_lang_ref_Reference::referent_addr_raw(obj)); 63 } else { 64 referent = HeapAccess::oop_load(java_lang_ref_Reference::referent_addr_raw(obj)); 65 } Extract this to a load_referent() helper function to help readability. --- src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp src/hotspot/share/runtime/jniHandles.cpp #if INCLUDE_ZGC // ... if (!UseZGC) #endif The UseZGC-flag is always available so the #if INCLUDE_ZGC can be skipped in the two places where this pattern is used. --- Cheers, Stefan > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > ? This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, > with a short summary describing each group. > > * Build support - Making ZGC an optional feature. > > ? make/autoconf/hotspot.m4 > ? make/hotspot/lib/JvmFeatures.gmk > ? src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does > not currently offer a way to easily break this out). > > ? src/hotspot/cpu/x86/x86.ad > ? src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific > code, most of which is guarded behind a #if INCLUDE_ZGC and/or if > (UseZGC) condition. There should only be two logic changes (one in > idealKit.cpp and one in node.cpp) that are still active when ZGC is > disabled. We believe these are low risk changes and should not introduce > any real change i behavior when using other GCs. > > ? src/hotspot/share/adlc/formssel.cpp > ? src/hotspot/share/opto/* > ? src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. > > ? src/hotspot/share/gc/shared/* > ? src/hotspot/share/runtime/vmStructs.cpp > ? src/hotspot/share/runtime/vm_operations.hpp > ? src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > ? src/hotspot/share/gc/z/* > ? src/hotspot/cpu/x86/gc/z/* > ? src/hotspot/os_cpu/linux_x86/gc/z/* > ? test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > ? src/hotspot/share/jfr/* > ? src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > ? src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > ? src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > ? src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > ? src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification because > it do not makes sense or is not applicable to a GC using load barriers. > > ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > ? src/hotspot/share/compiler/oopMap.cpp > ? src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a > hack. 
However, this will go away in the future, when we have the next > iteration of C2's load barriers in place (aka "C2 late barrier insertion"). > > ? src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is > changed in the future. > > ? src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > ? src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > ? A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > ? We have been continuously been running a number stress tests > throughout the development, these include: > > ??? specjbb2000 > ??? specjbb2005 > ??? specjbb2015 > ??? specjvm98 > ??? specjvm2008 > ??? dacapo2009 > ??? test/hotspot/jtreg/gc/stress/gcold > ??? test/hotspot/jtreg/gc/stress/systemgc > ??? test/hotspot/jtreg/gc/stress/gclocker > ??? test/hotspot/jtreg/gc/stress/gcbasher > ??? test/hotspot/jtreg/gc/stress/finalizer > ??? Kitchensink > > > Thanks! > > /Per, Stefan & the ZGC team From erik.osterlund at oracle.com Mon Jun 4 10:01:42 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 12:01:42 +0200 Subject: RFR: 8202547: Move G1 runtime calls used by generated code to G1BarrierSetRuntime In-Reply-To: <37a3dc47ab2f4f43efec523c59a96b1df6a60177.camel@oracle.com> References: <5AFEEB58.3080404@oracle.com> <37a3dc47ab2f4f43efec523c59a96b1df6a60177.camel@oracle.com> Message-ID: <5B150E06.1020805@oracle.com> Hi Thomas, Thanks for the review. /Erik On 2018-06-04 10:27, Thomas Schatzl wrote: > Hi, > > On Fri, 2018-05-18 at 17:03 +0200, Erik ?sterlund wrote: >> Hi, >> >> Generated code occasionally needs to call into the runtime for G1 >> barriers. Some of these slow-path runtime calls are in >> SharedRuntime, some are in G1BarrierSet. It would be nice to have >> them collected in the same place. This patch move these slow-path >> C++ functions for G1 into a new class, g1BarrierSetRuntime. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8202547/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8202547 >> >> Thanks, >> /Erik > looks good. > > Thomas From david.holmes at oracle.com Mon Jun 4 11:03:57 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 4 Jun 2018 21:03:57 +1000 Subject: [hs] RFR (L): 8010319: Implementation of JEP 181: Nest-Based Access Control In-Reply-To: <87a9986a-06c3-0d0b-730d-932441af65ce@oracle.com> References: <06529fc3-2eba-101b-9aee-2757893cb8fb@oracle.com> <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> <87a9986a-06c3-0d0b-730d-932441af65ce@oracle.com> Message-ID: <4fa90fb7-84b9-3203-f39e-6ebdaaf60ebe@oracle.com> Thanks for the review Serguei. David On 4/06/2018 7:10 PM, serguei.spitsyn at oracle.com wrote: > Hi David, > > It looks good. > Nice approach to fix these tests. > > Thanks, > Serguei > > > On 6/4/18 00:15, David Holmes wrote: >> This update fixes some tests that were being excluded temporarily, but >> which can now run under nestmates. 
>> >> Incremental hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/ >> >> >> Full hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5/ >> >> Change summary: >> >> - test/hotspot/jtreg/vmTestbase/nsk/stress/except/except004.java (see: >> 8203046): >> >> The test expected an IllegalAccessException using reflection to access >> a private field of a nested class. That's actually a reflection bug >> that nestmates fixes. So relocated the Abra.PRIVATE_FIELD to a >> top-level package-access class Ext >> >> - all other tests involve class redefinition (see: 8199450): >> >> These tests were failing because the RedefineClassHelper, if passed a >> string containing "class A$B { ...}" doesn't define a nested class but >> a top-level class called A$B (which is perfectly legal). The >> redefinition itself would fail as the old class called A$B was a >> nested class and you're not allowed to change the nest attributes in >> class redefinition or transformation. >> >> The fix is simply to factor out the A$B class being redefined to being >> a top-level package access class in the same source file, called A_B, >> and with all references to "B" suitable adjusted. >> >> [The alternate fix considered would be to update the >> RedefineClassHelper and its use of the InMemoryJavaCompiler so that >> the tests would pass in a string like "class A { class B { ... } }" >> and then read back the bytes for A$B with nest attributes intact. But >> that is a non-trivial task and it isn't really significant that the >> classes used in these tests were in fact nested.] >> >> >> Thanks, >> David >> >> >> On 28/05/2018 9:20 PM, David Holmes wrote: >>> I've added some missing JNI tests for the basic access checks. Given >>> JNI ignores access it should go without saying that JNI access to >>> nestmates will work fine, but it doesn't hurt to verify that. >>> >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v4-incr/ >>> >>> >>> Thanks, >>> David >>> >>> On 24/05/2018 7:48 PM, David Holmes wrote: >>>> Here are the further updates based on review comments and rebasing >>>> to get the vmTestbase updates for which some closed test changes now >>>> have to be applied to the open versions. >>>> >>>> Incremental hotspot webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3-incr/ >>>> >>>> >>>> Full hotspot webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3/ >>>> >>>> Change summary: >>>> >>>> test/hotspot/jtreg/ProblemList.txt >>>> - Exclude vmTestbase/nsk/stress/except/except004.java under 8203046 >>>> >>>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/BasicTest.java >>>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/PrivateMethodsTest.java >>>> >>>> - updated to work with new invokeinterface rules and nestmate changes >>>> - misc cleanups >>>> >>>> src/hotspot/share/runtime/reflection.?pp >>>> - rename verify_field_access to verify_member_access (it's always >>>> been mis-named and I nearly forgot to do this cleanup!) 
and rename >>>> field_class to member_class >>>> - add TRAPS to verify_member_access to allow use with CHECK macros >>>> >>>> src/hotspot/share/ci/ciField.cpp >>>> src/hotspot/share/classfile/classFileParser.cpp >>>> src/hotspot/share/interpreter/linkResolver.cpp >>>> - updated to use THREAD/CHECK with verify_member_access >>>> - for ciField rename thread to THREAD so it can be used with >>>> HAS_PENDING_EXCEPTION >>>> >>>> src/hotspot/share/oops/instanceKlass.cpp >>>> - use CHECK_false when calling nest_host() >>>> - fix indent near nestmate code >>>> >>>> src/hotspot/share/oops/instanceKlass.hpp >>>> - make has_nest_member private >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> On 23/05/2018 4:57 PM, David Holmes wrote: >>>>> Here are the updates so far in response to all the review comments. >>>>> >>>>> Incremental webrev: >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2-incr/ >>>>> >>>>> >>>>> Full webrev: >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2/ >>>>> >>>>> Change summary: >>>>> >>>>> test/runtime/Nestmates/reflectionAPI/* >>>>> - moved to java/lang/reflect/Nestmates >>>>> >>>>> src/hotspot/cpu/arm/templateTable_arm.cpp >>>>> - Fixed ARM invocation logic as provided by Boris. >>>>> >>>>> src/hotspot/share/interpreter/linkResolver.cpp >>>>> - expanded comment regarding exceptions >>>>> - Removed leftover debugging code >>>>> >>>>> src/hotspot/share/oops/instanceKlass.cpp >>>>> - Removed FIXME comments >>>>> - corrected incorrect comment >>>>> - Fixed if/else formatting >>>>> >>>>> src/hotspot/share/oops/instanceKlass.hpp >>>>> - removed unused debug method >>>>> >>>>> src/hotspot/share/oops/klassVtable.cpp >>>>> - added comment by request of Karen >>>>> >>>>> src/hotspot/share/runtime/reflection.cpp >>>>> - Removed FIXME comments >>>>> - expanded comments in places >>>>> - used CHECK_false >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> On 15/05/2018 10:52 AM, David Holmes wrote: >>>>>> This review is being spread across four groups: langtools, >>>>>> core-libs, hotspot and serviceability. This is the specific review >>>>>> thread for hotspot - webrev: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v1/ >>>>>> >>>>>> See below for full details - including annotated full webrev >>>>>> guiding the review. >>>>>> >>>>>> The intent is to have JEP-181 targeted and integrated by the end >>>>>> of this month. >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> ----- >>>>>> >>>>>> The nestmates project (JEP-181) introduces new classfile >>>>>> attributes to identify classes and interfaces in the same nest, so >>>>>> that the VM can perform access control based on those attributes >>>>>> and so allow direct private access between nestmates without >>>>>> requiring javac to generate synthetic accessor methods. These >>>>>> access control changes also extend to core reflection and the >>>>>> MethodHandle.Lookup contexts. >>>>>> >>>>>> Direct private calls between nestmates requires a more general >>>>>> calling context than is permitted by invokespecial, and so the >>>>>> JVMS is updated to allow, and javac updated to use, invokevirtual >>>>>> and invokeinterface for private class and interface method calls >>>>>> respectively. These changed semantics also extend to MethodHandle >>>>>> findXXX operations. >>>>>> >>>>>> At this time we are only concerned with static nest definitions, >>>>>> which map to a top-level class/interface as the nest-host and all >>>>>> its nested types as nest-members. 
>>>>>> >>>>>> Please see the JEP for further details. >>>>>> >>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8046171 >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8010319 >>>>>> CSR: https://bugs.openjdk.java.net/browse/JDK-8197445 >>>>>> >>>>>> All of the specification changes have been previously been worked >>>>>> out by the Valhalla Project Expert Group, and the implementation >>>>>> reviewed by the various contributors and discussed on the >>>>>> valhalla-dev mailing list. >>>>>> >>>>>> Acknowledgments and contributions: Alex Buckley, Maurizio >>>>>> Cimadamore, Mandy Chung, Tobias Hartmann, Vladimir Ivanov, Karen >>>>>> Kinnear, Vladimir Kozlov, John Rose, Dan Smith, Serguei Spitsyn, >>>>>> Kumar Srinivasan >>>>>> >>>>>> Master webrev of all changes: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.full.v1/ >>>>>> >>>>>> Annotated master webrev index: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/jep181-webrev.html >>>>>> >>>>>> Performance: this is expected to be performance neutral in a >>>>>> general sense. Benchmarking and performance runs are about to start. >>>>>> >>>>>> Testing Discussion: >>>>>> ------------------ >>>>>> >>>>>> The testing for nestmates can be broken into four main groups: >>>>>> >>>>>> -? New tests specifically related to nestmates and currently in >>>>>> the runtime/Nestmates directory >>>>>> >>>>>> - New tests to complement existing tests by adding in testcases >>>>>> not previously expressible. >>>>>> ?? -? For example java/lang/invoke/SpecialInterfaceCall.java tests >>>>>> use of invokespecial for private interface methods and performing >>>>>> receiver typechecks, so we add >>>>>> java/lang/invoke/PrivateInterfaceCall.java to do similar tests for >>>>>> invokeinterface. >>>>>> >>>>>> -? New JVM TI tests to verify the spec changes related to nest >>>>>> attributes. >>>>>> >>>>>> -? Existing tests significantly affected by the nestmates changes, >>>>>> primarily: >>>>>> ??? -? runtime/SelectionResolution >>>>>> >>>>>> ??? In most cases the nestmate changes makes certain invocations >>>>>> that were illegal, legal (e.g. not requiring invokespecial to >>>>>> invoke private interface methods; allowing access to private >>>>>> members via reflection/Methodhandles that were previously not >>>>>> allowed). >>>>>> >>>>>> - Existing tests incidentally affected by the nestmate changes >>>>>> >>>>>> ?? This includes tests of things utilising class >>>>>> redefinition/retransformation to alter nested types but which >>>>>> unintentionally alter nest relationships (which is not permitted). >>>>>> >>>>>> There are still a number of tests problem-listed with issues filed >>>>>> against them to have them adapted to work with nestmates. Some of >>>>>> these are intended to be addressed in the short-term, while some >>>>>> (such as the runtime/SelectionResolution test changes) may not >>>>>> eventuate. >>>>>> >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8203033 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8199450 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8196855 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8194857 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8187655 >>>>>> >>>>>> There is also further test work still to be completed (the JNI and >>>>>> JDI invocation tests): >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8191117 >>>>>> which will continue in parallel with the main RFR. >>>>>> >>>>>> Pre-integration Testing: >>>>>> ??- General: >>>>>> ???? 
- Mach5: hs/jdk tier1,2 >>>>>> ???? - Mach5: hs-nightly (tiers 1 -3) >>>>>> ??- Targetted >>>>>> ??? - nashorn (for asm changes) >>>>>> ??? - hotspot: runtime/* >>>>>> ?????????????? serviceability/* >>>>>> ?????????????? compiler/* >>>>>> ?????????????? vmTestbase/* >>>>>> ??? - jdk: java/lang/invoke/* >>>>>> ?????????? java/lang/reflect/* >>>>>> ?????????? java/lang/instrument/* >>>>>> ?????????? java/lang/Class/* >>>>>> ?????????? java/lang/management/* >>>>>> ?? - langtools: tools/javac >>>>>> ??????????????? tools/javap >>>>>> > From magnus.ihse.bursie at oracle.com Mon Jun 4 11:20:09 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 4 Jun 2018 13:20:09 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <960f8433-4a25-4ed7-e5a3-645324840116@oracle.com> Hi Per, Please always include build-dev when proposing build changes. The changes in make/autoconf/hotspot.m4 looks a bit suspect. Since you are adding zgc to disabled jvm features, which always override enabled features, it will never be possible to enable zgc on any other platform. Is this intentional? Otherwise, I suggest instead adding zgc to the baseline set NON_MINIMAL_FEATURES for linux-x64. /Magnus On 2018-06-01 23:41, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable > Low-Latency Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > This patch contains the actual ZGC implementation, the new unit > tests and other changes needed in HotSpot. > > * ZGC Testing: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, > with a short summary describing each group. > > * Build support - Making ZGC an optional feature. > > make/autoconf/hotspot.m4 > make/hotspot/lib/JvmFeatures.gmk > src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc > does not currently offer a way to easily break this out). > > src/hotspot/cpu/x86/x86.ad > src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific > code, most of which is guarded behind a #if INCLUDE_ZGC and/or if > (UseZGC) condition. There should only be two logic changes (one in > idealKit.cpp and one in node.cpp) that are still active when ZGC is > disabled. We believe these are low risk changes and should not > introduce any real change i behavior when using other GCs. > > src/hotspot/share/adlc/formssel.cpp > src/hotspot/share/opto/* > src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. 
> > src/hotspot/share/gc/shared/* > src/hotspot/share/runtime/vmStructs.cpp > src/hotspot/share/runtime/vm_operations.hpp > src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > src/hotspot/share/gc/z/* > src/hotspot/cpu/x86/gc/z/* > src/hotspot/os_cpu/linux_x86/gc/z/* > test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > src/hotspot/share/jfr/* > src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification > because it do not makes sense or is not applicable to a GC using load > barriers. > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > src/hotspot/share/compiler/oopMap.cpp > src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a > hack. However, this will go away in the future, when we have the next > iteration of C2's load barriers in place (aka "C2 late barrier > insertion"). > > src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing > is changed in the future. > > src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > We have been continuously been running a number stress tests > throughout the development, these include: > > specjbb2000 > specjbb2005 > specjbb2015 > specjvm98 > specjvm2008 > dacapo2009 > test/hotspot/jtreg/gc/stress/gcold > test/hotspot/jtreg/gc/stress/systemgc > test/hotspot/jtreg/gc/stress/gclocker > test/hotspot/jtreg/gc/stress/gcbasher > test/hotspot/jtreg/gc/stress/finalizer > Kitchensink > > > Thanks! 
> > /Per, Stefan & the ZGC team From boris.ulasevich at bell-sw.com Mon Jun 4 11:58:23 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Mon, 4 Jun 2018 14:58:23 +0300 Subject: RFR (S) 8202705: ARM32 build crashes on long JavaThread offsets Message-ID: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> Hello all, Please review this patch to allow ARM32 MacroAssembler to handle updated JavaThread offsets: http://cr.openjdk.java.net/~bulasevich/8202705/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8202705 thank you, Boris From rwestrel at redhat.com Mon Jun 4 12:04:07 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Mon, 04 Jun 2018 14:04:07 +0200 Subject: RFR (S) 8202705: ARM32 build crashes on long JavaThread offsets In-Reply-To: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> References: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> Message-ID: Hi Boris, > http://cr.openjdk.java.net/~bulasevich/8202705/webrev.01/ That looks good to me. Roland. From per.liden at oracle.com Mon Jun 4 12:05:47 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 4 Jun 2018 14:05:47 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <960f8433-4a25-4ed7-e5a3-645324840116@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <960f8433-4a25-4ed7-e5a3-645324840116@oracle.com> Message-ID: Hi Magnus, Thanks for reviewing. On 06/04/2018 01:20 PM, Magnus Ihse Bursie wrote: > Hi Per, > > Please always include build-dev when proposing build changes. > > The changes in make/autoconf/hotspot.m4 looks a bit suspect. Since you > are adding zgc to disabled jvm features, which always override enabled > features, it will never be possible to enable zgc on any other platform. > Is this intentional? Otherwise, I suggest instead adding zgc to the > baseline set NON_MINIMAL_FEATURES for linux-x64. > Yes, that's intentional, since ZGC is only supported on linux-x64 and enabling ZGC on other platforms wouldn't build. When additional platforms gets supported, this check would of course also be updated to reflect that. cheers, Per > /Magnus > > > > On 2018-06-01 23:41, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> ? This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> ? This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> ? make/autoconf/hotspot.m4 >> ? make/hotspot/lib/JvmFeatures.gmk >> ? 
src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> ? src/hotspot/cpu/x86/x86.ad >> ? src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change i behavior when using other GCs. >> >> ? src/hotspot/share/adlc/formssel.cpp >> ? src/hotspot/share/opto/* >> ? src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> ? src/hotspot/share/gc/shared/* >> ? src/hotspot/share/runtime/vmStructs.cpp >> ? src/hotspot/share/runtime/vm_operations.hpp >> ? src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> ? src/hotspot/share/gc/z/* >> ? src/hotspot/cpu/x86/gc/z/* >> ? src/hotspot/os_cpu/linux_x86/gc/z/* >> ? test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> ? src/hotspot/share/jfr/* >> ? src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> ? src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> ? src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> ? src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> ? src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> ? src/hotspot/share/compiler/oopMap.cpp >> ? src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a >> hack. However, this will go away in the future, when we have the next >> iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> ? src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> ? src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> ? src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> ? A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> ? We have been continuously been running a number stress tests >> throughout the development, these include: >> >> ??? specjbb2000 >> ??? specjbb2005 >> ??? specjbb2015 >> ??? specjvm98 >> ??? specjvm2008 >> ??? dacapo2009 >> ??? test/hotspot/jtreg/gc/stress/gcold >> ??? test/hotspot/jtreg/gc/stress/systemgc >> ??? test/hotspot/jtreg/gc/stress/gclocker >> ??? 
test/hotspot/jtreg/gc/stress/gcbasher >> ??? test/hotspot/jtreg/gc/stress/finalizer >> ??? Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team > From shade at redhat.com Mon Jun 4 12:10:28 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 4 Jun 2018 14:10:28 +0200 Subject: RFR (S) 8202705: ARM32 build crashes on long JavaThread offsets In-Reply-To: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> References: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> Message-ID: On 06/04/2018 01:58 PM, Boris Ulasevich wrote: > Hello all, > > Please review this patch to allow ARM32 MacroAssembler to handle updated JavaThread offsets: > ? http://cr.openjdk.java.net/~bulasevich/8202705/webrev.01/ > ? https://bugs.openjdk.java.net/browse/JDK-8202705 Looks okay, but Rthread becomes misnomer in the middle of the method. Maybe like this? // Borrow the Rthread for alloc counter Register Ralloc = Rthread; Rthread = NULL; add(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset()); ... ... // Unborrow the Rthread sub(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset() Rthread = Ralloc; Ralloc = NULL; -Aleksey From erik.osterlund at oracle.com Mon Jun 4 12:20:53 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 14:20:53 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: Message-ID: <5B152EA5.2060903@oracle.com> Hi Roman, Oh man, I was hoping I would never have to look at jni fast get field again. Here goes... 93 speculative_load_pclist[count] = __ pc(); // Used by the segfault handler 94 __ access_load_at(type, IN_HEAP, noreg /* tos: r0/v0 */, Address(robj, roffset), noreg, noreg); 95 I see that here you load straight to tos, which is r0 for integral types. But r0 is also c_rarg0. So it seems like if after loading the primitive to r0, the subsequent safepoint counter check fails, then the code will revert back to a slowpath call, but this time with c_rarg0 clobbered, leading to a broken JNI env pointer being passed in to the slow path C function. That does not seem right to me. This JNI fast get field code is so error prone. :( Unfortunately, the proposed API can not load floating point numbers to anything but ToS, which seems like a problem in the jni fast get field code. I think to make this work properly, you need to load integral types to result and not ToS, so that you do not clobber r0, and rely on ToS being v0 for floating point types, which does not clobber r0. That way we can dance around the issue for now I suppose. Thanks, /Erik On 2018-05-14 22:23, Roman Kennke wrote: > Similar to x86 > (http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032114.html) > here comes the primitive heap access changes for aarch64: > > http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.00/ > > Some notes: > - array access used to compute base_obj + index, and then use indexed > addressing with base_offset. This means we cannot get base_obj in the > BarrierSetAssembler API, but we need that, e.g. for resolving the target > object via forwarding pointer. I changed (base_obj+index)+base_offset to > base_obj+(index+base_offset) in all the relevant places. > > - in jniFastGetField_aarch64.cpp, we are using a trick to ensure correct > ordering field-load with the load of the safepoint counter: we make them > address dependend. For float and double loads this meant to load the > value as int/long, and then later moving those into v0. 
This doesn't > work when going through the BarrierSetAssembler API: it loads straight > to v0. Instead I am inserting a LoadLoad membar for float/double (which > should be rare enough anyway). > > Other than that it's pretty much analogous to x86. > > Testing: no regressions in hotspot/tier1 > > Can I please get a review? > > Thanks, Roman > From erik.osterlund at oracle.com Mon Jun 4 12:38:33 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 14:38:33 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> Message-ID: <5B1532C9.4070206@oracle.com> Hi Roman, 42 43 virtual void obj_equals(MacroAssembler* masm, DecoratorSet decorators, 44 Register obj1, Register obj2); 45 I don't think we need to pass in any decorators here. Perhaps one day there will be some important semantic property to deal with, but today I do not think there are any properties we care about, except possibly AS_RAW, but that would never propagate into the BarrierSetAssembler anyway. On that topic, I noticed that today we do the raw version of e.g. load_heap_oop inside of the BarrierSetAssembler, and to use it you would call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a different way (cmpoop_raw in the macro assembler). I think it would be ideal if we could do it the same way, which would involve calling cmpoop with AS_RAW to get a raw oop comparison, residing in BarrierSetAssembler, with the usual hardwiring in the corresponding macro assembler function when it observes AS_RAW. So it would look something like this: void cmpoop(Register src1, Address src2, DecoratorSet decorators = AS_NORMAL); What do you think? Thanks, /Erik On 2018-05-14 21:19, Roman Kennke wrote: > Similar to what's done in the runtime already, GCs might need a say > about object equality (e.g. Shenandoah GC). This adds the required > abstraction to x86 and aarch64 assembler code. > > In x86 it ends up a bit ugly because of the existing variations of > cmpoop() which take several combinations of Register, Address and > jobject as argument, and even worse, varies between 64 and 32 bit builds. > > In aarch64, I added the MacroAssembler::cmpoop() indirection to make it > more like the x86 implementation. > > http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.00/ > > Passes hotspot/tier1 tests for x86_64/x86_32/aarch64 > > Can I please get a review? > Thanks, Roman > From per.liden at oracle.com Mon Jun 4 12:48:52 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 4 Jun 2018 14:48:52 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <08ceaa5d-bb3c-cff4-d576-fbe121411205@oracle.com> Hi Stefan, Thanks for reviewing. See comments inline below. I'm linking to fixes in the zgc/zgc repo, and will let more comment come in before creating a new webrev. On 06/04/2018 11:35 AM, Stefan Johansson wrote: > Hi guys, > > On 2018-06-01 23:41, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. 
>> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> > I've looked at the shared GC parts and some other related code that I > feel comfortable reviewing. The changes look good in general, but a few > comments: > > src/hotspot/share/gc/shared/specialized_oop_closures.hpp > 39 #include "gc/z/zOopClosures.specialized.hpp" > > Any good reason for not following the naming convention used by all > other collectors? Same goes for zFlags.hpp in gc_globals.hpp. Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/db90a819d4ab http://hg.openjdk.java.net/zgc/zgc/rev/e019298c2791 > --- > > src/hotspot/share/jfr/metadata/metadata.xml > lines 897-919 (the quoted XML for the event and type definitions was stripped by the mail archive; only a relation="GcId" attribute fragment survives) > > The fields in the events don't match the type definitions below, > Statistics vs. Stat. Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/01db0539fa1c > ---- > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > 61 if (type == REF_PHANTOM) { > 62 referent = HeapAccess<ON_PHANTOM_OOP_REF | AS_NO_KEEPALIVE>::oop_load(java_lang_ref_Reference::referent_addr_raw(obj)); > > 63 } else { > 64 referent = HeapAccess<ON_WEAK_OOP_REF | AS_NO_KEEPALIVE>::oop_load(java_lang_ref_Reference::referent_addr_raw(obj)); > > 65 } > > Extract this to a load_referent() helper function to help readability. Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/d64cd7cb3d13 > --- > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > src/hotspot/share/runtime/jniHandles.cpp > #if INCLUDE_ZGC > // ... > if (!UseZGC) > #endif > > The UseZGC flag is always available, so the #if INCLUDE_ZGC can be > skipped in the two places where this pattern is used. Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 Thanks! /Per > --- > > Cheers, > Stefan > > >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> make/autoconf/hotspot.m4 >> make/hotspot/lib/JvmFeatures.gmk >> src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> src/hotspot/cpu/x86/x86.ad >> src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change in behavior when using other GCs. >> >> src/hotspot/share/adlc/formssel.cpp >> src/hotspot/share/opto/* >> 
src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> ?? src/hotspot/share/gc/shared/* >> ?? src/hotspot/share/runtime/vmStructs.cpp >> ?? src/hotspot/share/runtime/vm_operations.hpp >> ?? src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> ?? src/hotspot/share/gc/z/* >> ?? src/hotspot/cpu/x86/gc/z/* >> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >> ?? test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> ?? src/hotspot/share/jfr/* >> ?? src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> ?? src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> ?? src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> ?? src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> ?? src/hotspot/share/compiler/oopMap.cpp >> ?? src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a >> hack. However, this will go away in the future, when we have the next >> iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> ?? src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> ?? src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> ?? A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> ?? We have been continuously been running a number stress tests >> throughout the development, these include: >> >> ???? specjbb2000 >> ???? specjbb2005 >> ???? specjbb2015 >> ???? specjvm98 >> ???? specjvm2008 >> ???? dacapo2009 >> ???? test/hotspot/jtreg/gc/stress/gcold >> ???? test/hotspot/jtreg/gc/stress/systemgc >> ???? test/hotspot/jtreg/gc/stress/gclocker >> ???? test/hotspot/jtreg/gc/stress/gcbasher >> ???? test/hotspot/jtreg/gc/stress/finalizer >> ???? Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team From coleen.phillimore at oracle.com Mon Jun 4 13:13:13 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 4 Jun 2018 09:13:13 -0400 Subject: [hs] RFR (L): 8010319: Implementation of JEP 181: Nest-Based Access Control In-Reply-To: <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> References: <06529fc3-2eba-101b-9aee-2757893cb8fb@oracle.com> <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> Message-ID: The redefine test changes look fine. 
http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/test/hotspot/jtreg/runtime/appcds/redefineClass/RedefineRunningMethods_Shared.java.udiff.html I think there's a leading space in this file. thanks, Coleen On 6/4/18 3:15 AM, David Holmes wrote: > This update fixes some tests that were being excluded temporarily, but > which can now run under nestmates. > > Incremental hotspot webrev: > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/ > > Full hotspot webrev: > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5/ > > Change summary: > > - test/hotspot/jtreg/vmTestbase/nsk/stress/except/except004.java (see: > 8203046): > > The test expected an IllegalAccessException using reflection to access > a private field of a nested class. That's actually a reflection bug > that nestmates fixes. So relocated the Abra.PRIVATE_FIELD to a > top-level package-access class Ext > > - all other tests involve class redefinition (see: 8199450): > > These tests were failing because the RedefineClassHelper, if passed a > string containing "class A$B { ...}" doesn't define a nested class but > a top-level class called A$B (which is perfectly legal). The > redefinition itself would fail as the old class called A$B was a > nested class and you're not allowed to change the nest attributes in > class redefinition or transformation. > > The fix is simply to factor out the A$B class being redefined to being > a top-level package access class in the same source file, called A_B, > and with all references to "B" suitable adjusted. > > [The alternate fix considered would be to update the > RedefineClassHelper and its use of the InMemoryJavaCompiler so that > the tests would pass in a string like "class A { class B { ... } }" > and then read back the bytes for A$B with nest attributes intact. But > that is a non-trivial task and it isn't really significant that the > classes used in these tests were in fact nested.] > > > Thanks, > David > > > On 28/05/2018 9:20 PM, David Holmes wrote: >> I've added some missing JNI tests for the basic access checks. Given >> JNI ignores access it should go without saying that JNI access to >> nestmates will work fine, but it doesn't hurt to verify that. >> >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v4-incr/ >> >> >> Thanks, >> David >> >> On 24/05/2018 7:48 PM, David Holmes wrote: >>> Here are the further updates based on review comments and rebasing >>> to get the vmTestbase updates for which some closed test changes now >>> have to be applied to the open versions. >>> >>> Incremental hotspot webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3-incr/ >>> >>> >>> Full hotspot webrev: >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3/ >>> >>> Change summary: >>> >>> test/hotspot/jtreg/ProblemList.txt >>> - Exclude vmTestbase/nsk/stress/except/except004.java under 8203046 >>> >>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/BasicTest.java >>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/PrivateMethodsTest.java >>> >>> - updated to work with new invokeinterface rules and nestmate changes >>> - misc cleanups >>> >>> src/hotspot/share/runtime/reflection.?pp >>> - rename verify_field_access to verify_member_access (it's always >>> been mis-named and I nearly forgot to do this cleanup!) 
and rename >>> field_class to member_class >>> - add TRAPS to verify_member_access to allow use with CHECK macros >>> >>> src/hotspot/share/ci/ciField.cpp >>> src/hotspot/share/classfile/classFileParser.cpp >>> src/hotspot/share/interpreter/linkResolver.cpp >>> - updated to use THREAD/CHECK with verify_member_access >>> - for ciField rename thread to THREAD so it can be used with >>> HAS_PENDING_EXCEPTION >>> >>> src/hotspot/share/oops/instanceKlass.cpp >>> - use CHECK_false when calling nest_host() >>> - fix indent near nestmate code >>> >>> src/hotspot/share/oops/instanceKlass.hpp >>> - make has_nest_member private >>> >>> Thanks, >>> David >>> ----- >>> >>> On 23/05/2018 4:57 PM, David Holmes wrote: >>>> Here are the updates so far in response to all the review comments. >>>> >>>> Incremental webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2-incr/ >>>> >>>> >>>> Full webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2/ >>>> >>>> Change summary: >>>> >>>> test/runtime/Nestmates/reflectionAPI/* >>>> - moved to java/lang/reflect/Nestmates >>>> >>>> src/hotspot/cpu/arm/templateTable_arm.cpp >>>> - Fixed ARM invocation logic as provided by Boris. >>>> >>>> src/hotspot/share/interpreter/linkResolver.cpp >>>> - expanded comment regarding exceptions >>>> - Removed leftover debugging code >>>> >>>> src/hotspot/share/oops/instanceKlass.cpp >>>> - Removed FIXME comments >>>> - corrected incorrect comment >>>> - Fixed if/else formatting >>>> >>>> src/hotspot/share/oops/instanceKlass.hpp >>>> - removed unused debug method >>>> >>>> src/hotspot/share/oops/klassVtable.cpp >>>> - added comment by request of Karen >>>> >>>> src/hotspot/share/runtime/reflection.cpp >>>> - Removed FIXME comments >>>> - expanded comments in places >>>> - used CHECK_false >>>> >>>> Thanks, >>>> David >>>> >>>> On 15/05/2018 10:52 AM, David Holmes wrote: >>>>> This review is being spread across four groups: langtools, >>>>> core-libs, hotspot and serviceability. This is the specific review >>>>> thread for hotspot - webrev: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v1/ >>>>> >>>>> See below for full details - including annotated full webrev >>>>> guiding the review. >>>>> >>>>> The intent is to have JEP-181 targeted and integrated by the end >>>>> of this month. >>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>> The nestmates project (JEP-181) introduces new classfile >>>>> attributes to identify classes and interfaces in the same nest, so >>>>> that the VM can perform access control based on those attributes >>>>> and so allow direct private access between nestmates without >>>>> requiring javac to generate synthetic accessor methods. These >>>>> access control changes also extend to core reflection and the >>>>> MethodHandle.Lookup contexts. >>>>> >>>>> Direct private calls between nestmates requires a more general >>>>> calling context than is permitted by invokespecial, and so the >>>>> JVMS is updated to allow, and javac updated to use, invokevirtual >>>>> and invokeinterface for private class and interface method calls >>>>> respectively. These changed semantics also extend to MethodHandle >>>>> findXXX operations. >>>>> >>>>> At this time we are only concerned with static nest definitions, >>>>> which map to a top-level class/interface as the nest-host and all >>>>> its nested types as nest-members. >>>>> >>>>> Please see the JEP for further details. 
>>>>> >>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8046171 >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8010319 >>>>> CSR: https://bugs.openjdk.java.net/browse/JDK-8197445 >>>>> >>>>> All of the specification changes have been previously been worked >>>>> out by the Valhalla Project Expert Group, and the implementation >>>>> reviewed by the various contributors and discussed on the >>>>> valhalla-dev mailing list. >>>>> >>>>> Acknowledgments and contributions: Alex Buckley, Maurizio >>>>> Cimadamore, Mandy Chung, Tobias Hartmann, Vladimir Ivanov, Karen >>>>> Kinnear, Vladimir Kozlov, John Rose, Dan Smith, Serguei Spitsyn, >>>>> Kumar Srinivasan >>>>> >>>>> Master webrev of all changes: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.full.v1/ >>>>> >>>>> Annotated master webrev index: >>>>> >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/jep181-webrev.html >>>>> >>>>> Performance: this is expected to be performance neutral in a >>>>> general sense. Benchmarking and performance runs are about to start. >>>>> >>>>> Testing Discussion: >>>>> ------------------ >>>>> >>>>> The testing for nestmates can be broken into four main groups: >>>>> >>>>> -? New tests specifically related to nestmates and currently in >>>>> the runtime/Nestmates directory >>>>> >>>>> - New tests to complement existing tests by adding in testcases >>>>> not previously expressible. >>>>> ?? -? For example java/lang/invoke/SpecialInterfaceCall.java tests >>>>> use of invokespecial for private interface methods and performing >>>>> receiver typechecks, so we add >>>>> java/lang/invoke/PrivateInterfaceCall.java to do similar tests for >>>>> invokeinterface. >>>>> >>>>> -? New JVM TI tests to verify the spec changes related to nest >>>>> attributes. >>>>> >>>>> -? Existing tests significantly affected by the nestmates changes, >>>>> primarily: >>>>> ??? -? runtime/SelectionResolution >>>>> >>>>> ??? In most cases the nestmate changes makes certain invocations >>>>> that were illegal, legal (e.g. not requiring invokespecial to >>>>> invoke private interface methods; allowing access to private >>>>> members via reflection/Methodhandles that were previously not >>>>> allowed). >>>>> >>>>> - Existing tests incidentally affected by the nestmate changes >>>>> >>>>> ?? This includes tests of things utilising class >>>>> redefinition/retransformation to alter nested types but which >>>>> unintentionally alter nest relationships (which is not permitted). >>>>> >>>>> There are still a number of tests problem-listed with issues filed >>>>> against them to have them adapted to work with nestmates. Some of >>>>> these are intended to be addressed in the short-term, while some >>>>> (such as the runtime/SelectionResolution test changes) may not >>>>> eventuate. >>>>> >>>>> - https://bugs.openjdk.java.net/browse/JDK-8203033 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8199450 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8196855 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8194857 >>>>> - https://bugs.openjdk.java.net/browse/JDK-8187655 >>>>> >>>>> There is also further test work still to be completed (the JNI and >>>>> JDI invocation tests): >>>>> - https://bugs.openjdk.java.net/browse/JDK-8191117 >>>>> which will continue in parallel with the main RFR. >>>>> >>>>> Pre-integration Testing: >>>>> ??- General: >>>>> ???? - Mach5: hs/jdk tier1,2 >>>>> ???? - Mach5: hs-nightly (tiers 1 -3) >>>>> ??- Targetted >>>>> ??? - nashorn (for asm changes) >>>>> ??? 
- hotspot: runtime/* >>>>> ?????????????? serviceability/* >>>>> ?????????????? compiler/* >>>>> ?????????????? vmTestbase/* >>>>> ??? - jdk: java/lang/invoke/* >>>>> ?????????? java/lang/reflect/* >>>>> ?????????? java/lang/instrument/* >>>>> ?????????? java/lang/Class/* >>>>> ?????????? java/lang/management/* >>>>> ?? - langtools: tools/javac >>>>> ??????????????? tools/javap >>>>> From coleen.phillimore at oracle.com Mon Jun 4 13:14:11 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 4 Jun 2018 09:14:11 -0400 Subject: RFR (S) 8204195: Clean up macroAssembler.inline.hpp and other inline.hpp files included in .hpp files In-Reply-To: References: <335ac0b6-84f4-8ae5-ad32-0ba7d7260009@oracle.com> <4A4CB4E4-7340-4798-B4DA-D3D48F9A30DA@oracle.com> Message-ID: <02015219-4908-d07c-52ab-3e4e20ea2581@oracle.com> Thanks Vladimir! Coleen On 6/1/18 1:02 PM, Vladimir Kozlov wrote: > +1 > > Thanks, > Vladimir K > > On 5/31/18 4:52 PM, Jiangli Zhou wrote: >> The changes look good to me. >> >> Thanks, >> Jiangli >> >>> On May 31, 2018, at 4:10 PM, coleen.phillimore at oracle.com wrote: >>> >>> Summary: Moved macroAssembler.inline.hpp out of header file and >>> distributed to .cpp files that included them: ie. >>> c1_MacroAssembler.hpp and interp_masm.hpp. Also freeList.inline.hpp >>> and allocation.inline.hpp. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8204195.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8204195 >>> >>> Tested with mach5 hs-tier1,2 on Oracle platforms: linux-x64, >>> solaris-sparcv9, macosx-x64 and windows-x64.? Also tested zero and >>> aarch64 fastdebug builds, and linux-x64 without precompiled >>> headers.?? Please test other platforms, like arm32, ppc and s390!? I >>> think these are the last platform dependent inline files that are >>> included by .hpp files. >>> >>> Thanks, >>> Coleen >> From per.liden at oracle.com Mon Jun 4 13:02:03 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 4 Jun 2018 15:02:03 +0200 Subject: RFR: 8204168: Increase small heap sizes in tests to accommodate ZGC In-Reply-To: References: Message-ID: <91de456d-20df-5881-8740-1e5c44f7a238@oracle.com> Looks good to me. /Per On 05/31/2018 02:32 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to increase the heap size for tests that sets a > small heap size. > > http://cr.openjdk.java.net/~stefank/8204168/webrev.01 > https://bugs.openjdk.java.net/browse/JDK-8204168 > > This change is needed to test ZGC with these tests. ZGC doesn't use > compressed oops and has an allocation memory reserve for the GC thread, > and hence have a higher smallest heap size compared to other GCs in the > code base. > > There are some alternatives to this patch that we could consider (but I > prefer the suggested patch): > > 1) Disable these tests when running with ZGC > > 2) Split all these tests into two copies, one copy for ZGC and another > for the other GCs. > > I've been running all of these changes through tier{1,2,3} and most of > them has been run in tier{4,5,6}. > > Thanks, > StefanK From stefan.karlsson at oracle.com Mon Jun 4 13:00:51 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 4 Jun 2018 15:00:51 +0200 Subject: RFR: 8204168: Increase small heap sizes in tests to accommodate ZGC In-Reply-To: <91de456d-20df-5881-8740-1e5c44f7a238@oracle.com> References: <91de456d-20df-5881-8740-1e5c44f7a238@oracle.com> Message-ID: Thanks, Per. StefanK On 2018-06-04 15:02, Per Liden wrote: > Looks good to me. 
> > /Per > > On 05/31/2018 02:32 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to increase the heap size for tests that sets >> a small heap size. >> >> http://cr.openjdk.java.net/~stefank/8204168/webrev.01 >> https://bugs.openjdk.java.net/browse/JDK-8204168 >> >> This change is needed to test ZGC with these tests. ZGC doesn't use >> compressed oops and has an allocation memory reserve for the GC >> thread, and hence have a higher smallest heap size compared to other >> GCs in the code base. >> >> There are some alternatives to this patch that we could consider (but >> I prefer the suggested patch): >> >> 1) Disable these tests when running with ZGC >> >> 2) Split all these tests into two copies, one copy for ZGC and another >> for the other GCs. >> >> I've been running all of these changes through tier{1,2,3} and most of >> them has been run in tier{4,5,6}. >> >> Thanks, >> StefanK From boris.ulasevich at bell-sw.com Mon Jun 4 13:18:41 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Mon, 4 Jun 2018 16:18:41 +0300 Subject: RFR (S) 8202705: ARM32 build crashes on long JavaThread offsets In-Reply-To: References: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> Message-ID: <2cdb6d48-eedc-f057-32c9-5d4349cbb8cc@bell-sw.com> Hi Alexey, good point! But Rthread is not something we can redefine: > register_arm.hpp: > #define Rthread R10 Let us just leave comments and work with Ralloc: // Borrow the Rthread for alloc counter Register Ralloc = Rthread; add(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset())); ... (work with Ralloc) // Unborrow the Rthread sub(Rthread, Ralloc, in_bytes(JavaThread::allocated_bytes_offset())); Webrev: http://cr.openjdk.java.net/~bulasevich/8202705/webrev.02/ regards, Boris On 04.06.2018 15:10, Aleksey Shipilev wrote: > On 06/04/2018 01:58 PM, Boris Ulasevich wrote: >> Hello all, >> >> Please review this patch to allow ARM32 MacroAssembler to handle updated JavaThread offsets: >> ? http://cr.openjdk.java.net/~bulasevich/8202705/webrev.01/ >> ? https://bugs.openjdk.java.net/browse/JDK-8202705 > > Looks okay, but Rthread becomes misnomer in the middle of the method. > > Maybe like this? > > // Borrow the Rthread for alloc counter > Register Ralloc = Rthread; > Rthread = NULL; > add(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset()); > > ... > > ... > > // Unborrow the Rthread > sub(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset() > Rthread = Ralloc; > Ralloc = NULL; > > -Aleksey > From shade at redhat.com Mon Jun 4 13:22:39 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 4 Jun 2018 15:22:39 +0200 Subject: RFR (S) 8202705: ARM32 build crashes on long JavaThread offsets In-Reply-To: <2cdb6d48-eedc-f057-32c9-5d4349cbb8cc@bell-sw.com> References: <4840be99-ddbb-16f0-a9cb-31d7efcf0d02@bell-sw.com> <2cdb6d48-eedc-f057-32c9-5d4349cbb8cc@bell-sw.com> Message-ID: <8cb788d8-e303-2efe-ce58-3390c845a27f@redhat.com> On 06/04/2018 03:18 PM, Boris Ulasevich wrote: > Let us just leave comments and work with Ralloc: > > ? // Borrow the Rthread for alloc counter > ? Register Ralloc = Rthread; > ? add(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset())); > > ? ... (work with Ralloc) > > ? // Unborrow the Rthread > ? sub(Rthread, Ralloc, in_bytes(JavaThread::allocated_bytes_offset())); > > Webrev: > ? http://cr.openjdk.java.net/~bulasevich/8202705/webrev.02/ Looks better, thanks! 
-Aleksey From erik.helin at oracle.com Mon Jun 4 13:47:30 2018 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 4 Jun 2018 15:47:30 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> On 06/01/2018 11:41 PM, Per Liden wrote: > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master First of all: great work! Thanks for pushing so many patches upstream ahead of this patch, that makes the review of this patch so much easier :) I have looked at all the shared changes, but I can't be counted as a reviewer for the C2 stuff, I don't have enough experience in that area. I can see what the build changes are doing, but Magnus and/or Erik should probably review that part. Ok, now for my comments: Small nit in make/autoconf/hotspot.m4: + # Only enable ZGC on Linux x86 Could you please change the comment to say x86_64 or x64 (similar to other such comments in that file)? x86 is a bit ambiguous (could mean a 32-bit x86 CPU). Small nit in src/hotspot/share/compiler/oopMap.cpp: + if (ZGC_ONLY(!UseZGC &&) + ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || + !Universe::heap()->is_in_or_null(*loc))) { Do we really need ZGC_ONLY around !UseZGC && here? The code is in an #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC will just be false if ZGC isn't compiled, right? Or have I gotten this backwards? Regarding src/hotspot/share/gc/shared/gcName.hpp, should we introduce a GCName class so that we can limit the scope of the Z and NA symbols? (Then GCNameHelper::to_string could also be moved into that class). Could also be done as a follow-up patch (if so, please file a bug). Small nit in src/hotspot/share/jfr/metadata/metadata.xml: - \ No newline at end of file + Did you happen to add a newline here (I don't know why there should not be a newline, but the comment indicates so)? Small nit in src/hotspot/share/opto/node.hpp: virtual uint ideal_reg() const; + #ifndef PRODUCT Was the extra newline here added intentionally? In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like +#if INCLUDE_ZGC + #include "gc/z/c2/zGlobals.hpp" +#endif Or did I miss an include somewhere (wouldn't be the first time :)? In src/hotspot/share/prims/whitebox.cpp, do we need the #if INCLUDE_ZGC guards or is `if (UseZGC)` enough? Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we need the #if INCLUDE_ZGC guard? > This patch contains the actual ZGC implementation, the new unit tests > and other changes needed in HotSpot. > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing Again, great work here, particularly with upstreaming so many patches ahead of this one. I only have two small comments regarding the test changes: Small nit in test/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: + // G1 fails, just like ZGC, if en explicitly GC is done here. May I suggest s/en explicitly/an explicit/ ? Also maybe remove the comment `// forceGC();`, because it might later look like your comment commented out an earlier, pre-existing call to forceGC(). 
Same comment as above for instances003.java, instances001.java, instanceCounts001.java. In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you probably want to remove "@bug 4530538", the empty "@summary" and "@author Mandy Chung" Thanks, Erik From erik.osterlund at oracle.com Mon Jun 4 15:08:38 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 17:08:38 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: References: Message-ID: <5B1555F6.5090909@oracle.com> Hi Roman, I agree the GC should be able to perform arbitrary allocations the way it wants to. However, I would prefer to do it this way: http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ Your approach baked more responsibility into mem_allocate, having each GC remember it needs to call into allocate_from_tlab, which no longer always allocates from TLAB (depending on UseTLAB). In my approach, a new virtual member function, obj_allocate_raw() is called, which by default conditionally calls allocate_from_tlab() if UseTLAB is on, and otherwise calls mem_allocate, exactly the way it is today. Yet it allows the flexibility of overriding obj_allocate_raw() to allocate the object in any crazy way imaginable. What do you think about this approach? Does it work for you with Shenandoah? Thanks, /Erik On 2018-05-14 13:40, Roman Kennke wrote: > Currently, GCs only get to see (and modify) 'large' allocations, e.g. > allocations of TLAB blocks (via CH::allocate_new_tlab()) or non-TLAB > objects (via CH::mem_allocate()). I think GCs need to own the whole > allocation path, including allocations *from* TLABs. For example, > Shenandoah needs to allocate one extra word per object, and do some > per-object initialization to set up the forwarding pointer. > > More generally speaking, I believe the interface between GC and rest of > the world should just be 'allocate me a chunk of X words' and it should > be totally to the GCs decretion how and where it allocates that, how > objects are laid out and whatever pre- or post-processing needs to be done. > > For runtime we're already mostly there in the form of > CollectedHeap::mem_allocate(). However, currently, TLAB allocation is > done outside of this path, and GCs cannot currently control it. This > patch propose to move the TLAB allocation to inside of mem_allocate(). > On a more-or-less related not, it also makes > CollectedHeap::fill_with_object_impl() virtual, so that GCs can > intercept that too. (For example, Shenandoah needs to write a fwd ptr, > and then fill with one word less.) > > http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.00/ > > Passes: hotspot/tier1 tests > > Can I please get a review? > > Thanks, Roman > From rkennke at redhat.com Mon Jun 4 15:20:47 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 17:20:47 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <5B1555F6.5090909@oracle.com> References: <5B1555F6.5090909@oracle.com> Message-ID: <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> Yes, your approach is even better. So are you reviewing me now, or should I review you? :-P Roman > Hi Roman, > > I agree the GC should be able to perform arbitrary allocations the way > it wants to. 
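(For the archives, a rough sketch of the default path behind the obj_allocate_raw() idea described just below -- the signatures are assumed from this description rather than copied from the webrev:

  HeapWord* CollectedHeap::obj_allocate_raw(Klass* klass, size_t size,
                                            bool* gc_overhead_limit_was_exceeded, TRAPS) {
    if (UseTLAB) {
      // Common case: bump-pointer allocation in the current thread's TLAB;
      // allocate_from_tlab() is assumed to handle the refill/slow path itself.
      return allocate_from_tlab(klass, size, THREAD);
    }
    // TLABs disabled: go straight to the GC-specific allocation path.
    return mem_allocate(size, gc_overhead_limit_was_exceeded);
  }

A GC like Shenandoah could then override this single entry point to add its extra word and set up the forwarding pointer.)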
> However, I would prefer to do it this way: > http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ > > Your approach baked more responsibility into mem_allocate, having each > GC remember it needs to call into allocate_from_tlab, which no longer > always allocates from TLAB (depending on UseTLAB). > > In my approach, a new virtual member function, obj_allocate_raw() is > called, which by default conditionally calls allocate_from_tlab() if > UseTLAB is on, and otherwise calls mem_allocate, exactly the way it is > today. Yet it allows the flexibility of overriding obj_allocate_raw() to > allocate the object in any crazy way imaginable. > > What do you think about this approach? Does it work for you with > Shenandoah? > > Thanks, > /Erik > > On 2018-05-14 13:40, Roman Kennke wrote: >> Currently, GCs only get to see (and modify) 'large' allocations, e.g. >> allocations of TLAB blocks (via CH::allocate_new_tlab()) or non-TLAB >> objects (via CH::mem_allocate()). I think GCs need to own the whole >> allocation path, including allocations *from* TLABs. For example, >> Shenandoah needs to allocate one extra word per object, and do some >> per-object initialization to set up the forwarding pointer. >> >> More generally speaking, I believe the interface between GC and rest of >> the world should just be 'allocate me a chunk of X words' and it should >> be totally to the GCs decretion how and where it allocates that, how >> objects are laid out and whatever pre- or post-processing needs to be >> done. >> >> For runtime we're already mostly there in the form of >> CollectedHeap::mem_allocate(). However, currently, TLAB allocation is >> done outside of this path, and GCs cannot currently control it. This >> patch propose to move the TLAB allocation to inside of mem_allocate(). >> On a more-or-less related not, it also makes >> CollectedHeap::fill_with_object_impl() virtual, so that GCs can >> intercept that too. (For example, Shenandoah needs to write a fwd ptr, >> and then fill with one word less.) >> >> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.00/ >> >> Passes: hotspot/tier1 tests >> >> Can I please get a review? >> >> Thanks, Roman >> > From erik.osterlund at oracle.com Mon Jun 4 15:27:07 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 17:27:07 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: References: Message-ID: <5B155A4B.7020009@oracle.com> Hi Roman, On 2018-05-07 22:31, Roman Kennke wrote: > JDK-8199417 added better modularization for interpreter barriers. > Shenandoah and possibly future GCs also need barriers for primitive access. > > Some notes on implementation: > - float/double/long access produced some headaches for the following > reasons: > > - float and double would either take XMMRegister which is not > compatible with Register > - or load-from/store-to the floating point stack (see > MacroAssembler::load/store_float/double) > - long access on x86_32 would load-into/store-from 2 registers, or > else use a trick via the floating point stack to do atomic access > > None of this seemed easy/nice to do with the API. I helped myself by > accepting noreg as dst/src argument, which means the corresponding tos > (i.e. ltos, ftos, dtos) and the BSA would then access from/to > xmm0/float-stack in case of float/double or the double-reg/float-stack > in case of long/32bit, which is all that we ever need. 
It is indeed a bit painful that in hotspot, XMMRegister is not a Register (unlike the Graal implementation). And I think I agree that if it is indeed only ever needed by ToS, then this is the preferable solution to having two almost identicaly APIs - one for integral types and one for floating point types. It beats me though, that in this patch you do not address the jni fast get field optimization on x86. It is seemingly missing barriers now. Should probably make sure that one fits in as well. Fortunately, I think it should work out pretty well. > I'm passing MO_RELAXED to long access calls to hint that we want atomic > access or not. I hope that is ok. Absolutely. Thanks, /Erik > > Tested: hotspot/jtreg:tier1 > > http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.00/ > > Can I please get a review? > > Thanks, Roman > From rkennke at redhat.com Mon Jun 4 15:24:38 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 17:24:38 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: <5B152EA5.2060903@oracle.com> References: <5B152EA5.2060903@oracle.com> Message-ID: <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> Ok, right. Very good catch! This should do it, right? Sorry, I couldn't easily make an incremental diff: http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ Unfortunately, I cannot really test it because of: http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html Roman > Hi Roman, > > Oh man, I was hoping I would never have to look at jni fast get field > again. Here goes... > > ?93?? speculative_load_pclist[count] = __ pc();?? // Used by the > segfault handler > ?94?? __ access_load_at(type, IN_HEAP, noreg /* tos: r0/v0 */, > Address(robj, roffset), noreg, noreg); > ?95 > > I see that here you load straight to tos, which is r0 for integral > types. But r0 is also c_rarg0. So it seems like if after loading the > primitive to r0, the subsequent safepoint counter check fails, then the > code will revert back to a slowpath call, but this time with c_rarg0 > clobbered, leading to a broken JNI env pointer being passed in to the > slow path C function. That does not seem right to me. > > This JNI fast get field code is so error prone. :( > > Unfortunately, the proposed API can not load floating point numbers to > anything but ToS, which seems like a problem in the jni fast get field > code. > I think to make this work properly, you need to load integral types to > result and not ToS, so that you do not clobber r0, and rely on ToS being > v0 for floating point types, which does not clobber r0. That way we can > dance around the issue for now I suppose. > > Thanks, > /Erik > > On 2018-05-14 22:23, Roman Kennke wrote: >> Similar to x86 >> (http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032114.html) >> here comes the primitive heap access changes for aarch64: >> >> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.00/ >> >> Some notes: >> - array access used to compute base_obj + index, and then use indexed >> addressing with base_offset. This means we cannot get base_obj in the >> BarrierSetAssembler API, but we need that, e.g. for resolving the target >> object via forwarding pointer. I changed (base_obj+index)+base_offset to >> base_obj+(index+base_offset) in all the relevant places. >> >> - in jniFastGetField_aarch64.cpp, we are using a trick to ensure correct >> ordering field-load with the load of the safepoint counter: we make them >> address dependend. 
For float and double loads this meant to load the >> value as int/long, and then later moving those into v0. This doesn't >> work when going through the BarrierSetAssembler API: it loads straight >> to v0. Instead I am inserting a LoadLoad membar for float/double (which >> should be rare enough anyway). >> >> Other than that it's pretty much analogous to x86. >> >> Testing: no regressions in hotspot/tier1 >> >> Can I please get a review? >> >> Thanks, Roman >> > From sgehwolf at redhat.com Mon Jun 4 15:26:33 2018 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Mon, 04 Jun 2018 17:26:33 +0200 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter Message-ID: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> Hi, Could I please get a review of this change adding support for JEP-181 - a.k.a Nestmates - to Zero. This patch depends on David Holmes' Nestmates implementation via JDK-8010319. Thanks to David Holmes and Chris Phillips for their initial reviews prior to this RFR. Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ Testing: Zero on Linux-x86_64 with the following test set: test/jdk/java/lang/invoke/AccessControlTest.java test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java test/jdk/java/lang/invoke/PrivateInterfaceCall.java test/jdk/java/lang/invoke/SpecialInterfaceCall.java test/jdk/java/lang/reflect/Nestmates test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java test/hotspot/jtreg/runtime/Nestmates I cannot run this through the submit repo since the main Nestmates patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- images build on x86_64. Thoughts? Thanks, Severin From erik.osterlund at oracle.com Mon Jun 4 15:29:40 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 17:29:40 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> Message-ID: <5B155AE4.2090908@oracle.com> Hi Roman, I don't mind. I think this one is your patch, so I think I am still reviewing you. And I think it looks really good! :p /Erik On 2018-06-04 17:20, Roman Kennke wrote: > Yes, your approach is even better. So are you reviewing me now, or > should I review you? :-P > > Roman > > >> Hi Roman, >> >> I agree the GC should be able to perform arbitrary allocations the way >> it wants to. >> However, I would prefer to do it this way: >> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >> >> Your approach baked more responsibility into mem_allocate, having each >> GC remember it needs to call into allocate_from_tlab, which no longer >> always allocates from TLAB (depending on UseTLAB). >> >> In my approach, a new virtual member function, obj_allocate_raw() is >> called, which by default conditionally calls allocate_from_tlab() if >> UseTLAB is on, and otherwise calls mem_allocate, exactly the way it is >> today. Yet it allows the flexibility of overriding obj_allocate_raw() to >> allocate the object in any crazy way imaginable. >> >> What do you think about this approach? Does it work for you with >> Shenandoah? >> >> Thanks, >> /Erik >> >> On 2018-05-14 13:40, Roman Kennke wrote: >>> Currently, GCs only get to see (and modify) 'large' allocations, e.g. 
>>> allocations of TLAB blocks (via CH::allocate_new_tlab()) or non-TLAB >>> objects (via CH::mem_allocate()). I think GCs need to own the whole >>> allocation path, including allocations *from* TLABs. For example, >>> Shenandoah needs to allocate one extra word per object, and do some >>> per-object initialization to set up the forwarding pointer. >>> >>> More generally speaking, I believe the interface between GC and rest of >>> the world should just be 'allocate me a chunk of X words' and it should >>> be totally to the GCs decretion how and where it allocates that, how >>> objects are laid out and whatever pre- or post-processing needs to be >>> done. >>> >>> For runtime we're already mostly there in the form of >>> CollectedHeap::mem_allocate(). However, currently, TLAB allocation is >>> done outside of this path, and GCs cannot currently control it. This >>> patch propose to move the TLAB allocation to inside of mem_allocate(). >>> On a more-or-less related not, it also makes >>> CollectedHeap::fill_with_object_impl() virtual, so that GCs can >>> intercept that too. (For example, Shenandoah needs to write a fwd ptr, >>> and then fill with one word less.) >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.00/ >>> >>> Passes: hotspot/tier1 tests >>> >>> Can I please get a review? >>> >>> Thanks, Roman >>> > From glaubitz at physik.fu-berlin.de Mon Jun 4 15:45:36 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 4 Jun 2018 17:45:36 +0200 Subject: Trouble finding the definition for class BarrierSet Message-ID: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> Hi! I am currently looking at JDK-8203787 again, the problem still exists (see below). What I don't understand: Why is class BarrierSet apparently not declared in this case. I tried understanding the logic through which "barrierSet.hpp" ends up being included in library_call.cpp but I cannot find anything, the level of nesting is just too deep to be able to follow all transitive #include directions. Does anyone have an idea how to be able to track this down? Does Solaris do anything special with regards to "class BarrierSet" or is there a "os_cpu"- specific implementation/include etc? There must be a trivial bug on linux-sparc which prevents "class BarrierSet" from being defined here but I can't seem to see the problem. Adrian === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_library_call.o:\n" * For target hotspot_variant-server_libjvm_objs_library_call.o: (/bin/grep -v -e "^Note: including file:" < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_library_call.o.log || true) | /usr/bin/head -n 12 /srv/openjdk/jdk/src/hotspot/share/opto/library_call.cpp: In member function ?bool LibraryCallKit::inline_native_clone(bool)?: /srv/openjdk/jdk/src/hotspot/share/opto/library_call.cpp:4272:38: error: incomplete type ?BarrierSet? used in nested name specifier BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2(); ^~~~~~~~~~~ if test `/usr/bin/wc -l < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_library_call.o.log` -gt 12; then /bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target jdk_modules_java.desktop__the.java.desktop_batch:\n" * For target jdk_modules_java.desktop__the.java.desktop_batch: (/bin/grep -v -e "^Note: including file:" < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log || true) | /usr/bin/head -n 12 if test `/usr/bin/wc -l < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "\n* All command lines available in /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs.\n" * All command lines available in /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.joelsson at oracle.com Mon Jun 4 15:52:15 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 4 Jun 2018 08:52:15 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> Message-ID: Hello, On 2018-06-01 14:00, Aleksey Shipilev wrote: > On 06/01/2018 10:53 PM, Erik Joelsson wrote: >> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >> new jvm variant "altserver" which is the same as server, but with this new feature added. > I think the classic name for such product configuration is "hardened", no? I don't know. I'm open to suggestions on naming. /Erik > -Aleksey > From shade at redhat.com Mon Jun 4 15:56:57 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 4 Jun 2018 17:56:57 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <5B155AE4.2090908@oracle.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> Message-ID: <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>> I agree the GC should be able to perform arbitrary allocations the way >>> it wants to. >>> However, I would prefer to do it this way: >>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ This looks good. 
I think we better hide mem_allocate under "protected" now, so we would have: protected: // TLAB path inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); // Out-of-TLAB path virtual HeapWord* mem_allocate(size_t size, bool* gc_overhead_limit_was_exceeded) = 0; public: // Entry point virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, bool* gc_overhead_limit_was_exceeded, TRAPS); -Aleksey From erik.osterlund at oracle.com Mon Jun 4 16:06:46 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Mon, 4 Jun 2018 18:06:46 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> Message-ID: <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> Hi Aleksey, Sounds like a good idea. /Erik > On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: > > On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>> I agree the GC should be able to perform arbitrary allocations the way >>>> it wants to. >>>> However, I would prefer to do it this way: >>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ > > This looks good. I think we better hide mem_allocate under "protected" now, so we would have: > > protected: > // TLAB path > inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); > static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); > > // Out-of-TLAB path > virtual HeapWord* mem_allocate(size_t size, > bool* gc_overhead_limit_was_exceeded) = 0; > > public: > // Entry point > virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, > bool* gc_overhead_limit_was_exceeded, TRAPS); > > -Aleksey > From erik.osterlund at oracle.com Mon Jun 4 16:43:47 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 18:43:47 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> Message-ID: Hi Roman, On 2018-06-04 17:24, Roman Kennke wrote: > Ok, right. Very good catch! > > This should do it, right? Sorry, I couldn't easily make an incremental diff: > > http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ Unfortunately, I think there is one more problem for you. The signal handler is supposed to catch SIGSEGV caused by speculative loads shot from the fantastic jni fast get field code. But it currently expects an exact PC match: address JNI_FastGetField::find_slowcase_pc(address pc) { ? for (int i=0; i Unfortunately, I cannot really test it because of: > http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html That is unfortunate. If I were you, I would not dare to change anything in jni fast get field without testing it - it is very error prone. Thanks, /Erik > Roman > > >> Hi Roman, >> >> Oh man, I was hoping I would never have to look at jni fast get field >> again. Here goes... >> >> ?93?? speculative_load_pclist[count] = __ pc();?? // Used by the >> segfault handler >> ?94?? 
__ access_load_at(type, IN_HEAP, noreg /* tos: r0/v0 */, >> Address(robj, roffset), noreg, noreg); >> ?95 >> >> I see that here you load straight to tos, which is r0 for integral >> types. But r0 is also c_rarg0. So it seems like if after loading the >> primitive to r0, the subsequent safepoint counter check fails, then the >> code will revert back to a slowpath call, but this time with c_rarg0 >> clobbered, leading to a broken JNI env pointer being passed in to the >> slow path C function. That does not seem right to me. >> >> This JNI fast get field code is so error prone. :( >> >> Unfortunately, the proposed API can not load floating point numbers to >> anything but ToS, which seems like a problem in the jni fast get field >> code. >> I think to make this work properly, you need to load integral types to >> result and not ToS, so that you do not clobber r0, and rely on ToS being >> v0 for floating point types, which does not clobber r0. That way we can >> dance around the issue for now I suppose. >> >> Thanks, >> /Erik >> >> On 2018-05-14 22:23, Roman Kennke wrote: >>> Similar to x86 >>> (http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032114.html) >>> here comes the primitive heap access changes for aarch64: >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.00/ >>> >>> Some notes: >>> - array access used to compute base_obj + index, and then use indexed >>> addressing with base_offset. This means we cannot get base_obj in the >>> BarrierSetAssembler API, but we need that, e.g. for resolving the target >>> object via forwarding pointer. I changed (base_obj+index)+base_offset to >>> base_obj+(index+base_offset) in all the relevant places. >>> >>> - in jniFastGetField_aarch64.cpp, we are using a trick to ensure correct >>> ordering field-load with the load of the safepoint counter: we make them >>> address dependend. For float and double loads this meant to load the >>> value as int/long, and then later moving those into v0. This doesn't >>> work when going through the BarrierSetAssembler API: it loads straight >>> to v0. Instead I am inserting a LoadLoad membar for float/double (which >>> should be rare enough anyway). >>> >>> Other than that it's pretty much analogous to x86. >>> >>> Testing: no regressions in hotspot/tier1 >>> >>> Can I please get a review? >>> >>> Thanks, Roman >>> > From jesper.wilhelmsson at oracle.com Mon Jun 4 16:54:24 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Mon, 4 Jun 2018 18:54:24 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> Message-ID: <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> > On 4 Jun 2018, at 17:52, Erik Joelsson wrote: > > Hello, > > On 2018-06-01 14:00, Aleksey Shipilev wrote: >> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>> new jvm variant "altserver" which is the same as server, but with this new feature added. >> I think the classic name for such product configuration is "hardened", no? > I don't know. I'm open to suggestions on naming. "hardened" sounds good to me. The change looks good as well. 
/Jesper > > /Erik >> -Aleksey >> > From coleen.phillimore at oracle.com Mon Jun 4 17:12:24 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 4 Jun 2018 13:12:24 -0400 Subject: RFR (S) 8204237: Clean up incorrectly included .inline.hpp files from jvmciJavaClasses.hpp Message-ID: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> Summary: Reexpand macro to provide non-inline functions. open webrev at http://cr.openjdk.java.net/~coleenp/8204237.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8204237 Ran mach5 hs-tier1-2 on 4 Oracle platforms.?? There are no target-dependent changes. Thanks, Coleen From vladimir.kozlov at oracle.com Mon Jun 4 17:38:32 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 4 Jun 2018 10:38:32 -0700 Subject: RFR (S) 8204237: Clean up incorrectly included .inline.hpp files from jvmciJavaClasses.hpp In-Reply-To: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> References: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> Message-ID: <2463b3b2-317a-5e2e-4fb6-3cc113b8b576@oracle.com> Looks good to me. We need review from Labs and push it up-stream to Lab's jvmci repo. Thanks, Vladimir On 6/4/18 10:12 AM, coleen.phillimore at oracle.com wrote: > Summary: Reexpand macro to provide non-inline functions. > > open webrev at http://cr.openjdk.java.net/~coleenp/8204237.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8204237 > > Ran mach5 hs-tier1-2 on 4 Oracle platforms.?? There are no > target-dependent changes. > > Thanks, > Coleen From coleen.phillimore at oracle.com Mon Jun 4 18:41:05 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 4 Jun 2018 14:41:05 -0400 Subject: RFR (S) 8204237: Clean up incorrectly included .inline.hpp files from jvmciJavaClasses.hpp In-Reply-To: <2463b3b2-317a-5e2e-4fb6-3cc113b8b576@oracle.com> References: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> <2463b3b2-317a-5e2e-4fb6-3cc113b8b576@oracle.com> Message-ID: <5498e07a-8899-72fe-6809-5a8b12f696c5@oracle.com> Thanks Vladimir and for including the graal-dev mailing list. Coleen On 6/4/18 1:38 PM, Vladimir Kozlov wrote: > Looks good to me. > > We need review from Labs and push it up-stream to Lab's jvmci repo. > > Thanks, > Vladimir > > On 6/4/18 10:12 AM, coleen.phillimore at oracle.com wrote: >> Summary: Reexpand macro to provide non-inline functions. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8204237.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8204237 >> >> Ran mach5 hs-tier1-2 on 4 Oracle platforms.?? There are no >> target-dependent changes. >> >> Thanks, >> Coleen From rkennke at redhat.com Mon Jun 4 19:42:35 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 21:42:35 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> Message-ID: <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-06-04 17:24, Roman Kennke wrote: >> Ok, right. Very good catch! >> >> This should do it, right? Sorry, I couldn't easily make an incremental >> diff: >> >> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ > > Unfortunately, I think there is one more problem for you. > The signal handler is supposed to catch SIGSEGV caused by speculative > loads shot from the fantastic jni fast get field code. 
But it currently > expects an exact PC match: > > address JNI_FastGetField::find_slowcase_pc(address pc) { > ? for (int i=0; i ??? if (speculative_load_pclist[i] == pc) { > ????? return slowcase_entry_pclist[i]; > ??? } > ? } > ? return (address)-1; > } > > This means that the way this is written now, speculative_load_pclist > registers the __ pc() right before the access_load_at call. This puts > constraints on whatever is done inside of access_load_at to only > speculatively load on the first assembled instruction. > > If you imagine a scenario where you have a GC with Brooks pointers that > also uncommits memory (like Shenandoah I presume), then I imagine you > would need something more here. If you start with a forwarding pointer > load, then that can trap (which is probably caught by the exact PC > match). But then there will be a subsequent load of the value in the > to-space object, which will not be protected. But this is also loaded > speculatively (as the subsequent safepoint counter check could > invalidate the result), and could therefore crash the VM unless > protected, as the signal handler code fails to recognize this is a > speculative load from jni fast get field. > > I imagine the solution to this would be to let speculative_load_pclist > specify a range for fuzzy SIGSEGV matching in the signal handler, rather > than an exact PC (i.e. speculative_load_pclist_start and > speculative_load_pclist_end). That would give you enough freedom to use > Brooks pointers in there. Sometimes I wonder if the lengths we go to > maintain jni fast get field is *really* worth it. I are probably right in general. But I also think we are fine with Shenandoah. Both the fwd ptr load and the field load are constructed with the same base operand. If the oop is NULL (or invalid memory) it will blow up on fwdptr load just the same as it would blow up on field load. We maintain an invariant that the fwd ptr of a valid oop results in a valid (and equivalent) oop. I therefore think we are fine for now. Should a GC ever need anything else here, I'd worry about it then. Until this happens, let's just hope to never need to touch this code again ;-) >> Unfortunately, I cannot really test it because of: >> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >> > > That is unfortunate. If I were you, I would not dare to change anything > in jni fast get field without testing it - it is very error prone. Yeah. I guess I'll just wait with testing until this is resolved. Or else resolve it myself. Can I consider this change reviewed by you? Thanks, Roman > Thanks, > /Erik > >> Roman >> >> >>> Hi Roman, >>> >>> Oh man, I was hoping I would never have to look at jni fast get field >>> again. Here goes... >>> >>> ??93?? speculative_load_pclist[count] = __ pc();?? // Used by the >>> segfault handler >>> ??94?? __ access_load_at(type, IN_HEAP, noreg /* tos: r0/v0 */, >>> Address(robj, roffset), noreg, noreg); >>> ??95 >>> >>> I see that here you load straight to tos, which is r0 for integral >>> types. But r0 is also c_rarg0. So it seems like if after loading the >>> primitive to r0, the subsequent safepoint counter check fails, then the >>> code will revert back to a slowpath call, but this time with c_rarg0 >>> clobbered, leading to a broken JNI env pointer being passed in to the >>> slow path C function. That does not seem right to me. >>> >>> This JNI fast get field code is so error prone. 
:( >>> >>> Unfortunately, the proposed API can not load floating point numbers to >>> anything but ToS, which seems like a problem in the jni fast get field >>> code. >>> I think to make this work properly, you need to load integral types to >>> result and not ToS, so that you do not clobber r0, and rely on ToS being >>> v0 for floating point types, which does not clobber r0. That way we can >>> dance around the issue for now I suppose. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-05-14 22:23, Roman Kennke wrote: >>>> Similar to x86 >>>> (http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032114.html) >>>> >>>> here comes the primitive heap access changes for aarch64: >>>> >>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.00/ >>>> >>>> Some notes: >>>> - array access used to compute base_obj + index, and then use indexed >>>> addressing with base_offset. This means we cannot get base_obj in the >>>> BarrierSetAssembler API, but we need that, e.g. for resolving the >>>> target >>>> object via forwarding pointer. I changed >>>> (base_obj+index)+base_offset to >>>> base_obj+(index+base_offset) in all the relevant places. >>>> >>>> - in jniFastGetField_aarch64.cpp, we are using a trick to ensure >>>> correct >>>> ordering field-load with the load of the safepoint counter: we make >>>> them >>>> address dependend. For float and double loads this meant to load the >>>> value as int/long, and then later moving those into v0. This doesn't >>>> work when going through the BarrierSetAssembler API: it loads straight >>>> to v0. Instead I am inserting a LoadLoad membar for float/double (which >>>> should be rare enough anyway). >>>> >>>> Other than that it's pretty much analogous to x86. >>>> >>>> Testing: no regressions in hotspot/tier1 >>>> >>>> Can I please get a review? >>>> >>>> Thanks, Roman >>>> >> > From erik.osterlund at oracle.com Mon Jun 4 19:47:36 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 21:47:36 +0200 Subject: Trouble finding the definition for class BarrierSet In-Reply-To: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> References: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> Message-ID: Hi Adrian, Can't you just #include "gc/shared/barrierSet.hpp" in library_call.cpp? It is forward declared in oop.hpp, c1_LIRAssembler.hpp and? collectedHeap.hpp. Perhaps you got one of those accidentally. Thanks, /Erik On 2018-06-04 17:45, John Paul Adrian Glaubitz wrote: > Hi! > > I am currently looking at JDK-8203787 again, the problem still exists (see below). > > What I don't understand: Why is class BarrierSet apparently not declared in this > case. I tried understanding the logic through which "barrierSet.hpp" ends up > being included in library_call.cpp but I cannot find anything, the level > of nesting is just too deep to be able to follow all transitive #include > directions. > > Does anyone have an idea how to be able to track this down? Does Solaris do > anything special with regards to "class BarrierSet" or is there a "os_cpu"- > specific implementation/include etc? > > There must be a trivial bug on linux-sparc which prevents "class BarrierSet" > from being defined here but I can't seem to see the problem. 
> > Adrian > > === Output from failing command(s) repeated here === > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_library_call.o:\n" > * For target hotspot_variant-server_libjvm_objs_library_call.o: > (/bin/grep -v -e "^Note: including file:" < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_library_call.o.log || true) | /usr/bin/head -n 12 > /srv/openjdk/jdk/src/hotspot/share/opto/library_call.cpp: In member function ?bool LibraryCallKit::inline_native_clone(bool)?: > /srv/openjdk/jdk/src/hotspot/share/opto/library_call.cpp:4272:38: error: incomplete type ?BarrierSet? used in nested name specifier > BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2(); > ^~~~~~~~~~~ > if test `/usr/bin/wc -l < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_library_call.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target jdk_modules_java.desktop__the.java.desktop_batch:\n" > * For target jdk_modules_java.desktop__the.java.desktop_batch: > (/bin/grep -v -e "^Note: including file:" < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log || true) | /usr/bin/head -n 12 > if test `/usr/bin/wc -l < /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "\n* All command lines available in /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs.\n" > > * All command lines available in /srv/openjdk/jdk/build/linux-sparcv9-normal-server-release/make-support/failure-logs. > /usr/bin/printf "=== End of repeated output ===\n" > === End of repeated output === > From glaubitz at physik.fu-berlin.de Mon Jun 4 20:10:02 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 4 Jun 2018 22:10:02 +0200 Subject: Trouble finding the definition for class BarrierSet In-Reply-To: References: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> Message-ID: <322fa7dc-1d93-b7a0-16de-4d65d336e6c3@physik.fu-berlin.de> Hi Erik! On 06/04/2018 09:47 PM, Erik ?sterlund wrote: > Can't you just #include "gc/shared/barrierSet.hpp" in library_call.cpp? Hmm. Let me try. But why would it be necessary on linux-sparc and not anywhere else? > It is forward declared in oop.hpp, c1_LIRAssembler.hpp and? collectedHeap.hpp. Perhaps > you got one of those accidentally. You mean, I'm somehow pulling either of those headers in in library_call.cpp? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.joelsson at oracle.com Mon Jun 4 20:10:09 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 4 Jun 2018 13:10:09 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> Message-ID: <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ Renamed the new jvm variant to "hardened". 
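For anyone who wants to try this out, the variant is selected at configure time in the usual way, roughly

  bash configure --with-jvm-variants=hardened

plus whatever other options you normally pass. The libjvm flags behind it are the speculative execution mitigations the compilers provide (the GCC -mindirect-branch=thunk family, /Qspectre on Visual Studio); see the webrev for the exact set.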
/Erik On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >> >> Hello, >> >> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>>> new jvm variant "altserver" which is the same as server, but with this new feature added. >>> I think the classic name for such product configuration is "hardened", no? >> I don't know. I'm open to suggestions on naming. > "hardened" sounds good to me. > > The change looks good as well. > /Jesper > >> /Erik >>> -Aleksey >>> From erik.osterlund at oracle.com Mon Jun 4 20:16:05 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 22:16:05 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> Message-ID: Hi Roman, On 2018-06-04 21:42, Roman Kennke wrote: > Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-06-04 17:24, Roman Kennke wrote: >>> Ok, right. Very good catch! >>> >>> This should do it, right? Sorry, I couldn't easily make an incremental >>> diff: >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >> Unfortunately, I think there is one more problem for you. >> The signal handler is supposed to catch SIGSEGV caused by speculative >> loads shot from the fantastic jni fast get field code. But it currently >> expects an exact PC match: >> >> address JNI_FastGetField::find_slowcase_pc(address pc) { >> ? for (int i=0; i> ??? if (speculative_load_pclist[i] == pc) { >> ????? return slowcase_entry_pclist[i]; >> ??? } >> ? } >> ? return (address)-1; >> } >> >> This means that the way this is written now, speculative_load_pclist >> registers the __ pc() right before the access_load_at call. This puts >> constraints on whatever is done inside of access_load_at to only >> speculatively load on the first assembled instruction. >> >> If you imagine a scenario where you have a GC with Brooks pointers that >> also uncommits memory (like Shenandoah I presume), then I imagine you >> would need something more here. If you start with a forwarding pointer >> load, then that can trap (which is probably caught by the exact PC >> match). But then there will be a subsequent load of the value in the >> to-space object, which will not be protected. But this is also loaded >> speculatively (as the subsequent safepoint counter check could >> invalidate the result), and could therefore crash the VM unless >> protected, as the signal handler code fails to recognize this is a >> speculative load from jni fast get field. >> >> I imagine the solution to this would be to let speculative_load_pclist >> specify a range for fuzzy SIGSEGV matching in the signal handler, rather >> than an exact PC (i.e. speculative_load_pclist_start and >> speculative_load_pclist_end). That would give you enough freedom to use >> Brooks pointers in there. Sometimes I wonder if the lengths we go to >> maintain jni fast get field is *really* worth it. 
> I are probably right in general. But I also think we are fine with > Shenandoah. Both the fwd ptr load and the field load are constructed > with the same base operand. If the oop is NULL (or invalid memory) it > will blow up on fwdptr load just the same as it would blow up on field > load. We maintain an invariant that the fwd ptr of a valid oop results > in a valid (and equivalent) oop. I therefore think we are fine for now. > Should a GC ever need anything else here, I'd worry about it then. Until > this happens, let's just hope to never need to touch this code again ;-) No I'm afraid that is not safe. After loading the forwarding pointer, the thread could be preempted, then any number of GC cycles could pass, which means that the address that the at some point read forwarding pointer points to, could be uncommitted memory. In fact it is unsafe even without uncommitted memory. Because after resolving the jobject to some address in the heap, the thread could get preempted, and any number of GC cycles could pass, causing the forwarding pointer to be read from some address in the heap that no longer is the forwarding pointer of an object, but rather a random integer. This causes the second load to blow up, even without uncommitting memory. Here is an attempt at showing different things that can go wrong: obj = *jobject // preempted for N GC cycles, meaning obj might 1) be a valid pointer to an object, or 2) be a random pointer inside of the heap or outside of the heap forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random pointer, no longer representing the forwarding pointer, or 3) read a consistent forwarding pointer // preempted for N GC cycles, causing forward_pointer to point at pretty much anything result = *(forward_pointer + offset) // may 1) read a valid primitive value, if previous two loads were not messed up, or 2) read some random value that no longer corresponds to the object field, or 3) crash because either the forwarding pointer did point at something valid that subsequently got relocated and uncommitted before the load hits, or because the forwarding pointer never pointed to anything valid in the first place, because the forwarding pointer load read a random pointer due to the object relocating after the jobject was resolved. The summary is that both loads need protection due to how the thread in native state runs freely without necessarily caring about the GC running any number of GC cycles concurrently, making the memory super slippery, which risks crashing the VM without the proper protection. > >>> Unfortunately, I cannot really test it because of: >>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>> >> That is unfortunate. If I were you, I would not dare to change anything >> in jni fast get field without testing it - it is very error prone. > > Yeah. I guess I'll just wait with testing until this is resolved. Or > else resolve it myself. Yeah. > Can I consider this change reviewed by you? I think we should agree about the safety of doing this for Shenandoah in particular first. I still think we need the PC range as opposed to exact PC to be caught in the signal handler for this to be safe for your GC algorithm. Thanks, /Erik > Thanks, > Roman > > >> Thanks, >> /Erik >> >>> Roman >>> >>> >>>> Hi Roman, >>>> >>>> Oh man, I was hoping I would never have to look at jni fast get field >>>> again. Here goes... >>>> >>>> ??93?? speculative_load_pclist[count] = __ pc();?? // Used by the >>>> segfault handler >>>> ??94?? 
__ access_load_at(type, IN_HEAP, noreg /* tos: r0/v0 */, >>>> Address(robj, roffset), noreg, noreg); >>>> ??95 >>>> >>>> I see that here you load straight to tos, which is r0 for integral >>>> types. But r0 is also c_rarg0. So it seems like if after loading the >>>> primitive to r0, the subsequent safepoint counter check fails, then the >>>> code will revert back to a slowpath call, but this time with c_rarg0 >>>> clobbered, leading to a broken JNI env pointer being passed in to the >>>> slow path C function. That does not seem right to me. >>>> >>>> This JNI fast get field code is so error prone. :( >>>> >>>> Unfortunately, the proposed API can not load floating point numbers to >>>> anything but ToS, which seems like a problem in the jni fast get field >>>> code. >>>> I think to make this work properly, you need to load integral types to >>>> result and not ToS, so that you do not clobber r0, and rely on ToS being >>>> v0 for floating point types, which does not clobber r0. That way we can >>>> dance around the issue for now I suppose. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-05-14 22:23, Roman Kennke wrote: >>>>> Similar to x86 >>>>> (http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032114.html) >>>>> >>>>> here comes the primitive heap access changes for aarch64: >>>>> >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.00/ >>>>> >>>>> Some notes: >>>>> - array access used to compute base_obj + index, and then use indexed >>>>> addressing with base_offset. This means we cannot get base_obj in the >>>>> BarrierSetAssembler API, but we need that, e.g. for resolving the >>>>> target >>>>> object via forwarding pointer. I changed >>>>> (base_obj+index)+base_offset to >>>>> base_obj+(index+base_offset) in all the relevant places. >>>>> >>>>> - in jniFastGetField_aarch64.cpp, we are using a trick to ensure >>>>> correct >>>>> ordering field-load with the load of the safepoint counter: we make >>>>> them >>>>> address dependend. For float and double loads this meant to load the >>>>> value as int/long, and then later moving those into v0. This doesn't >>>>> work when going through the BarrierSetAssembler API: it loads straight >>>>> to v0. Instead I am inserting a LoadLoad membar for float/double (which >>>>> should be rare enough anyway). >>>>> >>>>> Other than that it's pretty much analogous to x86. >>>>> >>>>> Testing: no regressions in hotspot/tier1 >>>>> >>>>> Can I please get a review? >>>>> >>>>> Thanks, Roman >>>>> > From erik.osterlund at oracle.com Mon Jun 4 20:18:23 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 22:18:23 +0200 Subject: Trouble finding the definition for class BarrierSet In-Reply-To: <322fa7dc-1d93-b7a0-16de-4d65d336e6c3@physik.fu-berlin.de> References: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> <322fa7dc-1d93-b7a0-16de-4d65d336e6c3@physik.fu-berlin.de> Message-ID: <001e5b7f-5ce7-7620-7301-061b89102701@oracle.com> Hi Adrian, On 2018-06-04 22:10, John Paul Adrian Glaubitz wrote: > Hi Erik! > > On 06/04/2018 09:47 PM, Erik ?sterlund wrote: >> Can't you just #include "gc/shared/barrierSet.hpp" in library_call.cpp? > Hmm. Let me try. But why would it be necessary on linux-sparc and not > anywhere else? Not sure. But regardless of the case, that include should be in there anyway. >> It is forward declared in oop.hpp, c1_LIRAssembler.hpp and? collectedHeap.hpp. Perhaps >> you got one of those accidentally. 
> You mean, I'm somehow pulling either of those headers in in library_call.cpp? That sounds like the most natural explanation to me. Thanks, /Erik > Adrian From glaubitz at physik.fu-berlin.de Mon Jun 4 20:24:33 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 4 Jun 2018 22:24:33 +0200 Subject: Trouble finding the definition for class BarrierSet In-Reply-To: <001e5b7f-5ce7-7620-7301-061b89102701@oracle.com> References: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> <322fa7dc-1d93-b7a0-16de-4d65d336e6c3@physik.fu-berlin.de> <001e5b7f-5ce7-7620-7301-061b89102701@oracle.com> Message-ID: <2f999768-32e6-e862-c5b4-54e59d011e5e@physik.fu-berlin.de> On 06/04/2018 10:18 PM, Erik ?sterlund wrote: >> On 06/04/2018 09:47 PM, Erik ?sterlund wrote: >>> Can't you just #include "gc/shared/barrierSet.hpp" in library_call.cpp? >> Hmm. Let me try. But why would it be necessary on linux-sparc and not >> anywhere else? > > Not sure. But regardless of the case, that include should be in there anyway. That fixes it. However, I then get: /srv/openjdk/jdk/src/hotspot/share/opto/macroArrayCopy.cpp: In member function ?Node* PhaseMacroExpand::generate_arraycopy(ArrayCopyNode*, AllocateArrayNode*, Node**, MergeMemNode*, Node**, const TypePtr*, BasicType, Node*, Node*, Node*, Node*, Node*, bool, bool, RegionNode*)?: /srv/openjdk/jdk/src/hotspot/share/opto/macroArrayCopy.cpp:553:36: error: incomplete type ?BarrierSet? used in nested name specifier BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2(); ^~~~~~~~~~~ I really wonder where the other architectures are getting it from. There must be one feature/header etc that the other architectures are including but that's being used on linux-sparc. But I cannot figure out what we're missing. >>> It is forward declared in oop.hpp, c1_LIRAssembler.hpp and? collectedHeap.hpp. Perhaps >>> you got one of those accidentally. >> You mean, I'm somehow pulling either of those headers in in library_call.cpp? > > That sounds like the most natural explanation to me. But why would these forward declarations hurt? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From bob.vandette at oracle.com Mon Jun 4 20:34:10 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 4 Jun 2018 16:34:10 -0400 Subject: ARM port consolidation Message-ID: During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit ARM ports and contributed them to OpenJDK. These ports have been used for years in the embedded and mobile market, making them very stable and having the benefit of a single source base which can produce both 32 and 64-bit binaries. The downside of this contribution is that it resulted in two 64-bit ARM implementations being available in OpenJDK. I'd like to propose that we eliminate one of the 64-bit ARM ports and encourage everyone to enhance and support the remaining 32 and 64 bit ARM ports. This would avoid the creation of yet another port for these chip architectures. The reduction of competing ports will allow everyone to focus their attention on a single 64-bit port rather than diluting our efforts. This will result in a higher quality and a more performant implementation. The community at large (especially RedHat, BellSoft, Linaro and Cavium) have done a great job of enhancing and keeping the AArch64 port up to date with current and new Hotspot features. 
As a result, I propose that we standardize the 64-bit ARM implementation on this port. If there are no objections, I will file a JEP to remove the 64-bit ARM port sources that reside in jdk/open/src/hotspot/src/cpu/arm along with any build logic. This will leave the Oracle contributed 32-bit ARM port and the AArch64 64-bit ARM port. Let me know what you all think, Bob Vandette From glaubitz at physik.fu-berlin.de Mon Jun 4 20:42:09 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 4 Jun 2018 22:42:09 +0200 Subject: RFR: 8203787: Hotspot build broken on linux-sparc after 8202377 Message-ID: <8cfcfb72-4b01-62e5-f053-c57ff74ef8c1@physik.fu-berlin.de> Hi! Please review this minor change which partially fixes the Hotspot build on linux-sparc. It does not fully restore the build on linux-sparc because we're still suffering from JDK-8203301 Linux-sparc fails to build after JDK-8199712 (Flight Recorder), but I will get around fixing that in the near future as well. The webrev can be found in [1]. I'm pushing the change to the submit-jdk repository as well. Thanks, Adrian > [1] http://cr.openjdk.java.net/~glaubitz/8203787/webrev.01/ -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.osterlund at oracle.com Mon Jun 4 20:48:36 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 22:48:36 +0200 Subject: RFR: 8203787: Hotspot build broken on linux-sparc after 8202377 In-Reply-To: <8cfcfb72-4b01-62e5-f053-c57ff74ef8c1@physik.fu-berlin.de> References: <8cfcfb72-4b01-62e5-f053-c57ff74ef8c1@physik.fu-berlin.de> Message-ID: <72cfd8f2-d4ab-d229-e75f-f39a0b9dc3a8@oracle.com> Hi Adrian, The include you added in opto/library_call.cpp is not sorted correctly. Otherwise, it looks good, and I don't need another webrev. Thanks, /Erik On 2018-06-04 22:42, John Paul Adrian Glaubitz wrote: > Hi! > > Please review this minor change which partially fixes the Hotspot > build on linux-sparc. > > It does not fully restore the build on linux-sparc because we're still > suffering from JDK-8203301 Linux-sparc fails to build after JDK-8199712 > (Flight Recorder), but I will get around fixing that in the near future > as well. > > The webrev can be found in [1]. I'm pushing the change to the submit-jdk > repository as well. > > Thanks, > Adrian > >> [1] http://cr.openjdk.java.net/~glaubitz/8203787/webrev.01/ From rkennke at redhat.com Mon Jun 4 20:49:45 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 22:49:45 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> Message-ID: <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> Am 04.06.2018 um 22:16 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-06-04 21:42, Roman Kennke wrote: >> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-06-04 17:24, Roman Kennke wrote: >>>> Ok, right. Very good catch! >>>> >>>> This should do it, right? Sorry, I couldn't easily make an incremental >>>> diff: >>>> >>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>> Unfortunately, I think there is one more problem for you. 
>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>> loads shot from the fantastic jni fast get field code. But it currently >>> expects an exact PC match: >>> >>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>   for (int i = 0; i < count; i++) { >>>     if (speculative_load_pclist[i] == pc) { >>>       return slowcase_entry_pclist[i]; >>>     } >>>   } >>>   return (address)-1; >>> } >>> >>> This means that the way this is written now, speculative_load_pclist >>> registers the __ pc() right before the access_load_at call. This puts >>> constraints on whatever is done inside of access_load_at to only >>> speculatively load on the first assembled instruction. >>> >>> If you imagine a scenario where you have a GC with Brooks pointers that >>> also uncommits memory (like Shenandoah I presume), then I imagine you >>> would need something more here. If you start with a forwarding pointer >>> load, then that can trap (which is probably caught by the exact PC >>> match). But then there will be a subsequent load of the value in the >>> to-space object, which will not be protected. But this is also loaded >>> speculatively (as the subsequent safepoint counter check could >>> invalidate the result), and could therefore crash the VM unless >>> protected, as the signal handler code fails to recognize this is a >>> speculative load from jni fast get field. >>> >>> I imagine the solution to this would be to let speculative_load_pclist >>> specify a range for fuzzy SIGSEGV matching in the signal handler, rather >>> than an exact PC (i.e. speculative_load_pclist_start and >>> speculative_load_pclist_end). That would give you enough freedom to use >>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>> maintain jni fast get field is *really* worth it. >> You are probably right in general. But I also think we are fine with >> Shenandoah. Both the fwd ptr load and the field load are constructed >> with the same base operand. If the oop is NULL (or invalid memory) it >> will blow up on fwdptr load just the same as it would blow up on field >> load. We maintain an invariant that the fwd ptr of a valid oop results >> in a valid (and equivalent) oop. I therefore think we are fine for now. >> Should a GC ever need anything else here, I'd worry about it then. Until >> this happens, let's just hope to never need to touch this code again ;-) > No I'm afraid that is not safe. After loading the forwarding pointer, > the thread could be preempted, then any number of GC cycles could pass, > which means that the address that the previously read forwarding > pointer points to could be uncommitted memory. In fact it is unsafe > even without uncommitted memory. Because after resolving the jobject to > some address in the heap, the thread could get preempted, and any number > of GC cycles could pass, causing the forwarding pointer to be read from > some address in the heap that no longer is the forwarding pointer of an > object, but rather a random integer. This causes the second load to blow > up, even without uncommitting memory. 
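For context, the fast path that makes these loads speculative has roughly the following shape -- a simplified pseudo-C sketch of the stub that is emitted by MacroAssembler, not the actual code; safepoint_counter(), field_offset() and slow_get_int() are stand-in names used only for this illustration:

    jint fast_get_int(JNIEnv* env, jobject handle, jfieldID id) {
      unsigned cnt = safepoint_counter();             // read the global safepoint counter
      if (cnt & 1) {
        return slow_get_int(env, handle, id);         // odd counter: safepoint in progress, take the slow path
      }
      char* obj = *(char**)handle;                    // resolve the jobject to a raw object address
      jint value = *(jint*)(obj + field_offset(id));  // speculative load, may SIGSEGV
      if (safepoint_counter() != cnt) {
        return slow_get_int(env, handle, id);         // a safepoint intervened: discard the value, redo safely
      }
      return value;                                   // counter unchanged: the speculative load was valid
    }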
> > Here is an attempt at showing different things that can go wrong: > > obj = *jobject > // preempted for N GC cycles, meaning obj might 1) be a valid pointer to > an object, or 2) be a random pointer inside of the heap or outside of > the heap > > forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random > pointer, no longer representing the forwarding pointer, or 3) read a > consistent forwarding pointer > > // preempted for N GC cycles, causing forward_pointer to point at pretty > much anything > > result = *(forward_pointer + offset) // may 1) read a valid primitive > value, if previous two loads were not messed up, or 2) read some random > value that no longer corresponds to the object field, or 3) crash > because either the forwarding pointer did point at something valid that > subsequently got relocated and uncommitted before the load hits, or > because the forwarding pointer never pointed to anything valid in the > first place, because the forwarding pointer load read a random pointer > due to the object relocating after the jobject was resolved. > > The summary is that both loads need protection due to how the thread in > native state runs freely without necessarily caring about the GC running > any number of GC cycles concurrently, making the memory super slippery, > which risks crashing the VM without the proper protection. AWW WTF!? We are in native state in this code? It might be easier to just call bsa->resolve_for_read() (which emits the fwd ptr load), then issue another: speculative_load_pclist[count] = __ pc(); need to juggle with the counter and double-emit slowcase_entry_pclist, and all this conditionally for Shenandoah. Gaa. Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. Funny how we had this code in Shenandoah literally for years, and nobody's ever tripped over it. It's one of those cases where I almost suspect it's been done in Java1.0 when lots of JNI code was in use because some stuff couldn't be done in fast in Java, but nowadays doesn't really make a difference. *Sigh* >>>> Unfortunately, I cannot really test it because of: >>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>> >>>> >>> That is unfortunate. If I were you, I would not dare to change anything >>> in jni fast get field without testing it - it is very error prone. >> >> Yeah. I guess I'll just wait with testing until this is resolved. Or >> else resolve it myself. > > Yeah. > >> Can I consider this change reviewed by you? > > I think we should agree about the safety of doing this for Shenandoah in > particular first. I still think we need the PC range as opposed to exact > PC to be caught in the signal handler for this to be safe for your GC > algorithm. Yeah, I agree. I need to think this through a little bit. Thanks for pointing out this bug. I can already see nightly builds suddenly starting to fail over it, now that it's known :-) Roman From glaubitz at physik.fu-berlin.de Mon Jun 4 20:50:23 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 4 Jun 2018 22:50:23 +0200 Subject: RFR: 8203787: Hotspot build broken on linux-sparc after 8202377 In-Reply-To: <72cfd8f2-d4ab-d229-e75f-f39a0b9dc3a8@oracle.com> References: <8cfcfb72-4b01-62e5-f053-c57ff74ef8c1@physik.fu-berlin.de> <72cfd8f2-d4ab-d229-e75f-f39a0b9dc3a8@oracle.com> Message-ID: On 06/04/2018 10:48 PM, Erik ?sterlund wrote: > The include you added in opto/library_call.cpp is not sorted correctly. 
Otherwise, it looks good, and I don't need another webrev. Right, "g" comes before "j" ;). Will fix that. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From rkennke at redhat.com Mon Jun 4 21:14:34 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 23:14:34 +0200 Subject: Trouble finding the definition for class BarrierSet In-Reply-To: <2f999768-32e6-e862-c5b4-54e59d011e5e@physik.fu-berlin.de> References: <11a0ec6c-94e0-89ad-e30e-cedb7fe7ac4c@physik.fu-berlin.de> <322fa7dc-1d93-b7a0-16de-4d65d336e6c3@physik.fu-berlin.de> <001e5b7f-5ce7-7620-7301-061b89102701@oracle.com> <2f999768-32e6-e862-c5b4-54e59d011e5e@physik.fu-berlin.de> Message-ID: <93dab111-9acb-e339-4752-8b6c736a43e5@redhat.com> Am 04.06.2018 um 22:24 schrieb John Paul Adrian Glaubitz: > On 06/04/2018 10:18 PM, Erik ?sterlund wrote: >>> On 06/04/2018 09:47 PM, Erik ?sterlund wrote: >>>> Can't you just #include "gc/shared/barrierSet.hpp" in library_call.cpp? >>> Hmm. Let me try. But why would it be necessary on linux-sparc and not >>> anywhere else? >> >> Not sure. But regardless of the case, that include should be in there anyway. > > That fixes it. However, I then get: > > /srv/openjdk/jdk/src/hotspot/share/opto/macroArrayCopy.cpp: In member function ?Node* PhaseMacroExpand::generate_arraycopy(ArrayCopyNode*, AllocateArrayNode*, > Node**, MergeMemNode*, Node**, const TypePtr*, BasicType, Node*, Node*, Node*, Node*, Node*, bool, bool, RegionNode*)?: > /srv/openjdk/jdk/src/hotspot/share/opto/macroArrayCopy.cpp:553:36: error: incomplete type ?BarrierSet? used in nested name specifier > BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2(); > ^~~~~~~~~~~ > > I really wonder where the other architectures are getting it from. > > There must be one feature/header etc that the other architectures are > including but that's being used on linux-sparc. But I cannot figure out what > we're missing. precompiled headers maybe? If it's used in that .cpp file, include it in that .cpp file. Roman From rkennke at redhat.com Mon Jun 4 21:20:44 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 4 Jun 2018 23:20:44 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> Message-ID: Hi Aleksey, Erik, thanks for reviewing and helping with this! Moved mem_allocate() under protected: Incremental: http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ Good now? Thanks, Roman > Hi Aleksey, > > Sounds like a good idea. > > /Erik > >> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >> >> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>> it wants to. >>>>> However, I would prefer to do it this way: >>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >> >> This looks good. 
I think we better hide mem_allocate under "protected" now, so we would have: >> >> protected: >> // TLAB path >> inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >> static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >> >> // Out-of-TLAB path >> virtual HeapWord* mem_allocate(size_t size, >> bool* gc_overhead_limit_was_exceeded) = 0; >> >> public: >> // Entry point >> virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >> bool* gc_overhead_limit_was_exceeded, TRAPS); >> >> -Aleksey >> >
From erik.osterlund at oracle.com Mon Jun 4 21:21:44 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 23:21:44 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> Message-ID: Hi Roman, On 2018-06-04 22:49, Roman Kennke wrote: > Am 04.06.2018 um 22:16 schrieb Erik Österlund: >> Hi Roman, >> >> On 2018-06-04 21:42, Roman Kennke wrote: >>> Am 04.06.2018 um 18:43 schrieb Erik Österlund: >>>> Hi Roman, >>>> >>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>> Ok, right. Very good catch! >>>>> >>>>> This should do it, right? Sorry, I couldn't easily make an incremental >>>>> diff: >>>>> >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>> Unfortunately, I think there is one more problem for you. >>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>>> loads shot from the fantastic jni fast get field code. But it currently >>>> expects an exact PC match: >>>> >>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>   for (int i = 0; i < count; i++) { >>>>     if (speculative_load_pclist[i] == pc) { >>>>       return slowcase_entry_pclist[i]; >>>>     } >>>>   } >>>>   return (address)-1; >>>> } >>>> >>>> This means that the way this is written now, speculative_load_pclist >>>> registers the __ pc() right before the access_load_at call. This puts >>>> constraints on whatever is done inside of access_load_at to only >>>> speculatively load on the first assembled instruction. >>>> >>>> If you imagine a scenario where you have a GC with Brooks pointers that >>>> also uncommits memory (like Shenandoah I presume), then I imagine you >>>> would need something more here. If you start with a forwarding pointer >>>> load, then that can trap (which is probably caught by the exact PC >>>> match). But then there will be a subsequent load of the value in the >>>> to-space object, which will not be protected. But this is also loaded >>>> speculatively (as the subsequent safepoint counter check could >>>> invalidate the result), and could therefore crash the VM unless >>>> protected, as the signal handler code fails to recognize this is a >>>> speculative load from jni fast get field. >>>> >>>> I imagine the solution to this would be to let speculative_load_pclist >>>> specify a range for fuzzy SIGSEGV matching in the signal handler, rather >>>> than an exact PC (i.e. speculative_load_pclist_start and >>>> speculative_load_pclist_end). That would give you enough freedom to use >>>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>>> maintain jni fast get field is *really* worth it. >>> You are probably right in general. But I also think we are fine with >>> Shenandoah. 
Both the fwd ptr load and the field load are constructed >>> with the same base operand. If the oop is NULL (or invalid memory) it >>> will blow up on fwdptr load just the same as it would blow up on field >>> load. We maintain an invariant that the fwd ptr of a valid oop results >>> in a valid (and equivalent) oop. I therefore think we are fine for now. >>> Should a GC ever need anything else here, I'd worry about it then. Until >>> this happens, let's just hope to never need to touch this code again ;-) >> No I'm afraid that is not safe. After loading the forwarding pointer, >> the thread could be preempted, then any number of GC cycles could pass, >> which means that the address that the at some point read forwarding >> pointer points to, could be uncommitted memory. In fact it is unsafe >> even without uncommitted memory. Because after resolving the jobject to >> some address in the heap, the thread could get preempted, and any number >> of GC cycles could pass, causing the forwarding pointer to be read from >> some address in the heap that no longer is the forwarding pointer of an >> object, but rather a random integer. This causes the second load to blow >> up, even without uncommitting memory. >> >> Here is an attempt at showing different things that can go wrong: >> >> obj = *jobject >> // preempted for N GC cycles, meaning obj might 1) be a valid pointer to >> an object, or 2) be a random pointer inside of the heap or outside of >> the heap >> >> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >> pointer, no longer representing the forwarding pointer, or 3) read a >> consistent forwarding pointer >> >> // preempted for N GC cycles, causing forward_pointer to point at pretty >> much anything >> >> result = *(forward_pointer + offset) // may 1) read a valid primitive >> value, if previous two loads were not messed up, or 2) read some random >> value that no longer corresponds to the object field, or 3) crash >> because either the forwarding pointer did point at something valid that >> subsequently got relocated and uncommitted before the load hits, or >> because the forwarding pointer never pointed to anything valid in the >> first place, because the forwarding pointer load read a random pointer >> due to the object relocating after the jobject was resolved. >> >> The summary is that both loads need protection due to how the thread in >> native state runs freely without necessarily caring about the GC running >> any number of GC cycles concurrently, making the memory super slippery, >> which risks crashing the VM without the proper protection. > AWW WTF!? We are in native state in this code? Yes. This is one of the most dangerous code paths we have in the VM I think. > It might be easier to just call bsa->resolve_for_read() (which emits the > fwd ptr load), then issue another: > > speculative_load_pclist[count] = __ pc(); > > need to juggle with the counter and double-emit slowcase_entry_pclist, > and all this conditionally for Shenandoah. Gaa. I think that by just having the speculative load PC list take a range as opposed to a precise PC, and check that a given PC is in that range, and not just exactly equal to a PC, the problem is solved for everyone. > Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. Yeah, sometimes you wonder if it's really worth the maintenance to keep this thing. > Funny how we had this code in Shenandoah literally for years, and > nobody's ever tripped over it. Yeah it is a rather nasty race to detect. 
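A minimal sketch of the range-based matching suggested above -- illustrative only; speculative_load_pclist_start and speculative_load_pclist_end are the hypothetical per-entry arrays proposed here, not existing fields, and the rest follows the shape of the current find_slowcase_pc():

    address JNI_FastGetField::find_slowcase_pc(address pc) {
      for (int i = 0; i < count; i++) {
        // accept any faulting PC inside the registered speculative region,
        // not just its first emitted instruction
        if (pc >= speculative_load_pclist_start[i] && pc < speculative_load_pclist_end[i]) {
          return slowcase_entry_pclist[i];
        }
      }
      return (address)-1;
    }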
> It's one of those cases where I almost suspect it's been done in Java1.0 > when lots of JNI code was in use because some stuff couldn't be done in > fast in Java, but nowadays doesn't really make a difference. *Sigh* :) >>>>> Unfortunately, I cannot really test it because of: >>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>> >>>>> >>>> That is unfortunate. If I were you, I would not dare to change anything >>>> in jni fast get field without testing it - it is very error prone. >>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>> else resolve it myself. >> Yeah. >> >>> Can I consider this change reviewed by you? >> I think we should agree about the safety of doing this for Shenandoah in >> particular first. I still think we need the PC range as opposed to exact >> PC to be caught in the signal handler for this to be safe for your GC >> algorithm. > > Yeah, I agree. I need to think this through a little bit. Yeah. Still think the PC range check solution should do the trick. > Thanks for pointing out this bug. I can already see nightly builds > suddenly starting to fail over it, now that it's known :-) No problem! Thanks, /Erik > Roman > > From erik.osterlund at oracle.com Mon Jun 4 21:24:08 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 4 Jun 2018 23:24:08 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> Message-ID: Hi, Looks good. Thanks, /Erik On 2018-06-04 23:20, Roman Kennke wrote: > Hi Aleksey, Erik, > > thanks for reviewing and helping with this! > > Moved mem_allocate() under protected: > Incremental: > http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ > > Good now? > > Thanks, > Roman > > >> Hi Aleksey, >> >> Sounds like a good idea. >> >> /Erik >> >>> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >>> >>> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>>> it wants to. >>>>>> However, I would prefer to do it this way: >>>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >>> This looks good. 
I think we better hide mem_allocate under "protected" now, so we would have: >>> >>> protected: >>> // TLAB path >>> inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >>> static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >>> >>> // Out-of-TLAB path >>> virtual HeapWord* mem_allocate(size_t size, >>> bool* gc_overhead_limit_was_exceeded) = 0; >>> >>> public: >>> // Entry point >>> virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >>> bool* gc_overhead_limit_was_exceeded, TRAPS); >>> >>> -Aleksey >>> > From david.holmes at oracle.com Mon Jun 4 21:42:07 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 07:42:07 +1000 Subject: [hs] RFR (L): 8010319: Implementation of JEP 181: Nest-Based Access Control In-Reply-To: References: <06529fc3-2eba-101b-9aee-2757893cb8fb@oracle.com> <97f8cedf-4ebc-610f-0528-e1b91f35eece@oracle.com> <29d5725a-be8d-81d7-f9dd-f6f2eedc888d@oracle.com> Message-ID: <54e1dad3-093e-8b95-53e1-3977d251aa6b@oracle.com> On 4/06/2018 11:13 PM, coleen.phillimore at oracle.com wrote: > > The redefine test changes look fine. Thanks for taking another look Coleen! > http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/test/hotspot/jtreg/runtime/appcds/redefineClass/RedefineRunningMethods_Shared.java.udiff.html > > > I think there's a leading space in this file. Fixed! (Well spotted! - A result of remote GUI access causing an editor window to steal keyboard input before the window is even visible :( ). Thanks, David > > thanks, > Coleen > > On 6/4/18 3:15 AM, David Holmes wrote: >> This update fixes some tests that were being excluded temporarily, but >> which can now run under nestmates. >> >> Incremental hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5-incr/ >> >> >> Full hotspot webrev: >> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v5/ >> >> Change summary: >> >> - test/hotspot/jtreg/vmTestbase/nsk/stress/except/except004.java (see: >> 8203046): >> >> The test expected an IllegalAccessException using reflection to access >> a private field of a nested class. That's actually a reflection bug >> that nestmates fixes. So relocated the Abra.PRIVATE_FIELD to a >> top-level package-access class Ext >> >> - all other tests involve class redefinition (see: 8199450): >> >> These tests were failing because the RedefineClassHelper, if passed a >> string containing "class A$B { ...}" doesn't define a nested class but >> a top-level class called A$B (which is perfectly legal). The >> redefinition itself would fail as the old class called A$B was a >> nested class and you're not allowed to change the nest attributes in >> class redefinition or transformation. >> >> The fix is simply to factor out the A$B class being redefined to being >> a top-level package access class in the same source file, called A_B, >> and with all references to "B" suitable adjusted. >> >> [The alternate fix considered would be to update the >> RedefineClassHelper and its use of the InMemoryJavaCompiler so that >> the tests would pass in a string like "class A { class B { ... } }" >> and then read back the bytes for A$B with nest attributes intact. But >> that is a non-trivial task and it isn't really significant that the >> classes used in these tests were in fact nested.] >> >> >> Thanks, >> David >> >> >> On 28/05/2018 9:20 PM, David Holmes wrote: >>> I've added some missing JNI tests for the basic access checks. 
Given >>> JNI ignores access it should go without saying that JNI access to >>> nestmates will work fine, but it doesn't hurt to verify that. >>> >>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v4-incr/ >>> >>> >>> Thanks, >>> David >>> >>> On 24/05/2018 7:48 PM, David Holmes wrote: >>>> Here are the further updates based on review comments and rebasing >>>> to get the vmTestbase updates for which some closed test changes now >>>> have to be applied to the open versions. >>>> >>>> Incremental hotspot webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3-incr/ >>>> >>>> >>>> Full hotspot webrev: >>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v3/ >>>> >>>> Change summary: >>>> >>>> test/hotspot/jtreg/ProblemList.txt >>>> - Exclude vmTestbase/nsk/stress/except/except004.java under 8203046 >>>> >>>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/BasicTest.java >>>> test/hotspot/jtreg/vmTestbase/vm/runtime/defmeth/PrivateMethodsTest.java >>>> >>>> - updated to work with new invokeinterface rules and nestmate changes >>>> - misc cleanups >>>> >>>> src/hotspot/share/runtime/reflection.?pp >>>> - rename verify_field_access to verify_member_access (it's always >>>> been mis-named and I nearly forgot to do this cleanup!) and rename >>>> field_class to member_class >>>> - add TRAPS to verify_member_access to allow use with CHECK macros >>>> >>>> src/hotspot/share/ci/ciField.cpp >>>> src/hotspot/share/classfile/classFileParser.cpp >>>> src/hotspot/share/interpreter/linkResolver.cpp >>>> - updated to use THREAD/CHECK with verify_member_access >>>> - for ciField rename thread to THREAD so it can be used with >>>> HAS_PENDING_EXCEPTION >>>> >>>> src/hotspot/share/oops/instanceKlass.cpp >>>> - use CHECK_false when calling nest_host() >>>> - fix indent near nestmate code >>>> >>>> src/hotspot/share/oops/instanceKlass.hpp >>>> - make has_nest_member private >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> On 23/05/2018 4:57 PM, David Holmes wrote: >>>>> Here are the updates so far in response to all the review comments. >>>>> >>>>> Incremental webrev: >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2-incr/ >>>>> >>>>> >>>>> Full webrev: >>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v2/ >>>>> >>>>> Change summary: >>>>> >>>>> test/runtime/Nestmates/reflectionAPI/* >>>>> - moved to java/lang/reflect/Nestmates >>>>> >>>>> src/hotspot/cpu/arm/templateTable_arm.cpp >>>>> - Fixed ARM invocation logic as provided by Boris. >>>>> >>>>> src/hotspot/share/interpreter/linkResolver.cpp >>>>> - expanded comment regarding exceptions >>>>> - Removed leftover debugging code >>>>> >>>>> src/hotspot/share/oops/instanceKlass.cpp >>>>> - Removed FIXME comments >>>>> - corrected incorrect comment >>>>> - Fixed if/else formatting >>>>> >>>>> src/hotspot/share/oops/instanceKlass.hpp >>>>> - removed unused debug method >>>>> >>>>> src/hotspot/share/oops/klassVtable.cpp >>>>> - added comment by request of Karen >>>>> >>>>> src/hotspot/share/runtime/reflection.cpp >>>>> - Removed FIXME comments >>>>> - expanded comments in places >>>>> - used CHECK_false >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> On 15/05/2018 10:52 AM, David Holmes wrote: >>>>>> This review is being spread across four groups: langtools, >>>>>> core-libs, hotspot and serviceability. 
This is the specific review >>>>>> thread for hotspot - webrev: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.hotspot.v1/ >>>>>> >>>>>> See below for full details - including annotated full webrev >>>>>> guiding the review. >>>>>> >>>>>> The intent is to have JEP-181 targeted and integrated by the end >>>>>> of this month. >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> ----- >>>>>> >>>>>> The nestmates project (JEP-181) introduces new classfile >>>>>> attributes to identify classes and interfaces in the same nest, so >>>>>> that the VM can perform access control based on those attributes >>>>>> and so allow direct private access between nestmates without >>>>>> requiring javac to generate synthetic accessor methods. These >>>>>> access control changes also extend to core reflection and the >>>>>> MethodHandle.Lookup contexts. >>>>>> >>>>>> Direct private calls between nestmates requires a more general >>>>>> calling context than is permitted by invokespecial, and so the >>>>>> JVMS is updated to allow, and javac updated to use, invokevirtual >>>>>> and invokeinterface for private class and interface method calls >>>>>> respectively. These changed semantics also extend to MethodHandle >>>>>> findXXX operations. >>>>>> >>>>>> At this time we are only concerned with static nest definitions, >>>>>> which map to a top-level class/interface as the nest-host and all >>>>>> its nested types as nest-members. >>>>>> >>>>>> Please see the JEP for further details. >>>>>> >>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8046171 >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8010319 >>>>>> CSR: https://bugs.openjdk.java.net/browse/JDK-8197445 >>>>>> >>>>>> All of the specification changes have been previously been worked >>>>>> out by the Valhalla Project Expert Group, and the implementation >>>>>> reviewed by the various contributors and discussed on the >>>>>> valhalla-dev mailing list. >>>>>> >>>>>> Acknowledgments and contributions: Alex Buckley, Maurizio >>>>>> Cimadamore, Mandy Chung, Tobias Hartmann, Vladimir Ivanov, Karen >>>>>> Kinnear, Vladimir Kozlov, John Rose, Dan Smith, Serguei Spitsyn, >>>>>> Kumar Srinivasan >>>>>> >>>>>> Master webrev of all changes: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/webrev.full.v1/ >>>>>> >>>>>> Annotated master webrev index: >>>>>> >>>>>> http://cr.openjdk.java.net/~dholmes/8010319-JEP181/jep181-webrev.html >>>>>> >>>>>> Performance: this is expected to be performance neutral in a >>>>>> general sense. Benchmarking and performance runs are about to start. >>>>>> >>>>>> Testing Discussion: >>>>>> ------------------ >>>>>> >>>>>> The testing for nestmates can be broken into four main groups: >>>>>> >>>>>> -? New tests specifically related to nestmates and currently in >>>>>> the runtime/Nestmates directory >>>>>> >>>>>> - New tests to complement existing tests by adding in testcases >>>>>> not previously expressible. >>>>>> ?? -? For example java/lang/invoke/SpecialInterfaceCall.java tests >>>>>> use of invokespecial for private interface methods and performing >>>>>> receiver typechecks, so we add >>>>>> java/lang/invoke/PrivateInterfaceCall.java to do similar tests for >>>>>> invokeinterface. >>>>>> >>>>>> -? New JVM TI tests to verify the spec changes related to nest >>>>>> attributes. >>>>>> >>>>>> -? Existing tests significantly affected by the nestmates changes, >>>>>> primarily: >>>>>> ??? -? runtime/SelectionResolution >>>>>> >>>>>> ??? 
In most cases the nestmate changes makes certain invocations >>>>>> that were illegal, legal (e.g. not requiring invokespecial to >>>>>> invoke private interface methods; allowing access to private >>>>>> members via reflection/Methodhandles that were previously not >>>>>> allowed). >>>>>> >>>>>> - Existing tests incidentally affected by the nestmate changes >>>>>> >>>>>> ?? This includes tests of things utilising class >>>>>> redefinition/retransformation to alter nested types but which >>>>>> unintentionally alter nest relationships (which is not permitted). >>>>>> >>>>>> There are still a number of tests problem-listed with issues filed >>>>>> against them to have them adapted to work with nestmates. Some of >>>>>> these are intended to be addressed in the short-term, while some >>>>>> (such as the runtime/SelectionResolution test changes) may not >>>>>> eventuate. >>>>>> >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8203033 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8199450 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8196855 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8194857 >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8187655 >>>>>> >>>>>> There is also further test work still to be completed (the JNI and >>>>>> JDI invocation tests): >>>>>> - https://bugs.openjdk.java.net/browse/JDK-8191117 >>>>>> which will continue in parallel with the main RFR. >>>>>> >>>>>> Pre-integration Testing: >>>>>> ??- General: >>>>>> ???? - Mach5: hs/jdk tier1,2 >>>>>> ???? - Mach5: hs-nightly (tiers 1 -3) >>>>>> ??- Targetted >>>>>> ??? - nashorn (for asm changes) >>>>>> ??? - hotspot: runtime/* >>>>>> ?????????????? serviceability/* >>>>>> ?????????????? compiler/* >>>>>> ?????????????? vmTestbase/* >>>>>> ??? - jdk: java/lang/invoke/* >>>>>> ?????????? java/lang/reflect/* >>>>>> ?????????? java/lang/instrument/* >>>>>> ?????????? java/lang/Class/* >>>>>> ?????????? java/lang/management/* >>>>>> ?? - langtools: tools/javac >>>>>> ??????????????? tools/javap >>>>>> > From jesper.wilhelmsson at oracle.com Tue Jun 5 00:05:11 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 5 Jun 2018 02:05:11 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> Message-ID: <111720CA-EFD3-4404-853B-C0219F2CCA18@oracle.com> Looks good to me. /Jesper > On 4 Jun 2018, at 22:10, Erik Joelsson wrote: > > New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ > > Renamed the new jvm variant to "hardened". > > /Erik > > > On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >>> >>> Hello, >>> >>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>>>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>>>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>>>> new jvm variant "altserver" which is the same as server, but with this new feature added. >>>> I think the classic name for such product configuration is "hardened", no? >>> I don't know. I'm open to suggestions on naming. 
>> "hardened" sounds good to me. >> >> The change looks good as well. >> /Jesper >> >>> /Erik >>>> -Aleksey >>>> > From david.holmes at oracle.com Tue Jun 5 01:34:52 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 11:34:52 +1000 Subject: RFR: 8203787: Hotspot build broken on linux-sparc after 8202377 In-Reply-To: <72cfd8f2-d4ab-d229-e75f-f39a0b9dc3a8@oracle.com> References: <8cfcfb72-4b01-62e5-f053-c57ff74ef8c1@physik.fu-berlin.de> <72cfd8f2-d4ab-d229-e75f-f39a0b9dc3a8@oracle.com> Message-ID: On 5/06/2018 6:48 AM, Erik ?sterlund wrote: > Hi Adrian, > > The include you added in opto/library_call.cpp is not sorted correctly. > Otherwise, it looks good, and I don't need another webrev. Also pre-existing: #include "memory/resourceArea.hpp" #include "jfr/support/jfrIntrinsics.hpp" :) I too don't need to see another webrev. Thanks, David > Thanks, > /Erik > > On 2018-06-04 22:42, John Paul Adrian Glaubitz wrote: >> Hi! >> >> Please review this minor change which partially fixes the Hotspot >> build on linux-sparc. >> >> It does not fully restore the build on linux-sparc because we're still >> suffering from JDK-8203301 Linux-sparc fails to build after JDK-8199712 >> (Flight Recorder), but I will get around fixing that in the near future >> as well. >> >> The webrev can be found in [1]. I'm pushing the change to the submit-jdk >> repository as well. >> >> Thanks, >> Adrian >> >>> [1] http://cr.openjdk.java.net/~glaubitz/8203787/webrev.01/ > From leonid.mesnik at oracle.com Tue Jun 5 01:46:20 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Mon, 4 Jun 2018 18:46:20 -0700 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: Hi GC stress tests gcold, gcbasher, locker don't depend from GC. However they are not executed with default GC. They contain separate tests which define used collector and skip if any other collector is set explicitly. An example is http://hg.openjdk.java.net/jdk/jdk/file/tip/test/hotspot/jtreg/gc/stress/gcold/TestGCOldWithG1.java Do you have similar tests for ZGC? I didn't found them in test patch. Leonid > On Jun 1, 2018, at 2:41 PM, Per Liden wrote: > > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > This patch contains the actual ZGC implementation, the new unit tests and other changes needed in HotSpot. > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, with a short summary describing each group. > > * Build support - Making ZGC an optional feature. 
> > make/autoconf/hotspot.m4 > make/hotspot/lib/JvmFeatures.gmk > src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not currently offer a way to easily break this out). > > src/hotspot/cpu/x86/x86.ad > src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific code, most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) condition. There should only be two logic changes (one in idealKit.cpp and one in node.cpp) that are still active when ZGC is disabled. We believe these are low risk changes and should not introduce any real change i behavior when using other GCs. > > src/hotspot/share/adlc/formssel.cpp > src/hotspot/share/opto/* > src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. > > src/hotspot/share/gc/shared/* > src/hotspot/share/runtime/vmStructs.cpp > src/hotspot/share/runtime/vm_operations.hpp > src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > src/hotspot/share/gc/z/* > src/hotspot/cpu/x86/gc/z/* > src/hotspot/os_cpu/linux_x86/gc/z/* > test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > src/hotspot/share/jfr/* > src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification because it do not makes sense or is not applicable to a GC using load barriers. > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > src/hotspot/share/compiler/oopMap.cpp > src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. However, this will go away in the future, when we have the next iteration of C2's load barriers in place (aka "C2 late barrier insertion"). > > src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is changed in the future. > > src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in ZHash. > > src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > A number of new ZGC specific gtests have been added, in test/hotspot/gtest/gc/z/ > > * Regression testing > > No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > We have been continuously been running a number stress tests throughout the development, these include: > > specjbb2000 > specjbb2005 > specjbb2015 > specjvm98 > specjvm2008 > dacapo2009 > test/hotspot/jtreg/gc/stress/gcold > test/hotspot/jtreg/gc/stress/systemgc > test/hotspot/jtreg/gc/stress/gclocker > test/hotspot/jtreg/gc/stress/gcbasher > test/hotspot/jtreg/gc/stress/finalizer > Kitchensink > > > Thanks! 
> > /Per, Stefan & the ZGC team From david.holmes at oracle.com Tue Jun 5 04:24:30 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 14:24:30 +1000 Subject: ARM port consolidation In-Reply-To: References: Message-ID: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> Hi Bob, Looping in porters-dev, aarch32-port-dev and aarch64-port-dev. I think this is a good idea. Thanks, David On 5/06/2018 6:34 AM, Bob Vandette wrote: > During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit > ARM ports and contributed them to OpenJDK. These ports have been used for > years in the embedded and mobile market, making them very stable and > having the benefit of a single source base which can produce both 32 and > 64-bit binaries. The downside of this contribution is that it resulted > in two 64-bit ARM implementations being available in OpenJDK. > > I'd like to propose that we eliminate one of the 64-bit ARM ports and > encourage everyone to enhance and support the remaining 32 and 64 bit > ARM ports. This would avoid the creation of yet another port for these chip > architectures. The reduction of competing ports will allow everyone > to focus their attention on a single 64-bit port rather than diluting > our efforts. This will result in a higher quality and a more performant > implementation. > > The community at large (especially RedHat, BellSoft, Linaro and Cavium) > have done a great job of enhancing and keeping the AArch64 port up to > date with current and new Hotspot features. As a result, I propose that > we standardize the 64-bit ARM implementation on this port. > > If there are no objections, I will file a JEP to remove the 64-bit ARM > port sources that reside in jdk/open/src/hotspot/src/cpu/arm > along with any build logic. This will leave the Oracle contributed > 32-bit ARM port and the AArch64 64-bit ARM port. > > Let me know what you all think, > Bob Vandette > > From david.holmes at oracle.com Tue Jun 5 04:44:19 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 14:44:19 +1000 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> Message-ID: Hi Severin, On 5/06/2018 1:26 AM, Severin Gehwolf wrote: > Hi, > > Could I please get a review of this change adding support for JEP-181 - > a.k.a Nestmates - to Zero. This patch depends on David Holmes' > Nestmates implementation via JDK-8010319. Thanks to David Holmes and > Chris Phillips for their initial reviews prior to this RFR. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ src/hotspot/cpu/zero/methodHandles_zero.cpp The change here seems to be an existing bug unrelated to nestmate changes. IT also begs the question as to what happens in the same circumstance with a removed static or "special" method? (I thought I had a test for that in the nestmates changes ... will need to double-check and add it if missing!). src/hotspot/share/interpreter/bytecodeInterpreter.cpp Interpreter changes seem fine - mirroring what is done elsewhere. You can delete these incorrect comments: 2576 // This code isn't produced by javac, but could be produced by 2577 // another compliant java compiler. That code path is taken in more circumstances than the author of that comment realized. 
:) > Testing: > > Zero on Linux-x86_64 with the following test set: > > test/jdk/java/lang/invoke/AccessControlTest.java > test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java > test/jdk/java/lang/invoke/PrivateInterfaceCall.java > test/jdk/java/lang/invoke/SpecialInterfaceCall.java > test/jdk/java/lang/reflect/Nestmates > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java > test/hotspot/jtreg/runtime/Nestmates > > I cannot run this through the submit repo since the main Nestmates > patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- > images build on x86_64. Thoughts? I can bundle this in with the nestmate changes when I push them later this week. Just send me a pointer to the finalized changeset once its finalized. I'll run it all through a final step of testing equivalent (actually more than) the submit repo. Thanks, David > Thanks, > Severin > From david.holmes at oracle.com Tue Jun 5 06:10:35 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 16:10:35 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> Message-ID: <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> Sorry to be late to this party ... On 5/06/2018 6:10 AM, Erik Joelsson wrote: > New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ > > Renamed the new jvm variant to "hardened". As it is a hardened server build I'd prefer if that were somehow reflected in the name. Though really I don't see why this should be restricted this way ... to be honest I don't see hardened as a variant of server vs. client vs. zero etc at all, you should be able to harden any of those. So IIUC with this change we will: - always build JDK native code "hardened" (if toolchain supports it) - only build hotspot "hardened" if requested; and in that case - jvm.cfg will list -server and -hardened with server as default Is that right? I can see that we may choose to always build Oracle JDK this way but it isn't clear to me that its suitable for OpenJDK. Nor why hotspot is selectable but JDK is not. ?? Sorry. David ----- > /Erik > > > On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >>> >>> Hello, >>> >>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>> This patch defines flags for disabling speculative execution for >>>>> GCC and Visual Studio and applies >>>>> them to all binaries except libjvm when available in the compiler. >>>>> It defines a new jvm feature >>>>> no-speculative-cti, which is used to control whether to use the >>>>> flags for libjvm. It also defines a >>>>> new jvm variant "altserver" which is the same as server, but with >>>>> this new feature added. >>>> I think the classic name for such product configuration is >>>> "hardened", no? >>> I don't know. I'm open to suggestions on naming. >> "hardened" sounds good to me. >> >> The change looks good as well. 
>> /Jesper >> >>> /Erik >>>> -Aleksey >>>> > From per.liden at oracle.com Tue Jun 5 07:21:49 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 5 Jun 2018 09:21:49 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> Message-ID: Hi Erik, On 06/04/2018 03:47 PM, Erik Helin wrote: > On 06/01/2018 11:41 PM, Per Liden wrote: >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > First of all: great work! Thanks for pushing so many patches upstream > ahead of this patch, that makes the review of this patch so much easier > :) I have looked at all the shared changes, but I can't be counted as a > reviewer for the C2 stuff, I don't have enough experience in that area. Thanks a lot for reviewing! > I can see what the build changes are doing, but Magnus and/or Erik > should probably review that part. Ok, now for my comments: > > Small nit in make/autoconf/hotspot.m4: > > +? # Only enable ZGC on Linux x86 > > Could you please change the comment to say x86_64 or x64 (similar to > other such comments in that file)? x86 is a bit ambiguous (could mean a > 32-bit x86 CPU). Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/a81777811000 > > Small nit in src/hotspot/share/compiler/oopMap.cpp: > > +??????? if (ZGC_ONLY(!UseZGC &&) > +??????????? ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || > +???????????? !Universe::heap()->is_in_or_null(*loc))) { > > Do we really need ZGC_ONLY around !UseZGC && here? The code is in an > #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC will > be just be false if ZGC isn't compiled, right? Or have I gotten this > backwards? Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/3f6db622400c > > Regarding src/hotspot/share/gc/shared/gcName.hpp, should we introduce a > GCName class so that we can limit the scope of the Z och NA symbols? > (Then GCNameHelper::to_string could also be moved into that class). > Could also be done as a follow-up patch (if so, please file a bug). I agree, filed an RFE. https://bugs.openjdk.java.net/browse/JDK-8204324 > > Small nit in src/hotspot/share/jfr/metadata/metadata.xml: > - > \ No newline at end of file > + > > Did you happen to add a newline here (I don't know why there should not > be a newline, but the comment indicates so)? The "No newline at end of file" comment is actually generated by hg diff and is not in the file itself. I think vim added it automatically, and I think we probably should have a new line there, but I'll revert it from this change. http://hg.openjdk.java.net/zgc/zgc/rev/a8e1aec31efa > > Small nit in src/hotspot/share/opto/node.hpp: > > ?? virtual?????? uint? ideal_reg() const; > + > ?#ifndef PRODUCT > > Was the extra newline here added intentionally? Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/6d6259917ded > > In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an > include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like > > +#if INCLUDE_ZGC > +? #include "gc/z/c2/zGlobals.hpp" > +#endif > > Or did I miss an include somewhere (wouldn't be the first time :)? Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/b2e3b7c012af > > In src/hotspot/share/prims/whitebox.cpp, do we need the #if INCLUDE_ZGC > guards or is `if (UseZGC)` enough? 
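As background for the question above, the usual convention looks roughly like this (simplified sketch, not the actual macros.hpp text): INCLUDE_ZGC is the compile-time switch that the *_ONLY macros are built from, while UseZGC is the runtime flag, which is simply false in builds without ZGC.

    #if INCLUDE_ZGC                 // compile-time: ZGC sources are built in
    #define ZGC_ONLY(code) code
    #else
    #define ZGC_ONLY(code)
    #endif

    // A call site can then combine the two, e.g.
    //   if (ZGC_ONLY(UseZGC &&) other_condition) { ... }
    // which compiles away when ZGC is not built, while a plain
    //   if (UseZGC) { ... }
    // also stays correct because the flag is false in such builds.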
This is certainly up for discussion, but the model I think we've been shooting for is that we don't have INCLUDE_ZGC only if there's a !UseZGC condition. Some of the "if (UseZGC)" then have ZGC specific code inside the scope, so you need the INCLUDE_ZGC anyway. In this particular case we don't have any ZGC specific code in the true path, but we might in the future. This is the model we're trying to follow, but as I said, we can discuss if this is good or not. > > Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we need > the #if INCLUDE_ZGC guard? Fixed this and another similar thing in c1_LIRAssembler_x86.cpp. http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 > >> ?? This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > Again, great work here, particularly with upstreaming so many patches > ahead of this one. I only have two small comments regarding the test > changes: > > Small nit in > est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: > > > +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. > > May I suggest s/en explicitly/an explicit/ ? > Also maybe remove the comment `// forceGC();`, because it might later > look like your comment commented out an earlier, pre-existing call to > forceGC(). > > Same comment as above for instances003.java, instances001.java, > instanceCounts001.java. Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 > > In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you probably > want to remove "@bug???? 4530538", the empty "@summary" and "@author > Mandy Chung" Fixed. http://hg.openjdk.java.net/zgc/zgc/rev/ff780fec8423 Thanks again for reviewing, Erik! /Per > > Thanks, > Erik From sanhong.lsh at alibaba-inc.com Tue Jun 5 08:08:18 2018 From: sanhong.lsh at alibaba-inc.com (=?UTF-8?B?5p2O5LiJ57qiKOS4iee6oik=?=) Date: Tue, 05 Jun 2018 16:08:18 +0800 Subject: =?UTF-8?B?UmVwbHk6IEpFUDogaHR0cHM6Ly9idWdzLm9wZW5qZGsuamF2YS5uZXQvYnJvd3NlL0pESy04?= =?UTF-8?B?MjAzODMy?= In-Reply-To: <03cc7b39-7e28-2149-6c12-7ee53c1c2140@oracle.com> References: <0ca282db-5b4f-607d-512a-a2183dbd4b73@oracle.com> , <03cc7b39-7e28-2149-6c12-7ee53c1c2140@oracle.com> Message-ID: Hi Tobias Thanks for your questions, see my inline comments Thanks Sanhon ----------------------------------------------------------------- Sender:Tobias Hartmann Thanks for your review/questions. First I would introduce some background > of JWarmup applicatio > on use scenario and how we implement the interaction between application > and scheduling (dispatc > system, DS) > The load of each application is controlled by DS. The profiling data is > collected against rea > input data (so it mostly matches the application run in production > environments, thus reduce th > deoptimization chance). When run with profiling data, application gets > notification from DS whe > compiling should start, application then calls API to notify JVM the hot > methods recorded in fil > can be compiled, after the compilations, a message sent out to DS so DS > will dispatch load int > this application Could you elaborate a bit more on how the communication between the DS and the application works? generic user application should not be aware of the pre-compilation, right? 
Let's assume I run little Hello World program, when/how is pre-compilation triggered The user application will use API to tell JWarmup to kickoff pre-compilation at some appropriate point, generally after app initialization done, the basic workflow as follows -- DS freezes incoming user requests -- App does the necessary initialization -- After the initialization is done. ----> Notify JWarmup to kickoff pre-compilation(via API) ****** JWarmup does the compilation wor <---- The app gets notified after the compilation is done(via API in polling way -- DS resumes the requests, the application now is ready for service This is case how we use JWarmup, but we do believe the above process cloud be generalized and any app running inside cloud datacenter could benefit from the model by integrating java compilation with DS By this way, the java platform provides flexible mechanism for cloud scheduling system to define compilation behavior according to load time Do I understand correctly that the profile information is only used for a "standalone" compilation of a method or is it also used for inlining? For example, if we have profile information for metho B and method A inlines method B, does it use the profile information available for B when there i no profile information available for A It does support inlining. Actually, in "recording" phase, JWarmup also records the "MethodData" information, which can be used for compilation in next run > A: During run with pre-compiled methods, deoptimization is only seen with > null-check elimination s > it is not eliminated. The profile data is not updated and re-used. That > is, after deoptimized, i > starts from interpreter mode like freshly loaded Why do you only see deoptimizations with null-check elimination? A pre-compiled method can stil have uncommon traps for reasons like an out of bounds array access or some loop predicate that doe not hold, right We saw null-check elimination caused the de-optimization in our most cases, that's the reason this has been disabled by default in JWarmup But you are correct, assumption might be made wrong in some other cases, that's the reason JWarmup provides the option to user to deoptimize the pre-compiled methods after peak load via -XX:CompilationWarmUpDeoptTime control flag, which allows user to choose a time roughly after the peak time to do the deoptimization Thanks Tobia From matthias.baesken at sap.com Tue Jun 5 08:30:07 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 5 Jun 2018 08:30:07 +0000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with Message-ID: <3f7c0b36458a467b85c41ed467b41614@sap.com> Hi Erik , is there some info available about the performance impact when disabling disabling speculative execution ? And which compiler versions are needed for this ? Best regards, Matthias >We need to add compilation flags for disabling speculative execution to >our native libraries and executables. In order to allow for users not >affected by problems with speculative execution to run a JVM at full >speed, we need to be able to ship two JVM libraries - one that is >compiled with speculative execution enabled, and one that is compiled >without. Note that this applies to the build time C++ flags, not the >compiler in the JVM itself. Luckily adding these flags to the rest of >the native libraries did not have a significant performance impact so >there is no need for making it optional there. 
> >This patch defines flags for disabling speculative execution for GCC and >Visual Studio and applies them to all binaries except libjvm when >available in the compiler. It defines a new jvm feature >no-speculative-cti, which is used to control whether to use the flags >for libjvm. It also defines a new jvm variant "altserver" which is the >same as server, but with this new feature added. > >For Oracle builds, we are changing the default for linux-x64 and >windows-x64 to build both server and altserver, giving the choice to the >user which JVM they want to use. If others would prefer this default, we >could make it default in configure as well. > >The change in GensrcJFR.gmk fixes a newly introduced race that appears >when building multiple jvm variants. > >Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 > >Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 From sanhong.lsh at alibaba-inc.com Tue Jun 5 09:06:29 2018 From: sanhong.lsh at alibaba-inc.com (=?UTF-8?B?5p2O5LiJ57qiKOS4iee6oik=?=) Date: Tue, 05 Jun 2018 17:06:29 +0800 Subject: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832 Message-ID: <254101d3fcac$7de91c80$79bb5580$@alibaba-inc.com> Hi Tobias, Thanks for your questions, see my inline comments. (As the formatting in my last mail was messed up, just resend it again.) Thanks! Sanhong -----????----- ???: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] ?? Tobias Hartmann ????: 2018?6?4? 15:30 ???: yumin qi ??: hotspot-dev at openjdk.java.net ??: Re: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832 Hi Yumin, thanks for the details! On 01.06.2018 05:01, yumin qi wrote: > Thanks for your review/questions. First I would introduce some > background of JWarmup application on use scenario and how we > implement the interaction between application and scheduling (dispatch system, DS). > > The load of each application is controlled by DS. The profiling data > is collected against real input data (so it mostly matches the > application run in production environments, thus reduce the > deoptimization chance). When run with profiling data, application gets > notification from DS when compiling should start, application then > calls API to notify JVM the hot methods recorded in file can be compiled, after the compilations, a message sent out to DS so DS will dispatch load into this application. Could you elaborate a bit more on how the communication between the DS and the application works? A generic user application should not be aware of the pre-compilation, right? Let's assume I run a little Hello World program, when/how is pre-compilation triggered? The user application will use API to tell JWarmup to kickoff pre-compilation at some appropriate point, generally after app initialization done, the basic workflow as follows: - DS freezes incoming user requests. - App does the necessary initialization. - After initialization done, notify JWarmup to kickoff pre-compilation(via *API*). - JWarmup does the compilation work - The app gets notified after the compilation is done(via *API* in polling way) - DS resumes the requests, the application now is ready for service. This is case how we use JWarmup, but we do believe the above process cloud be generalized and any app running inside cloud datacenter could benefit from the model by integrating java compilation with DS. By this way, the java platform can provide flexible mechanism for cloud scheduling system to define compilation behavior according to load time. 
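A compilable sketch of the kickoff step described above. This is not the actual JWarmup code; every name in it (WarmupRecord, submit_for_precompilation, warmup_done, kickoff_precompilation) is a hypothetical stand-in. The point is only the shape of the hand-off: once the application's API call arrives, the recorded hot methods are pushed to the compiler, and completion is published for the polling API that DS waits on before resuming load.

// Illustration only -- hypothetical names, not the real JWarmup implementation.
#include <atomic>
#include <cstdio>
#include <string>
#include <vector>

struct WarmupRecord {
  std::string klass;    // recorded class name
  std::string method;   // recorded method name and signature
  int bci;              // recorded entry bci, -1 for a normal entry
};

static std::atomic<bool> warmup_done{false};   // read by the polling API

// Stand-in for pushing one recorded method to the compiler with its
// recorded MethodData attached; the real code would skip entries whose
// class has not been loaded in this run.
static void submit_for_precompilation(const WarmupRecord& r) {
  std::printf("precompile %s.%s (bci %d)\n", r.klass.c_str(), r.method.c_str(), r.bci);
}

// Called when the application signals "initialization finished" via the API.
void kickoff_precompilation(const std::vector<WarmupRecord>& records) {
  for (const WarmupRecord& r : records) {
    submit_for_precompilation(r);
  }
  // The real flow would first wait for the compile queue to drain; here we
  // simply publish completion so the Java-level polling call returns true
  // and DS resumes dispatching load to this instance.
  warmup_done.store(true);
}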
Do I understand correctly that the profile information is only used for a
"standalone" compilation of a method or is it also used for inlining? For
example, if we have profile information for method B and method A inlines
method B, does it use the profile information available for B when there is
no profile information available for A?

It does support inlining. Actually, in the "recording" phase, JWarmup also
records the "MethodData" information, which can be used for compilation in
the next run.

> A: During run with pre-compiled methods, deoptimization is only seen
> with null-check elimination so it is not eliminated. The profile data
> is not updated and re-used. That is, after deoptimized, it starts from
> interpreter mode like freshly loaded.

Why do you only see deoptimizations with null-check elimination? A
pre-compiled method can still have uncommon traps for reasons like an out
of bounds array access or some loop predicate that does not hold, right?

We saw null-check elimination cause the de-optimization in most cases;
that's the reason this has been disabled by default in JWarmup. But you are
correct, assumptions might be wrong in some other cases, and that's the
reason JWarmup provides an option for the user to deoptimize the
pre-compiled methods after peak load via the -XX:CompilationWarmUpDeoptTime
control flag, which allows the user to choose a time roughly after the peak
time to do the deoptimization.

Thanks,
Tobias

From aph at redhat.com  Tue Jun  5 09:27:16 2018
From: aph at redhat.com (Andrew Haley)
Date: Tue, 5 Jun 2018 10:27:16 +0100
Subject: ARM port consolidation
In-Reply-To: 
References: 
Message-ID: <7aae9027-266d-46e0-0df5-bc74d6530af5@redhat.com>

On 06/04/2018 09:34 PM, Bob Vandette wrote:
> The community at large (especially RedHat, BellSoft, Linaro and Cavium)
> have done a great job of enhancing and keeping the AArch64 port up to
> date with current and new Hotspot features. As a result, I propose that
> we standardize the 64-bit ARM implementation on this port.
>
> If there are no objections, I will file a JEP to remove the 64-bit ARM
> port sources that reside in jdk/open/src/hotspot/src/cpu/arm
> along with any build logic. This will leave the Oracle contributed
> 32-bit ARM port and the AArch64 64-bit ARM port.

Sounds good to me. Over to practical considerations: is there some code we
should look at porting over to the AArch64 port? Minimal VM, perhaps?

-- 
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd.
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671

From erik.helin at oracle.com  Tue Jun  5 09:27:18 2018
From: erik.helin at oracle.com (Erik Helin)
Date: Tue, 5 Jun 2018 11:27:18 +0200
Subject: RFR: 8204168: Increase small heap sizes in tests to accommodate ZGC
In-Reply-To: 
References: 
Message-ID: <102083fa-f2b0-dc2d-7d8b-89a748c2c5db@oracle.com>

On 05/31/2018 02:32 PM, Stefan Karlsson wrote:
> Hi all,

Hey Stefan,

> Please review this patch to increase the heap size for tests that sets a
> small heap size.
>
> http://cr.openjdk.java.net/~stefank/8204168/webrev.01
> https://bugs.openjdk.java.net/browse/JDK-8204168

I read through each test in compiler, gc and runtime carefully to check
that the increased heap size wouldn't render the test useless (i.e. I
ensured that the tests still test something). Good news, it seems (to me
at least) that increasing the heap size for all of the above tests will
work fine. Please consider the changes to those tests Reviewed by me,
nice work!

Unfortunately I don't know the "nsk" tests well enough to review those
changes...
:( Thanks, Erik From sgehwolf at redhat.com Tue Jun 5 09:40:26 2018 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Tue, 05 Jun 2018 11:40:26 +0200 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> Message-ID: <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> Hi David, Thanks for the review! On Tue, 2018-06-05 at 14:44 +1000, David Holmes wrote: > Hi Severin, > > On 5/06/2018 1:26 AM, Severin Gehwolf wrote: > > Hi, > > > > Could I please get a review of this change adding support for JEP-181 - > > a.k.a Nestmates - to Zero. This patch depends on David Holmes' > > Nestmates implementation via JDK-8010319. Thanks to David Holmes and > > Chris Phillips for their initial reviews prior to this RFR. > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 > > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ > > src/hotspot/cpu/zero/methodHandles_zero.cpp > > The change here seems to be an existing bug unrelated to nestmate > changes. Agreed. > IT also begs the question as to what happens in the same > circumstance with a removed static or "special" method? (I thought I had > a test for that in the nestmates changes ... will need to double-check > and add it if missing!). It might bomb in the same way (NULL dereference). I'm currently looking at some other potential issues in this area... > src/hotspot/share/interpreter/bytecodeInterpreter.cpp > > Interpreter changes seem fine - mirroring what is done elsewhere. You > can delete these incorrect comments: > > 2576 // This code isn't produced by javac, but could be produced by > 2577 // another compliant java compiler. > > That code path is taken in more circumstances than the author of that > comment realized. :) Done. > > Testing: > > > > Zero on Linux-x86_64 with the following test set: > > > > test/jdk/java/lang/invoke/AccessControlTest.java > > test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java > > test/jdk/java/lang/invoke/PrivateInterfaceCall.java > > test/jdk/java/lang/invoke/SpecialInterfaceCall.java > > test/jdk/java/lang/reflect/Nestmates > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java > > test/hotspot/jtreg/runtime/Nestmates > > > > I cannot run this through the submit repo since the main Nestmates > > patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- > > images build on x86_64. Thoughts? FWIW, bootcycle-images build passed on linux x86_64 Zero. > I can bundle this in with the nestmate changes when I push them later > this week. Just send me a pointer to the finalized changeset once its > finalized. I'll run it all through a final step of testing equivalent > (actually more than) the submit repo. OK, thanks! Latest webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.03/ Thanks, Severin > Thanks, > David > > > Thanks, > > Severin > > From david.holmes at oracle.com Tue Jun 5 09:46:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 5 Jun 2018 19:46:36 +1000 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> Message-ID: Looks good. I'll push this with the nestmate changes later in the week. 
Thanks, David On 5/06/2018 7:40 PM, Severin Gehwolf wrote: > Hi David, > > Thanks for the review! > > On Tue, 2018-06-05 at 14:44 +1000, David Holmes wrote: >> Hi Severin, >> >> On 5/06/2018 1:26 AM, Severin Gehwolf wrote: >>> Hi, >>> >>> Could I please get a review of this change adding support for JEP-181 - >>> a.k.a Nestmates - to Zero. This patch depends on David Holmes' >>> Nestmates implementation via JDK-8010319. Thanks to David Holmes and >>> Chris Phillips for their initial reviews prior to this RFR. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 >>> webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ >> >> src/hotspot/cpu/zero/methodHandles_zero.cpp >> >> The change here seems to be an existing bug unrelated to nestmate >> changes. > > Agreed. > >> IT also begs the question as to what happens in the same >> circumstance with a removed static or "special" method? (I thought I had >> a test for that in the nestmates changes ... will need to double-check >> and add it if missing!). > > It might bomb in the same way (NULL dereference). I'm currently looking > at some other potential issues in this area... > >> src/hotspot/share/interpreter/bytecodeInterpreter.cpp >> >> Interpreter changes seem fine - mirroring what is done elsewhere. You >> can delete these incorrect comments: >> >> 2576 // This code isn't produced by javac, but could be produced by >> 2577 // another compliant java compiler. >> >> That code path is taken in more circumstances than the author of that >> comment realized. :) > > Done. > >>> Testing: >>> >>> Zero on Linux-x86_64 with the following test set: >>> >>> test/jdk/java/lang/invoke/AccessControlTest.java >>> test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java >>> test/jdk/java/lang/invoke/PrivateInterfaceCall.java >>> test/jdk/java/lang/invoke/SpecialInterfaceCall.java >>> test/jdk/java/lang/reflect/Nestmates >>> test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java >>> test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java >>> test/hotspot/jtreg/runtime/Nestmates >>> >>> I cannot run this through the submit repo since the main Nestmates >>> patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- >>> images build on x86_64. Thoughts? > > FWIW, bootcycle-images build passed on linux x86_64 Zero. > >> I can bundle this in with the nestmate changes when I push them later >> this week. Just send me a pointer to the finalized changeset once its >> finalized. I'll run it all through a final step of testing equivalent >> (actually more than) the submit repo. > > OK, thanks! > > Latest webrev: > http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.03/ > > Thanks, > Severin > >> Thanks, >> David >> >>> Thanks, >>> Severin >>> From stefan.karlsson at oracle.com Tue Jun 5 09:45:32 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 5 Jun 2018 11:45:32 +0200 Subject: RFR: 8204168: Increase small heap sizes in tests to accommodate ZGC In-Reply-To: <102083fa-f2b0-dc2d-7d8b-89a748c2c5db@oracle.com> References: <102083fa-f2b0-dc2d-7d8b-89a748c2c5db@oracle.com> Message-ID: <2e2dba63-3b2a-dc2d-9e86-8b1baec680db@oracle.com> Thanks for reviewing, Erik! StefanK On 2018-06-05 11:27, Erik Helin wrote: > On 05/31/2018 02:32 PM, Stefan Karlsson wrote: >> Hi all, > > Hey Stefan, > >> Please review this patch to increase the heap size for tests that sets >> a small heap size. 
>> >> http://cr.openjdk.java.net/~stefank/8204168/webrev.01 >> https://bugs.openjdk.java.net/browse/JDK-8204168 > > I read through each test in compiler, gc and runtime carefully to check > that the increased heap size wouldn't render the test useless (i.e. I > ensured that the tests still test something). Good news, it seems (to me > at least) that increasing the heap size for all of the above tests will > work fine. Please consider the changes to those tests Reviewed by me, > nice work! > > Unfortunately I don't know the "nsk" tests well enough to review those > changes... :( > > Thanks, > Erik From per.liden at oracle.com Tue Jun 5 09:58:10 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 5 Jun 2018 11:58:10 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> Message-ID: <0b8bd35a-50bd-2711-6fc0-14c648a1e3b3@oracle.com> Hi, [...] >> Small nit in >> est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: >> >> >> +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. >> >> May I suggest s/en explicitly/an explicit/ ? >> Also maybe remove the comment `// forceGC();`, because it might later >> look like your comment commented out an earlier, pre-existing call to >> forceGC(). >> >> Same comment as above for instances003.java, instances001.java, >> instanceCounts001.java. > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 Sorry, I botched the initial fix, here's an updated version. http://hg.openjdk.java.net/zgc/zgc/rev/1e4e97efc975 /Per From sgehwolf at redhat.com Tue Jun 5 09:59:03 2018 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Tue, 05 Jun 2018 11:59:03 +0200 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> Message-ID: <5e90e7fe77371ed326cbd364a027b5af67e207b2.camel@redhat.com> On Tue, 2018-06-05 at 19:46 +1000, David Holmes wrote: > Looks good. > > I'll push this with the nestmate changes later in the week. Thanks! HG exported changeset is here: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/JDK-8203188.export.patch Cheers, Severin > Thanks, > David > > On 5/06/2018 7:40 PM, Severin Gehwolf wrote: > > Hi David, > > > > Thanks for the review! > > > > On Tue, 2018-06-05 at 14:44 +1000, David Holmes wrote: > > > Hi Severin, > > > > > > On 5/06/2018 1:26 AM, Severin Gehwolf wrote: > > > > Hi, > > > > > > > > Could I please get a review of this change adding support for JEP-181 - > > > > a.k.a Nestmates - to Zero. This patch depends on David Holmes' > > > > Nestmates implementation via JDK-8010319. Thanks to David Holmes and > > > > Chris Phillips for their initial reviews prior to this RFR. > > > > > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 > > > > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ > > > > > > src/hotspot/cpu/zero/methodHandles_zero.cpp > > > > > > The change here seems to be an existing bug unrelated to nestmate > > > changes. > > > > Agreed. > > > > > IT also begs the question as to what happens in the same > > > circumstance with a removed static or "special" method? (I thought I had > > > a test for that in the nestmates changes ... 
will need to double-check > > > and add it if missing!). > > > > It might bomb in the same way (NULL dereference). I'm currently looking > > at some other potential issues in this area... > > > > > src/hotspot/share/interpreter/bytecodeInterpreter.cpp > > > > > > Interpreter changes seem fine - mirroring what is done elsewhere. You > > > can delete these incorrect comments: > > > > > > 2576 // This code isn't produced by javac, but could be produced by > > > 2577 // another compliant java compiler. > > > > > > That code path is taken in more circumstances than the author of that > > > comment realized. :) > > > > Done. > > > > > > Testing: > > > > > > > > Zero on Linux-x86_64 with the following test set: > > > > > > > > test/jdk/java/lang/invoke/AccessControlTest.java > > > > test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java > > > > test/jdk/java/lang/invoke/PrivateInterfaceCall.java > > > > test/jdk/java/lang/invoke/SpecialInterfaceCall.java > > > > test/jdk/java/lang/reflect/Nestmates > > > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java > > > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java > > > > test/hotspot/jtreg/runtime/Nestmates > > > > > > > > I cannot run this through the submit repo since the main Nestmates > > > > patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- > > > > images build on x86_64. Thoughts? > > > > FWIW, bootcycle-images build passed on linux x86_64 Zero. > > > > > I can bundle this in with the nestmate changes when I push them later > > > this week. Just send me a pointer to the finalized changeset once its > > > finalized. I'll run it all through a final step of testing equivalent > > > (actually more than) the submit repo. > > > > OK, thanks! > > > > Latest webrev: > > http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.03/ > > > > Thanks, > > Severin > > > > > Thanks, > > > David > > > > > > > Thanks, > > > > Severin > > > > From boris.ulasevich at bell-sw.com Tue Jun 5 10:27:09 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Tue, 5 Jun 2018 13:27:09 +0300 Subject: ARM port consolidation In-Reply-To: References: Message-ID: <2e526647-eba9-968f-6ef9-2c497976c67b@bell-sw.com> Hi Bob, I agree with your proposal. I'll be happy to help you ensure the ARM32 upstream port continues to be functional after the removal of ARM64 component and support it going forward. Boris Ulasevich, BellSoft On 04.06.2018 23:34, Bob Vandette wrote: > During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit > ARM ports and contributed them to OpenJDK. These ports have been used for > years in the embedded and mobile market, making them very stable and > having the benefit of a single source base which can produce both 32 and > 64-bit binaries. The downside of this contribution is that it resulted > in two 64-bit ARM implementations being available in OpenJDK. > > I'd like to propose that we eliminate one of the 64-bit ARM ports and > encourage everyone to enhance and support the remaining 32 and 64 bit > ARM ports. This would avoid the creation of yet another port for these chip > architectures. The reduction of competing ports will allow everyone > to focus their attention on a single 64-bit port rather than diluting > our efforts. This will result in a higher quality and a more performant > implementation. 
> > The community at large (especially RedHat, BellSoft, Linaro and Cavium) > have done a great job of enhancing and keeping the AArch64 port up to > date with current and new Hotspot features. As a result, I propose that > we standardize the 64-bit ARM implementation on this port. > > If there are no objections, I will file a JEP to remove the 64-bit ARM > port sources that reside in jdk/open/src/hotspot/src/cpu/arm > along with any build logic. This will leave the Oracle contributed > 32-bit ARM port and the AArch64 64-bit ARM port. > > Let me know what you all think, > Bob Vandette > > From rkennke at redhat.com Tue Jun 5 10:31:17 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 12:31:17 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> Message-ID: <3aeba3d0-7250-bdf0-a534-2b0ce6992668@redhat.com> Hi Erik, We will set -UseFastJNIAccessors in Shenandoah for now. We might get back to extending the PC list to a range later, but we have more pressing issues to solve at this moment. Interesting question regarding this patch is, do we want to keep the BarrierSetAssembler stuff in JNI fast-get-field code, or do we want to rip it out. I tend to leave it in anyway. Roman > Hi Roman, > > On 2018-06-04 22:49, Roman Kennke wrote: >> Am 04.06.2018 um 22:16 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-06-04 21:42, Roman Kennke wrote: >>>> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >>>>> Hi Roman, >>>>> >>>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>>> Ok, right. Very good catch! >>>>>> >>>>>> This should do it, right? Sorry, I couldn't easily make an >>>>>> incremental >>>>>> diff: >>>>>> >>>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>>> Unfortunately, I think there is one more problem for you. >>>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>>>> loads shot from the fantastic jni fast get field code. But it >>>>> currently >>>>> expects an exact PC match: >>>>> >>>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>> ??? for (int i=0; i>>>> ????? if (speculative_load_pclist[i] == pc) { >>>>> ??????? return slowcase_entry_pclist[i]; >>>>> ????? } >>>>> ??? } >>>>> ??? return (address)-1; >>>>> } >>>>> >>>>> This means that the way this is written now, speculative_load_pclist >>>>> registers the __ pc() right before the access_load_at call. This puts >>>>> constraints on whatever is done inside of access_load_at to only >>>>> speculatively load on the first assembled instruction. >>>>> >>>>> If you imagine a scenario where you have a GC with Brooks pointers >>>>> that >>>>> also uncommits memory (like Shenandoah I presume), then I imagine you >>>>> would need something more here. If you start with a forwarding pointer >>>>> load, then that can trap (which is probably caught by the exact PC >>>>> match). But then there will be a subsequent load of the value in the >>>>> to-space object, which will not be protected. But this is also loaded >>>>> speculatively (as the subsequent safepoint counter check could >>>>> invalidate the result), and could therefore crash the VM unless >>>>> protected, as the signal handler code fails to recognize this is a >>>>> speculative load from jni fast get field. 
>>>>> >>>>> I imagine the solution to this would be to let speculative_load_pclist >>>>> specify a range for fuzzy SIGSEGV matching in the signal handler, >>>>> rather >>>>> than an exact PC (i.e. speculative_load_pclist_start and >>>>> speculative_load_pclist_end). That would give you enough freedom to >>>>> use >>>>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>>>> maintain jni fast get field is *really* worth it. >>>> I are probably right in general. But I also think we are fine with >>>> Shenandoah. Both the fwd ptr load and the field load are constructed >>>> with the same base operand. If the oop is NULL (or invalid memory) it >>>> will blow up on fwdptr load just the same as it would blow up on field >>>> load. We maintain an invariant that the fwd ptr of a valid oop results >>>> in a valid (and equivalent) oop. I therefore think we are fine for now. >>>> Should a GC ever need anything else here, I'd worry about it then. >>>> Until >>>> this happens, let's just hope to never need to touch this code again >>>> ;-) >>> No I'm afraid that is not safe. After loading the forwarding pointer, >>> the thread could be preempted, then any number of GC cycles could pass, >>> which means that the address that the at some point read forwarding >>> pointer points to, could be uncommitted memory. In fact it is unsafe >>> even without uncommitted memory. Because after resolving the jobject to >>> some address in the heap, the thread could get preempted, and any number >>> of GC cycles could pass, causing the forwarding pointer to be read from >>> some address in the heap that no longer is the forwarding pointer of an >>> object, but rather a random integer. This causes the second load to blow >>> up, even without uncommitting memory. >>> >>> Here is an attempt at showing different things that can go wrong: >>> >>> obj = *jobject >>> // preempted for N GC cycles, meaning obj might 1) be a valid pointer to >>> an object, or 2) be a random pointer inside of the heap or outside of >>> the heap >>> >>> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >>> pointer, no longer representing the forwarding pointer, or 3) read a >>> consistent forwarding pointer >>> >>> // preempted for N GC cycles, causing forward_pointer to point at pretty >>> much anything >>> >>> result = *(forward_pointer + offset) // may 1) read a valid primitive >>> value, if previous two loads were not messed up, or 2) read some random >>> value that no longer corresponds to the object field, or 3) crash >>> because either the forwarding pointer did point at something valid that >>> subsequently got relocated and uncommitted before the load hits, or >>> because the forwarding pointer never pointed to anything valid in the >>> first place, because the forwarding pointer load read a random pointer >>> due to the object relocating after the jobject was resolved. >>> >>> The summary is that both loads need protection due to how the thread in >>> native state runs freely without necessarily caring about the GC running >>> any number of GC cycles concurrently, making the memory super slippery, >>> which risks crashing the VM without the proper protection. >> AWW WTF!? We are in native state in this code? > > Yes. This is one of the most dangerous code paths we have in the VM I > think. 
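A sketch of the range-based lookup Erik proposes above, replacing the exact-PC match in JNI_FastGetField::find_slowcase_pc. The _start/_end array names come from the mail; the loop bound, array sizing and the local address typedef are assumptions added to make the fragment self-contained, so this is an illustration of the idea rather than an actual patch.

// Sketch only: exact-PC matching widened to a PC range per speculative load.
typedef unsigned char* address;             // stand-in for HotSpot's address typedef

static const int max_speculative_loads = 32;   // assumed capacity
static address speculative_load_pclist_start[max_speculative_loads];
static address speculative_load_pclist_end[max_speculative_loads];
static address slowcase_entry_pclist[max_speculative_loads];
static int     count = 0;

// Any faulting instruction inside a registered range (e.g. a Brooks
// forwarding-pointer load followed by the actual field load) is recognized
// as a speculative load and redirected to the slow-case entry.
address find_slowcase_pc(address pc) {
  for (int i = 0; i < count; i++) {
    if (pc >= speculative_load_pclist_start[i] &&
        pc <  speculative_load_pclist_end[i]) {
      return slowcase_entry_pclist[i];
    }
  }
  return (address)-1;   // not a speculative load from jni fast get field
}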
> >> It might be easier to just call bsa->resolve_for_read() (which emits the >> fwd ptr load), then issue another: >> >> speculative_load_pclist[count] = __ pc(); >> >> need to juggle with the counter and double-emit slowcase_entry_pclist, >> and all this conditionally for Shenandoah. Gaa. > > I think that by just having the speculative load PC list take a range as > opposed to a precise PC, and check that a given PC is in that range, and > not just exactly equal to a PC, the problem is solved for everyone. > >> Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. > > Yeah, sometimes you wonder if it's really worth the maintenance to keep > this thing. > >> Funny how we had this code in Shenandoah literally for years, and >> nobody's ever tripped over it. > > Yeah it is a rather nasty race to detect. > >> It's one of those cases where I almost suspect it's been done in Java1.0 >> when lots of JNI code was in use because some stuff couldn't be done in >> fast in Java, but nowadays doesn't really make a difference. *Sigh* > > :) > >>>>>> Unfortunately, I cannot really test it because of: >>>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>>> >>>>>> >>>>>> >>>>> That is unfortunate. If I were you, I would not dare to change >>>>> anything >>>>> in jni fast get field without testing it - it is very error prone. >>>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>>> else resolve it myself. >>> Yeah. >>> >>>> Can I consider this change reviewed by you? >>> I think we should agree about the safety of doing this for Shenandoah in >>> particular first. I still think we need the PC range as opposed to exact >>> PC to be caught in the signal handler for this to be safe for your GC >>> algorithm. >> >> Yeah, I agree. I need to think this through a little bit. > > Yeah. Still think the PC range check solution should do the trick. > >> Thanks for pointing out this bug. I can already see nightly builds >> suddenly starting to fail over it, now that it's known :-) > > No problem! > > Thanks, > /Erik > >> Roman >> >> > From erik.helin at oracle.com Tue Jun 5 11:37:52 2018 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 5 Jun 2018 13:37:52 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> Message-ID: <4c3ab68f-8752-a0d9-bc62-50f5cad2ef41@oracle.com> On 06/05/2018 09:21 AM, Per Liden wrote:> On 06/04/2018 03:47 PM, Erik Helin wrote: >> Could you please change the comment to say x86_64 or x64 (similar to >> other such comments in that file)? x86 is a bit ambiguous (could mean >> a 32-bit x86 CPU). > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/a81777811000 Looks good, thanks. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> Small nit in src/hotspot/share/compiler/oopMap.cpp: >> >> +??????? if (ZGC_ONLY(!UseZGC &&) >> +??????????? ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || >> +???????????? !Universe::heap()->is_in_or_null(*loc))) { >> >> Do we really need ZGC_ONLY around !UseZGC && here? The code is in an >> #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC >> will be just be false if ZGC isn't compiled, right? Or have I gotten >> this backwards? > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/3f6db622400c Also good, thanks. 
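A rough sketch of the point settled in the oopMap.cpp exchange above: because the UseZGC flag still exists and is simply false in a build without ZGC, the ASSERT-only sanity check can test the flag directly and the ZGC_ONLY wrapper adds nothing. The function name and the two helpers below are illustrative stand-ins, not the actual oopMap.cpp code.

// Illustration only; heap_is_in_or_null stands in for Universe::heap()->is_in_or_null().
#include <cassert>
#include <cstdint>

static bool UseZGC = false;                   // false whenever ZGC is not built/enabled
static bool heap_is_in_or_null(void* p) {
  return p == nullptr;                        // placeholder; the real check consults the heap
}

void verify_oop_location(void** loc) {
#ifdef ASSERT
  // Skipped for ZGC, whose colored pointers would fail a naive range check.
  if (!UseZGC &&
      ((((uintptr_t)loc & (sizeof(*loc) - 1)) != 0) ||   // misaligned slot
       !heap_is_in_or_null(*loc))) {                     // value outside the heap
    assert(false, "found broken oop location");
  }
#endif
}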
On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> Regarding src/hotspot/share/gc/shared/gcName.hpp, should we introduce >> a GCName class so that we can limit the scope of the Z och NA symbols? >> (Then GCNameHelper::to_string could also be moved into that class). >> Could also be done as a follow-up patch (if so, please file a bug). > > I agree, filed an RFE. > > https://bugs.openjdk.java.net/browse/JDK-8204324 Ok, lets tackle this in a separate patch, thanks for filing the RFE. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> Small nit in src/hotspot/share/jfr/metadata/metadata.xml: >> - >> \ No newline at end of file >> + >> >> Did you happen to add a newline here (I don't know why there should >> not be a newline, but the comment indicates so)? > > The "No newline at end of file" comment is actually generated by hg diff > and is not in the file itself. I think vim added it automatically, and I > think we probably should have a new line there, but I'll revert it from > this change. > > http://hg.openjdk.java.net/zgc/zgc/rev/a8e1aec31efa Ah, alright, I thought it was a comment in the source code file. Thanks for reverting this part of the patch, we can discuss later if we can (should?) add a newline to that file. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> Small nit in src/hotspot/share/opto/node.hpp: >> >> ??? virtual?????? uint? ideal_reg() const; >> + >> ??#ifndef PRODUCT >> >> Was the extra newline here added intentionally? > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/6d6259917ded Looks good, thanks. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an >> include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like >> >> +#if INCLUDE_ZGC >> +? #include "gc/z/c2/zGlobals.hpp" >> +#endif >> >> Or did I miss an include somewhere (wouldn't be the first time :)? > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/b2e3b7c012af Also good, thanks. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> In src/hotspot/share/prims/whitebox.cpp, do we need the #if >> INCLUDE_ZGC guards or is `if (UseZGC)` enough? > > This is certainly up for discussion, but the model I think we've been > shooting for is that we don't have INCLUDE_ZGC only if there's a !UseZGC > condition. Some of the "if (UseZGC)" then have ZGC specific code inside > the scope, so you need the INCLUDE_ZGC anyway. In this particular case > we don't have any ZGC specific code in the true path, but we might in > the future. > > This is the model we're trying to follow, but as I said, we can discuss > if this is good or not. Hmm, ok, I see what you mean. I agree that for !UseZGC we should skip INCLUDE_ZGC guards and I see that you for the `if (UseZGC)` case. I would probably have skipped the guards even for this `if (UseZGC)` case, but I'm fine to leave them in. On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we need >> the #if INCLUDE_ZGC guard? > > Fixed this and another similar thing in c1_LIRAssembler_x86.cpp. > > http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 Good, thanks. >>> * ZGC Testing: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> Again, great work here, particularly with upstreaming so many patches >> ahead of this one. 
I only have two small comments regarding the test >> changes: >> >> Small nit in >> est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: >> >> >> +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. >> >> May I suggest s/en explicitly/an explicit/ ? >> Also maybe remove the comment `// forceGC();`, because it might later >> look like your comment commented out an earlier, pre-existing call to >> forceGC(). >> >> Same comment as above for instances003.java, instances001.java, >> instanceCounts001.java. > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 The updated version in your follow-up email looks good :) On 06/05/2018 09:21 AM, Per Liden wrote: > On 06/04/2018 03:47 PM, Erik Helin wrote: >> In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you probably >> want to remove "@bug???? 4530538", the empty "@summary" and "@author >> Mandy Chung" > > Fixed. > > http://hg.openjdk.java.net/zgc/zgc/rev/ff780fec8423 Also good, thanks. The shared parts looks good to me now, consider those parts Reviewed by me (but don't count me as a formal reviewer for the C2 parts, someone with more C2 experience needs to look at those changes). Thanks, Erik From per.liden at oracle.com Tue Jun 5 11:48:56 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 5 Jun 2018 13:48:56 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <4c3ab68f-8752-a0d9-bc62-50f5cad2ef41@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> <4c3ab68f-8752-a0d9-bc62-50f5cad2ef41@oracle.com> Message-ID: <3400f89d-e8d0-a6c5-d3be-0bd614aa5cf9@oracle.com> On 06/05/2018 01:37 PM, Erik Helin wrote: > On 06/05/2018 09:21 AM, Per Liden wrote:> On 06/04/2018 03:47 PM, Erik > Helin wrote: >>> Could you please change the comment to say x86_64 or x64 (similar to >>> other such comments in that file)? x86 is a bit ambiguous (could mean >>> a 32-bit x86 CPU). >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/a81777811000 > > Looks good, thanks. > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> Small nit in src/hotspot/share/compiler/oopMap.cpp: >>> >>> +??????? if (ZGC_ONLY(!UseZGC &&) >>> +??????????? ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || >>> +???????????? !Universe::heap()->is_in_or_null(*loc))) { >>> >>> Do we really need ZGC_ONLY around !UseZGC && here? The code is in an >>> #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC >>> will be just be false if ZGC isn't compiled, right? Or have I gotten >>> this backwards? >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/3f6db622400c > > Also good, thanks. > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> Regarding src/hotspot/share/gc/shared/gcName.hpp, should we introduce >>> a GCName class so that we can limit the scope of the Z och NA >>> symbols? (Then GCNameHelper::to_string could also be moved into that >>> class). Could also be done as a follow-up patch (if so, please file a >>> bug). >> >> I agree, filed an RFE. >> >> https://bugs.openjdk.java.net/browse/JDK-8204324 > > Ok, lets tackle this in a separate patch, thanks for filing the RFE. 
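As a rough illustration of what the follow-up RFE (JDK-8204324) asks for, the free-standing enum could be wrapped in a GCName class together with the name lookup. The Z and NA enumerators and GCNameHelper::to_string are taken from the mail; the remaining enumerators and the returned strings are assumptions, not the real gcName.hpp contents.

// Sketch only: scope the enumerators and the to_string mapping in one class,
// so short names like Z and NA no longer leak into the enclosing namespace.
class GCName {
public:
  enum Name {            // illustrative subset, not HotSpot's full list
    Serial, Parallel, CMS, G1, Epsilon, Z, NA
  };

  // Previously GCNameHelper::to_string(); kept next to the enum it interprets.
  static const char* to_string(Name name) {
    switch (name) {
      case Serial:   return "Serial";
      case Parallel: return "Parallel";
      case CMS:      return "CMS";
      case G1:       return "G1";
      case Epsilon:  return "Epsilon";
      case Z:        return "Z";
      case NA:       return "N/A";
      default:       return "Unknown";
    }
  }
};
// Usage: GCName::to_string(GCName::Z) yields "Z".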
> > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> Small nit in src/hotspot/share/jfr/metadata/metadata.xml: >>> - >>> \ No newline at end of file >>> + >>> >>> Did you happen to add a newline here (I don't know why there should >>> not be a newline, but the comment indicates so)? >> >> The "No newline at end of file" comment is actually generated by hg >> diff and is not in the file itself. I think vim added it >> automatically, and I think we probably should have a new line there, >> but I'll revert it from this change. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/a8e1aec31efa > > Ah, alright, I thought it was a comment in the source code file. Thanks > for reverting this part of the patch, we can discuss later if we can > (should?) add a newline to that file. > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> Small nit in src/hotspot/share/opto/node.hpp: >>> >>> ??? virtual?????? uint? ideal_reg() const; >>> + >>> ??#ifndef PRODUCT >>> >>> Was the extra newline here added intentionally? >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/6d6259917ded > > Looks good, thanks. > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an >>> include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like >>> >>> +#if INCLUDE_ZGC >>> +? #include "gc/z/c2/zGlobals.hpp" >>> +#endif >>> >>> Or did I miss an include somewhere (wouldn't be the first time :)? >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/b2e3b7c012af > > Also good, thanks. > > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> In src/hotspot/share/prims/whitebox.cpp, do we need the #if >>> INCLUDE_ZGC guards or is `if (UseZGC)` enough? >> >> This is certainly up for discussion, but the model I think we've been >> shooting for is that we don't have INCLUDE_ZGC only if there's a >> !UseZGC condition. Some of the "if (UseZGC)" then have ZGC specific >> code inside the scope, so you need the INCLUDE_ZGC anyway. In this >> particular case we don't have any ZGC specific code in the true path, >> but we might in the future. >> >> This is the model we're trying to follow, but as I said, we can >> discuss if this is good or not. > > Hmm, ok, I see what you mean. I agree that for !UseZGC we should skip > INCLUDE_ZGC guards and I see that you for the `if (UseZGC)` case. I > would probably have skipped the guards even for this `if (UseZGC)` case, > but I'm fine to leave them in. > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we need >>> the #if INCLUDE_ZGC guard? >> >> Fixed this and another similar thing in c1_LIRAssembler_x86.cpp. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 > > Good, thanks. > >>>> * ZGC Testing: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>> >>> Again, great work here, particularly with upstreaming so many patches >>> ahead of this one. I only have two small comments regarding the test >>> changes: >>> >>> Small nit in >>> est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: >>> >>> >>> +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. >>> >>> May I suggest s/en explicitly/an explicit/ ? 
>>> Also maybe remove the comment `// forceGC();`, because it might later >>> look like your comment commented out an earlier, pre-existing call to >>> forceGC(). >>> >>> Same comment as above for instances003.java, instances001.java, >>> instanceCounts001.java. >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 > > The updated version in your follow-up email looks good :) > > On 06/05/2018 09:21 AM, Per Liden wrote: >> On 06/04/2018 03:47 PM, Erik Helin wrote: >>> In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you >>> probably want to remove "@bug???? 4530538", the empty "@summary" and >>> "@author Mandy Chung" >> >> Fixed. >> >> http://hg.openjdk.java.net/zgc/zgc/rev/ff780fec8423 > > Also good, thanks. > > The shared parts looks good to me now, consider those parts Reviewed by Thanks for reviewing, Erik! > me (but don't count me as a formal reviewer for the C2 parts, someone > with more C2 experience needs to look at those changes). Rickard will be looking at the C2 parts (and maybe others too). cheers, Per From jini.george at oracle.com Tue Jun 5 12:50:36 2018 From: jini.george at oracle.com (Jini George) Date: Tue, 5 Jun 2018 18:20:36 +0530 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> Hi Per, I have looked at only the SA portion. Some comments on that: ==> share/classes/sun/jvm/hotspot/oops/ObjectHeap.java The method collectLiveRegions() would need to include code to iterate through the Zpages, and collect the live regions. ==> share/classes/sun/jvm/hotspot/HSDB.java The addAnnotation() method needs to handle the case of collHeap being an instance of ZCollectedHeap to avoid "Unknown generation" being displayed while displaying the Stack Memory for a mutator thread. ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java To the GCCause enum, it would be good to add the equivalents of the following GC causes. (though at this point, GCCause seems unused within SA). _z_timer, _z_warmup, _z_allocation_rate, _z_allocation_stall, _z_proactive, ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java Similarly, it would be good to add the equivalent of 'Z' in the GCName enum. ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java Again, it would be good to add 'ZOperation' to the VMOps enum (though it looks like it is already not in sync). ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java The run() method would need to handle the ZGC case too to avoid the unknown CollectedHeap type exception with jhsdb jmap -heap: Also, the printGCAlgorithm() method would need to be updated to read in the UseZGC flag to avoid the default "Mark Sweep Compact GC" being displayed with jhsdb jmap -heap. ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java It would be great if printOn() (for the clhsdb command 'universe') would print the address range of the java heap as we have in other GCs (with ZAddressSpaceStart and ZAddressSpaceEnd?) ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java Please modify the above test to include zgc or include a separate SA test to test the universe output for zgc. Thank you, Jini. On 6/2/2018 3:11 AM, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency > Garbage Collector (Experimental) > > Please see the JEP for more information about the project. 
The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > ? This patch contains the actual ZGC implementation, the new unit tests > and other changes needed in HotSpot. > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > ? This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, > with a short summary describing each group. > > * Build support - Making ZGC an optional feature. > > ? make/autoconf/hotspot.m4 > ? make/hotspot/lib/JvmFeatures.gmk > ? src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does > not currently offer a way to easily break this out). > > ? src/hotspot/cpu/x86/x86.ad > ? src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific > code, most of which is guarded behind a #if INCLUDE_ZGC and/or if > (UseZGC) condition. There should only be two logic changes (one in > idealKit.cpp and one in node.cpp) that are still active when ZGC is > disabled. We believe these are low risk changes and should not introduce > any real change i behavior when using other GCs. > > ? src/hotspot/share/adlc/formssel.cpp > ? src/hotspot/share/opto/* > ? src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. > > ? src/hotspot/share/gc/shared/* > ? src/hotspot/share/runtime/vmStructs.cpp > ? src/hotspot/share/runtime/vm_operations.hpp > ? src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > ? src/hotspot/share/gc/z/* > ? src/hotspot/cpu/x86/gc/z/* > ? src/hotspot/os_cpu/linux_x86/gc/z/* > ? test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > ? src/hotspot/share/jfr/* > ? src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > ? src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > ? src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > ? src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > ? src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification because > it do not makes sense or is not applicable to a GC using load barriers. > > ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > ? src/hotspot/share/compiler/oopMap.cpp > ? src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a > hack. However, this will go away in the future, when we have the next > iteration of C2's load barriers in place (aka "C2 late barrier insertion"). > > ? src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is > changed in the future. > > ? 
src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > ? src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > ? A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > ? We have been continuously been running a number stress tests > throughout the development, these include: > > ??? specjbb2000 > ??? specjbb2005 > ??? specjbb2015 > ??? specjvm98 > ??? specjvm2008 > ??? dacapo2009 > ??? test/hotspot/jtreg/gc/stress/gcold > ??? test/hotspot/jtreg/gc/stress/systemgc > ??? test/hotspot/jtreg/gc/stress/gclocker > ??? test/hotspot/jtreg/gc/stress/gcbasher > ??? test/hotspot/jtreg/gc/stress/finalizer > ??? Kitchensink > > > Thanks! > > /Per, Stefan & the ZGC team From doug.simon at oracle.com Tue Jun 5 13:34:06 2018 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 5 Jun 2018 15:34:06 +0200 Subject: RFR (S) 8204237: Clean up incorrectly included .inline.hpp files from jvmciJavaClasses.hpp In-Reply-To: <5498e07a-8899-72fe-6809-5a8b12f696c5@oracle.com> References: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> <2463b3b2-317a-5e2e-4fb6-3cc113b8b576@oracle.com> <5498e07a-8899-72fe-6809-5a8b12f696c5@oracle.com> Message-ID: The changes look ok to me. I don't think there's any need for an "up stream" push Vladimir as these are only JVMCI changes, not Graal changes. -Doug > On 4 Jun 2018, at 20:41, coleen.phillimore at oracle.com wrote: > > > Thanks Vladimir and for including the graal-dev mailing list. > Coleen > > On 6/4/18 1:38 PM, Vladimir Kozlov wrote: >> Looks good to me. >> >> We need review from Labs and push it up-stream to Lab's jvmci repo. >> >> Thanks, >> Vladimir >> >> On 6/4/18 10:12 AM, coleen.phillimore at oracle.com wrote: >>> Summary: Reexpand macro to provide non-inline functions. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8204237.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8204237 >>> >>> Ran mach5 hs-tier1-2 on 4 Oracle platforms. There are no target-dependent changes. >>> >>> Thanks, >>> Coleen > From coleen.phillimore at oracle.com Tue Jun 5 13:43:39 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 09:43:39 -0400 Subject: RFR (S) 8204237: Clean up incorrectly included .inline.hpp files from jvmciJavaClasses.hpp In-Reply-To: References: <05e83d76-d784-6360-3b88-06c5db1848c2@oracle.com> <2463b3b2-317a-5e2e-4fb6-3cc113b8b576@oracle.com> <5498e07a-8899-72fe-6809-5a8b12f696c5@oracle.com> Message-ID: <17aac388-30ce-d760-4f7d-5a6f9ddee596@oracle.com> Thank you for reviewing, Doug!? I think this should merge fine with the graal changes going in either direction. Coleen On 6/5/18 9:34 AM, Doug Simon wrote: > The changes look ok to me. > > I don't think there's any need for an "up stream" push Vladimir as these are only JVMCI changes, not Graal changes. > > -Doug > >> On 4 Jun 2018, at 20:41, coleen.phillimore at oracle.com wrote: >> >> >> Thanks Vladimir and for including the graal-dev mailing list. >> Coleen >> >> On 6/4/18 1:38 PM, Vladimir Kozlov wrote: >>> Looks good to me. >>> >>> We need review from Labs and push it up-stream to Lab's jvmci repo. 
>>> >>> Thanks, >>> Vladimir >>> >>> On 6/4/18 10:12 AM, coleen.phillimore at oracle.com wrote: >>>> Summary: Reexpand macro to provide non-inline functions. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8204237.01/webrev >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8204237 >>>> >>>> Ran mach5 hs-tier1-2 on 4 Oracle platforms. There are no target-dependent changes. >>>> >>>> Thanks, >>>> Coleen From coleen.phillimore at oracle.com Tue Jun 5 13:44:52 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 09:44:52 -0400 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: Hi, I was going to review the runtime pieces of this change but there are none!? Nicely factored and thank you for upstreaming the runtime changes to support this already. Coleen On 6/1/18 5:41 PM, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable > Low-Latency Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > ? This patch contains the actual ZGC implementation, the new unit > tests and other changes needed in HotSpot. > > * ZGC Testing: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > ? This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, > with a short summary describing each group. > > * Build support - Making ZGC an optional feature. > > ? make/autoconf/hotspot.m4 > ? make/hotspot/lib/JvmFeatures.gmk > ? src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc > does not currently offer a way to easily break this out). > > ? src/hotspot/cpu/x86/x86.ad > ? src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific > code, most of which is guarded behind a #if INCLUDE_ZGC and/or if > (UseZGC) condition. There should only be two logic changes (one in > idealKit.cpp and one in node.cpp) that are still active when ZGC is > disabled. We believe these are low risk changes and should not > introduce any real change i behavior when using other GCs. > > ? src/hotspot/share/adlc/formssel.cpp > ? src/hotspot/share/opto/* > ? src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. > > ? src/hotspot/share/gc/shared/* > ? src/hotspot/share/runtime/vmStructs.cpp > ? src/hotspot/share/runtime/vm_operations.hpp > ? src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > ? src/hotspot/share/gc/z/* > ? src/hotspot/cpu/x86/gc/z/* > ? src/hotspot/os_cpu/linux_x86/gc/z/* > ? test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > ? 
src/hotspot/share/jfr/* > ? src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > ? src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > ? src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > ? src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > ? src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification > because it do not makes sense or is not applicable to a GC using load > barriers. > > ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > ? src/hotspot/share/compiler/oopMap.cpp > ? src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a > hack. However, this will go away in the future, when we have the next > iteration of C2's load barriers in place (aka "C2 late barrier > insertion"). > > ? src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing > is changed in the future. > > ? src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > ? src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > ? A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > ? We have been continuously been running a number stress tests > throughout the development, these include: > > ??? specjbb2000 > ??? specjbb2005 > ??? specjbb2015 > ??? specjvm98 > ??? specjvm2008 > ??? dacapo2009 > ??? test/hotspot/jtreg/gc/stress/gcold > ??? test/hotspot/jtreg/gc/stress/systemgc > ??? test/hotspot/jtreg/gc/stress/gclocker > ??? test/hotspot/jtreg/gc/stress/gcbasher > ??? test/hotspot/jtreg/gc/stress/finalizer > ??? Kitchensink > > > Thanks! > > /Per, Stefan & the ZGC team From coleen.phillimore at oracle.com Tue Jun 5 13:46:35 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 09:46:35 -0400 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <472f8872-62f4-326a-6fe6-87e44a9f125c@oracle.com> Ok, I stand corrected.? The few runtime changes look fine to me. Coleen On 6/5/18 9:44 AM, coleen.phillimore at oracle.com wrote: > > Hi, I was going to review the runtime pieces of this change but there > are none!? Nicely factored and thank you for upstreaming the runtime > changes to support this already. > > Coleen > > On 6/1/18 5:41 PM, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. 
>> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two >> webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> ? This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> ? This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> ? make/autoconf/hotspot.m4 >> ? make/hotspot/lib/JvmFeatures.gmk >> ? src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> ? src/hotspot/cpu/x86/x86.ad >> ? src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change i behavior when using other GCs. >> >> ? src/hotspot/share/adlc/formssel.cpp >> ? src/hotspot/share/opto/* >> ? src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> ? src/hotspot/share/gc/shared/* >> ? src/hotspot/share/runtime/vmStructs.cpp >> ? src/hotspot/share/runtime/vm_operations.hpp >> ? src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> ? src/hotspot/share/gc/z/* >> ? src/hotspot/cpu/x86/gc/z/* >> ? src/hotspot/os_cpu/linux_x86/gc/z/* >> ? test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> ? src/hotspot/share/jfr/* >> ? src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> ? src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> ? src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> ? src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> ? src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> ? src/hotspot/share/compiler/oopMap.cpp >> ? src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of >> a hack. However, this will go away in the future, when we have the >> next iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> ? src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> ? src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used >> in ZHash. >> >> ? 
src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> ? A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> ? We have been continuously been running a number stress tests >> throughout the development, these include: >> >> ??? specjbb2000 >> ??? specjbb2005 >> ??? specjbb2015 >> ??? specjvm98 >> ??? specjvm2008 >> ??? dacapo2009 >> ??? test/hotspot/jtreg/gc/stress/gcold >> ??? test/hotspot/jtreg/gc/stress/systemgc >> ??? test/hotspot/jtreg/gc/stress/gclocker >> ??? test/hotspot/jtreg/gc/stress/gcbasher >> ??? test/hotspot/jtreg/gc/stress/finalizer >> ??? Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team > From adam.farley at uk.ibm.com Tue Jun 5 13:46:08 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Tue, 5 Jun 2018 14:46:08 +0100 Subject: RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers Message-ID: Hi All, Native memory allocation for DBBs is tracked in java.nio.Bits, but that only includes what the user thinks they are allocating. When the VM adds extra memory to the allocation amount this extra bit is not represented in the Bits total. A cursory glance shows, minimum, that we round the requested memory quantity up to the heap word size in the Unsafe.allocateMemory code, and something to do with nmt_header_size in os:malloc() (os.cpp) too. On its own, and in small quantities, align_up(sz, HeapWordSize) isn't that big of an issue. But when you allocate a lot of DBBs, and coupled with the nmt_header_size business, it makes the Bits values wrong. The more DBB allocations, the more inaccurate those numbers will be. To get the "+X", it seems to me that the best option would be to introduce an native method in Bits that fetches "X" directly from Hotspot, using the same code that Hotspot uses (so we'd have to abstract-out the Hotspot logic that adds X to the memory quantity). This way, anyone modifying the Hotspot logic won't risk rendering the Bits logic wrong again. That's only one way to fix the accuracy problem here though. Suggestions welcome. Best Regards Adam Farley Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From rkennke at redhat.com Tue Jun 5 14:03:00 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 16:03:00 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <3400f89d-e8d0-a6c5-d3be-0bd614aa5cf9@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> <4c3ab68f-8752-a0d9-bc62-50f5cad2ef41@oracle.com> <3400f89d-e8d0-a6c5-d3be-0bd614aa5cf9@oracle.com> Message-ID: Hi Per and all, I would like to review the changeset(s), but I see that many issues have already been addressed. Would it be possible to post an updated webrev, so that I see the most up-to-date version? 
Thanks, Roman > On 06/05/2018 01:37 PM, Erik Helin wrote: >> On 06/05/2018 09:21 AM, Per Liden wrote:> On 06/04/2018 03:47 PM, Erik >> Helin wrote: >>>> Could you please change the comment to say x86_64 or x64 (similar to >>>> other such comments in that file)? x86 is a bit ambiguous (could >>>> mean a 32-bit x86 CPU). >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/a81777811000 >> >> Looks good, thanks. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> Small nit in src/hotspot/share/compiler/oopMap.cpp: >>>> >>>> +??????? if (ZGC_ONLY(!UseZGC &&) >>>> +??????????? ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || >>>> +???????????? !Universe::heap()->is_in_or_null(*loc))) { >>>> >>>> Do we really need ZGC_ONLY around !UseZGC && here? The code is in an >>>> #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC >>>> will be just be false if ZGC isn't compiled, right? Or have I gotten >>>> this backwards? >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/3f6db622400c >> >> Also good, thanks. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> Regarding src/hotspot/share/gc/shared/gcName.hpp, should we >>>> introduce a GCName class so that we can limit the scope of the Z och >>>> NA symbols? (Then GCNameHelper::to_string could also be moved into >>>> that class). Could also be done as a follow-up patch (if so, please >>>> file a bug). >>> >>> I agree, filed an RFE. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8204324 >> >> Ok, lets tackle this in a separate patch, thanks for filing the RFE. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> Small nit in src/hotspot/share/jfr/metadata/metadata.xml: >>>> - >>>> \ No newline at end of file >>>> + >>>> >>>> Did you happen to add a newline here (I don't know why there should >>>> not be a newline, but the comment indicates so)? >>> >>> The "No newline at end of file" comment is actually generated by hg >>> diff and is not in the file itself. I think vim added it >>> automatically, and I think we probably should have a new line there, >>> but I'll revert it from this change. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/a8e1aec31efa >> >> Ah, alright, I thought it was a comment in the source code file. >> Thanks for reverting this part of the patch, we can discuss later if >> we can (should?) add a newline to that file. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> Small nit in src/hotspot/share/opto/node.hpp: >>>> >>>> ??? virtual?????? uint? ideal_reg() const; >>>> + >>>> ??#ifndef PRODUCT >>>> >>>> Was the extra newline here added intentionally? >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/6d6259917ded >> >> Looks good, thanks. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an >>>> include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like >>>> >>>> +#if INCLUDE_ZGC >>>> +? #include "gc/z/c2/zGlobals.hpp" >>>> +#endif >>>> >>>> Or did I miss an include somewhere (wouldn't be the first time :)? >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/b2e3b7c012af >> >> Also good, thanks. 
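(As an aside, since the next question touches the same point: the guard pattern under discussion combines a build-time and a run-time switch. A minimal sketch, not taken from the actual patch:

    #if INCLUDE_ZGC              // build-time: is ZGC compiled into this JVM at all?
      if (UseZGC) {              // run-time: did the user enable ZGC?
        // code referencing ZGC-only symbols, e.g. ZAddressMetadataShift
      }
    #endif

As noted above, the run-time flag is simply false when ZGC is not compiled in, so the #if is mainly needed when the guarded code references symbols that only exist in a ZGC-enabled build.)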
>> >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> In src/hotspot/share/prims/whitebox.cpp, do we need the #if >>>> INCLUDE_ZGC guards or is `if (UseZGC)` enough? >>> >>> This is certainly up for discussion, but the model I think we've been >>> shooting for is that we don't have INCLUDE_ZGC only if there's a >>> !UseZGC condition. Some of the "if (UseZGC)" then have ZGC specific >>> code inside the scope, so you need the INCLUDE_ZGC anyway. In this >>> particular case we don't have any ZGC specific code in the true path, >>> but we might in the future. >>> >>> This is the model we're trying to follow, but as I said, we can >>> discuss if this is good or not. >> >> Hmm, ok, I see what you mean. I agree that for !UseZGC we should skip >> INCLUDE_ZGC guards and I see that you for the `if (UseZGC)` case. I >> would probably have skipped the guards even for this `if (UseZGC)` >> case, but I'm fine to leave them in. >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we >>>> need the #if INCLUDE_ZGC guard? >>> >>> Fixed this and another similar thing in c1_LIRAssembler_x86.cpp. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 >> >> Good, thanks. >> >>>>> * ZGC Testing: >>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> Again, great work here, particularly with upstreaming so many >>>> patches ahead of this one. I only have two small comments regarding >>>> the test changes: >>>> >>>> Small nit in >>>> est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: >>>> >>>> >>>> +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. >>>> >>>> May I suggest s/en explicitly/an explicit/ ? >>>> Also maybe remove the comment `// forceGC();`, because it might >>>> later look like your comment commented out an earlier, pre-existing >>>> call to forceGC(). >>>> >>>> Same comment as above for instances003.java, instances001.java, >>>> instanceCounts001.java. >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 >> >> The updated version in your follow-up email looks good :) >> >> On 06/05/2018 09:21 AM, Per Liden wrote: >>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>> In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you >>>> probably want to remove "@bug???? 4530538", the empty "@summary" and >>>> "@author Mandy Chung" >>> >>> Fixed. >>> >>> http://hg.openjdk.java.net/zgc/zgc/rev/ff780fec8423 >> >> Also good, thanks. >> >> The shared parts looks good to me now, consider those parts Reviewed by > > Thanks for reviewing, Erik! > >> me (but don't count me as a formal reviewer for the C2 parts, someone >> with more C2 experience needs to look at those changes). > > Rickard will be looking at the C2 parts (and maybe others too). 
> > cheers, > Per From bob.vandette at oracle.com Tue Jun 5 14:05:29 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Tue, 5 Jun 2018 10:05:29 -0400 Subject: ARM port consolidation In-Reply-To: <7aae9027-266d-46e0-0df5-bc74d6530af5@redhat.com> References: <7aae9027-266d-46e0-0df5-bc74d6530af5@redhat.com> Message-ID: <27CED1F9-5C87-43BF-A00B-53A5829ADD75@oracle.com> > On Jun 5, 2018, at 5:27 AM, Andrew Haley wrote: > > On 06/04/2018 09:34 PM, Bob Vandette wrote: >> The community at large (especially RedHat, BellSoft, Linaro and Cavium) >> have done a great job of enhancing and keeping the AArch64 port up to >> date with current and new Hotspot features. As a result, I propose that >> we standardize the 64-bit ARM implementation on this port. >> >> If there are no objections, I will file a JEP to remove the 64-bit ARM >> port sources that reside in jdk/open/src/hotspot/src/cpu/arm >> along with any build logic. This will leave the Oracle contributed >> 32-bit ARM port and the AArch64 64-bit ARM port. > > Sounds good to me. Over to practical considerations: is there some > code we should look at porting over to the tAArch64 port? Minimal > VM, perhaps? I would like to keep this effort as a straight forward code deletion exercise. Adding Minimal VM support is beyond the scope of what I had in mind. We no longer regularly build the minimal VM and would prefer to leave this work to someone that has a vested interest in using and supporting it. It appears that there has been some attempt to provide ifdefs in the aarch64 source directory for this but I have no idea if this builds today or if you can even support building the client only VM. I?d be willing to work with someone during the code removal to include minimal support at that time but I think that work should be handled separately. Bob. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Tue Jun 5 14:07:40 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 16:07:40 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: <5B155A4B.7020009@oracle.com> References: <5B155A4B.7020009@oracle.com> Message-ID: Hi Erik, >> JDK-8199417 added better modularization for interpreter barriers. >> Shenandoah and possibly future GCs also need barriers for primitive >> access. >> >> Some notes on implementation: >> - float/double/long access produced some headaches for the following >> reasons: >> >> ?? - float and double would either take XMMRegister which is not >> compatible with Register >> ?? - or load-from/store-to the floating point stack (see >> MacroAssembler::load/store_float/double) >> ?? - long access on x86_32 would load-into/store-from 2 registers, or >> else use a trick via the floating point stack to do atomic access >> >> None of this seemed easy/nice to do with the API. I helped myself by >> accepting noreg as dst/src argument, which means the corresponding tos >> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >> xmm0/float-stack in case of float/double or the double-reg/float-stack >> in case of long/32bit, which is all that we ever need. > > It is indeed a bit painful that in hotspot, XMMRegister is not a > Register (unlike the Graal implementation). 
And I think I agree that if > it is indeed only ever needed by ToS, then this is the preferable > solution to having two almost identicaly APIs - one for integral types > and one for floating point types. It beats me though, that in this patch > you do not address the jni fast get field optimization on x86. It is > seemingly missing barriers now. Should probably make sure that one fits > in as well. Fortunately, I think it should work out pretty well. As mentioned in the review thread for JDK-8203172, we in Shenandoah land decided to disable JNI fastgetfield stuff for now. I am not sure whether or not we want to go through access_* anyway? It's probably more consistent if we do. If we decide that we do, I'll add it to this patch, if we don't, I'll rip it out of JDK-8203172 :-) Thanks, Roman From shade at redhat.com Tue Jun 5 14:11:37 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 5 Jun 2018 16:11:37 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> Message-ID: <92e085d9-8ba2-45e8-1038-d98caedfebe5@redhat.com> +1, looks good. -Aleksey On 06/04/2018 11:24 PM, Erik ?sterlund wrote: > Hi, > > Looks good. > > Thanks, > /Erik > > On 2018-06-04 23:20, Roman Kennke wrote: >> Hi Aleksey, Erik, >> >> thanks for reviewing and helping with this! >> >> Moved mem_allocate() under protected: >> Incremental: >> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ >> >> Good now? >> >> Thanks, >> Roman >> >> >>> Hi Aleksey, >>> >>> Sounds like a good idea. >>> >>> /Erik >>> >>>> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >>>> >>>> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>>>> it wants to. >>>>>>> However, I would prefer to do it this way: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >>>> This looks good. I think we better hide mem_allocate under "protected" now, so we would have: >>>> >>>> protected: >>>> ?? // TLAB path >>>> ?? inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >>>> ?? static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >>>> >>>> ?? // Out-of-TLAB path >>>> ?? virtual HeapWord* mem_allocate(size_t size, >>>> ????????????????????????????????? bool* gc_overhead_limit_was_exceeded) = 0; >>>> >>>> public: >>>> ?? // Entry point >>>> ?? virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >>>> ????????????????????????????????????? bool* gc_overhead_limit_was_exceeded, TRAPS); >>>> >>>> -Aleksey >>>> >> > From rkennke at redhat.com Tue Jun 5 14:16:32 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 16:16:32 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <5B1532C9.4070206@oracle.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> Message-ID: <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> Am 04.06.2018 um 14:38 schrieb Erik ?sterlund: > Hi Roman, > > ?42 > ?43?? virtual void obj_equals(MacroAssembler* masm, DecoratorSet > decorators, > ?44?????????????????????????? Register obj1, Register obj2); > ?45 > > I don't think we need to pass in any decorators here. 
Perhaps one day > there will be some important semantic property to deal with, but today I > do not think there are any properties we care about, except possibly > AS_RAW, but that would never propagate into the BarrierSetAssembler anyway. > > On that topic, I noticed that today we do the raw version of e.g. > load_heap_oop inside of the BarrierSetAssembler, and to use it you would > call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a different > way (cmpoop_raw in the macro assembler). I think it would be ideal if we > could do it the same way, which would involve calling cmpoop with AS_RAW > to get a raw oop comparison, residing in BarrierSetAssembler, with the > usual hardwiring in the corresponding macro assembler function when it > observes AS_RAW. > > So it would look something like this: > > void cmpoop(Register src1, Address src2, DecoratorSet decorators = > AS_NORMAL); > > What do you think? cmpoop_raw() is not the AS_RAW base implementation. It's only there to help BarrierSetAssembler to implement the base obj_equals(Address|Register, jobject). We cannot access cmp_literal32() from outside the MacroAssembler. The mentioned hardwiring to call straight to BSA is probably going away too: https://bugs.openjdk.java.net/browse/JDK-8203232 http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032240.html Thanks, Roman From per.liden at oracle.com Tue Jun 5 14:25:19 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 5 Jun 2018 16:25:19 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <472f8872-62f4-326a-6fe6-87e44a9f125c@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <472f8872-62f4-326a-6fe6-87e44a9f125c@oracle.com> Message-ID: Thanks for reviewing those, Coleen! /Per On 2018-06-05 15:46, coleen.phillimore at oracle.com wrote: > > Ok, I stand corrected.? The few runtime changes look fine to me. > Coleen > > On 6/5/18 9:44 AM, coleen.phillimore at oracle.com wrote: >> >> Hi, I was going to review the runtime pieces of this change but there >> are none!? Nicely factored and thank you for upstreaming the runtime >> changes to support this already. >> >> Coleen >> >> On 6/1/18 5:41 PM, Per Liden wrote: >>> Hi, >>> >>> Please review the implementation of JEP 333: ZGC: A Scalable >>> Low-Latency Garbage Collector (Experimental) >>> >>> Please see the JEP for more information about the project. The JEP is >>> currently in state "Proposed to Target" for JDK 11. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>> >>> Additional information in can also be found on the ZGC project wiki. >>> >>> https://wiki.openjdk.java.net/display/zgc/Main >>> >>> >>> Webrevs >>> ------- >>> >>> To make this easier to review, we've divided the change into two >>> webrevs. >>> >>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>> >>> ? This patch contains the actual ZGC implementation, the new unit >>> tests and other changes needed in HotSpot. >>> >>> * ZGC Testing: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>> >>> ? This patch contains changes to existing tests needed by ZGC. >>> >>> >>> Overview of Changes >>> ------------------- >>> >>> Below follows a list of the files we add/modify in the master patch, >>> with a short summary describing each group. >>> >>> * Build support - Making ZGC an optional feature. >>> >>> ? make/autoconf/hotspot.m4 >>> ? make/hotspot/lib/JvmFeatures.gmk >>> ? 
src/hotspot/share/utilities/macros.hpp >>> >>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>> does not currently offer a way to easily break this out). >>> >>> ? src/hotspot/cpu/x86/x86.ad >>> ? src/hotspot/cpu/x86/x86_64.ad >>> >>> * C2 - Things that can't be easily abstracted out into ZGC specific >>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>> (UseZGC) condition. There should only be two logic changes (one in >>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>> disabled. We believe these are low risk changes and should not >>> introduce any real change i behavior when using other GCs. >>> >>> ? src/hotspot/share/adlc/formssel.cpp >>> ? src/hotspot/share/opto/* >>> ? src/hotspot/share/compiler/compilerDirectives.hpp >>> >>> * General GC+Runtime - Registering ZGC as a collector. >>> >>> ? src/hotspot/share/gc/shared/* >>> ? src/hotspot/share/runtime/vmStructs.cpp >>> ? src/hotspot/share/runtime/vm_operations.hpp >>> ? src/hotspot/share/prims/whitebox.cpp >>> >>> * GC thread local data - Increasing the size of data area by 32 bytes. >>> >>> ? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>> >>> * ZGC - The collector itself. >>> >>> ? src/hotspot/share/gc/z/* >>> ? src/hotspot/cpu/x86/gc/z/* >>> ? src/hotspot/os_cpu/linux_x86/gc/z/* >>> ? test/hotspot/gtest/gc/z/* >>> >>> * JFR - Adding new event types. >>> >>> ? src/hotspot/share/jfr/* >>> ? src/jdk.jfr/share/conf/jfr/* >>> >>> * Logging - Adding new log tags. >>> >>> ? src/hotspot/share/logging/* >>> >>> * Metaspace - Adding a friend declaration. >>> >>> ? src/hotspot/share/memory/metaspace.hpp >>> >>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>> >>> ? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>> >>> * vmSymbol - Disabled clone intrinsic for ZGC. >>> >>> ? src/hotspot/share/classfile/vmSymbols.cpp >>> >>> * Oop Verification - In four cases we disabled oop verification >>> because it do not makes sense or is not applicable to a GC using load >>> barriers. >>> >>> ? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>> ? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>> ? src/hotspot/share/compiler/oopMap.cpp >>> ? src/hotspot/share/runtime/jniHandles.cpp >>> >>> * StackValue - Apply a load barrier in case of OSR. This is a bit of >>> a hack. However, this will go away in the future, when we have the >>> next iteration of C2's load barriers in place (aka "C2 late barrier >>> insertion"). >>> >>> ? src/hotspot/share/runtime/stackValue.cpp >>> >>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>> is changed in the future. >>> >>> ? src/hotspot/share/prims/jvmtiTagMap.cpp >>> >>> * Legal - Adding copyright/license for 3rd party hash function used >>> in ZHash. >>> >>> ? src/java.base/share/legal/c-libutl.md >>> >>> * SA - Adding basic ZGC support. >>> >>> ? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>> >>> >>> Testing >>> ------- >>> >>> * Unit testing >>> >>> ? A number of new ZGC specific gtests have been added, in >>> test/hotspot/gtest/gc/z/ >>> >>> * Regression testing >>> >>> ? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>> ? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>> >>> * Stress testing >>> >>> ? We have been continuously been running a number stress tests >>> throughout the development, these include: >>> >>> ??? specjbb2000 >>> ??? specjbb2005 >>> ??? specjbb2015 >>> ??? specjvm98 >>> ??? specjvm2008 >>> ??? dacapo2009 >>> ??? 
test/hotspot/jtreg/gc/stress/gcold >>> ??? test/hotspot/jtreg/gc/stress/systemgc >>> ??? test/hotspot/jtreg/gc/stress/gclocker >>> ??? test/hotspot/jtreg/gc/stress/gcbasher >>> ??? test/hotspot/jtreg/gc/stress/finalizer >>> ??? Kitchensink >>> >>> >>> Thanks! >>> >>> /Per, Stefan & the ZGC team >> > From per.liden at oracle.com Tue Jun 5 14:32:16 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 5 Jun 2018 16:32:16 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <0b3ddf69-15f7-217d-fe77-2861534c695a@oracle.com> <4c3ab68f-8752-a0d9-bc62-50f5cad2ef41@oracle.com> <3400f89d-e8d0-a6c5-d3be-0bd614aa5cf9@oracle.com> Message-ID: Hi Roman, A few comments have dropped in that we still haven't addressed, but we're working on those and will re-spin a webrev shortly. Btw, tomorrow is a national holiday here in Sweden, so some of us might be off-line. Thanks! /Per On 2018-06-05 16:03, Roman Kennke wrote: > Hi Per and all, > > I would like to review the changeset(s), but I see that many issues have > already been addressed. Would it be possible to post an updated webrev, > so that I see the most up-to-date version? > > Thanks, > Roman > > >> On 06/05/2018 01:37 PM, Erik Helin wrote: >>> On 06/05/2018 09:21 AM, Per Liden wrote:> On 06/04/2018 03:47 PM, Erik >>> Helin wrote: >>>>> Could you please change the comment to say x86_64 or x64 (similar to >>>>> other such comments in that file)? x86 is a bit ambiguous (could >>>>> mean a 32-bit x86 CPU). >>>> >>>> Fixed. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/a81777811000 >>> >>> Looks good, thanks. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> Small nit in src/hotspot/share/compiler/oopMap.cpp: >>>>> >>>>> +??????? if (ZGC_ONLY(!UseZGC &&) >>>>> +??????????? ((((uintptr_t)loc & (sizeof(*loc)-1)) != 0) || >>>>> +???????????? !Universe::heap()->is_in_or_null(*loc))) { >>>>> >>>>> Do we really need ZGC_ONLY around !UseZGC && here? The code is in an >>>>> #ifdef ASSERT so it doesn't seem performance sensitive, and UseZGC >>>>> will be just be false if ZGC isn't compiled, right? Or have I gotten >>>>> this backwards? >>>> >>>> Fixed. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/3f6db622400c >>> >>> Also good, thanks. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> Regarding src/hotspot/share/gc/shared/gcName.hpp, should we >>>>> introduce a GCName class so that we can limit the scope of the Z och >>>>> NA symbols? (Then GCNameHelper::to_string could also be moved into >>>>> that class). Could also be done as a follow-up patch (if so, please >>>>> file a bug). >>>> >>>> I agree, filed an RFE. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8204324 >>> >>> Ok, lets tackle this in a separate patch, thanks for filing the RFE. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> Small nit in src/hotspot/share/jfr/metadata/metadata.xml: >>>>> - >>>>> \ No newline at end of file >>>>> + >>>>> >>>>> Did you happen to add a newline here (I don't know why there should >>>>> not be a newline, but the comment indicates so)? >>>> >>>> The "No newline at end of file" comment is actually generated by hg >>>> diff and is not in the file itself. 
I think vim added it >>>> automatically, and I think we probably should have a new line there, >>>> but I'll revert it from this change. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/a8e1aec31efa >>> >>> Ah, alright, I thought it was a comment in the source code file. >>> Thanks for reverting this part of the patch, we can discuss later if >>> we can (should?) add a newline to that file. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> Small nit in src/hotspot/share/opto/node.hpp: >>>>> >>>>> ??? virtual?????? uint? ideal_reg() const; >>>>> + >>>>> ??#ifndef PRODUCT >>>>> >>>>> Was the extra newline here added intentionally? >>>> >>>> Fixed. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/6d6259917ded >>> >>> Looks good, thanks. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> In src/hotspot/share/prims/jvmtiTagMap.cpp, do you need to add an >>>>> include of gc/z/zGlobals.hpp for ZAddressMetadataShift? Like >>>>> >>>>> +#if INCLUDE_ZGC >>>>> +? #include "gc/z/c2/zGlobals.hpp" >>>>> +#endif >>>>> >>>>> Or did I miss an include somewhere (wouldn't be the first time :)? >>>> >>>> Fixed. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/b2e3b7c012af >>> >>> Also good, thanks. >>> >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> In src/hotspot/share/prims/whitebox.cpp, do we need the #if >>>>> INCLUDE_ZGC guards or is `if (UseZGC)` enough? >>>> >>>> This is certainly up for discussion, but the model I think we've been >>>> shooting for is that we don't have INCLUDE_ZGC only if there's a >>>> !UseZGC condition. Some of the "if (UseZGC)" then have ZGC specific >>>> code inside the scope, so you need the INCLUDE_ZGC anyway. In this >>>> particular case we don't have any ZGC specific code in the true path, >>>> but we might in the future. >>>> >>>> This is the model we're trying to follow, but as I said, we can >>>> discuss if this is good or not. >>> >>> Hmm, ok, I see what you mean. I agree that for !UseZGC we should skip >>> INCLUDE_ZGC guards and I see that you for the `if (UseZGC)` case. I >>> would probably have skipped the guards even for this `if (UseZGC)` >>> case, but I'm fine to leave them in. >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> Same comment for src/hotspot/share/runtime/jniHandles.cpp, do we >>>>> need the #if INCLUDE_ZGC guard? >>>> >>>> Fixed this and another similar thing in c1_LIRAssembler_x86.cpp. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/2cf588273130 >>> >>> Good, thanks. >>> >>>>>> * ZGC Testing: >>>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>>> >>>>> Again, great work here, particularly with upstreaming so many >>>>> patches ahead of this one. I only have two small comments regarding >>>>> the test changes: >>>>> >>>>> Small nit in >>>>> est/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects001/referringObjects001.java: >>>>> >>>>> >>>>> +??????? // G1 fails, just like ZGC, if en explicitly GC is done here. >>>>> >>>>> May I suggest s/en explicitly/an explicit/ ? >>>>> Also maybe remove the comment `// forceGC();`, because it might >>>>> later look like your comment commented out an earlier, pre-existing >>>>> call to forceGC(). >>>>> >>>>> Same comment as above for instances003.java, instances001.java, >>>>> instanceCounts001.java. >>>> >>>> Fixed. 
>>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/42cd3b259870 >>> >>> The updated version in your follow-up email looks good :) >>> >>> On 06/05/2018 09:21 AM, Per Liden wrote: >>>> On 06/04/2018 03:47 PM, Erik Helin wrote: >>>>> In jdk/java/lang/management/MemoryMXBean/MemoryTestZGC.sh you >>>>> probably want to remove "@bug???? 4530538", the empty "@summary" and >>>>> "@author Mandy Chung" >>>> >>>> Fixed. >>>> >>>> http://hg.openjdk.java.net/zgc/zgc/rev/ff780fec8423 >>> >>> Also good, thanks. >>> >>> The shared parts looks good to me now, consider those parts Reviewed by >> >> Thanks for reviewing, Erik! >> >>> me (but don't count me as a formal reviewer for the C2 parts, someone >>> with more C2 experience needs to look at those changes). >> >> Rickard will be looking at the C2 parts (and maybe others too). >> >> cheers, >> Per > > From erik.osterlund at oracle.com Tue Jun 5 15:00:45 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 5 Jun 2018 17:00:45 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> Message-ID: <5B16A59D.7020609@oracle.com> Hi Roman, On 2018-06-05 16:16, Roman Kennke wrote: > Am 04.06.2018 um 14:38 schrieb Erik ?sterlund: >> Hi Roman, >> >> 42 >> 43 virtual void obj_equals(MacroAssembler* masm, DecoratorSet >> decorators, >> 44 Register obj1, Register obj2); >> 45 >> >> I don't think we need to pass in any decorators here. Perhaps one day >> there will be some important semantic property to deal with, but today I >> do not think there are any properties we care about, except possibly >> AS_RAW, but that would never propagate into the BarrierSetAssembler anyway. >> >> On that topic, I noticed that today we do the raw version of e.g. >> load_heap_oop inside of the BarrierSetAssembler, and to use it you would >> call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a different >> way (cmpoop_raw in the macro assembler). I think it would be ideal if we >> could do it the same way, which would involve calling cmpoop with AS_RAW >> to get a raw oop comparison, residing in BarrierSetAssembler, with the >> usual hardwiring in the corresponding macro assembler function when it >> observes AS_RAW. >> >> So it would look something like this: >> >> void cmpoop(Register src1, Address src2, DecoratorSet decorators = >> AS_NORMAL); >> >> What do you think? > cmpoop_raw() is not the AS_RAW base implementation. It's only there to > help BarrierSetAssembler to implement the base > obj_equals(Address|Register, jobject). We cannot access cmp_literal32() > from outside the MacroAssembler. In other words, there is no AS_RAW option "exposed" to public use, right? Maybe there is no need for raw equals in our assembly code. > The mentioned hardwiring to call straight to BSA is probably going away too: > https://bugs.openjdk.java.net/browse/JDK-8203232 > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032240.html I'm not sure I'm convinced that is an improvement. The expected behaviour at the callsite is that the code in BarrierSetAssembler (which is the level of the hierarchy that implements raw accesses) is run, and nothing else. If anything else happens, it's a bug. So I hardwire that at the callsite to always match the expected behaviour. 
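Concretely, the hardwiring I mean has roughly this shape (a sketch with assumed names, not a copy of the source):

    void MacroAssembler::access_load_at(BasicType type, DecoratorSet decorators,
                                        Register dst, Address src,
                                        Register tmp1, Register thread_tmp) {
      BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
      if ((decorators & AS_RAW) != 0) {
        // Raw access: call the base class explicitly, so no GC override can run.
        bs->BarrierSetAssembler::load_at(this, decorators, type, dst, src, tmp1, thread_tmp);
      } else {
        bs->load_at(this, decorators, type, dst, src, tmp1, thread_tmp);
      }
    }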
To instead let each level of the barrier class hierarchy remember to check for AS_RAW and delegate to the parent class in a way that ultimately has the exact same perceivable effect as the hardwiring, but in a much more error prone way, does not sound like an improvement to me. Perhaps I can be convinced otherwise if I understand what the concern is here and what problem we are trying to solve. Thanks, /Erik > Thanks, Roman From rkennke at redhat.com Tue Jun 5 15:01:24 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 17:01:24 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <5B16A59D.7020609@oracle.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> Message-ID: Am 05.06.2018 um 17:00 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-06-05 16:16, Roman Kennke wrote: >> Am 04.06.2018 um 14:38 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> ? 42 >>> ? 43?? virtual void obj_equals(MacroAssembler* masm, DecoratorSet >>> decorators, >>> ? 44?????????????????????????? Register obj1, Register obj2); >>> ? 45 >>> >>> I don't think we need to pass in any decorators here. Perhaps one day >>> there will be some important semantic property to deal with, but today I >>> do not think there are any properties we care about, except possibly >>> AS_RAW, but that would never propagate into the BarrierSetAssembler >>> anyway. >>> >>> On that topic, I noticed that today we do the raw version of e.g. >>> load_heap_oop inside of the BarrierSetAssembler, and to use it you would >>> call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a different >>> way (cmpoop_raw in the macro assembler). I think it would be ideal if we >>> could do it the same way, which would involve calling cmpoop with AS_RAW >>> to get a raw oop comparison, residing in BarrierSetAssembler, with the >>> usual hardwiring in the corresponding macro assembler function when it >>> observes AS_RAW. >>> >>> So it would look something like this: >>> >>> void cmpoop(Register src1, Address src2, DecoratorSet decorators = >>> AS_NORMAL); >>> >>> What do you think? >> cmpoop_raw() is not the AS_RAW base implementation. It's only there to >> help BarrierSetAssembler to implement the base >> obj_equals(Address|Register, jobject). We cannot access cmp_literal32() >> from outside the MacroAssembler. > > In other words, there is no AS_RAW option "exposed" to public use, > right? Maybe there is no need for raw equals in our assembly code. Yes. That is correct. >> The mentioned hardwiring to call straight to BSA is probably going >> away too: >> https://bugs.openjdk.java.net/browse/JDK-8203232 >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032240.html > > I'm not sure I'm convinced that is an improvement. The expected > behaviour at the callsite is that the code in BarrierSetAssembler (which > is the level of the hierarchy that implements raw accesses) is run, and > nothing else. If anything else happens, it's a bug. So I hardwire that > at the callsite to always match the expected behaviour. To instead let > each level of the barrier class hierarchy remember to check for AS_RAW > and delegate to the parent class in a way that ultimately has the exact > same perceivable effect as the hardwiring, but in a much more error > prone way, does not sound like an improvement to me. 
Perhaps I can be > convinced otherwise if I understand what the concern is here and what > problem we are trying to solve. I don't lean very much either way. But it should be discussed under JDK-8203232. Considering that there is no use for cmpoop_raw() except as helper for BSA, do you agree that we don't need the AS_RAW hardwiring for obj_equals() ? Can I consider this patch Reviewed? Thanks, Roman From aph at redhat.com Tue Jun 5 15:04:44 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 5 Jun 2018 16:04:44 +0100 Subject: ARM port consolidation In-Reply-To: <27CED1F9-5C87-43BF-A00B-53A5829ADD75@oracle.com> References: <7aae9027-266d-46e0-0df5-bc74d6530af5@redhat.com> <27CED1F9-5C87-43BF-A00B-53A5829ADD75@oracle.com> Message-ID: On 06/05/2018 03:05 PM, Bob Vandette wrote: > I would like to keep this effort as a straight forward code deletion exercise. > Adding Minimal VM support is beyond the scope of what I had in mind. > We no longer regularly build the minimal VM and would prefer to leave this work > to someone that has a vested interest in using and supporting it. Oh yes, definitely. There's no hurry, and in any case the code isn't going away. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From erik.osterlund at oracle.com Tue Jun 5 15:14:13 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 5 Jun 2018 17:14:13 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> Message-ID: <5B16A8C5.7040005@oracle.com> Hi Roman, Sure. As long as there is no need for AS_RAW equals in the assembly code, we don't need to add it now. However, that means that there are currently no properties in the decorators we care about at the moment. Therefore, the decorator parameter of obj_equals should be removed; it serves no purpose. Thanks, /Erik On 2018-06-05 17:01, Roman Kennke wrote: > Am 05.06.2018 um 17:00 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-06-05 16:16, Roman Kennke wrote: >>> Am 04.06.2018 um 14:38 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> 42 >>>> 43 virtual void obj_equals(MacroAssembler* masm, DecoratorSet >>>> decorators, >>>> 44 Register obj1, Register obj2); >>>> 45 >>>> >>>> I don't think we need to pass in any decorators here. Perhaps one day >>>> there will be some important semantic property to deal with, but today I >>>> do not think there are any properties we care about, except possibly >>>> AS_RAW, but that would never propagate into the BarrierSetAssembler >>>> anyway. >>>> >>>> On that topic, I noticed that today we do the raw version of e.g. >>>> load_heap_oop inside of the BarrierSetAssembler, and to use it you would >>>> call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a different >>>> way (cmpoop_raw in the macro assembler). I think it would be ideal if we >>>> could do it the same way, which would involve calling cmpoop with AS_RAW >>>> to get a raw oop comparison, residing in BarrierSetAssembler, with the >>>> usual hardwiring in the corresponding macro assembler function when it >>>> observes AS_RAW. >>>> >>>> So it would look something like this: >>>> >>>> void cmpoop(Register src1, Address src2, DecoratorSet decorators = >>>> AS_NORMAL); >>>> >>>> What do you think? >>> cmpoop_raw() is not the AS_RAW base implementation. 
It's only there to >>> help BarrierSetAssembler to implement the base >>> obj_equals(Address|Register, jobject). We cannot access cmp_literal32() >>> from outside the MacroAssembler. >> In other words, there is no AS_RAW option "exposed" to public use, >> right? Maybe there is no need for raw equals in our assembly code. > Yes. That is correct. > >>> The mentioned hardwiring to call straight to BSA is probably going >>> away too: >>> https://bugs.openjdk.java.net/browse/JDK-8203232 >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032240.html >> I'm not sure I'm convinced that is an improvement. The expected >> behaviour at the callsite is that the code in BarrierSetAssembler (which >> is the level of the hierarchy that implements raw accesses) is run, and >> nothing else. If anything else happens, it's a bug. So I hardwire that >> at the callsite to always match the expected behaviour. To instead let >> each level of the barrier class hierarchy remember to check for AS_RAW >> and delegate to the parent class in a way that ultimately has the exact >> same perceivable effect as the hardwiring, but in a much more error >> prone way, does not sound like an improvement to me. Perhaps I can be >> convinced otherwise if I understand what the concern is here and what >> problem we are trying to solve. > I don't lean very much either way. But it should be discussed under > JDK-8203232. Considering that there is no use for cmpoop_raw() except as > helper for BSA, do you agree that we don't need the AS_RAW hardwiring > for obj_equals() ? Can I consider this patch Reviewed? > > Thanks, Roman > From stefan.karlsson at oracle.com Tue Jun 5 15:14:08 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 5 Jun 2018 17:14:08 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> Message-ID: <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> Hi Jini, For this version experimental version of ZGC we only have basic SA support, so the collectLiveRegions feature is not implemented. Comments below: On 2018-06-05 14:50, Jini George wrote: > Hi Per, > > I have looked at only the SA portion. Some comments on that: > > ==>? share/classes/sun/jvm/hotspot/oops/ObjectHeap.java > > The method collectLiveRegions() would need to include code to iterate > through the Zpages, and collect the live regions. > > ==> share/classes/sun/jvm/hotspot/HSDB.java > > The addAnnotation() method needs to handle the case of collHeap being an > instance of ZCollectedHeap to avoid "Unknown generation" being displayed > while displaying the Stack Memory for a mutator thread. Fixed. > > ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java > > To the GCCause enum, it would be good to add the equivalents of the > following GC causes. (though at this point, GCCause seems unused within > SA). > > ??? _z_timer, > ??? _z_warmup, > ??? _z_allocation_rate, > ??? _z_allocation_stall, > ??? _z_proactive, Fixed. > > ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java > > Similarly, it would be good to add the equivalent of 'Z' in the GCName > enum. Fixed. > > ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java > > Again, it would be good to add 'ZOperation' to the VMOps enum (though it > looks like it is already not in sync). Fixed. 
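(For reference, the ZGC entries being mirrored into the SA enums exist on the HotSpot side in share/gc/shared/gcCause.hpp; roughly, with the surrounding enum shape written from memory rather than copied:

    enum Cause {
      /* ... existing causes ... */
      _z_timer,
      _z_warmup,
      _z_allocation_rate,
      _z_allocation_stall,
      _z_proactive,
      /* ... */
      _last_gc_cause
    };

so the SA side mainly needs matching constants and display strings.)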
> > ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java > > The run() method would need to handle the ZGC case too to avoid the > unknown CollectedHeap type exception with jhsdb jmap -heap: > > Also, the printGCAlgorithm() method would need to be updated to read in > the UseZGC flag to avoid the default "Mark Sweep Compact GC" being > displayed with jhsdb jmap -heap. Fixed. > > ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java > > It would be great if printOn() (for the clhsdb command 'universe') would > print the address range of the java heap as we have in other GCs (with > ZAddressSpaceStart and ZAddressSpaceEnd?) ZGC uses three fixed 4 TB reserved memory ranges (on Linux x64). I don't think it's as important to print these ranges as it is for the other GCs. > > ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java > Please modify the above test to include zgc or include a separate SA > test to test the universe output for zgc. Fixed. Here's a quick webrev of your suggested changes: http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ Thanks, StefanK > > Thank you, > Jini. > > > On 6/2/2018 3:11 AM, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> ?? This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> ?? This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> ?? make/autoconf/hotspot.m4 >> ?? make/hotspot/lib/JvmFeatures.gmk >> ?? src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> ?? src/hotspot/cpu/x86/x86.ad >> ?? src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change i behavior when using other GCs. >> >> ?? src/hotspot/share/adlc/formssel.cpp >> ?? src/hotspot/share/opto/* >> ?? src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> ?? src/hotspot/share/gc/shared/* >> ?? src/hotspot/share/runtime/vmStructs.cpp >> ?? src/hotspot/share/runtime/vm_operations.hpp >> ?? src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> ?? 
src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> ?? src/hotspot/share/gc/z/* >> ?? src/hotspot/cpu/x86/gc/z/* >> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >> ?? test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> ?? src/hotspot/share/jfr/* >> ?? src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> ?? src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> ?? src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> ?? src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> ?? src/hotspot/share/compiler/oopMap.cpp >> ?? src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a >> hack. However, this will go away in the future, when we have the next >> iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> ?? src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> ?? src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> ?? A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> ?? We have been continuously been running a number stress tests >> throughout the development, these include: >> >> ???? specjbb2000 >> ???? specjbb2005 >> ???? specjbb2015 >> ???? specjvm98 >> ???? specjvm2008 >> ???? dacapo2009 >> ???? test/hotspot/jtreg/gc/stress/gcold >> ???? test/hotspot/jtreg/gc/stress/systemgc >> ???? test/hotspot/jtreg/gc/stress/gclocker >> ???? test/hotspot/jtreg/gc/stress/gcbasher >> ???? test/hotspot/jtreg/gc/stress/finalizer >> ???? Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team From erik.osterlund at oracle.com Tue Jun 5 15:33:37 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 5 Jun 2018 17:33:37 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: References: <5B155A4B.7020009@oracle.com> Message-ID: <5B16AD51.4090009@oracle.com> Hi Roman, On 2018-06-05 16:07, Roman Kennke wrote: > Hi Erik, > >>> JDK-8199417 added better modularization for interpreter barriers. >>> Shenandoah and possibly future GCs also need barriers for primitive >>> access. 
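(For context, the hooks in question are the load_at/store_at pair on BarrierSetAssembler. Roughly, as a sketch of the shape rather than the exact final signatures:

    class BarrierSetAssembler: public CHeapObj<mtGC> {
    public:
      virtual void load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
                           Register dst, Address src, Register tmp1, Register tmp_thread);
      virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
                            Address dst, Register val, Register tmp1, Register tmp2);
    };

A GC that needs to observe primitive accesses overrides these; passing noreg for dst/val is the ToS convention described in the notes below.)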
>>> >>> Some notes on implementation: >>> - float/double/long access produced some headaches for the following >>> reasons: >>> >>> - float and double would either take XMMRegister which is not >>> compatible with Register >>> - or load-from/store-to the floating point stack (see >>> MacroAssembler::load/store_float/double) >>> - long access on x86_32 would load-into/store-from 2 registers, or >>> else use a trick via the floating point stack to do atomic access >>> >>> None of this seemed easy/nice to do with the API. I helped myself by >>> accepting noreg as dst/src argument, which means the corresponding tos >>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>> xmm0/float-stack in case of float/double or the double-reg/float-stack >>> in case of long/32bit, which is all that we ever need. >> It is indeed a bit painful that in hotspot, XMMRegister is not a >> Register (unlike the Graal implementation). And I think I agree that if >> it is indeed only ever needed by ToS, then this is the preferable >> solution to having two almost identicaly APIs - one for integral types >> and one for floating point types. It beats me though, that in this patch >> you do not address the jni fast get field optimization on x86. It is >> seemingly missing barriers now. Should probably make sure that one fits >> in as well. Fortunately, I think it should work out pretty well. > As mentioned in the review thread for JDK-8203172, we in Shenandoah land > decided to disable JNI fastgetfield stuff for now. I am not sure whether > or not we want to go through access_* anyway? It's probably more > consistent if we do. If we decide that we do, I'll add it to this patch, > if we don't, I'll rip it out of JDK-8203172 :-) Okay so let's not deal with JNI fast get field in either of those these two patches, and leave that exercise to future adventurers that feel brave enough to change that code. If you decide later to add modularization for that to enable this fantastic performance optimization on Shenandoah, then perhaps we can have a new patch with the appropriate code (possibly the speculative PC range filter I proposed) then in the new modularization. In that case, it looks reasonable the way it is now. Perhaps there should be a T_ADDRESS case for stores for consistency though? You can now load a T_ADDRESS but not store it, which is a bit surprising I suppose. Otherwise it looks good. Thanks, /Erik > Thanks, Roman > From Zhongwei.Yao at arm.com Tue Jun 5 15:41:40 2018 From: Zhongwei.Yao at arm.com (Zhongwei Yao) Date: Tue, 5 Jun 2018 15:41:40 +0000 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error. Message-ID: Hi, Bug: https://bugs.openjdk.java.net/browse/JDK-8204331 Webrev: http://cr.openjdk.java.net/~zyao/8204331/webrev.00/ This patch fixes an assertion error on aarch64 in several jtreg tests. The failure assertion is in needs_acquiring_load_exclusive() in aarch64.ad when checking whether the graph is in "leading_to_normal" shape. The abnormal shape is generated in LibraryCallKit::inline_unsafe_load_store(). This patch fixes it by swap the order of "Pin SCMProj node" and "Insert post barrier" in LibraryCallKit::inline_unsafe_load_store(). -- Best regards, Zhongwei From Zhongwei.Yao at arm.com Tue Jun 5 15:45:32 2018 From: Zhongwei.Yao at arm.com (Zhongwei Yao) Date: Tue, 5 Jun 2018 15:45:32 +0000 Subject: 8204331: AArch64: fix CAS not embedded in normal graph error. In-Reply-To: References: Message-ID: I forget to add "RFR" in the subject. So could you help have a review? 
Thanks. -- Best regards, Zhongwei ________________________________________ From: Zhongwei Yao Sent: Tuesday, June 5, 2018 11:41:40 PM To: hotspot-dev at openjdk.java.net Cc: nd Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error. Hi, Bug: https://bugs.openjdk.java.net/browse/JDK-8204331 Webrev: http://cr.openjdk.java.net/~zyao/8204331/webrev.00/ This patch fixes an assertion error on aarch64 in several jtreg tests. The failure assertion is in needs_acquiring_load_exclusive() in aarch64.ad when checking whether the graph is in "leading_to_normal" shape. The abnormal shape is generated in LibraryCallKit::inline_unsafe_load_store(). This patch fixes it by swap the order of "Pin SCMProj node" and "Insert post barrier" in LibraryCallKit::inline_unsafe_load_store(). -- Best regards, Zhongwei From erik.joelsson at oracle.com Tue Jun 5 15:47:12 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 5 Jun 2018 08:47:12 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> Message-ID: <03efde10-d6a7-962d-54d5-339565d1d133@oracle.com> Hello, On 2018-06-04 23:10, David Holmes wrote: > Sorry to be late to this party ... > > On 5/06/2018 6:10 AM, Erik Joelsson wrote: >> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >> >> Renamed the new jvm variant to "hardened". > > As it is a hardened server build I'd prefer if that were somehow > reflected in the name. Though really I don't see why this should be > restricted this way ... to be honest I don't see hardened as a variant > of server vs. client vs. zero etc at all, you should be able to harden > any of those. > I agree, and you sort of can. By adding the jvm feature "no-speculative-cti" to any jvm variant, you get the flags. The name of the predefined variant can be discussed. I initially suggested altserver because I, as you, thought it should include server in the name. But ultimately, I don't care that much about a name. There is also little point in defining a whole set of predefined variants that nobody has requested. If we ever need more specialized variants in the same image, we will add them. > So IIUC with this change we will: > - always build JDK native code "hardened" (if toolchain supports it) Yes, this is correct. The reason being that no significant performance impact was detected, so there is no cost. > - only build hotspot "hardened" if requested; and in that case > ? - jvm.cfg will list -server and -hardened with server as default Correct. > > Is that right? I can see that we may choose to always build Oracle JDK > this way but it isn't clear to me that its suitable for OpenJDK. Nor > why hotspot is selectable but JDK is not. ?? > We would prefer to always build with security features enabled, but the performance impact on the JVM is so high that we want to leave it to the user to decide, both at bulid time and at runtime. With these changes, Oracle will build OracleJDK, and OpenJDK, with dual JVMs by default, but any other person or entity just building the OpenJDK source will just get the server variant for now (as has been the default for a long time), unless they specifically ask for "hardened" or activate the new jvm feature (--with-jvm-feature=no-speculative-cti). 
We don't see the point in giving the choice on the JDK libraries simply because there is no drawback to enabling the flags. /Erik > Sorry. > > David > ----- > >> /Erik >> >> >> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>> On 4 Jun 2018, at 17:52, Erik Joelsson >>>> wrote: >>>> >>>> Hello, >>>> >>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>> This patch defines flags for disabling speculative execution for >>>>>> GCC and Visual Studio and applies >>>>>> them to all binaries except libjvm when available in the >>>>>> compiler. It defines a new jvm feature >>>>>> no-speculative-cti, which is used to control whether to use the >>>>>> flags for libjvm. It also defines a >>>>>> new jvm variant "altserver" which is the same as server, but with >>>>>> this new feature added. >>>>> I think the classic name for such product configuration is >>>>> "hardened", no? >>>> I don't know. I'm open to suggestions on naming. >>> "hardened" sounds good to me. >>> >>> The change looks good as well. >>> /Jesper >>> >>>> /Erik >>>>> -Aleksey >>>>> >> From erik.joelsson at oracle.com Tue Jun 5 15:50:37 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 5 Jun 2018 08:50:37 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with In-Reply-To: <3f7c0b36458a467b85c41ed467b41614@sap.com> References: <3f7c0b36458a467b85c41ed467b41614@sap.com> Message-ID: Hello Matthias, For GCC, you need 7.3.0 or later. For Microsoft you need VS2017 and I think some minimal update version (the option is called -Qspectre), we use 15.5.5. I was not involved in the benchmarking so I don't know any details there, only the conclusion. /Erik On 2018-06-05 01:30, Baesken, Matthias wrote: > > Hi Erik , is there ?some info available about ?the performance impact > when ?disabling ?disabling speculative execution? ? > > And which compiler versions are needed for this ? > > Best regards, Matthias > > >We need to add compilation flags for disabling speculative execution to > > >our native libraries and executables. In order to allow for users not > > >affected by problems with speculative execution to run a JVM at full > > >speed, we need to be able to ship two JVM libraries - one that is > > >compiled with speculative execution enabled, and one that is compiled > > >without. Note that this applies to the build time C++ flags, not the > > >compiler in the JVM itself. Luckily adding these flags to the rest of > > >the native libraries did not have a significant performance impact so > > >there is no need for making it optional there. > > > > > >This patch defines flags for disabling speculative execution for GCC and > > >Visual Studio and applies them to all binaries except libjvm when > > >available in the compiler. It defines a new jvm feature > > >no-speculative-cti, which is used to control whether to use the flags > > >for libjvm. It also defines a new jvm variant "altserver" which is the > > >same as server, but with this new feature added. > > > > > >For Oracle builds, we are changing the default for linux-x64 and > > >windows-x64 to build both server and altserver, giving the choice to the > > >user which JVM they want to use. If others would prefer this default, we > > >could make it default in configure as well. > > > > > >The change in GensrcJFR.gmk fixes a newly introduced race that appears > > >when building multiple jvm variants. 
> > > > > >Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 > > > > > >Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 > > From volker.simonis at gmail.com Tue Jun 5 16:05:49 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 5 Jun 2018 18:05:49 +0200 Subject: RFR(XXS): 8204335: [ppc] Assembler::add_const_optimized incorrect for some inputs Message-ID: Hi, can I please have a review for this trivial, day-one, ppc-only fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204335/ https://bugs.openjdk.java.net/browse/JDK-8204335 There's a typo in Assembler::add_const_optimized() which makes it return incorrect results for some input values. The fix is trivial. Repeated here for your convenience: diff -r 1d476feca3c9 src/hotspot/cpu/ppc/assembler_ppc.cpp --- a/src/hotspot/cpu/ppc/assembler_ppc.cpp Mon Jun 04 11:19:54 2018 +0200 +++ b/src/hotspot/cpu/ppc/assembler_ppc.cpp Tue Jun 05 11:21:08 2018 +0200 @@ -486,7 +486,7 @@ // Case 2: Can use addis. if (xd == 0) { short xc = rem & 0xFFFF; // 2nd 16-bit chunk. - rem = (rem >> 16) + ((unsigned short)xd >> 15); + rem = (rem >> 16) + ((unsigned short)xc >> 15); if (rem == 0) { addis(d, s, xc); return 0; Thank you and best regards, Volker From thomas.stuefe at gmail.com Tue Jun 5 16:10:02 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 5 Jun 2018 18:10:02 +0200 Subject: RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: Message-ID: On Tue, Jun 5, 2018 at 3:46 PM, Adam Farley8 wrote: > Hi All, > > Native memory allocation for DBBs is tracked in java.nio.Bits, but that > only includes what the user thinks they are allocating. > Which is exactly what I would expect as a user... > When the VM adds extra memory to the allocation amount this extra bit is > not represented in the Bits total. A cursory glance > shows, minimum, that we round the requested memory quantity up to the heap > word size in the Unsafe.allocateMemory code which I do not understand either - why do we do this? After all, normal allocations from inside hotspot do not get aligned up in size, and the java doc to Unsafe allocateMemory does not state anything about the size being aligned. In addition to questioning the align up of the user requested size, I would be in favor of adding a new NMT tag for these, maybe "mtUnsafe"? That would be an easy fix. >, and > something to do with nmt_header_size in os:malloc() (os.cpp) too. That is mighty unspecific and also wrong. The align-up mentioned above goes into the size reported by Bits; the nmt header size does not. > > On its own, and in small quantities, align_up(sz, HeapWordSize) isn't that > big of an issue. But when you allocate a lot of DBBs, > and coupled with the nmt_header_size business, it makes the Bits values > wrong. The more DBB allocations, the more inaccurate those > numbers will be. To be annoyingly precise, it will never be more wrong than 1:7 on 64bit machines :) - if all memory requested via Unsafe.allocateMemory would be of size 1 byte. > > To get the "+X", it seems to me that the best option would be to introduce > an native method in Bits that fetches "X" directly > from Hotspot, using the same code that Hotspot uses (so we'd have to > abstract-out the Hotspot logic that adds X to the memory > quantity). This way, anyone modifying the Hotspot logic won't risk > rendering the Bits logic wrong again. I don't follow that. > > That's only one way to fix the accuracy problem here though. 
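To make the arithmetic above concrete, here is a tiny self-contained sketch
of the align-up involved (just the usual power-of-two round-up idiom,
illustrative only - not the actual Unsafe_AllocateMemory/os::malloc code):

  // Worst case of rounding a requested size up to HeapWordSize (8 bytes on 64-bit):
  // a 1-byte request gets accounted as 8 bytes, i.e. 7 bytes of padding.
  #include <cassert>
  #include <cstddef>

  static size_t align_up(size_t size, size_t alignment) {
    // alignment is assumed to be a power of two
    return (size + alignment - 1) & ~(alignment - 1);
  }

  int main() {
    const size_t HeapWordSize = 8;            // 64-bit VM
    assert(align_up(1, HeapWordSize) == 8);   // 1 byte requested -> 7 bytes padding (the 1:7 case)
    assert(align_up(8, HeapWordSize) == 8);   // already aligned -> no padding
    assert(align_up(13, HeapWordSize) == 16); // rounded up to the next heap word
    return 0;
  }

So per allocation the accounting can only ever be off by at most
HeapWordSize - 1 bytes, which is where the 1:7 worst case above comes from.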
Suggestions > welcome. You are throwing two effects together: - As mentioned above, I consider the align-up of the user requested size to be at least questionable. It shows up as user size in NMT which should not be. I also fail to see a compelling reason for it, but maybe someone else can enlighten me. - But anything else - NMT headers, overwriter guards, etc added by the VM I consider in the same class as any other overhead incurred e.g. by the CRT or the OS when calling malloc (e.g. malloc allocator bucket size). Basically, rss will go up by more than size requested by malloc. Something maybe worth noting, but IMHO not as part of the numbers returned by java.nio.Bits. Just my 2 cents. Best Regards, Thomas > > Best Regards > > Adam Farley > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From rkennke at redhat.com Tue Jun 5 16:29:08 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 18:29:08 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: <5B16AD51.4090009@oracle.com> References: <5B155A4B.7020009@oracle.com> <5B16AD51.4090009@oracle.com> Message-ID: Am 05.06.2018 um 17:33 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-06-05 16:07, Roman Kennke wrote: >> Hi Erik, >> >>>> JDK-8199417 added better modularization for interpreter barriers. >>>> Shenandoah and possibly future GCs also need barriers for primitive >>>> access. >>>> >>>> Some notes on implementation: >>>> - float/double/long access produced some headaches for the following >>>> reasons: >>>> >>>> ??? - float and double would either take XMMRegister which is not >>>> compatible with Register >>>> ??? - or load-from/store-to the floating point stack (see >>>> MacroAssembler::load/store_float/double) >>>> ??? - long access on x86_32 would load-into/store-from 2 registers, or >>>> else use a trick via the floating point stack to do atomic access >>>> >>>> None of this seemed easy/nice to do with the API. I helped myself by >>>> accepting noreg as dst/src argument, which means the corresponding tos >>>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>>> xmm0/float-stack in case of float/double or the double-reg/float-stack >>>> in case of long/32bit, which is all that we ever need. >>> It is indeed a bit painful that in hotspot, XMMRegister is not a >>> Register (unlike the Graal implementation). And I think I agree that if >>> it is indeed only ever needed by ToS, then this is the preferable >>> solution to having two almost identicaly APIs - one for integral types >>> and one for floating point types. It beats me though, that in this patch >>> you do not address the jni fast get field optimization on x86. It is >>> seemingly missing barriers now. Should probably make sure that one fits >>> in as well. Fortunately, I think it should work out pretty well. >> As mentioned in the review thread for JDK-8203172, we in Shenandoah land >> decided to disable JNI fastgetfield stuff for now. I am not sure whether >> or not we want to go through access_* anyway? It's probably more >> consistent if we do. If we decide that we do, I'll add it to this patch, >> if we don't, I'll rip it out of JDK-8203172 :-) > > Okay so let's not deal with JNI fast get field in either of those these > two patches, and leave that exercise to future adventurers that feel > brave enough to change that code. 
If you decide later to add > modularization for that to enable this fantastic performance > optimization on Shenandoah, then perhaps we can have a new patch with > the appropriate code (possibly the speculative PC range filter I > proposed) then in the new modularization. > > In that case, it looks reasonable the way it is now. Perhaps there > should be a T_ADDRESS case for stores for consistency though? You can > now load a T_ADDRESS but not store it, which is a bit surprising I > suppose. Otherwise it looks good. > Ok I've added it: Incremental: http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01/ Good now? Thanks, Roman From rkennke at redhat.com Tue Jun 5 16:31:22 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 18:31:22 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <92e085d9-8ba2-45e8-1038-d98caedfebe5@redhat.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> <92e085d9-8ba2-45e8-1038-d98caedfebe5@redhat.com> Message-ID: <0ea46d5f-739b-31c6-60ba-c0ea724e3da2@redhat.com> Submit repo came back with unstable. See below. Is it related to the change? If so, can somebody with access give me a clue? Build Details: 2018-06-05-1435301.roman.source 28 Failed Tests Test Tier Platform Keywords Description Task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 windows-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-2\\jquery\\jquery-1.10.2.js: file not found: task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... 
errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 macosx-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64-open bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task Mach5 Tasks Results Summary NA: 0 EXECUTED_WITH_FAILURE: 4 PASSED: 71 UNABLE_TO_RUN: 0 FAILED: 0 KILLED: 0 Test 4 Executed with failure jdk_open_test_langtools_tier1-linux-x64-71 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-linux-x64-open-72 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-macosx-x64-73 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-windows-x64-74 Results: total: 3871, passed: 3864; failed: 7 > +1, looks good. > > -Aleksey > > On 06/04/2018 11:24 PM, Erik ?sterlund wrote: >> Hi, >> >> Looks good. >> >> Thanks, >> /Erik >> >> On 2018-06-04 23:20, Roman Kennke wrote: >>> Hi Aleksey, Erik, >>> >>> thanks for reviewing and helping with this! 
>>> >>> Moved mem_allocate() under protected: >>> Incremental: >>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ >>> >>> Good now? >>> >>> Thanks, >>> Roman >>> >>> >>>> Hi Aleksey, >>>> >>>> Sounds like a good idea. >>>> >>>> /Erik >>>> >>>>> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >>>>> >>>>> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>>>>> it wants to. >>>>>>>> However, I would prefer to do it this way: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >>>>> This looks good. I think we better hide mem_allocate under "protected" now, so we would have: >>>>> >>>>> protected: >>>>> ?? // TLAB path >>>>> ?? inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >>>>> ?? static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >>>>> >>>>> ?? // Out-of-TLAB path >>>>> ?? virtual HeapWord* mem_allocate(size_t size, >>>>> ????????????????????????????????? bool* gc_overhead_limit_was_exceeded) = 0; >>>>> >>>>> public: >>>>> ?? // Entry point >>>>> ?? virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >>>>> ????????????????????????????????????? bool* gc_overhead_limit_was_exceeded, TRAPS); >>>>> >>>>> -Aleksey >>>>> >>> >> > > From volker.simonis at gmail.com Tue Jun 5 16:35:44 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 5 Jun 2018 18:35:44 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <03efde10-d6a7-962d-54d5-339565d1d133@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <03efde10-d6a7-962d-54d5-339565d1d133@oracle.com> Message-ID: Hi Erik, you wrote: "Note that this applies to the build time C++ flags, not the compiler in the JVM itself." So what about the code generated by the HotSpot (i.e. stubs, template interpreter, c1, c2, Graal)? Is this code already "hardened" against speculative execution? If yes, that's fine, if not, I don't see the point in hardening the HotSpot code itself if the VM still generates potentially "insecure" code. And I still don't fully understand how things work if you build both variants by default (as you intend to do it for Oracle builds). Will you end up with two subdirectories (lib/server/ and lib/altserver) where both contain a libjvm.so and the user can use "java -altserver" on the command line to choose the hardened version? Thank you and best regards, Volker On Tue, Jun 5, 2018 at 5:47 PM, Erik Joelsson wrote: > Hello, > > On 2018-06-04 23:10, David Holmes wrote: >> >> Sorry to be late to this party ... >> >> On 5/06/2018 6:10 AM, Erik Joelsson wrote: >>> >>> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >>> >>> Renamed the new jvm variant to "hardened". >> >> >> As it is a hardened server build I'd prefer if that were somehow reflected >> in the name. Though really I don't see why this should be restricted this >> way ... to be honest I don't see hardened as a variant of server vs. client >> vs. zero etc at all, you should be able to harden any of those. >> > I agree, and you sort of can. By adding the jvm feature "no-speculative-cti" > to any jvm variant, you get the flags. The name of the predefined variant > can be discussed. 
I initially suggested altserver because I, as you, thought > it should include server in the name. But ultimately, I don't care that much > about a name. There is also little point in defining a whole set of > predefined variants that nobody has requested. If we ever need more > specialized variants in the same image, we will add them. >> >> So IIUC with this change we will: >> - always build JDK native code "hardened" (if toolchain supports it) > > Yes, this is correct. The reason being that no significant performance > impact was detected, so there is no cost. >> >> - only build hotspot "hardened" if requested; and in that case >> - jvm.cfg will list -server and -hardened with server as default > > Correct. >> >> >> Is that right? I can see that we may choose to always build Oracle JDK >> this way but it isn't clear to me that its suitable for OpenJDK. Nor why >> hotspot is selectable but JDK is not. ?? >> > We would prefer to always build with security features enabled, but the > performance impact on the JVM is so high that we want to leave it to the > user to decide, both at bulid time and at runtime. With these changes, > Oracle will build OracleJDK, and OpenJDK, with dual JVMs by default, but any > other person or entity just building the OpenJDK source will just get the > server variant for now (as has been the default for a long time), unless > they specifically ask for "hardened" or activate the new jvm feature > (--with-jvm-feature=no-speculative-cti). > > We don't see the point in giving the choice on the JDK libraries simply > because there is no drawback to enabling the flags. > > /Erik > >> Sorry. >> >> David >> ----- >> >>> /Erik >>> >>> >>> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>>> >>>>> On 4 Jun 2018, at 17:52, Erik Joelsson >>>>> wrote: >>>>> >>>>> Hello, >>>>> >>>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>>> >>>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>>> >>>>>>> This patch defines flags for disabling speculative execution for GCC >>>>>>> and Visual Studio and applies >>>>>>> them to all binaries except libjvm when available in the compiler. It >>>>>>> defines a new jvm feature >>>>>>> no-speculative-cti, which is used to control whether to use the flags >>>>>>> for libjvm. It also defines a >>>>>>> new jvm variant "altserver" which is the same as server, but with >>>>>>> this new feature added. >>>>>> >>>>>> I think the classic name for such product configuration is "hardened", >>>>>> no? >>>>> >>>>> I don't know. I'm open to suggestions on naming. >>>> >>>> "hardened" sounds good to me. >>>> >>>> The change looks good as well. >>>> /Jesper >>>> >>>>> /Erik >>>>>> >>>>>> -Aleksey >>>>>> >>> > From erik.osterlund at oracle.com Tue Jun 5 16:45:49 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 5 Jun 2018 18:45:49 +0200 Subject: RFR (M) 8203837: Split nmethod unloading from nmethod cache cleaning In-Reply-To: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> References: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> Message-ID: Hi Coleen, Looks like a nice cleanup. I don't mind the cheeky logging changes squeezed into this change. Reviewed. Thanks, /Erik On 2018-05-30 14:23, coleen.phillimore at oracle.com wrote: > Summary: Refactor cleaning inline caches to after GC do_unloading. > > See CR for more information.? This patch refactors > CompiledMethod::do_unloading() to unload nmethods in case of !is_alive > oop.? 
If the nmethod is not unloaded, cleans the inline caches, and > exception cache, for unloaded classes and unloaded nmethods.? The > CodeCache walk in gc_epilogue is moved earlier to combine with cleanup > for class unloading. > > It doesn't add CodeCache walks to any of the GCs, and keeps the G1 > parallel nmethod unloading intact.? This patch also uses common code > for CompiledMethod::clean_inline_caches which was duplicated by the G1 > functions. > > The patch also fixed a case in AOT where clear_inline_caches should be > called instead of clean_inline_caches.?? I think neither is necessary > for the nmethods that are deoptimized because of redefinition, but > clear_inline_caches clears up redefined Methods* not for unloaded > nmethods.? Once the method is cleaned by the sweeper, > clean_inline_caches will be called on it.? clear vs. clean ... > > The patch also converts TraceScavenge to -Xlog:gc+nmethod=trace. I can > revert this part and do it separately; I had just converted it while > looking at the output. > > open webrev at http://cr.openjdk.java.net/~coleenp/8203837.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8203837 > > Tested with mach5 hs-tier1-5, the gc-test-suite (including > specjbb2015, dacapo, gcbasher), runThese with all GCs with and without > class unloading. > > This is an enhancement that we can use for making nmethod cleaning > concurrent in ZGC. > > Thanks, > Coleen From erik.joelsson at oracle.com Tue Jun 5 16:47:26 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 5 Jun 2018 09:47:26 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <03efde10-d6a7-962d-54d5-339565d1d133@oracle.com> Message-ID: <3981fa4d-2265-4559-eabd-98ba32f27fc7@oracle.com> Hello Volker, On 2018-06-05 09:35, Volker Simonis wrote: > Hi Erik, > > you wrote: "Note that this applies to the build time C++ flags, not > the compiler in the JVM itself." So what about the code generated by > the HotSpot (i.e. stubs, template interpreter, c1, c2, Graal)? Is this > code already "hardened" against speculative execution? If yes, that's > fine, if not, I don't see the point in hardening the HotSpot code > itself if the VM still generates potentially "insecure" code. Correct. These are just the build changes for the build time compiler options. Further work will be done by Hotspot engineers. > And I still don't fully understand how things work if you build both > variants by default (as you intend to do it for Oracle builds). Will > you end up with two subdirectories (lib/server/ and lib/altserver) > where both contain a libjvm.so and the user can use "java -altserver" > on the command line to choose the hardened version? Correct, we use the old jvm variant mechanism, so this will work just like -server/-client used to work, two completely separate libjvm.so in separate sub directories. /Erik > Thank you and best regards, > Volker > > > On Tue, Jun 5, 2018 at 5:47 PM, Erik Joelsson wrote: >> Hello, >> >> On 2018-06-04 23:10, David Holmes wrote: >>> Sorry to be late to this party ... >>> >>> On 5/06/2018 6:10 AM, Erik Joelsson wrote: >>>> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >>>> >>>> Renamed the new jvm variant to "hardened". 
>>> >>> As it is a hardened server build I'd prefer if that were somehow reflected >>> in the name. Though really I don't see why this should be restricted this >>> way ... to be honest I don't see hardened as a variant of server vs. client >>> vs. zero etc at all, you should be able to harden any of those. >>> >> I agree, and you sort of can. By adding the jvm feature "no-speculative-cti" >> to any jvm variant, you get the flags. The name of the predefined variant >> can be discussed. I initially suggested altserver because I, as you, thought >> it should include server in the name. But ultimately, I don't care that much >> about a name. There is also little point in defining a whole set of >> predefined variants that nobody has requested. If we ever need more >> specialized variants in the same image, we will add them. >>> So IIUC with this change we will: >>> - always build JDK native code "hardened" (if toolchain supports it) >> Yes, this is correct. The reason being that no significant performance >> impact was detected, so there is no cost. >>> - only build hotspot "hardened" if requested; and in that case >>> - jvm.cfg will list -server and -hardened with server as default >> Correct. >>> >>> Is that right? I can see that we may choose to always build Oracle JDK >>> this way but it isn't clear to me that its suitable for OpenJDK. Nor why >>> hotspot is selectable but JDK is not. ?? >>> >> We would prefer to always build with security features enabled, but the >> performance impact on the JVM is so high that we want to leave it to the >> user to decide, both at bulid time and at runtime. With these changes, >> Oracle will build OracleJDK, and OpenJDK, with dual JVMs by default, but any >> other person or entity just building the OpenJDK source will just get the >> server variant for now (as has been the default for a long time), unless >> they specifically ask for "hardened" or activate the new jvm feature >> (--with-jvm-feature=no-speculative-cti). >> >> We don't see the point in giving the choice on the JDK libraries simply >> because there is no drawback to enabling the flags. >> >> /Erik >> >>> Sorry. >>> >>> David >>> ----- >>> >>>> /Erik >>>> >>>> >>>> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>>>> On 4 Jun 2018, at 17:52, Erik Joelsson >>>>>> wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>>>> This patch defines flags for disabling speculative execution for GCC >>>>>>>> and Visual Studio and applies >>>>>>>> them to all binaries except libjvm when available in the compiler. It >>>>>>>> defines a new jvm feature >>>>>>>> no-speculative-cti, which is used to control whether to use the flags >>>>>>>> for libjvm. It also defines a >>>>>>>> new jvm variant "altserver" which is the same as server, but with >>>>>>>> this new feature added. >>>>>>> I think the classic name for such product configuration is "hardened", >>>>>>> no? >>>>>> I don't know. I'm open to suggestions on naming. >>>>> "hardened" sounds good to me. >>>>> >>>>> The change looks good as well. 
>>>>> /Jesper >>>>> >>>>>> /Erik >>>>>>> -Aleksey >>>>>>> From erik.joelsson at oracle.com Tue Jun 5 16:48:23 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 5 Jun 2018 09:48:23 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with In-Reply-To: References: <3f7c0b36458a467b85c41ed467b41614@sap.com> Message-ID: Hello again, Looks like Jesper updated the bug description with more info. /Erik On 2018-06-05 08:50, Erik Joelsson wrote: > Hello Matthias, > > For GCC, you need 7.3.0 or later. For Microsoft you need VS2017 and I > think some minimal update version (the option is called -Qspectre), we > use 15.5.5. > > I was not involved in the benchmarking so I don't know any details > there, only the conclusion. > > /Erik > > > On 2018-06-05 01:30, Baesken, Matthias wrote: >> >> Hi Erik , is there ?some info available about ?the performance impact >> when ?disabling ?disabling speculative execution? ? >> >> And which compiler versions are needed for this ? >> >> Best regards, Matthias >> >> >We need to add compilation flags for disabling speculative execution to >> >> >our native libraries and executables. In order to allow for users not >> >> >affected by problems with speculative execution to run a JVM at full >> >> >speed, we need to be able to ship two JVM libraries - one that is >> >> >compiled with speculative execution enabled, and one that is compiled >> >> >without. Note that this applies to the build time C++ flags, not the >> >> >compiler in the JVM itself. Luckily adding these flags to the rest of >> >> >the native libraries did not have a significant performance impact so >> >> >there is no need for making it optional there. >> >> > >> >> >This patch defines flags for disabling speculative execution for GCC >> and >> >> >Visual Studio and applies them to all binaries except libjvm when >> >> >available in the compiler. It defines a new jvm feature >> >> >no-speculative-cti, which is used to control whether to use the flags >> >> >for libjvm. It also defines a new jvm variant "altserver" which is the >> >> >same as server, but with this new feature added. >> >> > >> >> >For Oracle builds, we are changing the default for linux-x64 and >> >> >windows-x64 to build both server and altserver, giving the choice to >> the >> >> >user which JVM they want to use. If others would prefer this >> default, we >> >> >could make it default in configure as well. >> >> > >> >> >The change in GensrcJFR.gmk fixes a newly introduced race that appears >> >> >when building multiple jvm variants. >> >> > >> >> >Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 >> >> > >> >> >Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 >> >> > From erik.osterlund at oracle.com Tue Jun 5 16:48:19 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 5 Jun 2018 18:48:19 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: References: <5B155A4B.7020009@oracle.com> <5B16AD51.4090009@oracle.com> Message-ID: <2f29589d-8dd4-4bfa-69b7-2b1c3019b372@oracle.com> Hi Roman, Looks good. Thanks, /Erik On 2018-06-05 18:29, Roman Kennke wrote: > Am 05.06.2018 um 17:33 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-06-05 16:07, Roman Kennke wrote: >>> Hi Erik, >>> >>>>> JDK-8199417 added better modularization for interpreter barriers. >>>>> Shenandoah and possibly future GCs also need barriers for primitive >>>>> access. 
>>>>> >>>>> Some notes on implementation: >>>>> - float/double/long access produced some headaches for the following >>>>> reasons: >>>>> >>>>> ??? - float and double would either take XMMRegister which is not >>>>> compatible with Register >>>>> ??? - or load-from/store-to the floating point stack (see >>>>> MacroAssembler::load/store_float/double) >>>>> ??? - long access on x86_32 would load-into/store-from 2 registers, or >>>>> else use a trick via the floating point stack to do atomic access >>>>> >>>>> None of this seemed easy/nice to do with the API. I helped myself by >>>>> accepting noreg as dst/src argument, which means the corresponding tos >>>>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>>>> xmm0/float-stack in case of float/double or the double-reg/float-stack >>>>> in case of long/32bit, which is all that we ever need. >>>> It is indeed a bit painful that in hotspot, XMMRegister is not a >>>> Register (unlike the Graal implementation). And I think I agree that if >>>> it is indeed only ever needed by ToS, then this is the preferable >>>> solution to having two almost identicaly APIs - one for integral types >>>> and one for floating point types. It beats me though, that in this patch >>>> you do not address the jni fast get field optimization on x86. It is >>>> seemingly missing barriers now. Should probably make sure that one fits >>>> in as well. Fortunately, I think it should work out pretty well. >>> As mentioned in the review thread for JDK-8203172, we in Shenandoah land >>> decided to disable JNI fastgetfield stuff for now. I am not sure whether >>> or not we want to go through access_* anyway? It's probably more >>> consistent if we do. If we decide that we do, I'll add it to this patch, >>> if we don't, I'll rip it out of JDK-8203172 :-) >> Okay so let's not deal with JNI fast get field in either of those these >> two patches, and leave that exercise to future adventurers that feel >> brave enough to change that code. If you decide later to add >> modularization for that to enable this fantastic performance >> optimization on Shenandoah, then perhaps we can have a new patch with >> the appropriate code (possibly the speculative PC range filter I >> proposed) then in the new modularization. >> >> In that case, it looks reasonable the way it is now. Perhaps there >> should be a T_ADDRESS case for stores for consistency though? You can >> now load a T_ADDRESS but not store it, which is a bit surprising I >> suppose. Otherwise it looks good. >> > Ok I've added it: > > Incremental: > http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01/ > > Good now? 
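(As a side note, a toy illustration of the load/store symmetry requested
above - deliberately not the real BarrierSetAssembler code, just a
self-contained sketch of a BasicType switch that covers T_ADDRESS on both
the load and the store side:)

  #include <cstdint>
  #include <cstring>

  enum BasicType { T_INT, T_LONG, T_ADDRESS };

  // Toy raw accessors: whatever types the load switch knows about,
  // the store switch should handle as well, including T_ADDRESS for
  // raw (non-oop) pointer slots.
  void toy_load(BasicType type, void* dst, const void* src) {
    switch (type) {
      case T_INT:     std::memcpy(dst, src, sizeof(int32_t)); break;
      case T_LONG:    std::memcpy(dst, src, sizeof(int64_t)); break;
      case T_ADDRESS: std::memcpy(dst, src, sizeof(void*));   break;
    }
  }

  void toy_store(BasicType type, void* dst, const void* src) {
    switch (type) {
      case T_INT:     std::memcpy(dst, src, sizeof(int32_t)); break;
      case T_LONG:    std::memcpy(dst, src, sizeof(int64_t)); break;
      case T_ADDRESS: std::memcpy(dst, src, sizeof(void*));   break; // mirrored for consistency
    }
  }

  int main() {
    void* src = reinterpret_cast<void*>(0x1234);
    void* dst = nullptr;
    toy_store(T_ADDRESS, &dst, &src);  // the store side now covers T_ADDRESS as well
    return dst == src ? 0 : 1;
  }

The real code of course emits MacroAssembler instructions rather than
memcpy; the point is only that the load and store switches should cover the
same set of types.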
> > Thanks, Roman > From volker.simonis at gmail.com Tue Jun 5 16:54:27 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 5 Jun 2018 18:54:27 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <3981fa4d-2265-4559-eabd-98ba32f27fc7@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <03efde10-d6a7-962d-54d5-339565d1d133@oracle.com> <3981fa4d-2265-4559-eabd-98ba32f27fc7@oracle.com> Message-ID: On Tue, Jun 5, 2018 at 6:47 PM, Erik Joelsson wrote: > Hello Volker, > > On 2018-06-05 09:35, Volker Simonis wrote: >> >> Hi Erik, >> >> you wrote: "Note that this applies to the build time C++ flags, not >> the compiler in the JVM itself." So what about the code generated by >> the HotSpot (i.e. stubs, template interpreter, c1, c2, Graal)? Is this >> code already "hardened" against speculative execution? If yes, that's >> fine, if not, I don't see the point in hardening the HotSpot code >> itself if the VM still generates potentially "insecure" code. > > Correct. These are just the build changes for the build time compiler > options. Further work will be done by Hotspot engineers. >> >> And I still don't fully understand how things work if you build both >> variants by default (as you intend to do it for Oracle builds). Will >> you end up with two subdirectories (lib/server/ and lib/altserver) >> where both contain a libjvm.so and the user can use "java -altserver" >> on the command line to choose the hardened version? > > Correct, we use the old jvm variant mechanism, so this will work just like > -server/-client used to work, two completely separate libjvm.so in separate > sub directories. > OK. Thanks for the quick confirmation. > /Erik > >> Thank you and best regards, >> Volker >> >> >> On Tue, Jun 5, 2018 at 5:47 PM, Erik Joelsson >> wrote: >>> >>> Hello, >>> >>> On 2018-06-04 23:10, David Holmes wrote: >>>> >>>> Sorry to be late to this party ... >>>> >>>> On 5/06/2018 6:10 AM, Erik Joelsson wrote: >>>>> >>>>> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >>>>> >>>>> Renamed the new jvm variant to "hardened". >>>> >>>> >>>> As it is a hardened server build I'd prefer if that were somehow >>>> reflected >>>> in the name. Though really I don't see why this should be restricted >>>> this >>>> way ... to be honest I don't see hardened as a variant of server vs. >>>> client >>>> vs. zero etc at all, you should be able to harden any of those. >>>> >>> I agree, and you sort of can. By adding the jvm feature >>> "no-speculative-cti" >>> to any jvm variant, you get the flags. The name of the predefined variant >>> can be discussed. I initially suggested altserver because I, as you, >>> thought >>> it should include server in the name. But ultimately, I don't care that >>> much >>> about a name. There is also little point in defining a whole set of >>> predefined variants that nobody has requested. If we ever need more >>> specialized variants in the same image, we will add them. >>>> >>>> So IIUC with this change we will: >>>> - always build JDK native code "hardened" (if toolchain supports it) >>> >>> Yes, this is correct. The reason being that no significant performance >>> impact was detected, so there is no cost. 
>>>> >>>> - only build hotspot "hardened" if requested; and in that case >>>> - jvm.cfg will list -server and -hardened with server as default >>> >>> Correct. >>>> >>>> >>>> Is that right? I can see that we may choose to always build Oracle JDK >>>> this way but it isn't clear to me that its suitable for OpenJDK. Nor why >>>> hotspot is selectable but JDK is not. ?? >>>> >>> We would prefer to always build with security features enabled, but the >>> performance impact on the JVM is so high that we want to leave it to the >>> user to decide, both at bulid time and at runtime. With these changes, >>> Oracle will build OracleJDK, and OpenJDK, with dual JVMs by default, but >>> any >>> other person or entity just building the OpenJDK source will just get the >>> server variant for now (as has been the default for a long time), unless >>> they specifically ask for "hardened" or activate the new jvm feature >>> (--with-jvm-feature=no-speculative-cti). >>> >>> We don't see the point in giving the choice on the JDK libraries simply >>> because there is no drawback to enabling the flags. >>> >>> /Erik >>> >>>> Sorry. >>>> >>>> David >>>> ----- >>>> >>>>> /Erik >>>>> >>>>> >>>>> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>>>>> >>>>>>> On 4 Jun 2018, at 17:52, Erik Joelsson >>>>>>> wrote: >>>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>>>>> >>>>>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>>>>> >>>>>>>>> This patch defines flags for disabling speculative execution for >>>>>>>>> GCC >>>>>>>>> and Visual Studio and applies >>>>>>>>> them to all binaries except libjvm when available in the compiler. >>>>>>>>> It >>>>>>>>> defines a new jvm feature >>>>>>>>> no-speculative-cti, which is used to control whether to use the >>>>>>>>> flags >>>>>>>>> for libjvm. It also defines a >>>>>>>>> new jvm variant "altserver" which is the same as server, but with >>>>>>>>> this new feature added. >>>>>>>> >>>>>>>> I think the classic name for such product configuration is >>>>>>>> "hardened", >>>>>>>> no? >>>>>>> >>>>>>> I don't know. I'm open to suggestions on naming. >>>>>> >>>>>> "hardened" sounds good to me. >>>>>> >>>>>> The change looks good as well. >>>>>> /Jesper >>>>>> >>>>>>> /Erik >>>>>>>> >>>>>>>> -Aleksey >>>>>>>> > From jesper.wilhelmsson at oracle.com Tue Jun 5 16:59:20 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 5 Jun 2018 18:59:20 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> Message-ID: <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> > On 5 Jun 2018, at 08:10, David Holmes wrote: > > Sorry to be late to this party ... > > On 5/06/2018 6:10 AM, Erik Joelsson wrote: >> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >> Renamed the new jvm variant to "hardened". > > As it is a hardened server build I'd prefer if that were somehow reflected in the name. Though really I don't see why this should be restricted this way ... to be honest I don't see hardened as a variant of server vs. client vs. zero etc at all, you should be able to harden any of those. 
> > So IIUC with this change we will: > - always build JDK native code "hardened" (if toolchain supports it) > - only build hotspot "hardened" if requested; and in that case > - jvm.cfg will list -server and -hardened with server as default > > Is that right? I can see that we may choose to always build Oracle JDK this way but it isn't clear to me that its suitable for OpenJDK. Nor why hotspot is selectable but JDK is not. ?? Sorry for the lack of information here. There has been a lot of off-list discussions behind this change, I've added the background to the bug now. The short version is that we see a ~25% regression in startup times if the JVM is compiled with the gcc flags to avoid speculative execution. We have not observed any performance regressions due to compiling the rest of the native libraries with these gcc flags, so there doesn't seem to be any reason to have different versions of other libraries. /Jesper > Sorry. > > David > ----- > >> /Erik >> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >>>> >>>> Hello, >>>> >>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>>>>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>>>>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>>>>> new jvm variant "altserver" which is the same as server, but with this new feature added. >>>>> I think the classic name for such product configuration is "hardened", no? >>>> I don't know. I'm open to suggestions on naming. >>> "hardened" sounds good to me. >>> >>> The change looks good as well. >>> /Jesper >>> >>>> /Erik >>>>> -Aleksey >>>>> From jesper.wilhelmsson at oracle.com Tue Jun 5 17:11:21 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 5 Jun 2018 19:11:21 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <0ea46d5f-739b-31c6-60ba-c0ea724e3da2@redhat.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> <92e085d9-8ba2-45e8-1038-d98caedfebe5@redhat.com> <0ea46d5f-739b-31c6-60ba-c0ea724e3da2@redhat.com> Message-ID: <2B7948AE-76CD-42CE-8653-19E9083956EF@oracle.com> All failures in 2018-06-05-1435301.roman.source are due to JDK-8203780. You can ignore them. /Jesper > On 5 Jun 2018, at 18:31, Roman Kennke wrote: > > Submit repo came back with unstable. See below. Is it related to the > change? If so, can somebody with access give me a clue? > > Build Details: 2018-06-05-1435301.roman.source > 28 Failed Tests > Test Tier Platform Keywords Description Task > tools/javadoc/api/basic/GetTask_WriterTest.java tier1 windows-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 > bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found > task > tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 > windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... 
> errors found task > jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 > windows-x64 bug6493690 Exception: java.lang.Exception: ... errors > found task > jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 windows-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > jdk/javadoc/doclet/testSearch/TestSearch.java tier1 windows-x64 > bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 > bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 > bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 > Exception: FAILED: out-2\\jquery\\jquery-1.10.2.js: file not found: task > tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > tools/javadoc/api/basic/GetTask_WriterTest.java tier1 macosx-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 > bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found > task > jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 macosx-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 > macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... > errors found task > jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 > macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found > task > jdk/javadoc/doclet/testSearch/TestSearch.java tier1 macosx-x64 > bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 > bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 > bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 > Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task > tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64 > bug6493690 Exception: java.lang.Exception: ... errors found task > tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 > bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found > task > tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 > linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: > ... errors found task > tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 > linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors > found task > tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64-open > bug6493690 Exception: java.lang.Exception: ... errors found task > jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 > linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... > errors found task > jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 > linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found > task > jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64 > bug6493690 Exception: java.lang.Exception: ... 
errors found task > jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64 > bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 > bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 > bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 > Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task > jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 > linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors > found task > jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 > linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: > ... errors found task > jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 > linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors > found task > jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64-open > bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 > bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 > bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 > Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task > Mach5 Tasks Results Summary > > NA: 0 > EXECUTED_WITH_FAILURE: 4 > PASSED: 71 > UNABLE_TO_RUN: 0 > FAILED: 0 > KILLED: 0 > Test > > 4 Executed with failure > jdk_open_test_langtools_tier1-linux-x64-71 Results: total: > 3874, passed: 3867; failed: 7 > jdk_open_test_langtools_tier1-linux-x64-open-72 Results: > total: 3874, passed: 3867; failed: 7 > jdk_open_test_langtools_tier1-macosx-x64-73 Results: total: > 3874, passed: 3867; failed: 7 > jdk_open_test_langtools_tier1-windows-x64-74 Results: total: > 3871, passed: 3864; failed: 7 > > >> +1, looks good. >> >> -Aleksey >> >> On 06/04/2018 11:24 PM, Erik ?sterlund wrote: >>> Hi, >>> >>> Looks good. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-06-04 23:20, Roman Kennke wrote: >>>> Hi Aleksey, Erik, >>>> >>>> thanks for reviewing and helping with this! >>>> >>>> Moved mem_allocate() under protected: >>>> Incremental: >>>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ >>>> Full: >>>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ >>>> >>>> Good now? >>>> >>>> Thanks, >>>> Roman >>>> >>>> >>>>> Hi Aleksey, >>>>> >>>>> Sounds like a good idea. >>>>> >>>>> /Erik >>>>> >>>>>> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >>>>>> >>>>>> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>>>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>>>>>> it wants to. >>>>>>>>> However, I would prefer to do it this way: >>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >>>>>> This looks good. 
I think we better hide mem_allocate under "protected" now, so we would have: >>>>>> >>>>>> protected: >>>>>> // TLAB path >>>>>> inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >>>>>> static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >>>>>> >>>>>> // Out-of-TLAB path >>>>>> virtual HeapWord* mem_allocate(size_t size, >>>>>> bool* gc_overhead_limit_was_exceeded) = 0; >>>>>> >>>>>> public: >>>>>> // Entry point >>>>>> virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >>>>>> bool* gc_overhead_limit_was_exceeded, TRAPS); >>>>>> >>>>>> -Aleksey >>>>>> >>>> >>> >> >> > > From rkennke at redhat.com Tue Jun 5 17:15:32 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 19:15:32 +0200 Subject: RFR: JDK-8202776: Modularize GC allocations in runtime In-Reply-To: <2B7948AE-76CD-42CE-8653-19E9083956EF@oracle.com> References: <5B1555F6.5090909@oracle.com> <76f8679c-5f2e-ec3f-9913-307d0c187dfd@redhat.com> <5B155AE4.2090908@oracle.com> <0d6cff83-d6be-c51c-8629-a340ad5f7fe0@redhat.com> <737D8C93-6533-4B48-BDEB-E92EE8E91C9F@oracle.com> <92e085d9-8ba2-45e8-1038-d98caedfebe5@redhat.com> <0ea46d5f-739b-31c6-60ba-c0ea724e3da2@redhat.com> <2B7948AE-76CD-42CE-8653-19E9083956EF@oracle.com> Message-ID: Thanks Jepser! I pushed my changes. Cheers, Roman > All failures in 2018-06-05-1435301.roman.source are due to JDK-8203780. You can ignore them. > /Jesper > > >> On 5 Jun 2018, at 18:31, Roman Kennke wrote: >> >> Submit repo came back with unstable. See below. Is it related to the >> change? If so, can somebody with access give me a clue? >> >> Build Details: 2018-06-05-1435301.roman.source >> 28 Failed Tests >> Test Tier Platform Keywords Description Task >> tools/javadoc/api/basic/GetTask_WriterTest.java tier1 windows-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 >> bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found >> task >> tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 >> windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... >> errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 >> windows-x64 bug6493690 Exception: java.lang.Exception: ... errors >> found task >> jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 windows-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> jdk/javadoc/doclet/testSearch/TestSearch.java tier1 windows-x64 >> bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 >> bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 >> bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 >> Exception: FAILED: out-2\\jquery\\jquery-1.10.2.js: file not found: task >> tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> tools/javadoc/api/basic/GetTask_WriterTest.java tier1 macosx-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 >> bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found >> task >> jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 macosx-x64 >> bug6493690 Exception: java.lang.Exception: ... 
errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 >> macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... >> errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 >> macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found >> task >> jdk/javadoc/doclet/testSearch/TestSearch.java tier1 macosx-x64 >> bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 >> bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 >> bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 >> Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task >> tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 >> bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found >> task >> tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 >> linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: >> ... errors found task >> tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 >> linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors >> found task >> tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64-open >> bug6493690 Exception: java.lang.Exception: ... errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 >> linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... >> errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 >> linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found >> task >> jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64 >> bug6493690 Exception: java.lang.Exception: ... errors found task >> jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64 >> bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 >> bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 >> bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 >> Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task >> jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 >> linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors >> found task >> jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 >> linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: >> ... errors found task >> jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 >> linux-x64-open bug6493690 Exception: java.lang.Exception: ... 
errors >> found task >> jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64-open >> bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 >> bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 >> bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 >> Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task >> Mach5 Tasks Results Summary >> >> NA: 0 >> EXECUTED_WITH_FAILURE: 4 >> PASSED: 71 >> UNABLE_TO_RUN: 0 >> FAILED: 0 >> KILLED: 0 >> Test >> >> 4 Executed with failure >> jdk_open_test_langtools_tier1-linux-x64-71 Results: total: >> 3874, passed: 3867; failed: 7 >> jdk_open_test_langtools_tier1-linux-x64-open-72 Results: >> total: 3874, passed: 3867; failed: 7 >> jdk_open_test_langtools_tier1-macosx-x64-73 Results: total: >> 3874, passed: 3867; failed: 7 >> jdk_open_test_langtools_tier1-windows-x64-74 Results: total: >> 3871, passed: 3864; failed: 7 >> >> >>> +1, looks good. >>> >>> -Aleksey >>> >>> On 06/04/2018 11:24 PM, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> Looks good. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-06-04 23:20, Roman Kennke wrote: >>>>> Hi Aleksey, Erik, >>>>> >>>>> thanks for reviewing and helping with this! >>>>> >>>>> Moved mem_allocate() under protected: >>>>> Incremental: >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01.diff/ >>>>> Full: >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8202776/webrev.01/ >>>>> >>>>> Good now? >>>>> >>>>> Thanks, >>>>> Roman >>>>> >>>>> >>>>>> Hi Aleksey, >>>>>> >>>>>> Sounds like a good idea. >>>>>> >>>>>> /Erik >>>>>> >>>>>>> On 4 Jun 2018, at 17:56, Aleksey Shipilev wrote: >>>>>>> >>>>>>> On 06/04/2018 05:29 PM, Erik ?sterlund wrote: >>>>>>>>>> I agree the GC should be able to perform arbitrary allocations the way >>>>>>>>>> it wants to. >>>>>>>>>> However, I would prefer to do it this way: >>>>>>>>>> http://cr.openjdk.java.net/~eosterlund/8202776/webrev.00/ >>>>>>> This looks good. I think we better hide mem_allocate under "protected" now, so we would have: >>>>>>> >>>>>>> protected: >>>>>>> // TLAB path >>>>>>> inline static HeapWord* allocate_from_tlab(Klass* klass, size_t size, TRAPS); >>>>>>> static HeapWord* allocate_from_tlab_slow(Klass* klass, size_t size, TRAPS); >>>>>>> >>>>>>> // Out-of-TLAB path >>>>>>> virtual HeapWord* mem_allocate(size_t size, >>>>>>> bool* gc_overhead_limit_was_exceeded) = 0; >>>>>>> >>>>>>> public: >>>>>>> // Entry point >>>>>>> virtual HeapWord* obj_allocate_raw(Klass* klass, size_t size, >>>>>>> bool* gc_overhead_limit_was_exceeded, TRAPS); >>>>>>> >>>>>>> -Aleksey >>>>>>> >>>>> >>>> >>> >>> >> >> > From coleen.phillimore at oracle.com Tue Jun 5 17:35:41 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 13:35:41 -0400 Subject: RFR (M) 8203837: Split nmethod unloading from nmethod cache cleaning In-Reply-To: References: <7847553d-0f61-f7ce-146f-1e6663cdca95@oracle.com> Message-ID: On 6/5/18 12:45 PM, Erik ?sterlund wrote: > Hi Coleen, > > Looks like a nice cleanup. I don't mind the cheeky logging changes > squeezed into this change. Reviewed. Thank you, Erik! Coleen > > Thanks, > /Erik > > On 2018-05-30 14:23, coleen.phillimore at oracle.com wrote: >> Summary: Refactor cleaning inline caches to after GC do_unloading. >> >> See CR for more information.? This patch refactors >> CompiledMethod::do_unloading() to unload nmethods in case of >> !is_alive oop.? 
If the nmethod is not unloaded, cleans the inline >> caches, and exception cache, for unloaded classes and unloaded >> nmethods.? The CodeCache walk in gc_epilogue is moved earlier to >> combine with cleanup for class unloading. >> >> It doesn't add CodeCache walks to any of the GCs, and keeps the G1 >> parallel nmethod unloading intact.? This patch also uses common code >> for CompiledMethod::clean_inline_caches which was duplicated by the >> G1 functions. >> >> The patch also fixed a case in AOT where clear_inline_caches should >> be called instead of clean_inline_caches.?? I think neither is >> necessary for the nmethods that are deoptimized because of >> redefinition, but clear_inline_caches clears up redefined Methods* >> not for unloaded nmethods.? Once the method is cleaned by the >> sweeper, clean_inline_caches will be called on it.? clear vs. clean ... >> >> The patch also converts TraceScavenge to -Xlog:gc+nmethod=trace. I >> can revert this part and do it separately; I had just converted it >> while looking at the output. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8203837.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8203837 >> >> Tested with mach5 hs-tier1-5, the gc-test-suite (including >> specjbb2015, dacapo, gcbasher), runThese with all GCs with and >> without class unloading. >> >> This is an enhancement that we can use for making nmethod cleaning >> concurrent in ZGC. >> >> Thanks, >> Coleen > From rkennke at redhat.com Tue Jun 5 18:48:53 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 20:48:53 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: <2f29589d-8dd4-4bfa-69b7-2b1c3019b372@oracle.com> References: <5B155A4B.7020009@oracle.com> <5B16AD51.4090009@oracle.com> <2f29589d-8dd4-4bfa-69b7-2b1c3019b372@oracle.com> Message-ID: <97464ef7-e100-2484-32f1-33d97f394e07@redhat.com> Hello all, submit came back with failures again. See below. From afar it looks like the same stuff caused by JDK-8203780 that made the tests for "JDK-8202776: Modularize GC allocations in runtime" fail. Can you please confirm that it's ok to push (or not)? Thanks, Roman > Hi Roman, > > Looks good. > > Thanks, > /Erik > > On 2018-06-05 18:29, Roman Kennke wrote: >> Am 05.06.2018 um 17:33 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-06-05 16:07, Roman Kennke wrote: >>>> Hi Erik, >>>> >>>>>> JDK-8199417 added better modularization for interpreter barriers. >>>>>> Shenandoah and possibly future GCs also need barriers for primitive >>>>>> access. >>>>>> >>>>>> Some notes on implementation: >>>>>> - float/double/long access produced some headaches for the following >>>>>> reasons: >>>>>> >>>>>> ???? - float and double would either take XMMRegister which is not >>>>>> compatible with Register >>>>>> ???? - or load-from/store-to the floating point stack (see >>>>>> MacroAssembler::load/store_float/double) >>>>>> ???? - long access on x86_32 would load-into/store-from 2 >>>>>> registers, or >>>>>> else use a trick via the floating point stack to do atomic access >>>>>> >>>>>> None of this seemed easy/nice to do with the API. I helped myself by >>>>>> accepting noreg as dst/src argument, which means the corresponding >>>>>> tos >>>>>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>>>>> xmm0/float-stack in case of float/double or the >>>>>> double-reg/float-stack >>>>>> in case of long/32bit, which is all that we ever need. 
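[For illustration of the "noreg means ToS" convention described above -- a rough sketch only, not the webrev code; the load_float/load_double helpers and the load_at signature are assumptions based on the existing x86 MacroAssembler/BarrierSetAssembler:]

    #define __ masm->
    // x86 BarrierSetAssembler primitive load; dst == noreg selects the ToS location
    void BarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
                                      Register dst, Address src, Register tmp1, Register tmp_thread) {
      switch (type) {
      case T_FLOAT:
        assert(dst == noreg, "float result only goes to ftos (xmm0/FPU stack)");
        __ load_float(src);
        break;
      case T_DOUBLE:
        assert(dst == noreg, "double result only goes to dtos (xmm0/FPU stack)");
        __ load_double(src);
        break;
      case T_LONG:
        // x86_32: dst == noreg would mean ltos (rdx:rax), possibly using the FPU
        // trick for an atomic 64-bit load; elided in this sketch
        Unimplemented();
        break;
      default:
        Unimplemented();  // integral and oop cases elided in this sketch
      }
    }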
>>>>> It is indeed a bit painful that in hotspot, XMMRegister is not a >>>>> Register (unlike the Graal implementation). And I think I agree >>>>> that if >>>>> it is indeed only ever needed by ToS, then this is the preferable >>>>> solution to having two almost identicaly APIs - one for integral types >>>>> and one for floating point types. It beats me though, that in this >>>>> patch >>>>> you do not address the jni fast get field optimization on x86. It is >>>>> seemingly missing barriers now. Should probably make sure that one >>>>> fits >>>>> in as well. Fortunately, I think it should work out pretty well. >>>> As mentioned in the review thread for JDK-8203172, we in Shenandoah >>>> land >>>> decided to disable JNI fastgetfield stuff for now. I am not sure >>>> whether >>>> or not we want to go through access_* anyway? It's probably more >>>> consistent if we do. If we decide that we do, I'll add it to this >>>> patch, >>>> if we don't, I'll rip it out of JDK-8203172 :-) >>> Okay so let's not deal with JNI fast get field in either of those these >>> two patches, and leave that exercise to future adventurers that feel >>> brave enough to change that code. If you decide later to add >>> modularization for that to enable this fantastic performance >>> optimization on Shenandoah, then perhaps we can have a new patch with >>> the appropriate code (possibly the speculative PC range filter I >>> proposed) then in the new modularization. >>> >>> In that case, it looks reasonable the way it is now. Perhaps there >>> should be a T_ADDRESS case for stores for consistency though? You can >>> now load a T_ADDRESS but not store it, which is a bit surprising I >>> suppose. Otherwise it looks good. >>> >> Ok I've added it: >> >> Incremental: >> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01/ >> >> Good now? >> >> Thanks, Roman >> > From rkennke at redhat.com Tue Jun 5 18:49:27 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 20:49:27 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: <97464ef7-e100-2484-32f1-33d97f394e07@redhat.com> References: <5B155A4B.7020009@oracle.com> <5B16AD51.4090009@oracle.com> <2f29589d-8dd4-4bfa-69b7-2b1c3019b372@oracle.com> <97464ef7-e100-2484-32f1-33d97f394e07@redhat.com> Message-ID: Here's the output from submit: Build Details: 2018-06-05-1735591.roman.source 28 Failed Tests Test Tier Platform Keywords Description Task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... 
errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_WriterTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task tools/javadoc/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 windows-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 windows-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 windows-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-2\\jquery\\jquery-1.10.2.js: file not found: task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 macosx-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 macosx-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 macosx-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-2/jquery/jquery-1.10.2.js: file not found: task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 linux-x64-open bug6493690 bug8024434 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64-open bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_FileObjectsTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/tool/api/basic/GetTask_WriterTest.java tier1 linux-x64 bug6493690 Exception: java.lang.Exception: ... errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64-open bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task jdk/javadoc/tool/api/basic/GetTask_FileManagerTest.java tier1 linux-x64 bug6493690 bug8024434 Exception: java.lang.Exception: ... 
errors found task jdk/javadoc/doclet/testSearch/TestSearch.java tier1 linux-x64 bug8141492 bug8071982 bug8141636 bug8147890 bug8166175 bug8168965 bug8176794 bug8175218 bug8147881 bug8181622 bug8182263 bug8074407 bug8187521 bug8198522 bug8182765 bug8199278 bug8196201 bug8196202 Exception: FAILED: out-1/jquery/jquery-1.10.2.js: file not found: task Mach5 Tasks Results Summary NA: 0 EXECUTED_WITH_FAILURE: 4 PASSED: 71 UNABLE_TO_RUN: 0 FAILED: 0 KILLED: 0 Test 4 Executed with failure jdk_open_test_langtools_tier1-linux-x64-71 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-linux-x64-open-72 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-macosx-x64-73 Results: total: 3874, passed: 3867; failed: 7 jdk_open_test_langtools_tier1-windows-x64-74 Results: total: 3871, passed: 3864; failed: 7 > Hello all, > > submit came back with failures again. See below. From afar it looks like > the same stuff caused by JDK-8203780 that made the tests for > "JDK-8202776: Modularize GC allocations in runtime" fail. Can you please > confirm that it's ok to push (or not)? > > Thanks, Roman > > >> Hi Roman, >> >> Looks good. >> >> Thanks, >> /Erik >> >> On 2018-06-05 18:29, Roman Kennke wrote: >>> Am 05.06.2018 um 17:33 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> On 2018-06-05 16:07, Roman Kennke wrote: >>>>> Hi Erik, >>>>> >>>>>>> JDK-8199417 added better modularization for interpreter barriers. >>>>>>> Shenandoah and possibly future GCs also need barriers for primitive >>>>>>> access. >>>>>>> >>>>>>> Some notes on implementation: >>>>>>> - float/double/long access produced some headaches for the following >>>>>>> reasons: >>>>>>> >>>>>>> ???? - float and double would either take XMMRegister which is not >>>>>>> compatible with Register >>>>>>> ???? - or load-from/store-to the floating point stack (see >>>>>>> MacroAssembler::load/store_float/double) >>>>>>> ???? - long access on x86_32 would load-into/store-from 2 >>>>>>> registers, or >>>>>>> else use a trick via the floating point stack to do atomic access >>>>>>> >>>>>>> None of this seemed easy/nice to do with the API. I helped myself by >>>>>>> accepting noreg as dst/src argument, which means the corresponding >>>>>>> tos >>>>>>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>>>>>> xmm0/float-stack in case of float/double or the >>>>>>> double-reg/float-stack >>>>>>> in case of long/32bit, which is all that we ever need. >>>>>> It is indeed a bit painful that in hotspot, XMMRegister is not a >>>>>> Register (unlike the Graal implementation). And I think I agree >>>>>> that if >>>>>> it is indeed only ever needed by ToS, then this is the preferable >>>>>> solution to having two almost identicaly APIs - one for integral types >>>>>> and one for floating point types. It beats me though, that in this >>>>>> patch >>>>>> you do not address the jni fast get field optimization on x86. It is >>>>>> seemingly missing barriers now. Should probably make sure that one >>>>>> fits >>>>>> in as well. Fortunately, I think it should work out pretty well. >>>>> As mentioned in the review thread for JDK-8203172, we in Shenandoah >>>>> land >>>>> decided to disable JNI fastgetfield stuff for now. I am not sure >>>>> whether >>>>> or not we want to go through access_* anyway? It's probably more >>>>> consistent if we do. 
If we decide that we do, I'll add it to this >>>>> patch, >>>>> if we don't, I'll rip it out of JDK-8203172 :-) >>>> Okay so let's not deal with JNI fast get field in either of those these >>>> two patches, and leave that exercise to future adventurers that feel >>>> brave enough to change that code. If you decide later to add >>>> modularization for that to enable this fantastic performance >>>> optimization on Shenandoah, then perhaps we can have a new patch with >>>> the appropriate code (possibly the speculative PC range filter I >>>> proposed) then in the new modularization. >>>> >>>> In that case, it looks reasonable the way it is now. Perhaps there >>>> should be a T_ADDRESS case for stores for consistency though? You can >>>> now load a T_ADDRESS but not store it, which is a bit surprising I >>>> suppose. Otherwise it looks good. >>>> >>> Ok I've added it: >>> >>> Incremental: >>> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01/ >>> >>> Good now? >>> >>> Thanks, Roman >>> >> > > From coleen.phillimore at oracle.com Tue Jun 5 19:01:22 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 15:01:22 -0400 Subject: RFR: 8204168: Increase small heap sizes in tests to accommodate ZGC In-Reply-To: <102083fa-f2b0-dc2d-7d8b-89a748c2c5db@oracle.com> References: <102083fa-f2b0-dc2d-7d8b-89a748c2c5db@oracle.com> Message-ID: <53fc28f4-cb1e-2e4e-f7c6-7a6c1a6c5c87@oracle.com> I included the serviceability-dev list for the jdi tests.? I don't think this change increases the heap for these tests enough to cause any problems, so I think this change is fine. thanks, Coleen On 6/5/18 5:27 AM, Erik Helin wrote: > On 05/31/2018 02:32 PM, Stefan Karlsson wrote: >> Hi all, > > Hey Stefan, > >> Please review this patch to increase the heap size for tests that >> sets a small heap size. >> >> http://cr.openjdk.java.net/~stefank/8204168/webrev.01 >> https://bugs.openjdk.java.net/browse/JDK-8204168 > > I read through each test in compiler, gc and runtime carefully to > check that the increased heap size wouldn't render the test useless > (i.e. I ensured that the tests still test something). Good news, it > seems (to me at least) that increasing the heap size for all of the > above tests will work fine. Please consider the changes to those tests > Reviewed by me, nice work! > > Unfortunately I don't know the "nsk" tests well enough to review those > changes... :( > > Thanks, > Erik From rkennke at redhat.com Tue Jun 5 19:26:25 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 21:26:25 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> Message-ID: <0f6bc000-b1f9-ba23-7bb1-4397a0459db7@redhat.com> As mentioned in another thread, we in Shenandoah have decided to skip JNI fast getfield stuff for now. We'll probably address it and implement the extended range speculative PC thing later, in a separate RFE. I ripped out the jniFastGetField changes from the patch: http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.02/ Is it good now to push? 
Roman > Hi Roman, > > On 2018-06-04 22:49, Roman Kennke wrote: >> Am 04.06.2018 um 22:16 schrieb Erik Österlund: >>> Hi Roman, >>> >>> On 2018-06-04 21:42, Roman Kennke wrote: >>>> Am 04.06.2018 um 18:43 schrieb Erik Österlund: >>>>> Hi Roman, >>>>> >>>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>>> Ok, right. Very good catch! >>>>>> >>>>>> This should do it, right? Sorry, I couldn't easily make an >>>>>> incremental >>>>>> diff: >>>>>> >>>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>>> Unfortunately, I think there is one more problem for you. >>>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>>>> loads shot from the fantastic jni fast get field code. But it >>>>> currently >>>>> expects an exact PC match: >>>>> >>>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>>   for (int i = 0; i < count; i++) { >>>>>     if (speculative_load_pclist[i] == pc) { >>>>>       return slowcase_entry_pclist[i]; >>>>>     } >>>>>   } >>>>>   return (address)-1; >>>>> } >>>>> >>>>> This means that the way this is written now, speculative_load_pclist >>>>> registers the __ pc() right before the access_load_at call. This puts >>>>> constraints on whatever is done inside of access_load_at to only >>>>> speculatively load on the first assembled instruction. >>>>> >>>>> If you imagine a scenario where you have a GC with Brooks pointers >>>>> that >>>>> also uncommits memory (like Shenandoah I presume), then I imagine you >>>>> would need something more here. If you start with a forwarding pointer >>>>> load, then that can trap (which is probably caught by the exact PC >>>>> match). But then there will be a subsequent load of the value in the >>>>> to-space object, which will not be protected. But this is also loaded >>>>> speculatively (as the subsequent safepoint counter check could >>>>> invalidate the result), and could therefore crash the VM unless >>>>> protected, as the signal handler code fails to recognize this is a >>>>> speculative load from jni fast get field. >>>>> >>>>> I imagine the solution to this would be to let speculative_load_pclist >>>>> specify a range for fuzzy SIGSEGV matching in the signal handler, >>>>> rather >>>>> than an exact PC (i.e. speculative_load_pclist_start and >>>>> speculative_load_pclist_end). That would give you enough freedom to >>>>> use >>>>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>>>> maintain jni fast get field is *really* worth it. >>>> You are probably right in general. But I also think we are fine with >>>> Shenandoah. Both the fwd ptr load and the field load are constructed >>>> with the same base operand. If the oop is NULL (or invalid memory) it >>>> will blow up on fwdptr load just the same as it would blow up on field >>>> load. We maintain an invariant that the fwd ptr of a valid oop results >>>> in a valid (and equivalent) oop. I therefore think we are fine for now. >>>> Should a GC ever need anything else here, I'd worry about it then. >>>> Until >>>> this happens, let's just hope to never need to touch this code again >>>> ;-) >>> No I'm afraid that is not safe. After loading the forwarding pointer, >>> the thread could be preempted, then any number of GC cycles could pass, >>> which means that the address that the forwarding pointer (read at that earlier >>> point) points to could be uncommitted memory. In fact it is unsafe >>> even without uncommitted memory. 
Because after resolving the jobject to >>> some address in the heap, the thread could get preempted, and any number >>> of GC cycles could pass, causing the forwarding pointer to be read from >>> some address in the heap that no longer is the forwarding pointer of an >>> object, but rather a random integer. This causes the second load to blow >>> up, even without uncommitting memory. >>> >>> Here is an attempt at showing different things that can go wrong: >>> >>> obj = *jobject >>> // preempted for N GC cycles, meaning obj might 1) be a valid pointer to >>> an object, or 2) be a random pointer inside of the heap or outside of >>> the heap >>> >>> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >>> pointer, no longer representing the forwarding pointer, or 3) read a >>> consistent forwarding pointer >>> >>> // preempted for N GC cycles, causing forward_pointer to point at pretty >>> much anything >>> >>> result = *(forward_pointer + offset) // may 1) read a valid primitive >>> value, if previous two loads were not messed up, or 2) read some random >>> value that no longer corresponds to the object field, or 3) crash >>> because either the forwarding pointer did point at something valid that >>> subsequently got relocated and uncommitted before the load hits, or >>> because the forwarding pointer never pointed to anything valid in the >>> first place, because the forwarding pointer load read a random pointer >>> due to the object relocating after the jobject was resolved. >>> >>> The summary is that both loads need protection due to how the thread in >>> native state runs freely without necessarily caring about the GC running >>> any number of GC cycles concurrently, making the memory super slippery, >>> which risks crashing the VM without the proper protection. >> AWW WTF!? We are in native state in this code? > > Yes. This is one of the most dangerous code paths we have in the VM I > think. > >> It might be easier to just call bsa->resolve_for_read() (which emits the >> fwd ptr load), then issue another: >> >> speculative_load_pclist[count] = __ pc(); >> >> need to juggle with the counter and double-emit slowcase_entry_pclist, >> and all this conditionally for Shenandoah. Gaa. > > I think that by just having the speculative load PC list take a range as > opposed to a precise PC, and check that a given PC is in that range, and > not just exactly equal to a PC, the problem is solved for everyone. > >> Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. > > Yeah, sometimes you wonder if it's really worth the maintenance to keep > this thing. > >> Funny how we had this code in Shenandoah literally for years, and >> nobody's ever tripped over it. > > Yeah it is a rather nasty race to detect. > >> It's one of those cases where I almost suspect it's been done in Java1.0 >> when lots of JNI code was in use because some stuff couldn't be done in >> fast in Java, but nowadays doesn't really make a difference. *Sigh* > > :) > >>>>>> Unfortunately, I cannot really test it because of: >>>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>>> >>>>>> >>>>>> >>>>> That is unfortunate. If I were you, I would not dare to change >>>>> anything >>>>> in jni fast get field without testing it - it is very error prone. >>>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>>> else resolve it myself. >>> Yeah. >>> >>>> Can I consider this change reviewed by you? 
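[A sketch of the range-based matching described above, for illustration only; speculative_load_pclist_start/_end are hypothetical names suggested in the discussion, not fields that exist in the current code:]

    address JNI_FastGetField::find_slowcase_pc(address pc) {
      for (int i = 0; i < count; i++) {
        // accept any pc inside the i-th speculative load sequence,
        // not just its first instruction
        if (pc >= speculative_load_pclist_start[i] &&
            pc <  speculative_load_pclist_end[i]) {
          return slowcase_entry_pclist[i];
        }
      }
      return (address)-1;
    }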
>>> I think we should agree about the safety of doing this for Shenandoah in >>> particular first. I still think we need the PC range as opposed to exact >>> PC to be caught in the signal handler for this to be safe for your GC >>> algorithm. >> >> Yeah, I agree. I need to think this through a little bit. > > Yeah. Still think the PC range check solution should do the trick. > >> Thanks for pointing out this bug. I can already see nightly builds >> suddenly starting to fail over it, now that it's known :-) > > No problem! > > Thanks, > /Erik > >> Roman >> >> > From jesper.wilhelmsson at oracle.com Tue Jun 5 19:28:50 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 5 Jun 2018 21:28:50 +0200 Subject: RFR: JDK-8200623: Primitive heap access for interpreter BarrierSetAssembler/x86 In-Reply-To: <97464ef7-e100-2484-32f1-33d97f394e07@redhat.com> References: <5B155A4B.7020009@oracle.com> <5B16AD51.4090009@oracle.com> <2f29589d-8dd4-4bfa-69b7-2b1c3019b372@oracle.com> <97464ef7-e100-2484-32f1-33d97f394e07@redhat.com> Message-ID: <26BCF2F4-E9A4-45DC-AAF4-AD2055FFEBFB@oracle.com> Yes, all failures in 2018-06-05-1735591.roman.source are due to JDK-8203780 (or I should probably say JDK-8204321 which is the bug to handle the issue). /Jesper > On 5 Jun 2018, at 20:48, Roman Kennke wrote: > > Hello all, > > submit came back with failures again. See below. From afar it looks like > the same stuff caused by JDK-8203780 that made the tests for > "JDK-8202776: Modularize GC allocations in runtime" fail. Can you please > confirm that it's ok to push (or not)? > > Thanks, Roman > > >> Hi Roman, >> >> Looks good. >> >> Thanks, >> /Erik >> >> On 2018-06-05 18:29, Roman Kennke wrote: >>> Am 05.06.2018 um 17:33 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> On 2018-06-05 16:07, Roman Kennke wrote: >>>>> Hi Erik, >>>>> >>>>>>> JDK-8199417 added better modularization for interpreter barriers. >>>>>>> Shenandoah and possibly future GCs also need barriers for primitive >>>>>>> access. >>>>>>> >>>>>>> Some notes on implementation: >>>>>>> - float/double/long access produced some headaches for the following >>>>>>> reasons: >>>>>>> >>>>>>> - float and double would either take XMMRegister which is not >>>>>>> compatible with Register >>>>>>> - or load-from/store-to the floating point stack (see >>>>>>> MacroAssembler::load/store_float/double) >>>>>>> - long access on x86_32 would load-into/store-from 2 >>>>>>> registers, or >>>>>>> else use a trick via the floating point stack to do atomic access >>>>>>> >>>>>>> None of this seemed easy/nice to do with the API. I helped myself by >>>>>>> accepting noreg as dst/src argument, which means the corresponding >>>>>>> tos >>>>>>> (i.e. ltos, ftos, dtos) and the BSA would then access from/to >>>>>>> xmm0/float-stack in case of float/double or the >>>>>>> double-reg/float-stack >>>>>>> in case of long/32bit, which is all that we ever need. >>>>>> It is indeed a bit painful that in hotspot, XMMRegister is not a >>>>>> Register (unlike the Graal implementation). And I think I agree >>>>>> that if >>>>>> it is indeed only ever needed by ToS, then this is the preferable >>>>>> solution to having two almost identicaly APIs - one for integral types >>>>>> and one for floating point types. It beats me though, that in this >>>>>> patch >>>>>> you do not address the jni fast get field optimization on x86. It is >>>>>> seemingly missing barriers now. Should probably make sure that one >>>>>> fits >>>>>> in as well. 
Fortunately, I think it should work out pretty well. >>>>> As mentioned in the review thread for JDK-8203172, we in Shenandoah >>>>> land >>>>> decided to disable JNI fastgetfield stuff for now. I am not sure >>>>> whether >>>>> or not we want to go through access_* anyway? It's probably more >>>>> consistent if we do. If we decide that we do, I'll add it to this >>>>> patch, >>>>> if we don't, I'll rip it out of JDK-8203172 :-) >>>> Okay so let's not deal with JNI fast get field in either of these >>>> two patches, and leave that exercise to future adventurers that feel >>>> brave enough to change that code. If you decide later to add >>>> modularization for that to enable this fantastic performance >>>> optimization on Shenandoah, then perhaps we can have a new patch with >>>> the appropriate code (possibly the speculative PC range filter I >>>> proposed) then in the new modularization. >>>> >>>> In that case, it looks reasonable the way it is now. Perhaps there >>>> should be a T_ADDRESS case for stores for consistency though? You can >>>> now load a T_ADDRESS but not store it, which is a bit surprising I >>>> suppose. Otherwise it looks good. >>>> >>> Ok I've added it: >>> >>> Incremental: >>> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8200623/webrev.01/ >>> >>> Good now? >>> >>> Thanks, Roman >>> >> > > From rkennke at redhat.com Tue Jun 5 19:34:55 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 5 Jun 2018 21:34:55 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <5B16A8C5.7040005@oracle.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> Message-ID: <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> Ok, done here: Incremental: http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01/ Good now? Thanks, Roman > Hi Roman, > > Sure. As long as there is no need for AS_RAW equals in the assembly > code, we don't need to add it now. > However, that means that there are currently no properties in the > decorators we care about at the moment. Therefore, the decorator > parameter of obj_equals should be removed; it serves no purpose. > > Thanks, > /Erik > > On 2018-06-05 17:01, Roman Kennke wrote: >> Am 05.06.2018 um 17:00 schrieb Erik Österlund: >>> Hi Roman, >>> >>> On 2018-06-05 16:16, Roman Kennke wrote: >>>> Am 04.06.2018 um 14:38 schrieb Erik Österlund: >>>>> Hi Roman, >>>>> >>>>>   42 >>>>>   43   virtual void obj_equals(MacroAssembler* masm, DecoratorSet >>>>> decorators, >>>>>   44                           Register obj1, Register obj2); >>>>>   45 >>>>> >>>>> I don't think we need to pass in any decorators here. Perhaps one day >>>>> there will be some important semantic property to deal with, but >>>>> today I >>>>> do not think there are any properties we care about, except possibly >>>>> AS_RAW, but that would never propagate into the BarrierSetAssembler >>>>> anyway. >>>>> >>>>> On that topic, I noticed that today we do the raw version of e.g. >>>>> load_heap_oop inside of the BarrierSetAssembler, and to use it you >>>>> would >>>>> call load_heap_oop(AS_RAW). But the cmpoop stuff does it in a >>>>> different >>>>> way (cmpoop_raw in the macro assembler). 
I think it would be ideal >>>>> if we >>>>> could do it the same way, which would involve calling cmpoop with >>>>> AS_RAW >>>>> to get a raw oop comparison, residing in BarrierSetAssembler, with the >>>>> usual hardwiring in the corresponding macro assembler function when it >>>>> observes AS_RAW. >>>>> >>>>> So it would look something like this: >>>>> >>>>> void cmpoop(Register src1, Address src2, DecoratorSet decorators = >>>>> AS_NORMAL); >>>>> >>>>> What do you think? >>>> cmpoop_raw() is not the AS_RAW base implementation. It's only there to >>>> help BarrierSetAssembler to implement the base >>>> obj_equals(Address|Register, jobject). We cannot access cmp_literal32() >>>> from outside the MacroAssembler. >>> In other words, there is no AS_RAW option "exposed" to public use, >>> right? Maybe there is no need for raw equals in our assembly code. >> Yes. That is correct. >> >>>> The mentioned hardwiring to call straight to BSA is probably going >>>> away too: >>>> https://bugs.openjdk.java.net/browse/JDK-8203232 >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-May/032240.html >>> I'm not sure I'm convinced that is an improvement. The expected >>> behaviour at the callsite is that the code in BarrierSetAssembler (which >>> is the level of the hierarchy that implements raw accesses) is run, and >>> nothing else. If anything else happens, it's a bug. So I hardwire that >>> at the callsite to always match the expected behaviour. To instead let >>> each level of the barrier class hierarchy remember to check for AS_RAW >>> and delegate to the parent class in a way that ultimately has the exact >>> same perceivable effect as the hardwiring, but in a much more error >>> prone way, does not sound like an improvement to me. Perhaps I can be >>> convinced otherwise if I understand what the concern is here and what >>> problem we are trying to solve. >> I don't lean very much either way. But it should be discussed under >> JDK-8203232. Considering that there is no use for cmpoop_raw() except as >> helper for BSA, do you agree that we don't need the AS_RAW hardwiring >> for obj_equals() ? Can I consider this patch Reviewed? >> >> Thanks, Roman >> > From coleen.phillimore at oracle.com Tue Jun 5 21:19:53 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 17:19:53 -0400 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files Message-ID: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms See discussion in bug.? Left os::is_MP() conditional for arm32. Tested by Boris U, thanks! open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8204301 Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, windows-x64, macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 and linux-zero.?? Boris built on arm32.? There were no actual changes on s390 or ppc. 
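[Sketch, for illustration only, of the sort of result on linux-x64 once the os::is_MP() check around the locked add is gone -- the serializing add is simply executed unconditionally; not necessarily the exact webrev content:]

    inline void OrderAccess::fence() {
      // always execute the serializing locked add; it is harmless on uniprocessors
    #ifdef AMD64
      __asm__ volatile ("lock; addl $0,0(%%rsp)" : : : "cc", "memory");
    #else
      __asm__ volatile ("lock; addl $0,0(%%esp)" : : : "cc", "memory");
    #endif
      compiler_barrier();
    }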
Thanks, Coleen From coleen.phillimore at oracle.com Tue Jun 5 21:25:23 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 17:25:23 -0400 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files In-Reply-To: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> References: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> Message-ID: <677c55a1-45e5-9ce8-4a2e-a45e22361b1d@oracle.com> Also, sorry that webrev doesn't work for this.? The orderAccess_os_cpu.inline.hpp files were renamed to orderAccess_os_cpu.hpp and there are minor differences in each that you have to use your browser back button to navigate.?? The rest of the changes are orderAccess.inline.hpp => orderAccess.hpp. Lastly, I'll update copyrights during commit. thanks, Coleen On 6/5/18 5:19 PM, coleen.phillimore at oracle.com wrote: > Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove > os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms > > See discussion in bug.? Left os::is_MP() conditional for arm32. Tested > by Boris U, thanks! > > open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8204301 > > Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, > windows-x64, macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 > and linux-zero.?? Boris built on arm32.? There were no actual changes > on s390 or ppc. > > Thanks, > Coleen From stefan.karlsson at oracle.com Tue Jun 5 21:32:08 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 5 Jun 2018 23:32:08 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <91c643c4-4ed9-ba58-4587-15511b52c0e6@oracle.com> Hi Leonid, On 2018-06-05 03:46, Leonid Mesnik wrote: > Hi > > GC stress tests gcold, gcbasher, locker don't depend from GC. However they are not executed with default GC. They contain separate tests which define used collector and skip if any other collector is set explicitly. An example is > http://hg.openjdk.java.net/jdk/jdk/file/tip/test/hotspot/jtreg/gc/stress/gcold/TestGCOldWithG1.java > > Do you have similar tests for ZGC? I didn't found them in test patch. No, we didn't have those for ZGC. Here's a webrev that adds those tests: ?http://cr.openjdk.java.net/~stefank/8204210/webrev.gcstress.01/ Thanks for looking at this! StefanK > > Leonid > >> On Jun 1, 2018, at 2:41 PM, Per Liden wrote: >> >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> This patch contains the actual ZGC implementation, the new unit tests and other changes needed in HotSpot. >> >> * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> This patch contains changes to existing tests needed by ZGC. 
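[For anyone trying the patch: since ZGC is proposed as an experimental collector, a run would look roughly like the following; heap size and application name are placeholders:]

    java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -Xlog:gc MyApp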
>> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> make/autoconf/hotspot.m4 >> make/hotspot/lib/JvmFeatures.gmk >> src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not currently offer a way to easily break this out). >> >> src/hotspot/cpu/x86/x86.ad >> src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific code, most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) condition. There should only be two logic changes (one in idealKit.cpp and one in node.cpp) that are still active when ZGC is disabled. We believe these are low risk changes and should not introduce any real change i behavior when using other GCs. >> >> src/hotspot/share/adlc/formssel.cpp >> src/hotspot/share/opto/* >> src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> src/hotspot/share/gc/shared/* >> src/hotspot/share/runtime/vmStructs.cpp >> src/hotspot/share/runtime/vm_operations.hpp >> src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> src/hotspot/share/gc/z/* >> src/hotspot/cpu/x86/gc/z/* >> src/hotspot/os_cpu/linux_x86/gc/z/* >> test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> src/hotspot/share/jfr/* >> src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification because it do not makes sense or is not applicable to a GC using load barriers. >> >> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> src/hotspot/share/compiler/oopMap.cpp >> src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. However, this will go away in the future, when we have the next iteration of C2's load barriers in place (aka "C2 late barrier insertion"). >> >> src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing is changed in the future. >> >> src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in ZHash. >> >> src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. 
>> >> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> A number of new ZGC specific gtests have been added, in test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> We have been continuously been running a number stress tests throughout the development, these include: >> >> specjbb2000 >> specjbb2005 >> specjbb2015 >> specjvm98 >> specjvm2008 >> dacapo2009 >> test/hotspot/jtreg/gc/stress/gcold >> test/hotspot/jtreg/gc/stress/systemgc >> test/hotspot/jtreg/gc/stress/gclocker >> test/hotspot/jtreg/gc/stress/gcbasher >> test/hotspot/jtreg/gc/stress/finalizer >> Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team From per.liden at oracle.com Tue Jun 5 22:48:10 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 00:48:10 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: Hi all, Here are updated webrevs reflecting the feedback received so far. ZGC Master Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master ZGC Testing Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing Thanks! /Per On 06/01/2018 11:41 PM, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency > Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > This patch contains the actual ZGC implementation, the new unit tests > and other changes needed in HotSpot. > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, > with a short summary describing each group. > > * Build support - Making ZGC an optional feature. > > make/autoconf/hotspot.m4 > make/hotspot/lib/JvmFeatures.gmk > src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does > not currently offer a way to easily break this out). > > src/hotspot/cpu/x86/x86.ad > src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific > code, most of which is guarded behind a #if INCLUDE_ZGC and/or if > (UseZGC) condition. There should only be two logic changes (one in > idealKit.cpp and one in node.cpp) that are still active when ZGC is > disabled. We believe these are low risk changes and should not introduce > any real change i behavior when using other GCs. 
> > src/hotspot/share/adlc/formssel.cpp > src/hotspot/share/opto/* > src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. > > src/hotspot/share/gc/shared/* > src/hotspot/share/runtime/vmStructs.cpp > src/hotspot/share/runtime/vm_operations.hpp > src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > src/hotspot/share/gc/z/* > src/hotspot/cpu/x86/gc/z/* > src/hotspot/os_cpu/linux_x86/gc/z/* > test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > src/hotspot/share/jfr/* > src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification because > it do not makes sense or is not applicable to a GC using load barriers. > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > src/hotspot/share/compiler/oopMap.cpp > src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a > hack. However, this will go away in the future, when we have the next > iteration of C2's load barriers in place (aka "C2 late barrier insertion"). > > src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is > changed in the future. > > src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > We have been continuously been running a number stress tests > throughout the development, these include: > > specjbb2000 > specjbb2005 > specjbb2015 > specjvm98 > specjvm2008 > dacapo2009 > test/hotspot/jtreg/gc/stress/gcold > test/hotspot/jtreg/gc/stress/systemgc > test/hotspot/jtreg/gc/stress/gclocker > test/hotspot/jtreg/gc/stress/gcbasher > test/hotspot/jtreg/gc/stress/finalizer > Kitchensink > > > Thanks! > > /Per, Stefan & the ZGC team From david.holmes at oracle.com Wed Jun 6 00:45:05 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 6 Jun 2018 10:45:05 +1000 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files In-Reply-To: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> References: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> Message-ID: Hi Coleen, On 6/06/2018 7:19 AM, coleen.phillimore at oracle.com wrote: > Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove > os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms > > See discussion in bug.? Left os::is_MP() conditional for arm32. Tested > by Boris U, thanks! 
> > open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8204301 That all looks fine to me. Only observation I have is that I think the compiler_barrier() calls in the x86 fence routines (except perhaps Windows) are redundant. I think the original code should have been: if (os::is_MP()) { __ asm lock add ... else compiler_barrier(); } but this is harmless and has no runtime impact. Thanks, David > Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, windows-x64, > macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 and > linux-zero.?? Boris built on arm32.? There were no actual changes on > s390 or ppc. > > Thanks, > Coleen From zgu at redhat.com Wed Jun 6 00:58:18 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Tue, 5 Jun 2018 20:58:18 -0400 Subject: RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: Message-ID: <9c1af95d-2c0f-b96d-76a7-5b40228996de@redhat.com> On 06/05/2018 12:10 PM, Thomas St?fe wrote:> On Tue, Jun 5, 2018 at 3:46 PM, Adam Farley8 wrote: >> Hi All, >> >> Native memory allocation for DBBs is tracked in java.nio.Bits, but that >> only includes what the user thinks they are allocating. >> > > Which is exactly what I would expect as a user... > I agree with Thomas, there is no point for a user to aware of tracking overhead, and the overhead only incurs when native memory tracking is on. As a matter of fact, it can really confuse user that values can be varied, depending on whether native memory tracking is on. Thanks, -Zhengyu >> When the VM adds extra memory to the allocation amount this extra bit is >> not represented in the Bits total. A cursory glance >> shows, minimum, that we round the requested memory quantity up to the heap >> word size in the Unsafe.allocateMemory code > > which I do not understand either - why do we do this? After all, > normal allocations from inside hotspot do not get aligned up in size, > and the java doc to Unsafe allocateMemory does not state anything > about the size being aligned. > > In addition to questioning the align up of the user requested size, I > would be in favor of adding a new NMT tag for these, maybe "mtUnsafe"? > That would be an easy fix. > >> , and >> something to do with nmt_header_size in os:malloc() (os.cpp) too. > > That is mighty unspecific and also wrong. The align-up mentioned above > goes into the size reported by Bits; the nmt header size does not. > >> >> On its own, and in small quantities, align_up(sz, HeapWordSize) isn't that >> big of an issue. But when you allocate a lot of DBBs, >> and coupled with the nmt_header_size business, it makes the Bits values >> wrong. The more DBB allocations, the more inaccurate those >> numbers will be. > > To be annoyingly precise, it will never be more wrong than 1:7 on > 64bit machines :) - if all memory requested via Unsafe.allocateMemory > would be of size 1 byte. > >> >> To get the "+X", it seems to me that the best option would be to introduce >> an native method in Bits that fetches "X" directly >> from Hotspot, using the same code that Hotspot uses (so we'd have to >> abstract-out the Hotspot logic that adds X to the memory >> quantity). This way, anyone modifying the Hotspot logic won't risk >> rendering the Bits logic wrong again. > > I don't follow that. > >> >> That's only one way to fix the accuracy problem here though. Suggestions >> welcome. 
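To put rough numbers on the two things Adam mentions above (the HeapWordSize round-up and the NMT malloc header), here is a small standalone sketch; the constants are assumptions for a 64-bit JVM rather than values taken from the sources, and align_up() is a local stand-in for HotSpot's helper:

#include <cstddef>
#include <cstdio>

static const size_t kHeapWordSize  = 8;   // assumption: 64-bit heap word
static const size_t kNMTHeaderSize = 16;  // assumption: NMT malloc header overhead

// Local stand-in for HotSpot's align_up(); alignment must be a power of two.
static size_t align_up(size_t sz, size_t alignment) {
  return (sz + alignment - 1) & ~(alignment - 1);
}

int main() {
  size_t requested = 17;                                 // what java.nio.Bits records
  size_t rounded   = align_up(requested, kHeapWordSize); // after the HeapWordSize round-up
  size_t footprint = rounded + kNMTHeaderSize;           // malloc'ed bytes when NMT is on
  std::printf("requested=%zu rounded=%zu with-NMT-header=%zu\n",
              requested, rounded, footprint);
  return 0;
}
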
> > You are throwing two effects together: > > - As mentioned above, I consider the align-up of the user requested > size to be at least questionable. It shows up as user size in NMT > which should not be. I also fail to see a compelling reason for it, > but maybe someone else can enlighten me. > > - But anything else - NMT headers, overwriter guards, etc added by the > VM I consider in the same class as any other overhead incurred e.g. by > the CRT or the OS when calling malloc (e.g. malloc allocator bucket > size). Basically, rss will go up by more than size requested by > malloc. Something maybe worth noting, but IMHO not as part of the > numbers returned by java.nio.Bits. > > Just my 2 cents. > > Best Regards, Thomas > >> >> Best Regards >> >> Adam Farley >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From coleen.phillimore at oracle.com Wed Jun 6 02:29:10 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 5 Jun 2018 22:29:10 -0400 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files In-Reply-To: References: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> Message-ID: <7f6c6061-b7bc-4791-b878-1177cc7a898b@oracle.com> On 6/5/18 8:45 PM, David Holmes wrote: > Hi Coleen, > > On 6/06/2018 7:19 AM, coleen.phillimore at oracle.com wrote: >> Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove >> os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms >> >> See discussion in bug.? Left os::is_MP() conditional for arm32. >> Tested by Boris U, thanks! >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8204301 > > That all looks fine to me. > > Only observation I have is that I think the compiler_barrier() calls > in the x86 fence routines (except perhaps Windows) are redundant. I > think the original code should have been: > > if (os::is_MP()) { > ? __ asm lock add ... > else > ? compiler_barrier(); > } > > but this is harmless and has no runtime impact. I see.? I'll leave them so I don't have to generate another webrev. Thanks for the review and discussion! Coleen > > Thanks, > David > >> Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, >> windows-x64, macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 >> and linux-zero.?? Boris built on arm32.? There were no actual changes >> on s390 or ppc. >> >> Thanks, >> Coleen From david.holmes at oracle.com Wed Jun 6 04:17:49 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 6 Jun 2018 14:17:49 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> Message-ID: Hi Erik, Jesper, On 6/06/2018 2:59 AM, jesper.wilhelmsson at oracle.com wrote: >> On 5 Jun 2018, at 08:10, David Holmes wrote: >> >> Sorry to be late to this party ... >> >> On 5/06/2018 6:10 AM, Erik Joelsson wrote: >>> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >>> Renamed the new jvm variant to "hardened". 
>> >> As it is a hardened server build I'd prefer if that were somehow reflected in the name. Though really I don't see why this should be restricted this way ... to be honest I don't see hardened as a variant of server vs. client vs. zero etc at all, you should be able to harden any of those. >> >> So IIUC with this change we will: >> - always build JDK native code "hardened" (if toolchain supports it) >> - only build hotspot "hardened" if requested; and in that case >> - jvm.cfg will list -server and -hardened with server as default >> >> Is that right? I can see that we may choose to always build Oracle JDK this way but it isn't clear to me that its suitable for OpenJDK. Nor why hotspot is selectable but JDK is not. ?? > > Sorry for the lack of information here. There has been a lot of off-list discussions behind this change, I've added the background to the bug now. > > The short version is that we see a ~25% regression in startup times if the JVM is compiled with the gcc flags to avoid speculative execution. We have not observed any performance regressions due to compiling the rest of the native libraries with these gcc flags, so there doesn't seem to be any reason to have different versions of other libraries. So "benevolent dictatorship"? ;-) My main concern is that the updated toolchains that support this have all been produced in a mad rush and quite frankly I expect them to be buggy. I don't think it is hard to enable the builder of OpenJDK to have full choice and control here. Cheers, David > /Jesper > >> Sorry. >> >> David >> ----- >> >>> /Erik >>> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>>> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >>>>> >>>>> Hello, >>>>> >>>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>>>>>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>>>>>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>>>>>> new jvm variant "altserver" which is the same as server, but with this new feature added. >>>>>> I think the classic name for such product configuration is "hardened", no? >>>>> I don't know. I'm open to suggestions on naming. >>>> "hardened" sounds good to me. >>>> >>>> The change looks good as well. >>>> /Jesper >>>> >>>>> /Erik >>>>>> -Aleksey >>>>>> > From matthias.baesken at sap.com Wed Jun 6 07:59:56 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 6 Jun 2018 07:59:56 +0000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with In-Reply-To: References: <3f7c0b36458a467b85c41ed467b41614@sap.com> Message-ID: <524eaa757f934d25a3f42d97daa5216b@sap.com> Thanks for the clarification ! From: Erik Joelsson [mailto:erik.joelsson at oracle.com] Sent: Dienstag, 5. Juni 2018 17:51 To: Baesken, Matthias ; 'hotspot-dev at openjdk.java.net' ; 'build-dev at openjdk.java.net' Cc: Zeller, Arno Subject: Re: RFR: JDK-8202384: Introduce altserver jvm variant with Hello Matthias, For GCC, you need 7.3.0 or later. For Microsoft you need VS2017 and I think some minimal update version (the option is called -Qspectre), we use 15.5.5. I was not involved in the benchmarking so I don't know any details there, only the conclusion. 
/Erik On 2018-06-05 01:30, Baesken, Matthias wrote: Hi Erik , is there some info available about the performance impact when disabling disabling speculative execution ? And which compiler versions are needed for this ? Best regards, Matthias >We need to add compilation flags for disabling speculative execution to >our native libraries and executables. In order to allow for users not >affected by problems with speculative execution to run a JVM at full >speed, we need to be able to ship two JVM libraries - one that is >compiled with speculative execution enabled, and one that is compiled >without. Note that this applies to the build time C++ flags, not the >compiler in the JVM itself. Luckily adding these flags to the rest of >the native libraries did not have a significant performance impact so >there is no need for making it optional there. > >This patch defines flags for disabling speculative execution for GCC and >Visual Studio and applies them to all binaries except libjvm when >available in the compiler. It defines a new jvm feature >no-speculative-cti, which is used to control whether to use the flags >for libjvm. It also defines a new jvm variant "altserver" which is the >same as server, but with this new feature added. > >For Oracle builds, we are changing the default for linux-x64 and >windows-x64 to build both server and altserver, giving the choice to the >user which JVM they want to use. If others would prefer this default, we >could make it default in configure as well. > >The change in GensrcJFR.gmk fixes a newly introduced race that appears >when building multiple jvm variants. > >Bug: https://bugs.openjdk.java.net/browse/JDK-8202384 > >Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.01 From martin.doerr at sap.com Wed Jun 6 08:26:56 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Wed, 6 Jun 2018 08:26:56 +0000 Subject: RFR(XXS): 8204335: [ppc] Assembler::add_const_optimized incorrect for some inputs In-Reply-To: References: Message-ID: <080b2356a3c243be8f21f66b5fa1a4b5@sap.com> Hi Volker, thank you for fixing. It's a very bad copy&paste bug. Your change looks good. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Volker Simonis Sent: Dienstag, 5. Juni 2018 18:06 To: HotSpot Open Source Developers Subject: RFR(XXS): 8204335: [ppc] Assembler::add_const_optimized incorrect for some inputs Hi, can I please have a review for this trivial, day-one, ppc-only fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204335/ https://bugs.openjdk.java.net/browse/JDK-8204335 There's a typo in Assembler::add_const_optimized() which makes it return incorrect results for some input values. The fix is trivial. Repeated here for your convenience: diff -r 1d476feca3c9 src/hotspot/cpu/ppc/assembler_ppc.cpp --- a/src/hotspot/cpu/ppc/assembler_ppc.cpp Mon Jun 04 11:19:54 2018 +0200 +++ b/src/hotspot/cpu/ppc/assembler_ppc.cpp Tue Jun 05 11:21:08 2018 +0200 @@ -486,7 +486,7 @@ // Case 2: Can use addis. if (xd == 0) { short xc = rem & 0xFFFF; // 2nd 16-bit chunk. 
- rem = (rem >> 16) + ((unsigned short)xd >> 15); + rem = (rem >> 16) + ((unsigned short)xc >> 15); if (rem == 0) { addis(d, s, xc); return 0; Thank you and best regards, Volker From rkennke at redhat.com Wed Jun 6 09:32:37 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 6 Jun 2018 11:32:37 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: I'm looking mostly at shared code changes. I can't really say much about C2, JFR, SA, tests and ZGC itself. Some comments/questions: - src/hotspot/share/classfile/vmSymbols.cpp: why are you enabling the clone intrinsic unconditionally and not under the usual: if (!InlineObjectCopy || !InlineArrayCopy) return true; ? - src/hotspot/share/oops/instanceRefKlass.inline.hpp I wonder if this makes sense to upstream separately? Also, I'm curious why we need to distinguish between weak and phantom? - src/hotspot/share/runtime/stackValue.cpp There's no reasonable way to abstract this via GC interface? Very nice work! Thanks, Roman > Hi all, > > Here are updated webrevs reflecting the feedback received so far. > > ZGC Master > ? Incremental: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master > ? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master > > ZGC Testing > ? Incremental: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing > ? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing > > Thanks! > > /Per > > On 06/01/2018 11:41 PM, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> ?? This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> ?? This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> ?? make/autoconf/hotspot.m4 >> ?? make/hotspot/lib/JvmFeatures.gmk >> ?? src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> ?? src/hotspot/cpu/x86/x86.ad >> ?? src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change i behavior when using other GCs. >> >> ?? src/hotspot/share/adlc/formssel.cpp >> ?? 
src/hotspot/share/opto/* >> ?? src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> ?? src/hotspot/share/gc/shared/* >> ?? src/hotspot/share/runtime/vmStructs.cpp >> ?? src/hotspot/share/runtime/vm_operations.hpp >> ?? src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> ?? src/hotspot/share/gc/z/* >> ?? src/hotspot/cpu/x86/gc/z/* >> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >> ?? test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> ?? src/hotspot/share/jfr/* >> ?? src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> ?? src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> ?? src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> ?? src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> ?? src/hotspot/share/compiler/oopMap.cpp >> ?? src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a >> hack. However, this will go away in the future, when we have the next >> iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> ?? src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> ?? src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> ?? A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> ?? We have been continuously been running a number stress tests >> throughout the development, these include: >> >> ???? specjbb2000 >> ???? specjbb2005 >> ???? specjbb2015 >> ???? specjvm98 >> ???? specjvm2008 >> ???? dacapo2009 >> ???? test/hotspot/jtreg/gc/stress/gcold >> ???? test/hotspot/jtreg/gc/stress/systemgc >> ???? test/hotspot/jtreg/gc/stress/gclocker >> ???? test/hotspot/jtreg/gc/stress/gcbasher >> ???? test/hotspot/jtreg/gc/stress/finalizer >> ???? Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team From aph at redhat.com Wed Jun 6 09:41:08 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 6 Jun 2018 10:41:08 +0100 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error. In-Reply-To: References: Message-ID: On 06/05/2018 04:41 PM, Zhongwei Yao wrote: > Hi, > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8204331 > > Webrev: > http://cr.openjdk.java.net/~zyao/8204331/webrev.00/ > > This patch fixes an assertion error on aarch64 in several jtreg tests. 
> > The failure assertion is in needs_acquiring_load_exclusive() in aarch64.ad when checking whether the graph is in "leading_to_normal" shape. The abnormal shape is generated in LibraryCallKit::inline_unsafe_load_store(). This patch fixes it by swap the order of "Pin SCMProj node" and "Insert post barrier" in LibraryCallKit::inline_unsafe_load_store(). > I don't think this is the right way to fix it. It'd be better to fix the code in aarhc64.ad. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rickard.backman at oracle.com Wed Jun 6 09:48:05 2018 From: rickard.backman at oracle.com (Rickard =?utf-8?Q?B=C3=A4ckman?=) Date: Wed, 6 Jun 2018 11:48:05 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <20180606094805.ml34woy2x7apyrfs@rbackman> Hi, I've looked at the C2 parts of things with Nils by my side. There are a couple of small things to note. classes.cpp misses an undef for optionalmacro. compile.cpp the print_method should probably be within the {} of macroExpand. escape.cpp has two else if cases where the code looks very common. Please make this into a function if possible? opcodes.cpp misses an undef for optionalmacro. In C2 in general, maybe BarrierSet::barrier_set()->barrier_set_c2() coule be Compile::barrier_set()? Looks good, great work everyone! /R On 06/01, Per Liden wrote: > Hi, > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency > Garbage Collector (Experimental) > > Please see the JEP for more information about the project. The JEP is > currently in state "Proposed to Target" for JDK 11. > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > Additional information in can also be found on the ZGC project wiki. > > https://wiki.openjdk.java.net/display/zgc/Main > > > Webrevs > ------- > > To make this easier to review, we've divided the change into two webrevs. > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > This patch contains the actual ZGC implementation, the new unit tests and > other changes needed in HotSpot. > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > This patch contains changes to existing tests needed by ZGC. > > > Overview of Changes > ------------------- > > Below follows a list of the files we add/modify in the master patch, with a > short summary describing each group. > > * Build support - Making ZGC an optional feature. > > make/autoconf/hotspot.m4 > make/hotspot/lib/JvmFeatures.gmk > src/hotspot/share/utilities/macros.hpp > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not > currently offer a way to easily break this out). > > src/hotspot/cpu/x86/x86.ad > src/hotspot/cpu/x86/x86_64.ad > > * C2 - Things that can't be easily abstracted out into ZGC specific code, > most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) > condition. There should only be two logic changes (one in idealKit.cpp and > one in node.cpp) that are still active when ZGC is disabled. We believe > these are low risk changes and should not introduce any real change i > behavior when using other GCs. > > src/hotspot/share/adlc/formssel.cpp > src/hotspot/share/opto/* > src/hotspot/share/compiler/compilerDirectives.hpp > > * General GC+Runtime - Registering ZGC as a collector. 
> > src/hotspot/share/gc/shared/* > src/hotspot/share/runtime/vmStructs.cpp > src/hotspot/share/runtime/vm_operations.hpp > src/hotspot/share/prims/whitebox.cpp > > * GC thread local data - Increasing the size of data area by 32 bytes. > > src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > * ZGC - The collector itself. > > src/hotspot/share/gc/z/* > src/hotspot/cpu/x86/gc/z/* > src/hotspot/os_cpu/linux_x86/gc/z/* > test/hotspot/gtest/gc/z/* > > * JFR - Adding new event types. > > src/hotspot/share/jfr/* > src/jdk.jfr/share/conf/jfr/* > > * Logging - Adding new log tags. > > src/hotspot/share/logging/* > > * Metaspace - Adding a friend declaration. > > src/hotspot/share/memory/metaspace.hpp > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > > * vmSymbol - Disabled clone intrinsic for ZGC. > > src/hotspot/share/classfile/vmSymbols.cpp > > * Oop Verification - In four cases we disabled oop verification because it > do not makes sense or is not applicable to a GC using load barriers. > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > src/hotspot/share/compiler/oopMap.cpp > src/hotspot/share/runtime/jniHandles.cpp > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. > However, this will go away in the future, when we have the next iteration of > C2's load barriers in place (aka "C2 late barrier insertion"). > > src/hotspot/share/runtime/stackValue.cpp > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is > changed in the future. > > src/hotspot/share/prims/jvmtiTagMap.cpp > > * Legal - Adding copyright/license for 3rd party hash function used in > ZHash. > > src/java.base/share/legal/c-libutl.md > > * SA - Adding basic ZGC support. > > src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > Testing > ------- > > * Unit testing > > A number of new ZGC specific gtests have been added, in > test/hotspot/gtest/gc/z/ > > * Regression testing > > No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > * Stress testing > > We have been continuously been running a number stress tests throughout > the development, these include: > > specjbb2000 > specjbb2005 > specjbb2015 > specjvm98 > specjvm2008 > dacapo2009 > test/hotspot/jtreg/gc/stress/gcold > test/hotspot/jtreg/gc/stress/systemgc > test/hotspot/jtreg/gc/stress/gclocker > test/hotspot/jtreg/gc/stress/gcbasher > test/hotspot/jtreg/gc/stress/finalizer > Kitchensink > > > Thanks! > > /Per, Stefan & the ZGC team From aph at redhat.com Wed Jun 6 10:03:02 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 6 Jun 2018 11:03:02 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> Message-ID: <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> On 06/05/2018 08:34 PM, Roman Kennke wrote: > Ok, done here: > > Incremental: > http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01/ > > Good now? 
It's be better to fix this up in LIR generation than to use jobject2reg: 1910 break; 1911 case T_OBJECT: 1912 case T_ARRAY: 1913 jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); 1914 __ cmpoop(reg1, rscratch1); 1915 return; -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From harold.seigel at oracle.com Wed Jun 6 13:58:11 2018 From: harold.seigel at oracle.com (Harold David Seigel) Date: Wed, 6 Jun 2018 09:58:11 -0400 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files In-Reply-To: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> References: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> Message-ID: <53f40078-900a-01c8-f7cb-292a0864cd29@oracle.com> Hi Coleen, This looks good! Thanks, Harold On 6/5/2018 5:19 PM, coleen.phillimore at oracle.com wrote: > Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove > os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms > > See discussion in bug.? Left os::is_MP() conditional for arm32. Tested > by Boris U, thanks! > > open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8204301 > > Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, > windows-x64, macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 > and linux-zero.?? Boris built on arm32.? There were no actual changes > on s390 or ppc. > > Thanks, > Coleen From coleen.phillimore at oracle.com Wed Jun 6 13:59:38 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 6 Jun 2018 09:59:38 -0400 Subject: RFR (Tedious) 8204301: Make OrderAccess functions available to hpp rather than inline.hpp files In-Reply-To: <53f40078-900a-01c8-f7cb-292a0864cd29@oracle.com> References: <2babb670-e230-667d-aac0-e2b4e51f6a74@oracle.com> <53f40078-900a-01c8-f7cb-292a0864cd29@oracle.com> Message-ID: <53f4ecb8-489a-60d4-40c9-5503cf7b8ffb@oracle.com> Thanks Harold! Coleen On 6/6/18 9:58 AM, Harold David Seigel wrote: > Hi Coleen, > > This looks good! > > Thanks, Harold > > > On 6/5/2018 5:19 PM, coleen.phillimore at oracle.com wrote: >> Summary: move orderAccess.inline.hpp into orderAccess.hpp and remove >> os.hpp inclusion and conditional os::is_MP() for fence on x86 platforms >> >> See discussion in bug.? Left os::is_MP() conditional for arm32. >> Tested by Boris U, thanks! >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8204301.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8204301 >> >> Tested on mach5 hs-tier1-2 on Oracle platforms: linux-x64, >> windows-x64, macosx-x64 and solaris-sparcv9.? Built on linux-aarch64 >> and linux-zero.?? Boris built on arm32.? There were no actual changes >> on s390 or ppc. >> >> Thanks, >> Coleen > From per.liden at oracle.com Wed Jun 6 14:16:32 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 16:16:32 +0200 Subject: RFR: 8204474: Have instanceRefKlass use HeapAccess when loading the referent Message-ID: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> Hi, To support concurrent reference processing in ZGC, instanceRefKlass::try_discover() can no longer use RawAccess to load the referent field. Instead it should use HeapAccess or HeapAccess depending on the reference type. This patch also adjusts InstanceRefKlass::trace_reference_gc() for the same reason. 
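As a rough sketch of the decorated loads this describes (illustrative only, not the actual patch; referent_offset stands for the Reference.referent field offset and the includes may differ):

// Tag the referent load with its reference strength so HeapAccess can
// dispatch to the GC's barriers, instead of doing a RawAccess load.
#include "memory/referenceType.hpp"
#include "oops/access.hpp"
#include "oops/oop.hpp"

static oop load_referent_sketch(oop reference, ReferenceType type, ptrdiff_t referent_offset) {
  if (type == REF_PHANTOM) {
    return HeapAccess<ON_PHANTOM_OOP_REF>::oop_load_at(reference, referent_offset);
  }
  return HeapAccess<ON_WEAK_OOP_REF>::oop_load_at(reference, referent_offset);
}
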
Bug: https://bugs.openjdk.java.net/browse/JDK-8204474 Webrev: http://cr.openjdk.java.net/~pliden/8204474/webrev.0 Testing: This patch has been part of the ZGC repository for quite some time and gone through various testing, including tier{1,2,3,4,5,6} in mach5. /Per From per.liden at oracle.com Wed Jun 6 14:16:37 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 16:16:37 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <23565148-02aa-b381-1728-c1cfe90054e3@oracle.com> Hi Roman, On 06/06/2018 11:32 AM, Roman Kennke wrote: > I'm looking mostly at shared code changes. I can't really say much about > C2, JFR, SA, tests and ZGC itself. > > Some comments/questions: > > - src/hotspot/share/classfile/vmSymbols.cpp: > why are you enabling the clone intrinsic unconditionally and not under > the usual: > if (!InlineObjectCopy || !InlineArrayCopy) return true; > ? Please note that the function is called is_disabled_by_flags(). So we're actually disabling (not enabling) it unconditionally for ZGC. > > - src/hotspot/share/oops/instanceRefKlass.inline.hpp > I wonder if this makes sense to upstream separately? Also, I'm curious Yes, I think you're right. I don't see a reason why this couldn't be upstreamed separately. I filed https://bugs.openjdk.java.net/browse/JDK-8204474 and sent an RFR to hotspot-dev. > why we need to distinguish between weak and phantom? Generally speaking, we should always annotate oop accesses with the proper strengths, which is why we distinguish between weak and phantom here. For ZGC, in this very specific case, we will deep down in the barrier code eventually do the same thing in both cases. However, such decisions should be made by the GC/BarrierSet and not the access call-site. A different GC might want to make a different decision. An alternative would be to use ON_UNKNOWN_OOP_REF, which tells the BarrierSet that is needs to apply additional logic to figure out what the strength really is. However, this comes with a run-time overhead, and since we already have the type in try_discover() we can use that to do the proper access and avoid that overhead. > > - src/hotspot/share/runtime/stackValue.cpp > There's no reasonable way to abstract this via GC interface? This code is there to solve a very ZGC specific issue, which is that a deopt can happens between a load and its load barrier. As I mentioned in the initial RFR mail, this will go away in our next iteration of our C2 load barriers [1], which will by design make sure that this can't happen. We therefore didn't think it was worth the effort to create a abstraction for this, since it's highly ZGC specific and such an abstraction would become useless/unused pretty soon anyway. [1] Nils is currently working on this, but it will not be part of the initial upstreaming of ZGC. > > Very nice work! Thanks Roman! And thanks a lot for reviewing! /Per > > Thanks, > Roman > > >> Hi all, >> >> Here are updated webrevs reflecting the feedback received so far. >> >> ZGC Master >> Incremental: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >> >> ZGC Testing >> Incremental: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >> >> Thanks! 
>> >> /Per >> >> On 06/01/2018 11:41 PM, Per Liden wrote: >>> Hi, >>> >>> Please review the implementation of JEP 333: ZGC: A Scalable >>> Low-Latency Garbage Collector (Experimental) >>> >>> Please see the JEP for more information about the project. The JEP is >>> currently in state "Proposed to Target" for JDK 11. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>> >>> Additional information in can also be found on the ZGC project wiki. >>> >>> https://wiki.openjdk.java.net/display/zgc/Main >>> >>> >>> Webrevs >>> ------- >>> >>> To make this easier to review, we've divided the change into two webrevs. >>> >>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>> >>> This patch contains the actual ZGC implementation, the new unit >>> tests and other changes needed in HotSpot. >>> >>> * ZGC Testing: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>> >>> This patch contains changes to existing tests needed by ZGC. >>> >>> >>> Overview of Changes >>> ------------------- >>> >>> Below follows a list of the files we add/modify in the master patch, >>> with a short summary describing each group. >>> >>> * Build support - Making ZGC an optional feature. >>> >>> make/autoconf/hotspot.m4 >>> make/hotspot/lib/JvmFeatures.gmk >>> src/hotspot/share/utilities/macros.hpp >>> >>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>> does not currently offer a way to easily break this out). >>> >>> src/hotspot/cpu/x86/x86.ad >>> src/hotspot/cpu/x86/x86_64.ad >>> >>> * C2 - Things that can't be easily abstracted out into ZGC specific >>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>> (UseZGC) condition. There should only be two logic changes (one in >>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>> disabled. We believe these are low risk changes and should not >>> introduce any real change i behavior when using other GCs. >>> >>> src/hotspot/share/adlc/formssel.cpp >>> src/hotspot/share/opto/* >>> src/hotspot/share/compiler/compilerDirectives.hpp >>> >>> * General GC+Runtime - Registering ZGC as a collector. >>> >>> src/hotspot/share/gc/shared/* >>> src/hotspot/share/runtime/vmStructs.cpp >>> src/hotspot/share/runtime/vm_operations.hpp >>> src/hotspot/share/prims/whitebox.cpp >>> >>> * GC thread local data - Increasing the size of data area by 32 bytes. >>> >>> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>> >>> * ZGC - The collector itself. >>> >>> src/hotspot/share/gc/z/* >>> src/hotspot/cpu/x86/gc/z/* >>> src/hotspot/os_cpu/linux_x86/gc/z/* >>> test/hotspot/gtest/gc/z/* >>> >>> * JFR - Adding new event types. >>> >>> src/hotspot/share/jfr/* >>> src/jdk.jfr/share/conf/jfr/* >>> >>> * Logging - Adding new log tags. >>> >>> src/hotspot/share/logging/* >>> >>> * Metaspace - Adding a friend declaration. >>> >>> src/hotspot/share/memory/metaspace.hpp >>> >>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>> >>> src/hotspot/share/oops/instanceRefKlass.inline.hpp >>> >>> * vmSymbol - Disabled clone intrinsic for ZGC. >>> >>> src/hotspot/share/classfile/vmSymbols.cpp >>> >>> * Oop Verification - In four cases we disabled oop verification >>> because it do not makes sense or is not applicable to a GC using load >>> barriers. >>> >>> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>> src/hotspot/share/compiler/oopMap.cpp >>> src/hotspot/share/runtime/jniHandles.cpp >>> >>> * StackValue - Apply a load barrier in case of OSR. 
This is a bit of a >>> hack. However, this will go away in the future, when we have the next >>> iteration of C2's load barriers in place (aka "C2 late barrier >>> insertion"). >>> >>> src/hotspot/share/runtime/stackValue.cpp >>> >>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>> is changed in the future. >>> >>> src/hotspot/share/prims/jvmtiTagMap.cpp >>> >>> * Legal - Adding copyright/license for 3rd party hash function used in >>> ZHash. >>> >>> src/java.base/share/legal/c-libutl.md >>> >>> * SA - Adding basic ZGC support. >>> >>> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>> >>> >>> Testing >>> ------- >>> >>> * Unit testing >>> >>> A number of new ZGC specific gtests have been added, in >>> test/hotspot/gtest/gc/z/ >>> >>> * Regression testing >>> >>> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>> >>> * Stress testing >>> >>> We have been continuously been running a number stress tests >>> throughout the development, these include: >>> >>> specjbb2000 >>> specjbb2005 >>> specjbb2015 >>> specjvm98 >>> specjvm2008 >>> dacapo2009 >>> test/hotspot/jtreg/gc/stress/gcold >>> test/hotspot/jtreg/gc/stress/systemgc >>> test/hotspot/jtreg/gc/stress/gclocker >>> test/hotspot/jtreg/gc/stress/gcbasher >>> test/hotspot/jtreg/gc/stress/finalizer >>> Kitchensink >>> >>> >>> Thanks! >>> >>> /Per, Stefan & the ZGC team > > From rkennke at redhat.com Wed Jun 6 14:36:22 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 6 Jun 2018 16:36:22 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <23565148-02aa-b381-1728-c1cfe90054e3@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <23565148-02aa-b381-1728-c1cfe90054e3@oracle.com> Message-ID: <06e05d14-6851-a1fe-cfaf-d10e44ae89e6@redhat.com> Am 06.06.2018 um 16:16 schrieb Per Liden: > Hi Roman, > > On 06/06/2018 11:32 AM, Roman Kennke wrote: >> I'm looking mostly at shared code changes. I can't really say much about >> C2, JFR, SA, tests and ZGC itself. >> >> Some comments/questions: >> >> - src/hotspot/share/classfile/vmSymbols.cpp: >> ? why are you enabling the clone intrinsic unconditionally and not under >> the usual: >> if (!InlineObjectCopy || !InlineArrayCopy) return true; >> ? > > Please note that the function is called is_disabled_by_flags(). So we're > actually disabling (not enabling) it unconditionally for ZGC. Ah. Confusing double-negation. Might want to turn the whole thing around (not here). Also, I wonder if GCs might want to have a say about which intrinsics to enable/disable generally. Probably not very important. >> - src/hotspot/share/oops/instanceRefKlass.inline.hpp >> I wonder if this makes sense to upstream separately? Also, I'm curious > > Yes, I think you're right. I don't see a reason why this couldn't be > upstreamed separately. > > I filed https://bugs.openjdk.java.net/browse/JDK-8204474 and sent an RFR > to hotspot-dev. Thanks! >> why we need to distinguish between weak and phantom? > > Generally speaking, we should always annotate oop accesses with the > proper strengths, which is why we distinguish between weak and phantom > here. For ZGC, in this very specific case, we will deep down in the > barrier code eventually do the same thing in both cases. However, such > decisions should be made by the GC/BarrierSet and not the access > call-site. A different GC might want to make a different decision. 
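For contrast with the alternative mentioned in the next paragraph, the unknown-strength form would look roughly like this (sketch only; referent_offset again stands for the referent field offset):

#include "oops/access.hpp"
#include "oops/oop.hpp"

// ON_UNKNOWN_OOP_REF defers the strength decision to the barrier set at run
// time, which is the overhead that picking ON_WEAK/ON_PHANTOM statically avoids.
static oop load_referent_unknown_strength(oop reference, ptrdiff_t referent_offset) {
  return HeapAccess<ON_UNKNOWN_OOP_REF>::oop_load_at(reference, referent_offset);
}
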
> > An alternative would be to use ON_UNKNOWN_OOP_REF, which tells the > BarrierSet that is needs to apply additional logic to figure out what > the strength really is. However, this comes with a run-time overhead, > and since we already have the type in try_discover() we can use that to > do the proper access and avoid that overhead. Ok, good. I was only asking out of curiousity. >> - src/hotspot/share/runtime/stackValue.cpp >> There's no reasonable way to abstract this via GC interface? > > This code is there to solve a very ZGC specific issue, which is that a > deopt can happens between a load and its load barrier. As I mentioned in > the initial RFR mail, this will go away in our next iteration of our C2 > load barriers [1], which will by design make sure that this can't > happen. We therefore didn't think it was worth the effort to create a > abstraction for this, since it's highly ZGC specific and such an > abstraction would become useless/unused pretty soon anyway. Ah yes, very good then. >> Very nice work! > > Thanks Roman! And thanks a lot for reviewing! I intend to make one or more additional passes soon (over zgc itself, and probably skimming c2 land). Thanks, Roman From per.liden at oracle.com Wed Jun 6 15:23:32 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 17:23:32 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <06e05d14-6851-a1fe-cfaf-d10e44ae89e6@redhat.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <23565148-02aa-b381-1728-c1cfe90054e3@oracle.com> <06e05d14-6851-a1fe-cfaf-d10e44ae89e6@redhat.com> Message-ID: Hi Roman, On 06/06/2018 04:36 PM, Roman Kennke wrote: > Am 06.06.2018 um 16:16 schrieb Per Liden: >> Hi Roman, >> >> On 06/06/2018 11:32 AM, Roman Kennke wrote: >>> I'm looking mostly at shared code changes. I can't really say much about >>> C2, JFR, SA, tests and ZGC itself. >>> >>> Some comments/questions: >>> >>> - src/hotspot/share/classfile/vmSymbols.cpp: >>> why are you enabling the clone intrinsic unconditionally and not under >>> the usual: >>> if (!InlineObjectCopy || !InlineArrayCopy) return true; >>> ? >> >> Please note that the function is called is_disabled_by_flags(). So we're >> actually disabling (not enabling) it unconditionally for ZGC. > > Ah. Confusing double-negation. Might want to turn the whole thing around > (not here). Also, I wonder if GCs might want to have a say about which > intrinsics to enable/disable generally. Probably not very important. I would kind of think that all GCs want to support all intrinsics, with the exception (like in our case) of some being temporarily disabled until the proper support is in place. > >>> - src/hotspot/share/oops/instanceRefKlass.inline.hpp >>> I wonder if this makes sense to upstream separately? Also, I'm curious >> >> Yes, I think you're right. I don't see a reason why this couldn't be >> upstreamed separately. >> >> I filed https://bugs.openjdk.java.net/browse/JDK-8204474 and sent an RFR >> to hotspot-dev. > > Thanks! > > >>> why we need to distinguish between weak and phantom? >> >> Generally speaking, we should always annotate oop accesses with the >> proper strengths, which is why we distinguish between weak and phantom >> here. For ZGC, in this very specific case, we will deep down in the >> barrier code eventually do the same thing in both cases. However, such >> decisions should be made by the GC/BarrierSet and not the access >> call-site. 
A different GC might want to make a different decision. >> >> An alternative would be to use ON_UNKNOWN_OOP_REF, which tells the >> BarrierSet that is needs to apply additional logic to figure out what >> the strength really is. However, this comes with a run-time overhead, >> and since we already have the type in try_discover() we can use that to >> do the proper access and avoid that overhead. > > Ok, good. I was only asking out of curiousity. > >>> - src/hotspot/share/runtime/stackValue.cpp >>> There's no reasonable way to abstract this via GC interface? >> >> This code is there to solve a very ZGC specific issue, which is that a >> deopt can happens between a load and its load barrier. As I mentioned in >> the initial RFR mail, this will go away in our next iteration of our C2 >> load barriers [1], which will by design make sure that this can't >> happen. We therefore didn't think it was worth the effort to create a >> abstraction for this, since it's highly ZGC specific and such an >> abstraction would become useless/unused pretty soon anyway. > > Ah yes, very good then. > >>> Very nice work! >> >> Thanks Roman! And thanks a lot for reviewing! > > I intend to make one or more additional passes soon (over zgc itself, > and probably skimming c2 land). Ok! thanks, Per > > Thanks, > Roman > From ChrisPhi at LGonQn.Org Wed Jun 6 15:47:18 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 11:47:18 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code Message-ID: Hi, Please review this set of changes to shared code related to S390 (31bit) Zero self-build type mis-match failures. Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 Cheers! Chris Phillips @ Red Hat in T.O. From erik.osterlund at oracle.com Wed Jun 6 16:27:07 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Wed, 6 Jun 2018 18:27:07 +0200 Subject: RFR: 8204474: Have instanceRefKlass use HeapAccess when loading the referent In-Reply-To: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> References: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> Message-ID: Hi Per, Looks good. Thanks, /Erik > On 6 Jun 2018, at 16:16, Per Liden wrote: > > Hi, > > To support concurrent reference processing in ZGC, instanceRefKlass::try_discover() can no longer use RawAccess to load the referent field. Instead it should use > > HeapAccess > > or > > HeapAccess > > depending on the reference type. This patch also adjusts InstanceRefKlass::trace_reference_gc() for the same reason. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204474 > Webrev: http://cr.openjdk.java.net/~pliden/8204474/webrev.0 > > Testing: This patch has been part of the ZGC repository for quite some time and gone through various testing, including tier{1,2,3,4,5,6} in mach5. > > /Per From aph at redhat.com Wed Jun 6 16:29:22 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 6 Jun 2018 17:29:22 +0100 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: Message-ID: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> On 06/06/2018 04:47 PM, Chris Phillips wrote: > Please review this set of changes to shared code > related to S390 (31bit) Zero self-build type mis-match failures. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 > webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 Can you explain this a little more? What is the type of size_t on s390x? 
What is the type of uintptr_t? What are the errors? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From per.liden at oracle.com Wed Jun 6 17:49:40 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 19:49:40 +0200 Subject: RFR: 8204474: Have instanceRefKlass use HeapAccess when loading the referent In-Reply-To: References: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> Message-ID: Thanks for reviewing, Erik! /Per On 2018-06-06 18:27, Erik Osterlund wrote: > Hi Per, > > Looks good. > > Thanks, > /Erik > >> On 6 Jun 2018, at 16:16, Per Liden wrote: >> >> Hi, >> >> To support concurrent reference processing in ZGC, instanceRefKlass::try_discover() can no longer use RawAccess to load the referent field. Instead it should use >> >> HeapAccess >> >> or >> >> HeapAccess >> >> depending on the reference type. This patch also adjusts InstanceRefKlass::trace_reference_gc() for the same reason. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204474 >> Webrev: http://cr.openjdk.java.net/~pliden/8204474/webrev.0 >> >> Testing: This patch has been part of the ZGC repository for quite some time and gone through various testing, including tier{1,2,3,4,5,6} in mach5. >> >> /Per > From vladimir.kozlov at oracle.com Wed Jun 6 18:04:15 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 6 Jun 2018 11:04:15 -0700 Subject: RFC: C2 Object Initialization - Using XMM/YMM registers In-Reply-To: References: <6ddfaac5-a5ad-dc27-8185-d2237a5b1695@oracle.com> Message-ID: <3301e0ef-e43c-6c7a-5b23-e4b296c86178@oracle.com> Thank you, Rohit This change looks reasonable. Let me test it. Thanks, Vladimir On 5/30/18 9:55 PM, Rohit Arul Raj wrote: > Thanks Vladimir, > > I made the changes as you had suggested and it works now. > Please find attached the updated patch, relevant test case as well as > the micro-benchmark performance data. > Sorry for the delay. > > **************** P A T C H ************** > > diff --git a/src/hotspot/cpu/x86/globals_x86.hpp > b/src/hotspot/cpu/x86/globals_x86.hpp > --- a/src/hotspot/cpu/x86/globals_x86.hpp > +++ b/src/hotspot/cpu/x86/globals_x86.hpp > @@ -150,6 +150,9 @@ > product(bool, UseUnalignedLoadStores, false, \ > "Use SSE2 MOVDQU instruction for Arraycopy") \ > \ > + product(bool, UseXMMForObjInit, false, \ > + "Use XMM/YMM MOVDQU instruction for Object Initialization") \ > + \ > product(bool, UseFastStosb, false, \ > "Use fast-string operation for zeroing: rep stosb") \ > \ > diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.cpp > b/src/hotspot/cpu/x86/macroAssembler_x86.cpp > --- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp > +++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp > @@ -6775,7 +6775,58 @@ > > } > > -void MacroAssembler::clear_mem(Register base, Register cnt, Register > tmp, bool is_large) { > +// clear memory of size 'cnt' qwords, starting at 'base' using > XMM/YMM registers > +void MacroAssembler::xmm_clear_mem(Register base, Register cnt, > XMMRegister xtmp) { > + // cnt - number of qwords (8-byte words). > + // base - start address, qword aligned. 
> + Label L_zero_64_bytes, L_loop, L_sloop, L_tail, L_end; > + if (UseAVX >= 2) > + vpxor(xtmp, xtmp, xtmp, AVX_256bit); > + else > + vpxor(xtmp, xtmp, xtmp, AVX_128bit); > + jmp(L_zero_64_bytes); > + > + BIND(L_loop); > + if (UseAVX >= 2) { > + vmovdqu(Address(base, 0), xtmp); > + vmovdqu(Address(base, 32), xtmp); > + } else { > + movdqu(Address(base, 0), xtmp); > + movdqu(Address(base, 16), xtmp); > + movdqu(Address(base, 32), xtmp); > + movdqu(Address(base, 48), xtmp); > + } > + addptr(base, 64); > + > + BIND(L_zero_64_bytes); > + subptr(cnt, 8); > + jccb(Assembler::greaterEqual, L_loop); > + addptr(cnt, 4); > + jccb(Assembler::less, L_tail); > + // Copy trailing 32 bytes > + if (UseAVX >= 2) { > + vmovdqu(Address(base, 0), xtmp); > + } else { > + movdqu(Address(base, 0), xtmp); > + movdqu(Address(base, 16), xtmp); > + } > + addptr(base, 32); > + subptr(cnt, 4); > + > + BIND(L_tail); > + addptr(cnt, 4); > + jccb(Assembler::lessEqual, L_end); > + decrement(cnt); > + > + BIND(L_sloop); > + movq(Address(base, 0), xtmp); > + addptr(base, 8); > + decrement(cnt); > + jccb(Assembler::greaterEqual, L_sloop); > + BIND(L_end); > +} > + > +void MacroAssembler::clear_mem(Register base, Register cnt, Register > tmp, XMMRegister xtmp, bool is_large) { > // cnt - number of qwords (8-byte words). > // base - start address, qword aligned. > // is_large - if optimizers know cnt is larger than InitArrayShortSize > @@ -6787,7 +6838,9 @@ > > Label DONE; > > - xorptr(tmp, tmp); > + if (!is_large || !(UseXMMForObjInit && UseUnalignedLoadStores)) { > + xorptr(tmp, tmp); > + } > > if (!is_large) { > Label LOOP, LONG; > @@ -6813,6 +6866,9 @@ > if (UseFastStosb) { > shlptr(cnt, 3); // convert to number of bytes > rep_stosb(); > + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { > + movptr(tmp, base); > + xmm_clear_mem(tmp, cnt, xtmp); > } else { > NOT_LP64(shlptr(cnt, 1);) // convert to number of 32-bit words > for 32-bit VM > rep_stos(); > diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.hpp > b/src/hotspot/cpu/x86/macroAssembler_x86.hpp > --- a/src/hotspot/cpu/x86/macroAssembler_x86.hpp > +++ b/src/hotspot/cpu/x86/macroAssembler_x86.hpp > @@ -1578,7 +1578,10 @@ > > // clear memory of size 'cnt' qwords, starting at 'base'; > // if 'is_large' is set, do not try to produce short loop > - void clear_mem(Register base, Register cnt, Register rtmp, bool is_large); > + void clear_mem(Register base, Register cnt, Register rtmp, > XMMRegister xtmp, bool is_large); > + > + // clear memory of size 'cnt' qwords, starting at 'base' using > XMM/YMM registers > + void xmm_clear_mem(Register base, Register cnt, XMMRegister xtmp); > > #ifdef COMPILER2 > void string_indexof_char(Register str1, Register cnt1, Register ch, > Register result, > diff --git a/src/hotspot/cpu/x86/x86_32.ad b/src/hotspot/cpu/x86/x86_32.ad > --- a/src/hotspot/cpu/x86/x86_32.ad > +++ b/src/hotspot/cpu/x86/x86_32.ad > @@ -11482,13 +11482,15 @@ > > // ======================================================================= > // fast clearing of an array > -instruct rep_stos(eCXRegI cnt, eDIRegP base, eAXRegI zero, Universe > dummy, eFlagsReg cr) %{ > +instruct rep_stos(eCXRegI cnt, eDIRegP base, regD tmp, eAXRegI zero, > Universe dummy, eFlagsReg cr) %{ > predicate(!((ClearArrayNode*)n)->is_large()); > match(Set dummy (ClearArray cnt base)); > - effect(USE_KILL cnt, USE_KILL base, KILL zero, KILL cr); > + effect(USE_KILL cnt, USE_KILL base, TEMP tmp, KILL zero, KILL cr); > > format %{ $$template > - $$emit$$"XOR EAX,EAX\t# ClearArray:\n\t" > + if 
(!is_large || !(UseXMMForObjInit && UseUnalignedLoadStores)) { > + $$emit$$"XOR EAX,EAX\t# ClearArray:\n\t" > + } > $$emit$$"CMP InitArrayShortSize,rcx\n\t" > $$emit$$"JG LARGE\n\t" > $$emit$$"SHL ECX, 1\n\t" > @@ -11502,6 +11504,32 @@ > if (UseFastStosb) { > $$emit$$"SHL ECX,3\t# Convert doublewords to bytes\n\t" > $$emit$$"REP STOSB\t# store EAX into [EDI++] while ECX--\n\t" > + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { > + $$emit$$"MOV RDI,RAX\n\t" > + $$emit$$"VPXOR YMM0,YMM0,YMM0\n\t" > + $$emit$$"JMPQ L_zero_64_bytes\n\t" > + $$emit$$"# L_loop:\t# 64-byte LOOP\n\t" > + $$emit$$"VMOVDQU YMM0,(RAX)\n\t" > + $$emit$$"VMOVDQU YMM0,0x20(RAX)\n\t" > + $$emit$$"ADD 0x40,RAX\n\t" > + $$emit$$"# L_zero_64_bytes:\n\t" > + $$emit$$"SUB 0x8,RCX\n\t" > + $$emit$$"JGE L_loop\n\t" > + $$emit$$"ADD 0x4,RCX\n\t" > + $$emit$$"JL L_tail\n\t" > + $$emit$$"VMOVDQU YMM0,(RAX)\n\t" > + $$emit$$"ADD 0x20,RAX\n\t" > + $$emit$$"SUB 0x4,RCX\n\t" > + $$emit$$"# L_tail:\t# Clearing tail bytes\n\t" > + $$emit$$"ADD 0x4,RCX\n\t" > + $$emit$$"JLE L_end\n\t" > + $$emit$$"DEC RCX\n\t" > + $$emit$$"# L_sloop:\t# 8-byte short loop\n\t" > + $$emit$$"VMOVQ XMM0,(RAX)\n\t" > + $$emit$$"ADD 0x8,RAX\n\t" > + $$emit$$"DEC RCX\n\t" > + $$emit$$"JGE L_sloop\n\t" > + $$emit$$"# L_end:\n\t" > } else { > $$emit$$"SHL ECX,1\t# Convert doublewords to words\n\t" > $$emit$$"REP STOS\t# store EAX into [EDI++] while ECX--\n\t" > @@ -11509,20 +11537,49 @@ > $$emit$$"# DONE" > %} > ins_encode %{ > - __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, false); > - %} > - ins_pipe( pipe_slow ); > -%} > - > -instruct rep_stos_large(eCXRegI cnt, eDIRegP base, eAXRegI zero, > Universe dummy, eFlagsReg cr) %{ > + __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, > + $tmp$$XMMRegister, false); > + %} > + ins_pipe( pipe_slow ); > +%} > + > +instruct rep_stos_large(eCXRegI cnt, eDIRegP base, regD tmp, eAXRegI > zero, Universe dummy, eFlagsReg cr) %{ > predicate(((ClearArrayNode*)n)->is_large()); > match(Set dummy (ClearArray cnt base)); > - effect(USE_KILL cnt, USE_KILL base, KILL zero, KILL cr); > + effect(USE_KILL cnt, USE_KILL base, TEMP tmp, KILL zero, KILL cr); > format %{ $$template > - $$emit$$"XOR EAX,EAX\t# ClearArray:\n\t" > + if (!is_large || !(UseXMMForObjInit && UseUnalignedLoadStores)) { > + $$emit$$"XOR EAX,EAX\t# ClearArray:\n\t" > + } > if (UseFastStosb) { > $$emit$$"SHL ECX,3\t# Convert doublewords to bytes\n\t" > $$emit$$"REP STOSB\t# store EAX into [EDI++] while ECX--\n\t" > + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { > + $$emit$$"MOV RDI,RAX\n\t" > + $$emit$$"VPXOR YMM0,YMM0,YMM0\n\t" > + $$emit$$"JMPQ L_zero_64_bytes\n\t" > + $$emit$$"# L_loop:\t# 64-byte LOOP\n\t" > + $$emit$$"VMOVDQU YMM0,(RAX)\n\t" > + $$emit$$"VMOVDQU YMM0,0x20(RAX)\n\t" > + $$emit$$"ADD 0x40,RAX\n\t" > + $$emit$$"# L_zero_64_bytes:\n\t" > + $$emit$$"SUB 0x8,RCX\n\t" > + $$emit$$"JGE L_loop\n\t" > + $$emit$$"ADD 0x4,RCX\n\t" > + $$emit$$"JL L_tail\n\t" > + $$emit$$"VMOVDQU YMM0,(RAX)\n\t" > + $$emit$$"ADD 0x20,RAX\n\t" > + $$emit$$"SUB 0x4,RCX\n\t" > + $$emit$$"# L_tail:\t# Clearing tail bytes\n\t" > + $$emit$$"ADD 0x4,RCX\n\t" > + $$emit$$"JLE L_end\n\t" > + $$emit$$"DEC RCX\n\t" > + $$emit$$"# L_sloop:\t# 8-byte short loop\n\t" > + $$emit$$"VMOVQ XMM0,(RAX)\n\t" > + $$emit$$"ADD 0x8,RAX\n\t" > + $$emit$$"DEC RCX\n\t" > + $$emit$$"JGE L_sloop\n\t" > + $$emit$$"# L_end:\n\t" > } else { > $$emit$$"SHL ECX,1\t# Convert doublewords to words\n\t" > $$emit$$"REP STOS\t# store EAX into [EDI++] while 
ECX--\n\t" > @@ -11530,7 +11587,8 @@ > $$emit$$"# DONE" > %} > ins_encode %{ > - __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, true); > + __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, > + $tmp$$XMMRegister, true); > %} > ins_pipe( pipe_slow ); > %} > diff --git a/src/hotspot/cpu/x86/x86_64.ad b/src/hotspot/cpu/x86/x86_64.ad > --- a/src/hotspot/cpu/x86/x86_64.ad > +++ b/src/hotspot/cpu/x86/x86_64.ad > @@ -10625,15 +10625,17 @@ > > // ======================================================================= > // fast clearing of an array > -instruct rep_stos(rcx_RegL cnt, rdi_RegP base, rax_RegI zero, Universe dummy, > - rFlagsReg cr) > +instruct rep_stos(rcx_RegL cnt, rdi_RegP base, regD tmp, rax_RegI zero, > + Universe dummy, rFlagsReg cr) > %{ > predicate(!((ClearArrayNode*)n)->is_large()); > match(Set dummy (ClearArray cnt base)); > - effect(USE_KILL cnt, USE_KILL base, KILL zero, KILL cr); > + effect(USE_KILL cnt, USE_KILL base, TEMP tmp, KILL zero, KILL cr); > > format %{ $$template > - $$emit$$"xorq rax, rax\t# ClearArray:\n\t" > + if (!is_large || !(UseXMMForObjInit && UseUnalignedLoadStores)) { > + $$emit$$"xorq rax, rax\t# ClearArray:\n\t" > + } > $$emit$$"cmp InitArrayShortSize,rcx\n\t" > $$emit$$"jg LARGE\n\t" > $$emit$$"dec rcx\n\t" > @@ -10646,35 +10648,91 @@ > if (UseFastStosb) { > $$emit$$"shlq rcx,3\t# Convert doublewords to bytes\n\t" > $$emit$$"rep stosb\t# Store rax to *rdi++ while rcx--\n\t" > + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { > + $$emit$$"mov rdi,rax\n\t" > + $$emit$$"vpxor ymm0,ymm0,ymm0\n\t" > + $$emit$$"jmpq L_zero_64_bytes\n\t" > + $$emit$$"# L_loop:\t# 64-byte LOOP\n\t" > + $$emit$$"vmovdqu ymm0,(rax)\n\t" > + $$emit$$"vmovdqu ymm0,0x20(rax)\n\t" > + $$emit$$"add 0x40,rax\n\t" > + $$emit$$"# L_zero_64_bytes:\n\t" > + $$emit$$"sub 0x8,rcx\n\t" > + $$emit$$"jge L_loop\n\t" > + $$emit$$"add 0x4,rcx\n\t" > + $$emit$$"jl L_tail\n\t" > + $$emit$$"vmovdqu ymm0,(rax)\n\t" > + $$emit$$"add 0x20,rax\n\t" > + $$emit$$"sub 0x4,rcx\n\t" > + $$emit$$"# L_tail:\t# Clearing tail bytes\n\t" > + $$emit$$"add 0x4,rcx\n\t" > + $$emit$$"jle L_end\n\t" > + $$emit$$"dec rcx\n\t" > + $$emit$$"# L_sloop:\t# 8-byte short loop\n\t" > + $$emit$$"vmovq xmm0,(rax)\n\t" > + $$emit$$"add 0x8,rax\n\t" > + $$emit$$"dec rcx\n\t" > + $$emit$$"jge L_sloop\n\t" > + $$emit$$"# L_end:\n\t" > } else { > $$emit$$"rep stosq\t# Store rax to *rdi++ while rcx--\n\t" > } > $$emit$$"# DONE" > %} > ins_encode %{ > - __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, false); > + __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, > + $tmp$$XMMRegister, false); > %} > ins_pipe(pipe_slow); > %} > > -instruct rep_stos_large(rcx_RegL cnt, rdi_RegP base, rax_RegI zero, > Universe dummy, > - rFlagsReg cr) > +instruct rep_stos_large(rcx_RegL cnt, rdi_RegP base, regD tmp, rax_RegI zero, > + Universe dummy, rFlagsReg cr) > %{ > predicate(((ClearArrayNode*)n)->is_large()); > match(Set dummy (ClearArray cnt base)); > - effect(USE_KILL cnt, USE_KILL base, KILL zero, KILL cr); > + effect(USE_KILL cnt, USE_KILL base, TEMP tmp, KILL zero, KILL cr); > > format %{ $$template > - $$emit$$"xorq rax, rax\t# ClearArray:\n\t" > + if (!is_large || !(UseXMMForObjInit && UseUnalignedLoadStores)) { > + $$emit$$"xorq rax, rax\t# ClearArray:\n\t" > + } > if (UseFastStosb) { > $$emit$$"shlq rcx,3\t# Convert doublewords to bytes\n\t" > $$emit$$"rep stosb\t# Store rax to *rdi++ while rcx--" > + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { > + 
$$emit$$"mov rdi,rax\n\t" > + $$emit$$"vpxor ymm0,ymm0,ymm0\n\t" > + $$emit$$"jmpq L_zero_64_bytes\n\t" > + $$emit$$"# L_loop:\t# 64-byte LOOP\n\t" > + $$emit$$"vmovdqu ymm0,(rax)\n\t" > + $$emit$$"vmovdqu ymm0,0x20(rax)\n\t" > + $$emit$$"add 0x40,rax\n\t" > + $$emit$$"# L_zero_64_bytes:\n\t" > + $$emit$$"sub 0x8,rcx\n\t" > + $$emit$$"jge L_loop\n\t" > + $$emit$$"add 0x4,rcx\n\t" > + $$emit$$"jl L_tail\n\t" > + $$emit$$"vmovdqu ymm0,(rax)\n\t" > + $$emit$$"add 0x20,rax\n\t" > + $$emit$$"sub 0x4,rcx\n\t" > + $$emit$$"# L_tail:\t# Clearing tail bytes\n\t" > + $$emit$$"add 0x4,rcx\n\t" > + $$emit$$"jle L_end\n\t" > + $$emit$$"dec rcx\n\t" > + $$emit$$"# L_sloop:\t# 8-byte short loop\n\t" > + $$emit$$"vmovq xmm0,(rax)\n\t" > + $$emit$$"add 0x8,rax\n\t" > + $$emit$$"dec rcx\n\t" > + $$emit$$"jge L_sloop\n\t" > + $$emit$$"# L_end:\n\t" > } else { > $$emit$$"rep stosq\t# Store rax to *rdi++ while rcx--" > } > %} > ins_encode %{ > - __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, true); > + __ clear_mem($base$$Register, $cnt$$Register, $zero$$Register, > + $tmp$$XMMRegister, true); > %} > ins_pipe(pipe_slow); > %} > > > *********************** END of P A T C H ******************* > > > Generated assembly code after change: > ------------------------------------------------------ > 0x00002b771c0016e4: mov %rdx,%rdi > 0x00002b771c0016e7: add $0x10,%rdi > 0x00002b771c0016eb: mov $0x14,%ecx > 0x00002b771c0016f0: mov %rdi,%rax > 0x00002b771c0016f3: vpxor %ymm0,%ymm0,%ymm0 > 0x00002b771c0016f7: jmpq 0x00002b771c001709 > 0x00002b771c0016fc: vmovdqu %ymm0,(%rax) > 0x00002b771c001700: vmovdqu %ymm0,0x20(%rax) > 0x00002b771c001705: add $0x40,%rax > 0x00002b771c001709: sub $0x8,%rcx > 0x00002b771c00170d: jge 0x00002b771c0016fc > 0x00002b771c00170f: add $0x4,%rcx > 0x00002b771c001713: jl 0x00002b771c001721 > 0x00002b771c001715: vmovdqu %ymm0,(%rax) > 0x00002b771c001719: add $0x20,%rax > 0x00002b771c00171d: sub $0x4,%rcx > 0x00002b771c001721: add $0x4,%rcx > 0x00002b771c001725: jle 0x00002b771c001737 > 0x00002b771c001727: dec %rcx > 0x00002b771c00172a: vmovq %xmm0,(%rax) > 0x00002b771c00172e: add $0x8,%rax > 0x00002b771c001732: dec %rcx > 0x00002b771c001735: jge 0x00002b771c00172a > 0x00002b771c001737: > > > I have done regression testing (changeset: > 50250:04f9bb270ab8/24May2018) on 32-bit as well as 64-bit builds and > didn't find any regressions. > $make run-test TEST="tier1 tier2" JTREG="JOBS=1" > CONF=linux-x86_64-normal-server-release > > Please let me know your comments. > > Regards, > Rohit > > > > On Tue, Apr 24, 2018 at 12:33 AM, Vladimir Kozlov > wrote: >> Sorry for delay. >> >> In general you can't use arbitrary registers without letting know JIT >> compilers that you use it. It will definitely cause problems. >> You need to pass it as additional XMMRegister argument and described it as >> TEMP in .ad files. >> >> See byte_array_inflate() as example. >> >> >> On 4/11/18 7:25 PM, Rohit Arul Raj wrote: >>>>> >>>>> When I use XMM0 as a temporary register, the micro-benchmark crashes. >>>>> Saving and Restoring the XMM0 register before and after use works >>>>> fine. >>>>> >>>>> Looking at the "hotspot/src/cpu/x86/vm/x86.ad" file, XMM0 as with >>>>> other XMM registers has been mentioned as Save-On-Call registers and >>>>> on Linux ABI, no register is preserved across function calls though >>>>> XMM0-XMM7 might hold parameters. So I assumed using XMM0 without >>>>> saving/restoring should be fine. >>>>> >>>>> Is it incorrect use XMM* registers without saving/restoring them? 
>>>>> Using XMM10 register as temporary register works fine without having >>>>> to save and restore it. >>> >>> >>> Any comments/suggestions on the usage of XMM* registers? >>> >>> Thanks, >>> Rohit >>> >>> On Thu, Apr 5, 2018 at 11:38 PM, Vladimir Kozlov >>> wrote: >>>> >>>> Good suggestion, Rohit >>>> >>>> I created new RFE. Please add you suggestion and performance data there: >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8201193 >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> >>>> On 4/5/18 12:19 AM, Rohit Arul Raj wrote: >>>>> >>>>> >>>>> Hi All, >>>>> >>>>> I was going through the C2 object initialization (zeroing) code based >>>>> on the below bug entry: >>>>> https://bugs.openjdk.java.net/browse/JDK-8146801 >>>>> >>>>> Right now, for longer lengths we use "rep stos" instructions on x86. I >>>>> was experimenting with using XMM/YMM registers (on AMD EPYC processor) >>>>> and found that they do improve performance for certain lengths: >>>>> >>>>> For lengths > 64 bytes - 512 bytes : improvement is in the range of 8% >>>>> to >>>>> 44% >>>>> For lengths > 512bytes : some lengths show slight >>>>> improvement in the range of 2% to 7%, others almost same as "rep stos" >>>>> numbers. >>>>> >>>>> I have attached the complete performance data (data.txt) for reference . >>>>> Can we add this as an user option similar to UseXMMForArrayCopy? >>>>> >>>>> I have used the same test case as in >>>>> (http://cr.openjdk.java.net/~shade/8146801/benchmarks.jar) with >>>>> additional sizes. >>>>> >>>>> Initial Patch: >>>>> I haven't added the check for 32-bit mode as I need some help with the >>>>> code (description given below the patch). >>>>> The code is similar to the one used in array copy stubs >>>>> (copy_bytes_forward). >>>>> >>>>> diff --git a/src/hotspot/cpu/x86/globals_x86.hpp >>>>> b/src/hotspot/cpu/x86/globals_x86.hpp >>>>> --- a/src/hotspot/cpu/x86/globals_x86.hpp >>>>> +++ b/src/hotspot/cpu/x86/globals_x86.hpp >>>>> @@ -150,6 +150,9 @@ >>>>> product(bool, UseUnalignedLoadStores, false, >>>>> \ >>>>> "Use SSE2 MOVDQU instruction for Arraycopy") >>>>> \ >>>>> >>>>> \ >>>>> + product(bool, UseXMMForObjInit, false, >>>>> \ >>>>> + "Use XMM/YMM MOVDQU instruction for Object Initialization") >>>>> \ >>>>> + >>>>> \ >>>>> product(bool, UseFastStosb, false, >>>>> \ >>>>> "Use fast-string operation for zeroing: rep stosb") >>>>> \ >>>>> >>>>> \ >>>>> diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.cpp >>>>> b/src/hotspot/cpu/x86/macroAssembler_x86.cpp >>>>> --- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp >>>>> +++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp >>>>> @@ -7106,6 +7106,56 @@ >>>>> if (UseFastStosb) { >>>>> shlptr(cnt, 3); // convert to number of bytes >>>>> rep_stosb(); >>>>> + } else if (UseXMMForObjInit && UseUnalignedLoadStores) { >>>>> + Label L_loop, L_sloop, L_check, L_tail, L_end; >>>>> + push(base); >>>>> + if (UseAVX >= 2) >>>>> + vpxor(xmm10, xmm10, xmm10, AVX_256bit); >>>>> + else >>>>> + vpxor(xmm10, xmm10, xmm10, AVX_128bit); >>>>> + >>>>> + jmp(L_check); >>>>> + >>>>> + BIND(L_loop); >>>>> + if (UseAVX >= 2) { >>>>> + vmovdqu(Address(base, 0), xmm10); >>>>> + vmovdqu(Address(base, 32), xmm10); >>>>> + } else { >>>>> + movdqu(Address(base, 0), xmm10); >>>>> + movdqu(Address(base, 16), xmm10); >>>>> + movdqu(Address(base, 32), xmm10); >>>>> + movdqu(Address(base, 48), xmm10); >>>>> + } >>>>> + addptr(base, 64); >>>>> + >>>>> + BIND(L_check); >>>>> + subptr(cnt, 8); >>>>> + jccb(Assembler::greaterEqual, L_loop); >>>>> + addptr(cnt, 4); >>>>> + jccb(Assembler::less, 
L_tail); >>>>> + // Copy trailing 32 bytes >>>>> + if (UseAVX >= 2) { >>>>> + vmovdqu(Address(base, 0), xmm10); >>>>> + } else { >>>>> + movdqu(Address(base, 0), xmm10); >>>>> + movdqu(Address(base, 16), xmm10); >>>>> + } >>>>> + addptr(base, 32); >>>>> + subptr(cnt, 4); >>>>> + >>>>> + BIND(L_tail); >>>>> + addptr(cnt, 4); >>>>> + jccb(Assembler::lessEqual, L_end); >>>>> + decrement(cnt); >>>>> + >>>>> + BIND(L_sloop); >>>>> + movptr(Address(base, 0), tmp); >>>>> + addptr(base, 8); >>>>> + decrement(cnt); >>>>> + jccb(Assembler::greaterEqual, L_sloop); >>>>> + >>>>> + BIND(L_end); >>>>> + pop(base); >>>>> } else { >>>>> NOT_LP64(shlptr(cnt, 1);) // convert to number of 32-bit words >>>>> for 32-bit VM >>>>> rep_stos(); >>>>> >>>>> >>>>> When I use XMM0 as a temporary register, the micro-benchmark crashes. >>>>> Saving and Restoring the XMM0 register before and after use works >>>>> fine. >>>>> >>>>> Looking at the "hotspot/src/cpu/x86/vm/x86.ad" file, XMM0 as with >>>>> other XMM registers has been mentioned as Save-On-Call registers and >>>>> on Linux ABI, no register is preserved across function calls though >>>>> XMM0-XMM7 might hold parameters. So I assumed using XMM0 without >>>>> saving/restoring should be fine. >>>>> >>>>> Is it incorrect use XMM* registers without saving/restoring them? >>>>> Using XMM10 register as temporary register works fine without having >>>>> to save and restore it. >>>>> >>>>> Please let me know your comments. >>>>> >>>>> Regards, >>>>> Rohit >>>>> >>>> >> From per.liden at oracle.com Wed Jun 6 18:23:34 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 20:23:34 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> Message-ID: <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> On 2018-06-06 18:29, Andrew Haley wrote: > On 06/06/2018 04:47 PM, Chris Phillips wrote: >> Please review this set of changes to shared code >> related to S390 (31bit) Zero self-build type mis-match failures. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 > > Can you explain this a little more? What is the type of size_t on > s390x? What is the type of uintptr_t? What are the errors? I would like to understand this too. cheers, Per From ChrisPhi at LGonQn.Org Wed Jun 6 19:36:34 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 15:36:34 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> Message-ID: <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> Hi, On 06/06/18 02:23 PM, Per Liden wrote: > On 2018-06-06 18:29, Andrew Haley wrote: >> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>> Please review this set of changes to shared code >>> related to S390 (31bit) Zero self-build type mis-match failures. >>> >>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >> >> Can you explain this a little more?? What is the type of size_t on >> s390x?? What is the type of uintptr_t?? What are the errors? > > I would like to understand this too. 
> > cheers, > Per > > Quoting from the original bug review request: http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html "This is a problem when one parameter is of size_t type and the second of uintx type and the platform has size_t defined as eg. unsigned long as on s390 (32-bit)." Hope that helps, Chris (I'll answer further if needed but the info is in the bugs and review thread mostly) See: https://bugs.openjdk.java.net/browse/JDK-8203030 and: http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html https://bugs.openjdk.java.net/browse/JDK-8046938 https://bugs.openjdk.java.net/browse/JDK-8074459 For more info. From per.liden at oracle.com Wed Jun 6 19:44:35 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 21:44:35 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <20180606094805.ml34woy2x7apyrfs@rbackman> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <20180606094805.ml34woy2x7apyrfs@rbackman> Message-ID: <2cadcbb6-ba38-18a0-b5ab-8c08fdedd972@oracle.com> Hi Rickard, Thanks a lot for reviewing, much appreciated! Comments below. On 06/06/2018 11:48 AM, Rickard B?ckman wrote: > Hi, > > I've looked at the C2 parts of things with Nils by my side. > There are a couple of small things to note. > > classes.cpp misses an undef for optionalmacro. Will fix. > > compile.cpp the print_method should probably be within the {} of > macroExpand. Will fix. > > escape.cpp has two else if cases where the code looks very common. > Please make this into a function if possible? It would be possible, but I'm not sure it's a great idea in this case. The reason is that this seems to be the style in which these switch-statements are written here. Just looking at the case statements immediately above and below, they follow the same (duplication) pattern. In the first switch is looks like this: [...] case Op_Proj: { // we are only interested in the oop result projection from a call if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && n->in(0)->as_Call()->returns_pointer()) { add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0), delayed_worklist); } #if INCLUDE_ZGC else if (UseZGC) { if (n->as_Proj()->_con == LoadBarrierNode::Oop && n->in(0)->is_LoadBarrier()) { add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0)->in(LoadBarrierNode::Oop), delayed_worklist); } } #endif break; } case Op_Rethrow: // Exception object escapes case Op_Return: { if (n->req() > TypeFunc::Parms && igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { // Treat Return value as LocalVar with GlobalEscape escape state. add_local_var_and_edge(n, PointsToNode::GlobalEscape, n->in(TypeFunc::Parms), delayed_worklist); } break; } [...] And in the second switch it looks like this: [...] 
case Op_Proj: { // we are only interested in the oop result projection from a call if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && n->in(0)->as_Call()->returns_pointer()) { add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0), NULL); break; } #if INCLUDE_ZGC else if (UseZGC) { if (n->as_Proj()->_con == LoadBarrierNode::Oop && n->in(0)->is_LoadBarrier()) { add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0)->in(LoadBarrierNode::Oop), NULL); break; } } #endif ELSE_FAIL("Op_Proj"); } case Op_Rethrow: // Exception object escapes case Op_Return: { if (n->req() > TypeFunc::Parms && _igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { // Treat Return value as LocalVar with GlobalEscape escape state. add_local_var_and_edge(n, PointsToNode::GlobalEscape, n->in(TypeFunc::Parms), NULL); break; } ELSE_FAIL("Op_Return"); } [...] So it would maybe look a bit odd if we use a different style for the code we add, wouldn't you agree? Also, since our code is in #if INCLUDE_ZGC blocks, breaking this out would mean we would have to add a few more #if INCLUDE_ZGC blocks in hpp/cpp to protect the new function. So, unless you strongly object, I'd like to suggest that we keep it as is. > > opcodes.cpp misses an undef for optionalmacro. Will fix. > > In C2 in general, maybe BarrierSet::barrier_set()->barrier_set_c2() > coule be Compile::barrier_set()? I agree that these names are a bit long, but may I suggest that we don't do this as part of the ZGC patch? The reason is that there are already 21 pre-existing calls to BarrierSet::barrier_set()->barrier_set_c2() in src/hotspot/share/opto code (we're adding 4 more in our patch). There are another ~70 calls to the BarrierSet::barrier_set()->barrier_set_{c1,assembler}() functions throughout compiler/asm-related code. While shortening these names might be a good idea, I'd prefer if that was handled separately from the ZGC patch. Makes sense? > > Looks good, great work everyone! Thanks! And again, thanks for reviewing! /Per > > /R > > On 06/01, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency >> Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> This patch contains the actual ZGC implementation, the new unit tests and >> other changes needed in HotSpot. >> >> * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> This patch contains changes to existing tests needed by ZGC. >> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, with a >> short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> make/autoconf/hotspot.m4 >> make/hotspot/lib/JvmFeatures.gmk >> src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not >> currently offer a way to easily break this out). 
>> >> src/hotspot/cpu/x86/x86.ad >> src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific code, >> most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) >> condition. There should only be two logic changes (one in idealKit.cpp and >> one in node.cpp) that are still active when ZGC is disabled. We believe >> these are low risk changes and should not introduce any real change i >> behavior when using other GCs. >> >> src/hotspot/share/adlc/formssel.cpp >> src/hotspot/share/opto/* >> src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> src/hotspot/share/gc/shared/* >> src/hotspot/share/runtime/vmStructs.cpp >> src/hotspot/share/runtime/vm_operations.hpp >> src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> src/hotspot/share/gc/z/* >> src/hotspot/cpu/x86/gc/z/* >> src/hotspot/os_cpu/linux_x86/gc/z/* >> test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> src/hotspot/share/jfr/* >> src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification because it >> do not makes sense or is not applicable to a GC using load barriers. >> >> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> src/hotspot/share/compiler/oopMap.cpp >> src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. >> However, this will go away in the future, when we have the next iteration of >> C2's load barriers in place (aka "C2 late barrier insertion"). >> >> src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing is >> changed in the future. >> >> src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. >> >> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> We have been continuously been running a number stress tests throughout >> the development, these include: >> >> specjbb2000 >> specjbb2005 >> specjbb2015 >> specjvm98 >> specjvm2008 >> dacapo2009 >> test/hotspot/jtreg/gc/stress/gcold >> test/hotspot/jtreg/gc/stress/systemgc >> test/hotspot/jtreg/gc/stress/gclocker >> test/hotspot/jtreg/gc/stress/gcbasher >> test/hotspot/jtreg/gc/stress/finalizer >> Kitchensink >> >> >> Thanks! 
>> >> /Per, Stefan & the ZGC team From ChrisPhi at LGonQn.Org Wed Jun 6 19:50:15 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 15:50:15 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> Message-ID: Hi On 06/06/18 03:36 PM, Chris Phillips wrote: > Hi, > > On 06/06/18 02:23 PM, Per Liden wrote: >> On 2018-06-06 18:29, Andrew Haley wrote: >>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>> Please review this set of changes to shared code >>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>> >>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>> >>> Can you explain this a little more?? What is the type of size_t on >>> s390x?? What is the type of uintptr_t?? What are the errors? >> >> I would like to understand this too. >> >> cheers, >> Per >> >> > Quoting from the original bug review request: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > "This > is a problem when one parameter is of size_t type and the second of > uintx type and the platform has size_t defined as eg. unsigned long as > on s390 (32-bit)." > > Hope that helps, > Chris > > (I'll answer further if needed but the info is in the bugs and > review thread mostly) > See: > https://bugs.openjdk.java.net/browse/JDK-8203030 > and: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > https://bugs.openjdk.java.net/browse/JDK-8046938 > https://bugs.openjdk.java.net/browse/JDK-8074459 > For more info. > > > Attached is the output of the submit q run. Chris From dan at danny.cz Wed Jun 6 19:52:08 2018 From: dan at danny.cz (Dan =?UTF-8?B?SG9yw6Fr?=) Date: Wed, 6 Jun 2018 21:52:08 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> Message-ID: <20180606215208.5565e9c35030499a2799e940@danny.cz> On Wed, 6 Jun 2018 15:36:34 -0400 Chris Phillips wrote: > Hi, > > On 06/06/18 02:23 PM, Per Liden wrote: > > On 2018-06-06 18:29, Andrew Haley wrote: > >> On 06/06/2018 04:47 PM, Chris Phillips wrote: > >>> Please review this set of changes to shared code > >>> related to S390 (31bit) Zero self-build type mis-match failures. > >>> > >>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 > >>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 > >> > >> Can you explain this a little more?? What is the type of size_t on > >> s390x?? What is the type of uintptr_t?? What are the errors? > > > > I would like to understand this too. > > > > cheers, > > Per > > > > > Quoting from the original bug review request: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > "This > is a problem when one parameter is of size_t type and the second of > uintx type and the platform has size_t defined as eg. unsigned long as > on s390 (32-bit)." Chris, thanks for continuing with that, I hoped we won't need it any more after we dropped the 32/31-bit s390 from Fedora 2 years ago. 
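The mismatch Dan spells out just below can be seen in a small standalone example. This is hypothetical illustration code, not HotSpot source; percent_of here is a stand-in for the real helper, and the uintx typedef mimics the ILP32 uintptr_t definition described in this thread:

#include <cstddef>

typedef unsigned int uintx;        // stand-in: uintx is uintptr_t, i.e. unsigned int here

template <typename T>
double percent_of(T part, T total) {
  return total == 0 ? 0.0 : (double)part * 100.0 / (double)total;
}

int main() {
  size_t size    = 1024;           // "unsigned long" on 31-bit s390
  uintx  entries = 256;
  // percent_of(entries, size);             // fails to compile wherever size_t is not
  //                                        // unsigned int: T is deduced as two types
  double p = percent_of((size_t)entries, size);  // an explicit cast (or changing the
                                                 // field's declared type) resolves it
  return p > 0.0 ? 0 : 1;
}

Both types are 32 bits wide on that target, so nothing is lost by the cast; the deduction failure is purely about the two names being distinct C++ types.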
In short 32-bit s390 (31 bits are used for address space) has size_t defined as "unsigned long" (don't ask me why) while all other arches (AFAIK, including 64-bit s390x) use "unsigned int". And the difference is causing problems especially when C++ templates are used. Dan > Hope that helps, > Chris > > (I'll answer further if needed but the info is in the bugs and > review thread mostly) > See: > https://bugs.openjdk.java.net/browse/JDK-8203030 > and: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > https://bugs.openjdk.java.net/browse/JDK-8046938 > https://bugs.openjdk.java.net/browse/JDK-8074459 > For more info. From ChrisPhi at LGonQn.Org Wed Jun 6 20:04:05 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 16:04:05 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <20180606215208.5565e9c35030499a2799e940@danny.cz> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <20180606215208.5565e9c35030499a2799e940@danny.cz> Message-ID: Hi Dan! On 06/06/18 03:52 PM, Dan Hor?k wrote: > On Wed, 6 Jun 2018 15:36:34 -0400 > Chris Phillips wrote: > >> Hi, >> >> On 06/06/18 02:23 PM, Per Liden wrote: >>> On 2018-06-06 18:29, Andrew Haley wrote: >>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>> Please review this set of changes to shared code >>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>> >>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>> >>>> Can you explain this a little more?? What is the type of size_t on >>>> s390x?? What is the type of uintptr_t?? What are the errors? >>> >>> I would like to understand this too. >>> >>> cheers, >>> Per >>> >>> >> Quoting from the original bug review request: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >> "This >> is a problem when one parameter is of size_t type and the second of >> uintx type and the platform has size_t defined as eg. unsigned long as >> on s390 (32-bit)." > > Chris, thanks for continuing with that, I hoped we won't need it any > more after we dropped the 32/31-bit s390 from Fedora 2 years ago. > > In short 32-bit s390 (31 bits are used for address space) has size_t > defined as "unsigned long" (don't ask me why) while all other arches > (AFAIK, including 64-bit s390x) use "unsigned int". And the difference > is causing problems especially when C++ templates are used. > > > Dan > Thanks for clarifying! >> Hope that helps, >> Chris >> >> (I'll answer further if needed but the info is in the bugs and >> review thread mostly) >> See: >> https://bugs.openjdk.java.net/browse/JDK-8203030 >> and: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >> https://bugs.openjdk.java.net/browse/JDK-8046938 >> https://bugs.openjdk.java.net/browse/JDK-8074459 >> For more info. 
> > > Chris From per.liden at oracle.com Wed Jun 6 20:47:51 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 22:47:51 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> Message-ID: <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> Hi Chris, On 06/06/2018 09:36 PM, Chris Phillips wrote: > Hi, > > On 06/06/18 02:23 PM, Per Liden wrote: >> On 2018-06-06 18:29, Andrew Haley wrote: >>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>> Please review this set of changes to shared code >>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>> >>> Can you explain this a little more? What is the type of size_t on >>> s390x? What is the type of uintptr_t? What are the errors? >> >> I would like to understand this too. >> >> cheers, >> Per >> >> > Quoting from the original bug review request: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > "This > is a problem when one parameter is of size_t type and the second of > uintx type and the platform has size_t defined as eg. unsigned long as > on s390 (32-bit)." Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are on s390? I fail to see how any of this matters to _entries here? What am I missing? src/hotspot/share/gc/g1/g1StringDedupTable.hpp @@ -120,11 +120,11 @@ // Cache for reuse and fast alloc/free of table entries. static G1StringDedupEntryCache* _entry_cache; G1StringDedupEntry** _buckets; size_t _size; - uintx _entries; + size_t _entries; uintx _shrink_threshold; uintx _grow_threshold; bool _rehash_needed; cheers, Per > > Hope that helps, > Chris > > (I'll answer further if needed but the info is in the bugs and > review thread mostly) > See: > https://bugs.openjdk.java.net/browse/JDK-8203030 > and: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > https://bugs.openjdk.java.net/browse/JDK-8046938 > https://bugs.openjdk.java.net/browse/JDK-8074459 > For more info. > From ChrisPhi at LGonQn.Org Wed Jun 6 21:15:17 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 17:15:17 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> Message-ID: Hi Per, On 06/06/18 04:47 PM, Per Liden wrote: > Hi Chris, > > On 06/06/2018 09:36 PM, Chris Phillips wrote: >> Hi, >> >> On 06/06/18 02:23 PM, Per Liden wrote: >>> On 2018-06-06 18:29, Andrew Haley wrote: >>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>> Please review this set of changes to shared code >>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>> >>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>> >>>> Can you explain this a little more?? What is the type of size_t on >>>> s390x?? What is the type of uintptr_t?? What are the errors? 
>>> >>> I would like to understand this too. >>> >>> cheers, >>> Per >>> >>> >> Quoting from the original bug? review request: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >> "This >> is a problem when one parameter is of size_t type and the second of >> uintx type and the platform has size_t defined as eg. unsigned long as >> on s390 (32-bit)." > > Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are > on s390? See Dan's explanation. > > I fail to see how any of this matters to _entries here? What am I missing? > By changing the type, to its actual usage, we avoid the necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp around line 617, since its consistent usage and local I patched at the definition. - _table->_entries, percent_of(_table->_entries, _table->_size), _entry_cache->size(), _entries_added, _entries_removed); + _table->_entries, percent_of( (size_t)(_table->_entries), _table->_size), _entry_cache->size(), _entries_added, _entries_removed); percent_of will complain about types otherwise. > src/hotspot/share/gc/g1/g1StringDedupTable.hpp > @@ -120,11 +120,11 @@ > ?? // Cache for reuse and fast alloc/free of table entries. > ?? static G1StringDedupEntryCache* _entry_cache; > > ?? G1StringDedupEntry**??????????? _buckets; > ?? size_t????????????????????????? _size; > -? uintx?????????????????????????? _entries; > +? size_t????????????????????????? _entries; > ?? uintx?????????????????????????? _shrink_threshold; > ?? uintx?????????????????????????? _grow_threshold; > ?? bool??????????????????????????? _rehash_needed; > > cheers, > Per > >> >> Hope that helps, >> Chris >> >> (I'll answer further if needed but the info is in the bugs and >> review thread mostly) >> See: >> https://bugs.openjdk.java.net/browse/JDK-8203030 >> and: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >> https://bugs.openjdk.java.net/browse/JDK-8046938 >> https://bugs.openjdk.java.net/browse/JDK-8074459 >> For more info. >> > > From dan at danny.cz Wed Jun 6 21:42:06 2018 From: dan at danny.cz (Dan =?UTF-8?B?SG9yw6Fr?=) Date: Wed, 6 Jun 2018 23:42:06 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> Message-ID: <20180606234206.43bb8b335bd38860ea679065@danny.cz> On Wed, 6 Jun 2018 17:15:17 -0400 Chris Phillips wrote: > Hi Per, > > On 06/06/18 04:47 PM, Per Liden wrote: > > Hi Chris, > > > > On 06/06/2018 09:36 PM, Chris Phillips wrote: > >> Hi, > >> > >> On 06/06/18 02:23 PM, Per Liden wrote: > >>> On 2018-06-06 18:29, Andrew Haley wrote: > >>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: > >>>>> Please review this set of changes to shared code > >>>>> related to S390 (31bit) Zero self-build type mis-match failures. > >>>>> > >>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 > >>>>> webrev: > >>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 > >>>> > >>>> Can you explain this a little more?? What is the type of size_t > >>>> on s390x?? What is the type of uintptr_t?? What are the errors? > >>> > >>> I would like to understand this too. > >>> > >>> cheers, > >>> Per > >>> > >>> > >> Quoting from the original bug? 
review request: > >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > >> "This > >> is a problem when one parameter is of size_t type and the second of > >> uintx type and the platform has size_t defined as eg. unsigned > >> long as on s390 (32-bit)." > > > > Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t > > are on s390? > See Dan's explanation. the size is the same (32 bits), the problem is the definition Dan > > > > I fail to see how any of this matters to _entries here? What am I > > missing? > > > > By changing the type, to its actual usage, we avoid the > necessity of patching in > src/hotspot/share/gc/g1/g1StringDedupTable.cpp around line 617, since > its consistent usage and local I patched at the definition. > > - _table->_entries, percent_of(_table->_entries, _table->_size), > _entry_cache->size(), _entries_added, _entries_removed); > + _table->_entries, percent_of( (size_t)(_table->_entries), > _table->_size), _entry_cache->size(), _entries_added, > _entries_removed); > > percent_of will complain about types otherwise. > > > > src/hotspot/share/gc/g1/g1StringDedupTable.hpp > > @@ -120,11 +120,11 @@ > > ?? // Cache for reuse and fast alloc/free of table entries. > > ?? static G1StringDedupEntryCache* _entry_cache; > > > > ?? G1StringDedupEntry**??????????? _buckets; > > ?? size_t????????????????????????? _size; > > -? uintx?????????????????????????? _entries; > > +? size_t????????????????????????? _entries; > > ?? uintx?????????????????????????? _shrink_threshold; > > ?? uintx?????????????????????????? _grow_threshold; > > ?? bool??????????????????????????? _rehash_needed; > > > > cheers, > > Per > > > >> > >> Hope that helps, > >> Chris > >> > >> (I'll answer further if needed but the info is in the bugs and > >> review thread mostly) > >> See: > >> https://bugs.openjdk.java.net/browse/JDK-8203030 > >> and: > >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html > >> https://bugs.openjdk.java.net/browse/JDK-8046938 > >> https://bugs.openjdk.java.net/browse/JDK-8074459 > >> For more info. > >> > > > > From per.liden at oracle.com Wed Jun 6 21:48:57 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 6 Jun 2018 23:48:57 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> Message-ID: <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> Hi Chris, On 06/06/2018 11:15 PM, Chris Phillips wrote: > Hi Per, > > On 06/06/18 04:47 PM, Per Liden wrote: >> Hi Chris, >> >> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>> Hi, >>> >>> On 06/06/18 02:23 PM, Per Liden wrote: >>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>> Please review this set of changes to shared code >>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>> >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>> >>>>> Can you explain this a little more? What is the type of size_t on >>>>> s390x? What is the type of uintptr_t? What are the errors? >>>> >>>> I would like to understand this too. 
>>>> >>>> cheers, >>>> Per >>>> >>>> >>> Quoting from the original bug review request: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>> "This >>> is a problem when one parameter is of size_t type and the second of >>> uintx type and the platform has size_t defined as eg. unsigned long as >>> on s390 (32-bit)." >> >> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >> on s390? > See Dan's explanation. >> >> I fail to see how any of this matters to _entries here? What am I missing? >> > > By changing the type, to its actual usage, we avoid the > necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp > around line 617, since its consistent usage and local I patched at the > definition. > > - _table->_entries, percent_of(_table->_entries, _table->_size), > _entry_cache->size(), _entries_added, _entries_removed); > + _table->_entries, percent_of( (size_t)(_table->_entries), > _table->_size), _entry_cache->size(), _entries_added, _entries_removed); > > percent_of will complain about types otherwise. Ok, so why don't you just cast it in the call to percent_of? Your current patch has ripple effects that you fail to take into account. For example, _entries is still printed using UINTX_FORMAT and compared against other uintx variables. You're now mixing types in an unsound way. cheers, Per > > >> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >> @@ -120,11 +120,11 @@ >> // Cache for reuse and fast alloc/free of table entries. >> static G1StringDedupEntryCache* _entry_cache; >> >> G1StringDedupEntry** _buckets; >> size_t _size; >> - uintx _entries; >> + size_t _entries; >> uintx _shrink_threshold; >> uintx _grow_threshold; >> bool _rehash_needed; >> >> cheers, >> Per >> >>> >>> Hope that helps, >>> Chris >>> >>> (I'll answer further if needed but the info is in the bugs and >>> review thread mostly) >>> See: >>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>> and: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>> For more info. >>> >> >> From ChrisPhi at LGonQn.Org Wed Jun 6 21:56:02 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 6 Jun 2018 17:56:02 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> Message-ID: Hi Per, On 06/06/18 05:48 PM, Per Liden wrote: > Hi Chris, > > On 06/06/2018 11:15 PM, Chris Phillips wrote: >> Hi Per, >> >> On 06/06/18 04:47 PM, Per Liden wrote: >>> Hi Chris, >>> >>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>> Hi, >>>> >>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>> Please review this set of changes to shared code >>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>> >>>>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>> >>>>>> Can you explain this a little more?? What is the type of size_t on >>>>>> s390x?? What is the type of uintptr_t?? What are the errors? 
>>>>> >>>>> I would like to understand this too. >>>>> >>>>> cheers, >>>>> Per >>>>> >>>>> >>>> Quoting from the original bug? review request: >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>> >>>> "This >>>> is a problem when one parameter is of size_t type and the second of >>>> uintx type and the platform has size_t defined as eg. unsigned long as >>>> on s390 (32-bit)." >>> >>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>> on s390? >> See Dan's explanation. >>> >>> I fail to see how any of this matters to _entries here? What am I >>> missing? >>> >> >> By changing the type, to its actual usage, we avoid the >> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >> around line 617, since its consistent usage and local I patched at the >> definition. >> >> - _table->_entries, percent_of(_table->_entries, _table->_size), >> _entry_cache->size(), _entries_added, _entries_removed); >> + _table->_entries, percent_of( (size_t)(_table->_entries), >> _table->_size), _entry_cache->size(), _entries_added, _entries_removed); >> >> percent_of will complain about types otherwise. > > Ok, so why don't you just cast it in the call to percent_of? Your > current patch has ripple effects that you fail to take into account. For > example, _entries is still printed using UINTX_FORMAT and compared > against other uintx variables. You're now mixing types in an unsound way. Hmm missed that, so will do the cast instead as you suggest. (Fixing at the defn is what was suggested the last time around so I tried to do that where it was consistent, obviously this is not. Thanks. > cheers, > Per > >> >> >>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>> @@ -120,11 +120,11 @@ >>> ??? // Cache for reuse and fast alloc/free of table entries. >>> ??? static G1StringDedupEntryCache* _entry_cache; >>> >>> ??? G1StringDedupEntry**??????????? _buckets; >>> ??? size_t????????????????????????? _size; >>> -? uintx?????????????????????????? _entries; >>> +? size_t????????????????????????? _entries; >>> ??? uintx?????????????????????????? _shrink_threshold; >>> ??? uintx?????????????????????????? _grow_threshold; >>> ??? bool??????????????????????????? _rehash_needed; >>> >>> cheers, >>> Per >>> >>>> >>>> Hope that helps, >>>> Chris >>>> >>>> (I'll answer further if needed but the info is in the bugs and >>>> review thread mostly) >>>> See: >>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>> and: >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>> For more info. >>>> >>> >>> > > Cheers! Chris From gil at azul.com Thu Jun 7 04:23:29 2018 From: gil at azul.com (Gil Tene) Date: Thu, 7 Jun 2018 04:23:29 +0000 Subject: ARM port consolidation In-Reply-To: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> References: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> Message-ID: This makes sense to me on the Aarch64 side. However, on the ARM32 side, I don't think the situation is as straightforward as what is being presented below, and I think more discussion and exploration of alternatives is needed. Much like with AArch64, there is an existing, active, community-developed and community-supported AArch32 port in OpenJDK that predates Oracle's open sourcing of their ARM32 version. 
That port is being used by multiple downstream builds and, at least for the past year+, it seems to have had more attention and ongoing engineering commitment around it than the Oracle variant. Before making a choice of one AArch32 port vs the other (if such a choice even needs to be made), I would like to hear more about the resources being committed towards maintaining each, keeping each up to date, testing them on various platforms (e.g. including building, testing, and supporting the popular softfloat ABI variants imposed by some OS packages) and working on bug fixes as needs appear. ? Gil. > On Jun 4, 2018, at 6:24 PM, David Holmes wrote: > > Hi Bob, > > Looping in porters-dev, aarch32-port-dev and aarch64-port-dev. > > I think this is a good idea. > > Thanks, > David > > On 5/06/2018 6:34 AM, Bob Vandette wrote: >> During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit >> ARM ports and contributed them to OpenJDK. These ports have been used for >> years in the embedded and mobile market, making them very stable and >> having the benefit of a single source base which can produce both 32 and >> 64-bit binaries. The downside of this contribution is that it resulted >> in two 64-bit ARM implementations being available in OpenJDK. >> I'd like to propose that we eliminate one of the 64-bit ARM ports and >> encourage everyone to enhance and support the remaining 32 and 64 bit >> ARM ports. This would avoid the creation of yet another port for these chip >> architectures. The reduction of competing ports will allow everyone >> to focus their attention on a single 64-bit port rather than diluting >> our efforts. This will result in a higher quality and a more performant >> implementation. >> The community at large (especially RedHat, BellSoft, Linaro and Cavium) >> have done a great job of enhancing and keeping the AArch64 port up to >> date with current and new Hotspot features. As a result, I propose that >> we standardize the 64-bit ARM implementation on this port. >> If there are no objections, I will file a JEP to remove the 64-bit ARM >> port sources that reside in jdk/open/src/hotspot/src/cpu/arm >> along with any build logic. This will leave the Oracle contributed >> 32-bit ARM port and the AArch64 64-bit ARM port. >> Let me know what you all think, >> Bob Vandette From david.holmes at oracle.com Thu Jun 7 04:56:08 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 7 Jun 2018 14:56:08 +1000 Subject: ARM port consolidation In-Reply-To: References: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> Message-ID: <290c14ed-f7c7-00d5-41ff-a335b1c7bac3@oracle.com> Hi Gil, On 7/06/2018 2:23 PM, Gil Tene wrote: > This makes sense to me on the Aarch64 side. > > However, on the ARM32 side, I don't think the situation is as straightforward as > what is being presented below, and I think more discussion and exploration of > alternatives is needed. > > Much like with AArch64, there is an existing, active, community-developed and > community-supported AArch32 port in OpenJDK that predates Oracle's open > sourcing of their ARM32 version. That port is being used by multiple downstream > builds and, at least for the past year+, it seems to have had more attention and > ongoing engineering commitment around it than the Oracle variant. To clarify: "AArch32 is the 32-bit sub-architecture within the ARMv8 architecture. The port will be fully compatible with ARMv7 and may support ARMv6 depending on community interest." 
[1] whereas the 32-bit ARM port that Oracle contributed is for ARMv5, v6 and v7. There's obviously some overlap. If the Aarch32 project reaches a point (like Aarch64) where it is desirable to bring it into the mainline OpenJDK then that would seem like the opportune time to reevaluate the co-existence (or not) of the two ports. David [1] http://openjdk.java.net/projects/aarch32-port/ > Before making a choice of one AArch32 port vs the other (if such a choice > even needs to be made), I would like to hear more about the resources being > committed towards maintaining each, keeping each up to date, testing them on > various platforms (e.g. including building, testing, and supporting the popular > softfloat ABI variants imposed by some OS packages) and working on bug > fixes as needs appear. > ? Gil. > >> On Jun 4, 2018, at 6:24 PM, David Holmes wrote: >> >> Hi Bob, >> >> Looping in porters-dev, aarch32-port-dev and aarch64-port-dev. >> >> I think this is a good idea. >> >> Thanks, >> David >> >> On 5/06/2018 6:34 AM, Bob Vandette wrote: >>> During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit >>> ARM ports and contributed them to OpenJDK. These ports have been used for >>> years in the embedded and mobile market, making them very stable and >>> having the benefit of a single source base which can produce both 32 and >>> 64-bit binaries. The downside of this contribution is that it resulted >>> in two 64-bit ARM implementations being available in OpenJDK. >>> I'd like to propose that we eliminate one of the 64-bit ARM ports and >>> encourage everyone to enhance and support the remaining 32 and 64 bit >>> ARM ports. This would avoid the creation of yet another port for these chip >>> architectures. The reduction of competing ports will allow everyone >>> to focus their attention on a single 64-bit port rather than diluting >>> our efforts. This will result in a higher quality and a more performant >>> implementation. >>> The community at large (especially RedHat, BellSoft, Linaro and Cavium) >>> have done a great job of enhancing and keeping the AArch64 port up to >>> date with current and new Hotspot features. As a result, I propose that >>> we standardize the 64-bit ARM implementation on this port. >>> If there are no objections, I will file a JEP to remove the 64-bit ARM >>> port sources that reside in jdk/open/src/hotspot/src/cpu/arm >>> along with any build logic. This will leave the Oracle contributed >>> 32-bit ARM port and the AArch64 64-bit ARM port. >>> Let me know what you all think, >>> Bob Vandette > From jini.george at oracle.com Thu Jun 7 04:57:24 2018 From: jini.george at oracle.com (Jini George) Date: Thu, 7 Jun 2018 10:27:24 +0530 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> Message-ID: <08ef1db7-b411-3a23-5cde-b5ad2d10e23a@oracle.com> Hi Stefan, The changes look good overall. Please update the copyright year for the changed files, where applicable. Some minor nits: * GCCause.java: There is an extra space before the newly added _z* enum values. * HSDB.java: Pls add a space after ZHeap in: anno ="ZHeap"; Thanks! 
Jini On 6/5/2018 8:44 PM, Stefan Karlsson wrote: > Hi Jini, > > For this version experimental version of ZGC we only have basic SA > support, so the collectLiveRegions feature is not implemented. > > Comments below: > > On 2018-06-05 14:50, Jini George wrote: >> Hi Per, >> >> I have looked at only the SA portion. Some comments on that: >> >> ==>? share/classes/sun/jvm/hotspot/oops/ObjectHeap.java >> >> The method collectLiveRegions() would need to include code to iterate >> through the Zpages, and collect the live regions. >> >> ==> share/classes/sun/jvm/hotspot/HSDB.java >> >> The addAnnotation() method needs to handle the case of collHeap being >> an instance of ZCollectedHeap to avoid "Unknown generation" being >> displayed while displaying the Stack Memory for a mutator thread. > > Fixed. > >> >> ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java >> >> To the GCCause enum, it would be good to add the equivalents of the >> following GC causes. (though at this point, GCCause seems unused >> within SA). >> >> ???? _z_timer, >> ???? _z_warmup, >> ???? _z_allocation_rate, >> ???? _z_allocation_stall, >> ???? _z_proactive, > > Fixed. > >> >> ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java >> >> Similarly, it would be good to add the equivalent of 'Z' in the GCName >> enum. > > Fixed. > >> >> ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java >> >> Again, it would be good to add 'ZOperation' to the VMOps enum (though >> it looks like it is already not in sync). > > Fixed. > >> >> ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java >> >> The run() method would need to handle the ZGC case too to avoid the >> unknown CollectedHeap type exception with jhsdb jmap -heap: >> >> Also, the printGCAlgorithm() method would need to be updated to read >> in the UseZGC flag to avoid the default "Mark Sweep Compact GC" being >> displayed with jhsdb jmap -heap. > > Fixed. > >> >> ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java >> >> It would be great if printOn() (for the clhsdb command 'universe') >> would print the address range of the java heap as we have in other GCs >> (with ZAddressSpaceStart and ZAddressSpaceEnd?) > > ZGC uses three fixed 4 TB reserved memory ranges (on Linux x64). I don't > think it's as important to print these ranges as it is for the other GCs. > >> >> ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java >> Please modify the above test to include zgc or include a separate SA >> test to test the universe output for zgc. > > Fixed. > > Here's a quick webrev of your suggested changes: > http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ > > Thanks, > StefanK > >> >> Thank you, >> Jini. >> >> >> On 6/2/2018 3:11 AM, Per Liden wrote: >>> Hi, >>> >>> Please review the implementation of JEP 333: ZGC: A Scalable >>> Low-Latency Garbage Collector (Experimental) >>> >>> Please see the JEP for more information about the project. The JEP is >>> currently in state "Proposed to Target" for JDK 11. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>> >>> Additional information in can also be found on the ZGC project wiki. >>> >>> https://wiki.openjdk.java.net/display/zgc/Main >>> >>> >>> Webrevs >>> ------- >>> >>> To make this easier to review, we've divided the change into two >>> webrevs. >>> >>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>> >>> ?? This patch contains the actual ZGC implementation, the new unit >>> tests and other changes needed in HotSpot. 
>>> >>> * ZGC Testing: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>> >>> ?? This patch contains changes to existing tests needed by ZGC. >>> >>> >>> Overview of Changes >>> ------------------- >>> >>> Below follows a list of the files we add/modify in the master patch, >>> with a short summary describing each group. >>> >>> * Build support - Making ZGC an optional feature. >>> >>> ?? make/autoconf/hotspot.m4 >>> ?? make/hotspot/lib/JvmFeatures.gmk >>> ?? src/hotspot/share/utilities/macros.hpp >>> >>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>> does not currently offer a way to easily break this out). >>> >>> ?? src/hotspot/cpu/x86/x86.ad >>> ?? src/hotspot/cpu/x86/x86_64.ad >>> >>> * C2 - Things that can't be easily abstracted out into ZGC specific >>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>> (UseZGC) condition. There should only be two logic changes (one in >>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>> disabled. We believe these are low risk changes and should not >>> introduce any real change i behavior when using other GCs. >>> >>> ?? src/hotspot/share/adlc/formssel.cpp >>> ?? src/hotspot/share/opto/* >>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>> >>> * General GC+Runtime - Registering ZGC as a collector. >>> >>> ?? src/hotspot/share/gc/shared/* >>> ?? src/hotspot/share/runtime/vmStructs.cpp >>> ?? src/hotspot/share/runtime/vm_operations.hpp >>> ?? src/hotspot/share/prims/whitebox.cpp >>> >>> * GC thread local data - Increasing the size of data area by 32 bytes. >>> >>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>> >>> * ZGC - The collector itself. >>> >>> ?? src/hotspot/share/gc/z/* >>> ?? src/hotspot/cpu/x86/gc/z/* >>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>> ?? test/hotspot/gtest/gc/z/* >>> >>> * JFR - Adding new event types. >>> >>> ?? src/hotspot/share/jfr/* >>> ?? src/jdk.jfr/share/conf/jfr/* >>> >>> * Logging - Adding new log tags. >>> >>> ?? src/hotspot/share/logging/* >>> >>> * Metaspace - Adding a friend declaration. >>> >>> ?? src/hotspot/share/memory/metaspace.hpp >>> >>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>> >>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>> >>> * vmSymbol - Disabled clone intrinsic for ZGC. >>> >>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>> >>> * Oop Verification - In four cases we disabled oop verification >>> because it do not makes sense or is not applicable to a GC using load >>> barriers. >>> >>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>> ?? src/hotspot/share/compiler/oopMap.cpp >>> ?? src/hotspot/share/runtime/jniHandles.cpp >>> >>> * StackValue - Apply a load barrier in case of OSR. This is a bit of >>> a hack. However, this will go away in the future, when we have the >>> next iteration of C2's load barriers in place (aka "C2 late barrier >>> insertion"). >>> >>> ?? src/hotspot/share/runtime/stackValue.cpp >>> >>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>> is changed in the future. >>> >>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>> >>> * Legal - Adding copyright/license for 3rd party hash function used >>> in ZHash. >>> >>> ?? src/java.base/share/legal/c-libutl.md >>> >>> * SA - Adding basic ZGC support. >>> >>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>> >>> >>> Testing >>> ------- >>> >>> * Unit testing >>> >>> ?? 
A number of new ZGC specific gtests have been added, in >>> test/hotspot/gtest/gc/z/ >>> >>> * Regression testing >>> >>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>> >>> * Stress testing >>> >>> ?? We have been continuously been running a number stress tests >>> throughout the development, these include: >>> >>> ???? specjbb2000 >>> ???? specjbb2005 >>> ???? specjbb2015 >>> ???? specjvm98 >>> ???? specjvm2008 >>> ???? dacapo2009 >>> ???? test/hotspot/jtreg/gc/stress/gcold >>> ???? test/hotspot/jtreg/gc/stress/systemgc >>> ???? test/hotspot/jtreg/gc/stress/gclocker >>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>> ???? test/hotspot/jtreg/gc/stress/finalizer >>> ???? Kitchensink >>> >>> >>> Thanks! >>> >>> /Per, Stefan & the ZGC team From stefan.karlsson at oracle.com Thu Jun 7 05:30:13 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 7 Jun 2018 07:30:13 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <08ef1db7-b411-3a23-5cde-b5ad2d10e23a@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> <08ef1db7-b411-3a23-5cde-b5ad2d10e23a@oracle.com> Message-ID: <56e74aeb-17d4-ed17-fba2-bb7695fc513c@oracle.com> Hi Jini, Thanks for reviewing this! Here are fixes to the nits and copyright years: ? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.02.delta/ ? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ StefanK On 2018-06-07 06:57, Jini George wrote: > Hi Stefan, > > The changes look good overall. Please update the copyright year for > the changed files, where applicable. Some minor nits: > > * GCCause.java: > > There is an extra space before the newly added _z* enum values. > > * HSDB.java: > > Pls add a space after ZHeap in: > > anno ="ZHeap"; > > Thanks! > Jini > > > On 6/5/2018 8:44 PM, Stefan Karlsson wrote: >> Hi Jini, >> >> For this version experimental version of ZGC we only have basic SA >> support, so the collectLiveRegions feature is not implemented. >> >> Comments below: >> >> On 2018-06-05 14:50, Jini George wrote: >>> Hi Per, >>> >>> I have looked at only the SA portion. Some comments on that: >>> >>> ==>? share/classes/sun/jvm/hotspot/oops/ObjectHeap.java >>> >>> The method collectLiveRegions() would need to include code to >>> iterate through the Zpages, and collect the live regions. >>> >>> ==> share/classes/sun/jvm/hotspot/HSDB.java >>> >>> The addAnnotation() method needs to handle the case of collHeap >>> being an instance of ZCollectedHeap to avoid "Unknown generation" >>> being displayed while displaying the Stack Memory for a mutator thread. >> >> Fixed. >> >>> >>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java >>> >>> To the GCCause enum, it would be good to add the equivalents of the >>> following GC causes. (though at this point, GCCause seems unused >>> within SA). >>> >>> ???? _z_timer, >>> ???? _z_warmup, >>> ???? _z_allocation_rate, >>> ???? _z_allocation_stall, >>> ???? _z_proactive, >> >> Fixed. >> >>> >>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java >>> >>> Similarly, it would be good to add the equivalent of 'Z' in the >>> GCName enum. >> >> Fixed. 
>> >>> >>> ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java >>> >>> Again, it would be good to add 'ZOperation' to the VMOps enum >>> (though it looks like it is already not in sync). >> >> Fixed. >> >>> >>> ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java >>> >>> The run() method would need to handle the ZGC case too to avoid the >>> unknown CollectedHeap type exception with jhsdb jmap -heap: >>> >>> Also, the printGCAlgorithm() method would need to be updated to read >>> in the UseZGC flag to avoid the default "Mark Sweep Compact GC" >>> being displayed with jhsdb jmap -heap. >> >> Fixed. >> >>> >>> ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java >>> >>> It would be great if printOn() (for the clhsdb command 'universe') >>> would print the address range of the java heap as we have in other >>> GCs (with ZAddressSpaceStart and ZAddressSpaceEnd?) >> >> ZGC uses three fixed 4 TB reserved memory ranges (on Linux x64). I >> don't think it's as important to print these ranges as it is for the >> other GCs. >> >>> >>> ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java >>> Please modify the above test to include zgc or include a separate SA >>> test to test the universe output for zgc. >> >> Fixed. >> >> Here's a quick webrev of your suggested changes: >> http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ >> >> Thanks, >> StefanK >> >>> >>> Thank you, >>> Jini. >>> >>> >>> On 6/2/2018 3:11 AM, Per Liden wrote: >>>> Hi, >>>> >>>> Please review the implementation of JEP 333: ZGC: A Scalable >>>> Low-Latency Garbage Collector (Experimental) >>>> >>>> Please see the JEP for more information about the project. The JEP >>>> is currently in state "Proposed to Target" for JDK 11. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>> >>>> Additional information in can also be found on the ZGC project wiki. >>>> >>>> https://wiki.openjdk.java.net/display/zgc/Main >>>> >>>> >>>> Webrevs >>>> ------- >>>> >>>> To make this easier to review, we've divided the change into two >>>> webrevs. >>>> >>>> * ZGC Master: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>> >>>> ?? This patch contains the actual ZGC implementation, the new unit >>>> tests and other changes needed in HotSpot. >>>> >>>> * ZGC Testing: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> ?? This patch contains changes to existing tests needed by ZGC. >>>> >>>> >>>> Overview of Changes >>>> ------------------- >>>> >>>> Below follows a list of the files we add/modify in the master >>>> patch, with a short summary describing each group. >>>> >>>> * Build support - Making ZGC an optional feature. >>>> >>>> ?? make/autoconf/hotspot.m4 >>>> ?? make/hotspot/lib/JvmFeatures.gmk >>>> ?? src/hotspot/share/utilities/macros.hpp >>>> >>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>>> does not currently offer a way to easily break this out). >>>> >>>> ?? src/hotspot/cpu/x86/x86.ad >>>> ?? src/hotspot/cpu/x86/x86_64.ad >>>> >>>> * C2 - Things that can't be easily abstracted out into ZGC specific >>>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>>> (UseZGC) condition. There should only be two logic changes (one in >>>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>>> disabled. We believe these are low risk changes and should not >>>> introduce any real change i behavior when using other GCs. >>>> >>>> ?? src/hotspot/share/adlc/formssel.cpp >>>> ?? src/hotspot/share/opto/* >>>> ?? 
src/hotspot/share/compiler/compilerDirectives.hpp >>>> >>>> * General GC+Runtime - Registering ZGC as a collector. >>>> >>>> ?? src/hotspot/share/gc/shared/* >>>> ?? src/hotspot/share/runtime/vmStructs.cpp >>>> ?? src/hotspot/share/runtime/vm_operations.hpp >>>> ?? src/hotspot/share/prims/whitebox.cpp >>>> >>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>> >>>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>> >>>> * ZGC - The collector itself. >>>> >>>> ?? src/hotspot/share/gc/z/* >>>> ?? src/hotspot/cpu/x86/gc/z/* >>>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>>> ?? test/hotspot/gtest/gc/z/* >>>> >>>> * JFR - Adding new event types. >>>> >>>> ?? src/hotspot/share/jfr/* >>>> ?? src/jdk.jfr/share/conf/jfr/* >>>> >>>> * Logging - Adding new log tags. >>>> >>>> ?? src/hotspot/share/logging/* >>>> >>>> * Metaspace - Adding a friend declaration. >>>> >>>> ?? src/hotspot/share/memory/metaspace.hpp >>>> >>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>> >>>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>> >>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>> >>>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>>> >>>> * Oop Verification - In four cases we disabled oop verification >>>> because it do not makes sense or is not applicable to a GC using >>>> load barriers. >>>> >>>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>> ?? src/hotspot/share/compiler/oopMap.cpp >>>> ?? src/hotspot/share/runtime/jniHandles.cpp >>>> >>>> * StackValue - Apply a load barrier in case of OSR. This is a bit >>>> of a hack. However, this will go away in the future, when we have >>>> the next iteration of C2's load barriers in place (aka "C2 late >>>> barrier insertion"). >>>> >>>> ?? src/hotspot/share/runtime/stackValue.cpp >>>> >>>> * JVMTI - Adding an assert() to catch problems if the tagmap >>>> hashing is changed in the future. >>>> >>>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>>> >>>> * Legal - Adding copyright/license for 3rd party hash function used >>>> in ZHash. >>>> >>>> ?? src/java.base/share/legal/c-libutl.md >>>> >>>> * SA - Adding basic ZGC support. >>>> >>>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>> >>>> >>>> Testing >>>> ------- >>>> >>>> * Unit testing >>>> >>>> ?? A number of new ZGC specific gtests have been added, in >>>> test/hotspot/gtest/gc/z/ >>>> >>>> * Regression testing >>>> >>>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>> >>>> * Stress testing >>>> >>>> ?? We have been continuously been running a number stress tests >>>> throughout the development, these include: >>>> >>>> ???? specjbb2000 >>>> ???? specjbb2005 >>>> ???? specjbb2015 >>>> ???? specjvm98 >>>> ???? specjvm2008 >>>> ???? dacapo2009 >>>> ???? test/hotspot/jtreg/gc/stress/gcold >>>> ???? test/hotspot/jtreg/gc/stress/systemgc >>>> ???? test/hotspot/jtreg/gc/stress/gclocker >>>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>>> ???? test/hotspot/jtreg/gc/stress/finalizer >>>> ???? Kitchensink >>>> >>>> >>>> Thanks! 
>>>> >>>> /Per, Stefan & the ZGC team From jini.george at oracle.com Thu Jun 7 06:01:22 2018 From: jini.george at oracle.com (Jini George) Date: Thu, 7 Jun 2018 11:31:22 +0530 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <56e74aeb-17d4-ed17-fba2-bb7695fc513c@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> <08ef1db7-b411-3a23-5cde-b5ad2d10e23a@oracle.com> <56e74aeb-17d4-ed17-fba2-bb7695fc513c@oracle.com> Message-ID: <25d96af3-3452-150c-8152-e58f1c1243be@oracle.com> Thank you for making the changes, Stefan. One minor nit: * in HSDB.java, I meant that "ZHeap"; needs to be changed to "ZHeap "; Sorry for not making it clear! Everything else looks good. I don't need to see another webrev. Thanks, Jini. On 6/7/2018 11:00 AM, Stefan Karlsson wrote: > Hi Jini, > > Thanks for reviewing this! > > Here are fixes to the nits and copyright years: > ? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.02.delta/ > ? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ > > StefanK > > On 2018-06-07 06:57, Jini George wrote: >> Hi Stefan, >> >> The changes look good overall. Please update the copyright year for >> the changed files, where applicable. Some minor nits: >> >> * GCCause.java: >> >> There is an extra space before the newly added _z* enum values. >> >> * HSDB.java: >> >> Pls add a space after ZHeap in: >> >> anno ="ZHeap"; >> >> Thanks! >> Jini >> >> >> On 6/5/2018 8:44 PM, Stefan Karlsson wrote: >>> Hi Jini, >>> >>> For this version experimental version of ZGC we only have basic SA >>> support, so the collectLiveRegions feature is not implemented. >>> >>> Comments below: >>> >>> On 2018-06-05 14:50, Jini George wrote: >>>> Hi Per, >>>> >>>> I have looked at only the SA portion. Some comments on that: >>>> >>>> ==>? share/classes/sun/jvm/hotspot/oops/ObjectHeap.java >>>> >>>> The method collectLiveRegions() would need to include code to >>>> iterate through the Zpages, and collect the live regions. >>>> >>>> ==> share/classes/sun/jvm/hotspot/HSDB.java >>>> >>>> The addAnnotation() method needs to handle the case of collHeap >>>> being an instance of ZCollectedHeap to avoid "Unknown generation" >>>> being displayed while displaying the Stack Memory for a mutator thread. >>> >>> Fixed. >>> >>>> >>>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java >>>> >>>> To the GCCause enum, it would be good to add the equivalents of the >>>> following GC causes. (though at this point, GCCause seems unused >>>> within SA). >>>> >>>> ???? _z_timer, >>>> ???? _z_warmup, >>>> ???? _z_allocation_rate, >>>> ???? _z_allocation_stall, >>>> ???? _z_proactive, >>> >>> Fixed. >>> >>>> >>>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java >>>> >>>> Similarly, it would be good to add the equivalent of 'Z' in the >>>> GCName enum. >>> >>> Fixed. >>> >>>> >>>> ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java >>>> >>>> Again, it would be good to add 'ZOperation' to the VMOps enum >>>> (though it looks like it is already not in sync). >>> >>> Fixed. 
>>> >>>> >>>> ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java >>>> >>>> The run() method would need to handle the ZGC case too to avoid the >>>> unknown CollectedHeap type exception with jhsdb jmap -heap: >>>> >>>> Also, the printGCAlgorithm() method would need to be updated to read >>>> in the UseZGC flag to avoid the default "Mark Sweep Compact GC" >>>> being displayed with jhsdb jmap -heap. >>> >>> Fixed. >>> >>>> >>>> ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java >>>> >>>> It would be great if printOn() (for the clhsdb command 'universe') >>>> would print the address range of the java heap as we have in other >>>> GCs (with ZAddressSpaceStart and ZAddressSpaceEnd?) >>> >>> ZGC uses three fixed 4 TB reserved memory ranges (on Linux x64). I >>> don't think it's as important to print these ranges as it is for the >>> other GCs. >>> >>>> >>>> ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java >>>> Please modify the above test to include zgc or include a separate SA >>>> test to test the universe output for zgc. >>> >>> Fixed. >>> >>> Here's a quick webrev of your suggested changes: >>> http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ >>> >>> Thanks, >>> StefanK >>> >>>> >>>> Thank you, >>>> Jini. >>>> >>>> >>>> On 6/2/2018 3:11 AM, Per Liden wrote: >>>>> Hi, >>>>> >>>>> Please review the implementation of JEP 333: ZGC: A Scalable >>>>> Low-Latency Garbage Collector (Experimental) >>>>> >>>>> Please see the JEP for more information about the project. The JEP >>>>> is currently in state "Proposed to Target" for JDK 11. >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>>> >>>>> Additional information in can also be found on the ZGC project wiki. >>>>> >>>>> https://wiki.openjdk.java.net/display/zgc/Main >>>>> >>>>> >>>>> Webrevs >>>>> ------- >>>>> >>>>> To make this easier to review, we've divided the change into two >>>>> webrevs. >>>>> >>>>> * ZGC Master: >>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>>> >>>>> ?? This patch contains the actual ZGC implementation, the new unit >>>>> tests and other changes needed in HotSpot. >>>>> >>>>> * ZGC Testing: >>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>>> >>>>> ?? This patch contains changes to existing tests needed by ZGC. >>>>> >>>>> >>>>> Overview of Changes >>>>> ------------------- >>>>> >>>>> Below follows a list of the files we add/modify in the master >>>>> patch, with a short summary describing each group. >>>>> >>>>> * Build support - Making ZGC an optional feature. >>>>> >>>>> ?? make/autoconf/hotspot.m4 >>>>> ?? make/hotspot/lib/JvmFeatures.gmk >>>>> ?? src/hotspot/share/utilities/macros.hpp >>>>> >>>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>>>> does not currently offer a way to easily break this out). >>>>> >>>>> ?? src/hotspot/cpu/x86/x86.ad >>>>> ?? src/hotspot/cpu/x86/x86_64.ad >>>>> >>>>> * C2 - Things that can't be easily abstracted out into ZGC specific >>>>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>>>> (UseZGC) condition. There should only be two logic changes (one in >>>>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>>>> disabled. We believe these are low risk changes and should not >>>>> introduce any real change i behavior when using other GCs. >>>>> >>>>> ?? src/hotspot/share/adlc/formssel.cpp >>>>> ?? src/hotspot/share/opto/* >>>>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>>>> >>>>> * General GC+Runtime - Registering ZGC as a collector. 
>>>>> >>>>> ?? src/hotspot/share/gc/shared/* >>>>> ?? src/hotspot/share/runtime/vmStructs.cpp >>>>> ?? src/hotspot/share/runtime/vm_operations.hpp >>>>> ?? src/hotspot/share/prims/whitebox.cpp >>>>> >>>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>>> >>>>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>>> >>>>> * ZGC - The collector itself. >>>>> >>>>> ?? src/hotspot/share/gc/z/* >>>>> ?? src/hotspot/cpu/x86/gc/z/* >>>>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>>>> ?? test/hotspot/gtest/gc/z/* >>>>> >>>>> * JFR - Adding new event types. >>>>> >>>>> ?? src/hotspot/share/jfr/* >>>>> ?? src/jdk.jfr/share/conf/jfr/* >>>>> >>>>> * Logging - Adding new log tags. >>>>> >>>>> ?? src/hotspot/share/logging/* >>>>> >>>>> * Metaspace - Adding a friend declaration. >>>>> >>>>> ?? src/hotspot/share/memory/metaspace.hpp >>>>> >>>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>>> >>>>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>>> >>>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>>> >>>>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>>>> >>>>> * Oop Verification - In four cases we disabled oop verification >>>>> because it do not makes sense or is not applicable to a GC using >>>>> load barriers. >>>>> >>>>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>>> ?? src/hotspot/share/compiler/oopMap.cpp >>>>> ?? src/hotspot/share/runtime/jniHandles.cpp >>>>> >>>>> * StackValue - Apply a load barrier in case of OSR. This is a bit >>>>> of a hack. However, this will go away in the future, when we have >>>>> the next iteration of C2's load barriers in place (aka "C2 late >>>>> barrier insertion"). >>>>> >>>>> ?? src/hotspot/share/runtime/stackValue.cpp >>>>> >>>>> * JVMTI - Adding an assert() to catch problems if the tagmap >>>>> hashing is changed in the future. >>>>> >>>>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>>>> >>>>> * Legal - Adding copyright/license for 3rd party hash function used >>>>> in ZHash. >>>>> >>>>> ?? src/java.base/share/legal/c-libutl.md >>>>> >>>>> * SA - Adding basic ZGC support. >>>>> >>>>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>>> >>>>> >>>>> Testing >>>>> ------- >>>>> >>>>> * Unit testing >>>>> >>>>> ?? A number of new ZGC specific gtests have been added, in >>>>> test/hotspot/gtest/gc/z/ >>>>> >>>>> * Regression testing >>>>> >>>>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>>> >>>>> * Stress testing >>>>> >>>>> ?? We have been continuously been running a number stress tests >>>>> throughout the development, these include: >>>>> >>>>> ???? specjbb2000 >>>>> ???? specjbb2005 >>>>> ???? specjbb2015 >>>>> ???? specjvm98 >>>>> ???? specjvm2008 >>>>> ???? dacapo2009 >>>>> ???? test/hotspot/jtreg/gc/stress/gcold >>>>> ???? test/hotspot/jtreg/gc/stress/systemgc >>>>> ???? test/hotspot/jtreg/gc/stress/gclocker >>>>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>>>> ???? test/hotspot/jtreg/gc/stress/finalizer >>>>> ???? Kitchensink >>>>> >>>>> >>>>> Thanks! >>>>> >>>>> /Per, Stefan & the ZGC team > > From HORIE at jp.ibm.com Thu Jun 7 06:01:25 2018 From: HORIE at jp.ibm.com (Michihiro Horie) Date: Thu, 7 Jun 2018 15:01:25 +0900 Subject: RFR(M): 8204524: Unnecessary memory barriers in G1ParScanThreadState::copy_to_survivor_space Message-ID: Dear all, Would you please review the following change? 
Bug: https://bugs.openjdk.java.net/browse/JDK-8204524 Webrev: http://cr.openjdk.java.net/~mhorie/8204524/webrev.00 G1ParScanThreadState::copy_to_survivor_space tries to move live objects to a different location. It uses a forwarding technique and allows multiple threads to compete for performing the copy step. A copy is performed after a thread succeeds in the CAS. CAS-failed threads are not allowed to dereference the forwardee concurrently. Current code is already written so that CAS-failed threads do not dereference the forwardee. Also, this constraint is documented in a caller function mark_forwarded_object as ?the object might be in the process of being copied by another worker so we cannot trust that its to-space image is well-formed?. There is no copy that must finish before the CAS. Threads that failed in the CAS must not dereference the forwardee. Therefore, no fence is necessary before and after the CAS. I measured SPECjbb2015 with this change. As a result, critical-jOPS performance improved by 27% on POWER8. Best regards, -- Michihiro, IBM Research - Tokyo From rickard.backman at oracle.com Thu Jun 7 06:10:56 2018 From: rickard.backman at oracle.com (Rickard =?utf-8?Q?B=C3=A4ckman?=) Date: Thu, 7 Jun 2018 08:10:56 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <2cadcbb6-ba38-18a0-b5ab-8c08fdedd972@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <20180606094805.ml34woy2x7apyrfs@rbackman> <2cadcbb6-ba38-18a0-b5ab-8c08fdedd972@oracle.com> Message-ID: <20180607061056.wqvwvtdemylrwznr@rbackman> Per, I agree with your comments. Consider it reviewed. /R On 06/06, Per Liden wrote: > Hi Rickard, > > Thanks a lot for reviewing, much appreciated! Comments below. > > On 06/06/2018 11:48 AM, Rickard B?ckman wrote: > > Hi, > > > > I've looked at the C2 parts of things with Nils by my side. > > There are a couple of small things to note. > > > > classes.cpp misses an undef for optionalmacro. > > Will fix. > > > > > compile.cpp the print_method should probably be within the {} of > > macroExpand. > > Will fix. > > > > > escape.cpp has two else if cases where the code looks very common. > > Please make this into a function if possible? > > It would be possible, but I'm not sure it's a great idea in this case. The > reason is that this seems to be the style in which these switch-statements > are written here. Just looking at the case statements immediately above and > below, they follow the same (duplication) pattern. > > In the first switch is looks like this: > > [...] > case Op_Proj: { > // we are only interested in the oop result projection from a call > if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && > n->in(0)->as_Call()->returns_pointer()) { > add_local_var_and_edge(n, PointsToNode::NoEscape, > n->in(0), delayed_worklist); > } > #if INCLUDE_ZGC > else if (UseZGC) { > if (n->as_Proj()->_con == LoadBarrierNode::Oop && > n->in(0)->is_LoadBarrier()) { > add_local_var_and_edge(n, PointsToNode::NoEscape, > n->in(0)->in(LoadBarrierNode::Oop), delayed_worklist); > } > } > #endif > break; > } > case Op_Rethrow: // Exception object escapes > case Op_Return: { > if (n->req() > TypeFunc::Parms && > igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { > // Treat Return value as LocalVar with GlobalEscape escape state. > add_local_var_and_edge(n, PointsToNode::GlobalEscape, > n->in(TypeFunc::Parms), delayed_worklist); > } > break; > } > [...] 
> > And in the second switch it looks like this: > > [...] > case Op_Proj: { > // we are only interested in the oop result projection from a call > if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && > n->in(0)->as_Call()->returns_pointer()) { > add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0), NULL); > break; > } > #if INCLUDE_ZGC > else if (UseZGC) { > if (n->as_Proj()->_con == LoadBarrierNode::Oop && > n->in(0)->is_LoadBarrier()) { > add_local_var_and_edge(n, PointsToNode::NoEscape, > n->in(0)->in(LoadBarrierNode::Oop), NULL); > break; > } > } > #endif > ELSE_FAIL("Op_Proj"); > } > case Op_Rethrow: // Exception object escapes > case Op_Return: { > if (n->req() > TypeFunc::Parms && > _igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { > // Treat Return value as LocalVar with GlobalEscape escape state. > add_local_var_and_edge(n, PointsToNode::GlobalEscape, > n->in(TypeFunc::Parms), NULL); > break; > } > ELSE_FAIL("Op_Return"); > } > [...] > > So it would maybe look a bit odd if we use a different style for the code we > add, wouldn't you agree? > > Also, since our code is in #if INCLUDE_ZGC blocks, breaking this out would > mean we would have to add a few more #if INCLUDE_ZGC blocks in hpp/cpp to > protect the new function. So, unless you strongly object, I'd like to > suggest that we keep it as is. > > > > > opcodes.cpp misses an undef for optionalmacro. > > Will fix. > > > > > In C2 in general, maybe BarrierSet::barrier_set()->barrier_set_c2() > > coule be Compile::barrier_set()? > > I agree that these names are a bit long, but may I suggest that we don't do > this as part of the ZGC patch? The reason is that there are already 21 > pre-existing calls to BarrierSet::barrier_set()->barrier_set_c2() in > src/hotspot/share/opto code (we're adding 4 more in our patch). There are > another ~70 calls to the > BarrierSet::barrier_set()->barrier_set_{c1,assembler}() functions throughout > compiler/asm-related code. While shortening these names might be a good > idea, I'd prefer if that was handled separately from the ZGC patch. Makes > sense? > > > > > Looks good, great work everyone! > > Thanks! And again, thanks for reviewing! > > /Per > > > > > /R > > > > On 06/01, Per Liden wrote: > > > Hi, > > > > > > Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency > > > Garbage Collector (Experimental) > > > > > > Please see the JEP for more information about the project. The JEP is > > > currently in state "Proposed to Target" for JDK 11. > > > > > > https://bugs.openjdk.java.net/browse/JDK-8197831 > > > > > > Additional information in can also be found on the ZGC project wiki. > > > > > > https://wiki.openjdk.java.net/display/zgc/Main > > > > > > > > > Webrevs > > > ------- > > > > > > To make this easier to review, we've divided the change into two webrevs. > > > > > > * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master > > > > > > This patch contains the actual ZGC implementation, the new unit tests and > > > other changes needed in HotSpot. > > > > > > * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing > > > > > > This patch contains changes to existing tests needed by ZGC. > > > > > > > > > Overview of Changes > > > ------------------- > > > > > > Below follows a list of the files we add/modify in the master patch, with a > > > short summary describing each group. > > > > > > * Build support - Making ZGC an optional feature. 
> > > > > > make/autoconf/hotspot.m4 > > > make/hotspot/lib/JvmFeatures.gmk > > > src/hotspot/share/utilities/macros.hpp > > > > > > * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not > > > currently offer a way to easily break this out). > > > > > > src/hotspot/cpu/x86/x86.ad > > > src/hotspot/cpu/x86/x86_64.ad > > > > > > * C2 - Things that can't be easily abstracted out into ZGC specific code, > > > most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) > > > condition. There should only be two logic changes (one in idealKit.cpp and > > > one in node.cpp) that are still active when ZGC is disabled. We believe > > > these are low risk changes and should not introduce any real change i > > > behavior when using other GCs. > > > > > > src/hotspot/share/adlc/formssel.cpp > > > src/hotspot/share/opto/* > > > src/hotspot/share/compiler/compilerDirectives.hpp > > > > > > * General GC+Runtime - Registering ZGC as a collector. > > > > > > src/hotspot/share/gc/shared/* > > > src/hotspot/share/runtime/vmStructs.cpp > > > src/hotspot/share/runtime/vm_operations.hpp > > > src/hotspot/share/prims/whitebox.cpp > > > > > > * GC thread local data - Increasing the size of data area by 32 bytes. > > > > > > src/hotspot/share/gc/shared/gcThreadLocalData.hpp > > > > > > * ZGC - The collector itself. > > > > > > src/hotspot/share/gc/z/* > > > src/hotspot/cpu/x86/gc/z/* > > > src/hotspot/os_cpu/linux_x86/gc/z/* > > > test/hotspot/gtest/gc/z/* > > > > > > * JFR - Adding new event types. > > > > > > src/hotspot/share/jfr/* > > > src/jdk.jfr/share/conf/jfr/* > > > > > > * Logging - Adding new log tags. > > > > > > src/hotspot/share/logging/* > > > > > > * Metaspace - Adding a friend declaration. > > > > > > src/hotspot/share/memory/metaspace.hpp > > > > > > * InstanceRefKlass - Adjustments for concurrent reference processing. > > > > > > src/hotspot/share/oops/instanceRefKlass.inline.hpp > > > > > > * vmSymbol - Disabled clone intrinsic for ZGC. > > > > > > src/hotspot/share/classfile/vmSymbols.cpp > > > > > > * Oop Verification - In four cases we disabled oop verification because it > > > do not makes sense or is not applicable to a GC using load barriers. > > > > > > src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp > > > src/hotspot/cpu/x86/stubGenerator_x86_64.cpp > > > src/hotspot/share/compiler/oopMap.cpp > > > src/hotspot/share/runtime/jniHandles.cpp > > > > > > * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. > > > However, this will go away in the future, when we have the next iteration of > > > C2's load barriers in place (aka "C2 late barrier insertion"). > > > > > > src/hotspot/share/runtime/stackValue.cpp > > > > > > * JVMTI - Adding an assert() to catch problems if the tagmap hashing is > > > changed in the future. > > > > > > src/hotspot/share/prims/jvmtiTagMap.cpp > > > > > > * Legal - Adding copyright/license for 3rd party hash function used in > > > ZHash. > > > > > > src/java.base/share/legal/c-libutl.md > > > > > > * SA - Adding basic ZGC support. 
> > > > > > src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* > > > > > > > > > Testing > > > ------- > > > > > > * Unit testing > > > > > > A number of new ZGC specific gtests have been added, in > > > test/hotspot/gtest/gc/z/ > > > > > > * Regression testing > > > > > > No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} > > > No new failures in Mach5, with ZGC disabled, tier{1,2,3} > > > > > > * Stress testing > > > > > > We have been continuously been running a number stress tests throughout > > > the development, these include: > > > > > > specjbb2000 > > > specjbb2005 > > > specjbb2015 > > > specjvm98 > > > specjvm2008 > > > dacapo2009 > > > test/hotspot/jtreg/gc/stress/gcold > > > test/hotspot/jtreg/gc/stress/systemgc > > > test/hotspot/jtreg/gc/stress/gclocker > > > test/hotspot/jtreg/gc/stress/gcbasher > > > test/hotspot/jtreg/gc/stress/finalizer > > > Kitchensink > > > > > > > > > Thanks! > > > > > > /Per, Stefan & the ZGC team From per.liden at oracle.com Thu Jun 7 06:31:28 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 7 Jun 2018 08:31:28 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <20180607061056.wqvwvtdemylrwznr@rbackman> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <20180606094805.ml34woy2x7apyrfs@rbackman> <2cadcbb6-ba38-18a0-b5ab-8c08fdedd972@oracle.com> <20180607061056.wqvwvtdemylrwznr@rbackman> Message-ID: <07a89839-e20a-8258-4509-c6c52c78a08b@oracle.com> Thanks Rickard! /Per On 06/07/2018 08:10 AM, Rickard B?ckman wrote: > Per, > > I agree with your comments. Consider it reviewed. > > /R > > On 06/06, Per Liden wrote: >> Hi Rickard, >> >> Thanks a lot for reviewing, much appreciated! Comments below. >> >> On 06/06/2018 11:48 AM, Rickard B?ckman wrote: >>> Hi, >>> >>> I've looked at the C2 parts of things with Nils by my side. >>> There are a couple of small things to note. >>> >>> classes.cpp misses an undef for optionalmacro. >> >> Will fix. >> >>> >>> compile.cpp the print_method should probably be within the {} of >>> macroExpand. >> >> Will fix. >> >>> >>> escape.cpp has two else if cases where the code looks very common. >>> Please make this into a function if possible? >> >> It would be possible, but I'm not sure it's a great idea in this case. The >> reason is that this seems to be the style in which these switch-statements >> are written here. Just looking at the case statements immediately above and >> below, they follow the same (duplication) pattern. >> >> In the first switch is looks like this: >> >> [...] >> case Op_Proj: { >> // we are only interested in the oop result projection from a call >> if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && >> n->in(0)->as_Call()->returns_pointer()) { >> add_local_var_and_edge(n, PointsToNode::NoEscape, >> n->in(0), delayed_worklist); >> } >> #if INCLUDE_ZGC >> else if (UseZGC) { >> if (n->as_Proj()->_con == LoadBarrierNode::Oop && >> n->in(0)->is_LoadBarrier()) { >> add_local_var_and_edge(n, PointsToNode::NoEscape, >> n->in(0)->in(LoadBarrierNode::Oop), delayed_worklist); >> } >> } >> #endif >> break; >> } >> case Op_Rethrow: // Exception object escapes >> case Op_Return: { >> if (n->req() > TypeFunc::Parms && >> igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { >> // Treat Return value as LocalVar with GlobalEscape escape state. >> add_local_var_and_edge(n, PointsToNode::GlobalEscape, >> n->in(TypeFunc::Parms), delayed_worklist); >> } >> break; >> } >> [...] 
>> >> And in the second switch it looks like this: >> >> [...] >> case Op_Proj: { >> // we are only interested in the oop result projection from a call >> if (n->as_Proj()->_con == TypeFunc::Parms && n->in(0)->is_Call() && >> n->in(0)->as_Call()->returns_pointer()) { >> add_local_var_and_edge(n, PointsToNode::NoEscape, n->in(0), NULL); >> break; >> } >> #if INCLUDE_ZGC >> else if (UseZGC) { >> if (n->as_Proj()->_con == LoadBarrierNode::Oop && >> n->in(0)->is_LoadBarrier()) { >> add_local_var_and_edge(n, PointsToNode::NoEscape, >> n->in(0)->in(LoadBarrierNode::Oop), NULL); >> break; >> } >> } >> #endif >> ELSE_FAIL("Op_Proj"); >> } >> case Op_Rethrow: // Exception object escapes >> case Op_Return: { >> if (n->req() > TypeFunc::Parms && >> _igvn->type(n->in(TypeFunc::Parms))->isa_oopptr()) { >> // Treat Return value as LocalVar with GlobalEscape escape state. >> add_local_var_and_edge(n, PointsToNode::GlobalEscape, >> n->in(TypeFunc::Parms), NULL); >> break; >> } >> ELSE_FAIL("Op_Return"); >> } >> [...] >> >> So it would maybe look a bit odd if we use a different style for the code we >> add, wouldn't you agree? >> >> Also, since our code is in #if INCLUDE_ZGC blocks, breaking this out would >> mean we would have to add a few more #if INCLUDE_ZGC blocks in hpp/cpp to >> protect the new function. So, unless you strongly object, I'd like to >> suggest that we keep it as is. >> >>> >>> opcodes.cpp misses an undef for optionalmacro. >> >> Will fix. >> >>> >>> In C2 in general, maybe BarrierSet::barrier_set()->barrier_set_c2() >>> coule be Compile::barrier_set()? >> >> I agree that these names are a bit long, but may I suggest that we don't do >> this as part of the ZGC patch? The reason is that there are already 21 >> pre-existing calls to BarrierSet::barrier_set()->barrier_set_c2() in >> src/hotspot/share/opto code (we're adding 4 more in our patch). There are >> another ~70 calls to the >> BarrierSet::barrier_set()->barrier_set_{c1,assembler}() functions throughout >> compiler/asm-related code. While shortening these names might be a good >> idea, I'd prefer if that was handled separately from the ZGC patch. Makes >> sense? >> >>> >>> Looks good, great work everyone! >> >> Thanks! And again, thanks for reviewing! >> >> /Per >> >>> >>> /R >>> >>> On 06/01, Per Liden wrote: >>>> Hi, >>>> >>>> Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency >>>> Garbage Collector (Experimental) >>>> >>>> Please see the JEP for more information about the project. The JEP is >>>> currently in state "Proposed to Target" for JDK 11. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>> >>>> Additional information in can also be found on the ZGC project wiki. >>>> >>>> https://wiki.openjdk.java.net/display/zgc/Main >>>> >>>> >>>> Webrevs >>>> ------- >>>> >>>> To make this easier to review, we've divided the change into two webrevs. >>>> >>>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>> >>>> This patch contains the actual ZGC implementation, the new unit tests and >>>> other changes needed in HotSpot. >>>> >>>> * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> This patch contains changes to existing tests needed by ZGC. >>>> >>>> >>>> Overview of Changes >>>> ------------------- >>>> >>>> Below follows a list of the files we add/modify in the master patch, with a >>>> short summary describing each group. >>>> >>>> * Build support - Making ZGC an optional feature. 
>>>> >>>> make/autoconf/hotspot.m4 >>>> make/hotspot/lib/JvmFeatures.gmk >>>> src/hotspot/share/utilities/macros.hpp >>>> >>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not >>>> currently offer a way to easily break this out). >>>> >>>> src/hotspot/cpu/x86/x86.ad >>>> src/hotspot/cpu/x86/x86_64.ad >>>> >>>> * C2 - Things that can't be easily abstracted out into ZGC specific code, >>>> most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) >>>> condition. There should only be two logic changes (one in idealKit.cpp and >>>> one in node.cpp) that are still active when ZGC is disabled. We believe >>>> these are low risk changes and should not introduce any real change i >>>> behavior when using other GCs. >>>> >>>> src/hotspot/share/adlc/formssel.cpp >>>> src/hotspot/share/opto/* >>>> src/hotspot/share/compiler/compilerDirectives.hpp >>>> >>>> * General GC+Runtime - Registering ZGC as a collector. >>>> >>>> src/hotspot/share/gc/shared/* >>>> src/hotspot/share/runtime/vmStructs.cpp >>>> src/hotspot/share/runtime/vm_operations.hpp >>>> src/hotspot/share/prims/whitebox.cpp >>>> >>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>> >>>> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>> >>>> * ZGC - The collector itself. >>>> >>>> src/hotspot/share/gc/z/* >>>> src/hotspot/cpu/x86/gc/z/* >>>> src/hotspot/os_cpu/linux_x86/gc/z/* >>>> test/hotspot/gtest/gc/z/* >>>> >>>> * JFR - Adding new event types. >>>> >>>> src/hotspot/share/jfr/* >>>> src/jdk.jfr/share/conf/jfr/* >>>> >>>> * Logging - Adding new log tags. >>>> >>>> src/hotspot/share/logging/* >>>> >>>> * Metaspace - Adding a friend declaration. >>>> >>>> src/hotspot/share/memory/metaspace.hpp >>>> >>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>> >>>> src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>> >>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>> >>>> src/hotspot/share/classfile/vmSymbols.cpp >>>> >>>> * Oop Verification - In four cases we disabled oop verification because it >>>> do not makes sense or is not applicable to a GC using load barriers. >>>> >>>> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>> src/hotspot/share/compiler/oopMap.cpp >>>> src/hotspot/share/runtime/jniHandles.cpp >>>> >>>> * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. >>>> However, this will go away in the future, when we have the next iteration of >>>> C2's load barriers in place (aka "C2 late barrier insertion"). >>>> >>>> src/hotspot/share/runtime/stackValue.cpp >>>> >>>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing is >>>> changed in the future. >>>> >>>> src/hotspot/share/prims/jvmtiTagMap.cpp >>>> >>>> * Legal - Adding copyright/license for 3rd party hash function used in >>>> ZHash. >>>> >>>> src/java.base/share/legal/c-libutl.md >>>> >>>> * SA - Adding basic ZGC support. 
>>>> >>>> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>> >>>> >>>> Testing >>>> ------- >>>> >>>> * Unit testing >>>> >>>> A number of new ZGC specific gtests have been added, in >>>> test/hotspot/gtest/gc/z/ >>>> >>>> * Regression testing >>>> >>>> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>> >>>> * Stress testing >>>> >>>> We have been continuously been running a number stress tests throughout >>>> the development, these include: >>>> >>>> specjbb2000 >>>> specjbb2005 >>>> specjbb2015 >>>> specjvm98 >>>> specjvm2008 >>>> dacapo2009 >>>> test/hotspot/jtreg/gc/stress/gcold >>>> test/hotspot/jtreg/gc/stress/systemgc >>>> test/hotspot/jtreg/gc/stress/gclocker >>>> test/hotspot/jtreg/gc/stress/gcbasher >>>> test/hotspot/jtreg/gc/stress/finalizer >>>> Kitchensink >>>> >>>> >>>> Thanks! >>>> >>>> /Per, Stefan & the ZGC team From stefan.karlsson at oracle.com Thu Jun 7 06:58:50 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 7 Jun 2018 08:58:50 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <25d96af3-3452-150c-8152-e58f1c1243be@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <12494192-b16d-55bc-120b-24d45cb34424@oracle.com> <17767bb7-91c6-3128-909d-29c85f0e9e04@oracle.com> <08ef1db7-b411-3a23-5cde-b5ad2d10e23a@oracle.com> <56e74aeb-17d4-ed17-fba2-bb7695fc513c@oracle.com> <25d96af3-3452-150c-8152-e58f1c1243be@oracle.com> Message-ID: On 2018-06-07 08:01, Jini George wrote: > Thank you for making the changes, Stefan. One minor nit: > > * in HSDB.java, I meant that > > "ZHeap"; > > needs to be changed to > > "ZHeap "; Fixed. > > Sorry for not making it clear! Everything else looks good. I don't need > to see another webrev. Thanks, StefanK > > Thanks, > Jini. > > > On 6/7/2018 11:00 AM, Stefan Karlsson wrote: >> Hi Jini, >> >> Thanks for reviewing this! >> >> Here are fixes to the nits and copyright years: >> ?? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.02.delta/ >> ?? http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ >> >> StefanK >> >> On 2018-06-07 06:57, Jini George wrote: >>> Hi Stefan, >>> >>> The changes look good overall. Please update the copyright year for >>> the changed files, where applicable. Some minor nits: >>> >>> * GCCause.java: >>> >>> There is an extra space before the newly added _z* enum values. >>> >>> * HSDB.java: >>> >>> Pls add a space after ZHeap in: >>> >>> anno ="ZHeap"; >>> >>> Thanks! >>> Jini >>> >>> >>> On 6/5/2018 8:44 PM, Stefan Karlsson wrote: >>>> Hi Jini, >>>> >>>> For this version experimental version of ZGC we only have basic SA >>>> support, so the collectLiveRegions feature is not implemented. >>>> >>>> Comments below: >>>> >>>> On 2018-06-05 14:50, Jini George wrote: >>>>> Hi Per, >>>>> >>>>> I have looked at only the SA portion. Some comments on that: >>>>> >>>>> ==>? share/classes/sun/jvm/hotspot/oops/ObjectHeap.java >>>>> >>>>> The method collectLiveRegions() would need to include code to >>>>> iterate through the Zpages, and collect the live regions. >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/HSDB.java >>>>> >>>>> The addAnnotation() method needs to handle the case of collHeap >>>>> being an instance of ZCollectedHeap to avoid "Unknown generation" >>>>> being displayed while displaying the Stack Memory for a mutator >>>>> thread. >>>> >>>> Fixed. 
>>>> >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCCause.java >>>>> >>>>> To the GCCause enum, it would be good to add the equivalents of the >>>>> following GC causes. (though at this point, GCCause seems unused >>>>> within SA). >>>>> >>>>> ???? _z_timer, >>>>> ???? _z_warmup, >>>>> ???? _z_allocation_rate, >>>>> ???? _z_allocation_stall, >>>>> ???? _z_proactive, >>>> >>>> Fixed. >>>> >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/gc/shared/GCName.java >>>>> >>>>> Similarly, it would be good to add the equivalent of 'Z' in the >>>>> GCName enum. >>>> >>>> Fixed. >>>> >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java >>>>> >>>>> Again, it would be good to add 'ZOperation' to the VMOps enum >>>>> (though it looks like it is already not in sync). >>>> >>>> Fixed. >>>> >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/tools/HeapSummary.java >>>>> >>>>> The run() method would need to handle the ZGC case too to avoid the >>>>> unknown CollectedHeap type exception with jhsdb jmap -heap: >>>>> >>>>> Also, the printGCAlgorithm() method would need to be updated to >>>>> read in the UseZGC flag to avoid the default "Mark Sweep Compact >>>>> GC" being displayed with jhsdb jmap -heap. >>>> >>>> Fixed. >>>> >>>>> >>>>> ==> share/classes/sun/jvm/hotspot/gc/z/ZHeap.java >>>>> >>>>> It would be great if printOn() (for the clhsdb command 'universe') >>>>> would print the address range of the java heap as we have in other >>>>> GCs (with ZAddressSpaceStart and ZAddressSpaceEnd?) >>>> >>>> ZGC uses three fixed 4 TB reserved memory ranges (on Linux x64). I >>>> don't think it's as important to print these ranges as it is for the >>>> other GCs. >>>> >>>>> >>>>> ==> test/hotspot/jtreg/serviceability/sa/TestUniverse.java >>>>> Please modify the above test to include zgc or include a separate >>>>> SA test to test the universe output for zgc. >>>> >>>> Fixed. >>>> >>>> Here's a quick webrev of your suggested changes: >>>> http://cr.openjdk.java.net/~stefank/8204210/webrev.sa.01/ >>>> >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> Thank you, >>>>> Jini. >>>>> >>>>> >>>>> On 6/2/2018 3:11 AM, Per Liden wrote: >>>>>> Hi, >>>>>> >>>>>> Please review the implementation of JEP 333: ZGC: A Scalable >>>>>> Low-Latency Garbage Collector (Experimental) >>>>>> >>>>>> Please see the JEP for more information about the project. The JEP >>>>>> is currently in state "Proposed to Target" for JDK 11. >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>>>> >>>>>> Additional information in can also be found on the ZGC project wiki. >>>>>> >>>>>> https://wiki.openjdk.java.net/display/zgc/Main >>>>>> >>>>>> >>>>>> Webrevs >>>>>> ------- >>>>>> >>>>>> To make this easier to review, we've divided the change into two >>>>>> webrevs. >>>>>> >>>>>> * ZGC Master: >>>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>>>> >>>>>> ?? This patch contains the actual ZGC implementation, the new unit >>>>>> tests and other changes needed in HotSpot. >>>>>> >>>>>> * ZGC Testing: >>>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>>>> >>>>>> ?? This patch contains changes to existing tests needed by ZGC. >>>>>> >>>>>> >>>>>> Overview of Changes >>>>>> ------------------- >>>>>> >>>>>> Below follows a list of the files we add/modify in the master >>>>>> patch, with a short summary describing each group. >>>>>> >>>>>> * Build support - Making ZGC an optional feature. >>>>>> >>>>>> ?? make/autoconf/hotspot.m4 >>>>>> ?? make/hotspot/lib/JvmFeatures.gmk >>>>>> ?? 
src/hotspot/share/utilities/macros.hpp >>>>>> >>>>>> * C2 AD file - Additions needed to generate ZGC load barriers >>>>>> (adlc does not currently offer a way to easily break this out). >>>>>> >>>>>> ?? src/hotspot/cpu/x86/x86.ad >>>>>> ?? src/hotspot/cpu/x86/x86_64.ad >>>>>> >>>>>> * C2 - Things that can't be easily abstracted out into ZGC >>>>>> specific code, most of which is guarded behind a #if INCLUDE_ZGC >>>>>> and/or if (UseZGC) condition. There should only be two logic >>>>>> changes (one in idealKit.cpp and one in node.cpp) that are still >>>>>> active when ZGC is disabled. We believe these are low risk changes >>>>>> and should not introduce any real change i behavior when using >>>>>> other GCs. >>>>>> >>>>>> ?? src/hotspot/share/adlc/formssel.cpp >>>>>> ?? src/hotspot/share/opto/* >>>>>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>>>>> >>>>>> * General GC+Runtime - Registering ZGC as a collector. >>>>>> >>>>>> ?? src/hotspot/share/gc/shared/* >>>>>> ?? src/hotspot/share/runtime/vmStructs.cpp >>>>>> ?? src/hotspot/share/runtime/vm_operations.hpp >>>>>> ?? src/hotspot/share/prims/whitebox.cpp >>>>>> >>>>>> * GC thread local data - Increasing the size of data area by 32 >>>>>> bytes. >>>>>> >>>>>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>>>> >>>>>> * ZGC - The collector itself. >>>>>> >>>>>> ?? src/hotspot/share/gc/z/* >>>>>> ?? src/hotspot/cpu/x86/gc/z/* >>>>>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>>>>> ?? test/hotspot/gtest/gc/z/* >>>>>> >>>>>> * JFR - Adding new event types. >>>>>> >>>>>> ?? src/hotspot/share/jfr/* >>>>>> ?? src/jdk.jfr/share/conf/jfr/* >>>>>> >>>>>> * Logging - Adding new log tags. >>>>>> >>>>>> ?? src/hotspot/share/logging/* >>>>>> >>>>>> * Metaspace - Adding a friend declaration. >>>>>> >>>>>> ?? src/hotspot/share/memory/metaspace.hpp >>>>>> >>>>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>>>> >>>>>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>>>> >>>>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>>>> >>>>>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>>>>> >>>>>> * Oop Verification - In four cases we disabled oop verification >>>>>> because it do not makes sense or is not applicable to a GC using >>>>>> load barriers. >>>>>> >>>>>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>>>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>>>> ?? src/hotspot/share/compiler/oopMap.cpp >>>>>> ?? src/hotspot/share/runtime/jniHandles.cpp >>>>>> >>>>>> * StackValue - Apply a load barrier in case of OSR. This is a bit >>>>>> of a hack. However, this will go away in the future, when we have >>>>>> the next iteration of C2's load barriers in place (aka "C2 late >>>>>> barrier insertion"). >>>>>> >>>>>> ?? src/hotspot/share/runtime/stackValue.cpp >>>>>> >>>>>> * JVMTI - Adding an assert() to catch problems if the tagmap >>>>>> hashing is changed in the future. >>>>>> >>>>>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>>>>> >>>>>> * Legal - Adding copyright/license for 3rd party hash function >>>>>> used in ZHash. >>>>>> >>>>>> ?? src/java.base/share/legal/c-libutl.md >>>>>> >>>>>> * SA - Adding basic ZGC support. >>>>>> >>>>>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>>>> >>>>>> >>>>>> Testing >>>>>> ------- >>>>>> >>>>>> * Unit testing >>>>>> >>>>>> ?? A number of new ZGC specific gtests have been added, in >>>>>> test/hotspot/gtest/gc/z/ >>>>>> >>>>>> * Regression testing >>>>>> >>>>>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>>>> ?? 
No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>>>> >>>>>> * Stress testing >>>>>> >>>>>> ?? We have been continuously been running a number stress tests >>>>>> throughout the development, these include: >>>>>> >>>>>> ???? specjbb2000 >>>>>> ???? specjbb2005 >>>>>> ???? specjbb2015 >>>>>> ???? specjvm98 >>>>>> ???? specjvm2008 >>>>>> ???? dacapo2009 >>>>>> ???? test/hotspot/jtreg/gc/stress/gcold >>>>>> ???? test/hotspot/jtreg/gc/stress/systemgc >>>>>> ???? test/hotspot/jtreg/gc/stress/gclocker >>>>>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>>>>> ???? test/hotspot/jtreg/gc/stress/finalizer >>>>>> ???? Kitchensink >>>>>> >>>>>> >>>>>> Thanks! >>>>>> >>>>>> /Per, Stefan & the ZGC team >> >> From rene.schuenemann at gmail.com Thu Jun 7 07:29:59 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Thu, 7 Jun 2018 09:29:59 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error Message-ID: Hi, can I please get a review for the following change: Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ This change counts linkage errors and prints the number of linkage errors thrown in the Exceptions::print_exception_counts_on_error, which is used when writing the hs_error file. Thank you, Rene From thomas.stuefe at gmail.com Thu Jun 7 07:33:47 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 7 Jun 2018 09:33:47 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: Hi Rene, the bug description is empty? Best Regards, Thomas On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 > Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ > > This change counts linkage errors and prints the number of linkage > errors thrown in the Exceptions::print_exception_counts_on_error, > which is used when writing the hs_error file. > > Thank you, > Rene From rene.schuenemann at gmail.com Thu Jun 7 07:37:47 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Thu, 7 Jun 2018 09:37:47 +0200 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary Message-ID: Hi, can I please get a review for the following change: Bug: https://bugs.openjdk.java.net/browse/JDK-8204476 Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/01/ This change adds the following: (1) In CodeCache::print_summary prints the code cache full count for each code heap. (2) Adds additional counters to class CompileBroker: _total_compiler_stopped_count: The number of times the compilation has been stopped. _total_compiler_restarted_count: The number of times the compilation has been restarted. This counters are also added to CodeCache::print_summary. Thank you, Rene From stefan.karlsson at oracle.com Thu Jun 7 07:42:41 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 7 Jun 2018 09:42:41 +0200 Subject: RFR: 8204474: Have instanceRefKlass use HeapAccess when loading the referent In-Reply-To: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> References: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> Message-ID: <2b18bb12-949f-7c38-e96d-6cfd1cd1e4b7@oracle.com> Looks good. 
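For readers following the 8204474 review quoted below, here is a minimal sketch of what a decorated referent load through the Access API can look like. The decorator choices (ON_WEAK_OOP_REF vs. ON_PHANTOM_OOP_REF, plus AS_NO_KEEPALIVE) are assumptions for illustration only; the exact decorator sets used by the patch are in the webrev and may differ.

  // Sketch only -- decorators are assumptions, not copied from the webrev.
  // Assumes the usual HotSpot context (oops/access.hpp, classfile/javaClasses.hpp).
  static oop load_referent(oop reference, ReferenceType type) {
    if (type == REF_PHANTOM) {
      return HeapAccess<ON_PHANTOM_OOP_REF | AS_NO_KEEPALIVE>::oop_load_at(
        reference, java_lang_ref_Reference::referent_offset);
    } else {
      return HeapAccess<ON_WEAK_OOP_REF | AS_NO_KEEPALIVE>::oop_load_at(
        reference, java_lang_ref_Reference::referent_offset);
    }
  }

The point of routing the load through HeapAccess rather than RawAccess is that a collector such as ZGC can attach the appropriate (weak- or phantom-strength, non-keep-alive) load barrier at this site.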
StefanK On 2018-06-06 16:16, Per Liden wrote: > Hi, > > To support concurrent reference processing in ZGC, > instanceRefKlass::try_discover() can no longer use RawAccess to load the > referent field. Instead it should use > > ? HeapAccess > > or > > ? HeapAccess > > depending on the reference type. This patch also adjusts > InstanceRefKlass::trace_reference_gc() for the same reason. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204474 > Webrev: http://cr.openjdk.java.net/~pliden/8204474/webrev.0 > > Testing: This patch has been part of the ZGC repository for quite some > time and gone through various testing, including tier{1,2,3,4,5,6} in > mach5. > > /Per From per.liden at oracle.com Thu Jun 7 07:46:22 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 7 Jun 2018 09:46:22 +0200 Subject: RFR: 8204474: Have instanceRefKlass use HeapAccess when loading the referent In-Reply-To: <2b18bb12-949f-7c38-e96d-6cfd1cd1e4b7@oracle.com> References: <80a4d595-6dd1-b4ad-352d-3ddb36242e1e@oracle.com> <2b18bb12-949f-7c38-e96d-6cfd1cd1e4b7@oracle.com> Message-ID: <2145477f-0a3a-701c-5a08-78a8cb430006@oracle.com> Thanks! /Per On 06/07/2018 09:42 AM, Stefan Karlsson wrote: > Looks good. > > StefanK > > On 2018-06-06 16:16, Per Liden wrote: >> Hi, >> >> To support concurrent reference processing in ZGC, >> instanceRefKlass::try_discover() can no longer use RawAccess to load >> the referent field. Instead it should use >> >> ?? HeapAccess >> >> or >> >> ?? HeapAccess >> >> depending on the reference type. This patch also adjusts >> InstanceRefKlass::trace_reference_gc() for the same reason. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204474 >> Webrev: http://cr.openjdk.java.net/~pliden/8204474/webrev.0 >> >> Testing: This patch has been part of the ZGC repository for quite some >> time and gone through various testing, including tier{1,2,3,4,5,6} in >> mach5. >> >> /Per From rene.schuenemann at gmail.com Thu Jun 7 07:48:22 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Thu, 7 Jun 2018 09:48:22 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: Hi Thomas, I have added the bug description. Rene On Thu, Jun 7, 2018 at 9:33 AM, Thomas St?fe wrote: > Hi Rene, > > the bug description is empty? > > Best Regards, Thomas > > > On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann > wrote: >> Hi, >> >> can I please get a review for the following change: >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 >> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ >> >> This change counts linkage errors and prints the number of linkage >> errors thrown in the Exceptions::print_exception_counts_on_error, >> which is used when writing the hs_error file. >> >> Thank you, >> Rene From erik.osterlund at oracle.com Thu Jun 7 08:29:45 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Thu, 7 Jun 2018 10:29:45 +0200 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error. In-Reply-To: References: Message-ID: Hi Andrew, I agree; it would be preferable to fix this in the ad file. Thanks, /Erik > On 6 Jun 2018, at 11:41, Andrew Haley wrote: > >> On 06/05/2018 04:41 PM, Zhongwei Yao wrote: >> Hi, >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8204331 >> >> Webrev: >> http://cr.openjdk.java.net/~zyao/8204331/webrev.00/ >> >> This patch fixes an assertion error on aarch64 in several jtreg tests. 
>> >> The failure assertion is in needs_acquiring_load_exclusive() in aarch64.ad when checking whether the graph is in "leading_to_normal" shape. The abnormal shape is generated in LibraryCallKit::inline_unsafe_load_store(). This patch fixes it by swap the order of "Pin SCMProj node" and "Insert post barrier" in LibraryCallKit::inline_unsafe_load_store(). >> > > I don't think this is the right way to fix it. It'd be better to > fix the code in aarhc64.ad. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From magnus.ihse.bursie at oracle.com Thu Jun 7 09:41:41 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Thu, 7 Jun 2018 11:41:41 +0200 Subject: ARM port consolidation In-Reply-To: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> References: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> Message-ID: From a build perspective, it would certainly simplify things if we have just a single port, so I'm in favour of this proposal. /Magnus On 2018-06-05 06:24, David Holmes wrote: > Hi Bob, > > Looping in porters-dev, aarch32-port-dev and aarch64-port-dev. > > I think this is a good idea. > > Thanks, > David > > On 5/06/2018 6:34 AM, Bob Vandette wrote: >> During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit >> ARM ports and contributed them to OpenJDK.? These ports have been >> used for >> years in the embedded and mobile market, making them very stable and >> having the benefit of a single source base which can produce both 32 and >> 64-bit binaries.? The downside of this contribution is that it resulted >> in two 64-bit ARM implementations being available in OpenJDK. >> >> I'd like to propose that we eliminate one of the 64-bit ARM ports and >> encourage everyone to enhance and support the remaining 32 and 64 bit >> ARM ports. This would avoid the creation of yet another port for >> these chip >> architectures.? The reduction of competing ports will allow everyone >> to focus their attention on a single 64-bit port rather than diluting >> our efforts.? This will result in a higher quality and a more performant >> implementation. >> >> The community at large (especially RedHat, BellSoft, Linaro and Cavium) >> have done a great job of enhancing and keeping the AArch64 port up to >> date with current and new Hotspot features.? As a result, I propose that >> we standardize the 64-bit ARM implementation on this port. >> >> If there are no objections, I will file a JEP to remove the 64-bit ARM >> port sources that reside in jdk/open/src/hotspot/src/cpu/arm >> along with any build logic.? This will leave the Oracle contributed >> 32-bit ARM port and the AArch64 64-bit ARM port. >> >> Let me know what you all think, >> Bob Vandette >> >> From erik.osterlund at oracle.com Thu Jun 7 14:27:24 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Jun 2018 16:27:24 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 Message-ID: <5B1940CC.6000402@oracle.com> Hi, The recent allocation path modularization (8202776) broke JFR TLAB sampling. This was discovered in tier 5 testing. The problem is that there was previously an early exit TLAB path, that should not run the tracing code when not returning NULL, and a mem_allocate call that should run the tracing code when not returning NULL. 
However, these paths were joined in a virtual member function, making them look the same to the tracing code, which caused the non-TLAB tracing code to be run on TLAB allocations as well. The solution I propose is to move the TLAB tracing code into the new virtual member function. It seems that whatever GC overrides this code, should also decide what to do about the tracing code there anyway. Webrev: http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8204554 I have run failing tests locally to verify they have been fixed with the proposed patch. I am now running hs-tier1-3 as well, but am waiting for the results. Thanks, /Erik From rkennke at redhat.com Thu Jun 7 14:41:18 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 7 Jun 2018 16:41:18 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <5B1940CC.6000402@oracle.com> References: <5B1940CC.6000402@oracle.com> Message-ID: <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> Hi Erik, > The recent allocation path modularization (8202776) broke JFR TLAB > sampling. This was discovered in tier 5 testing. > > The problem is that there was previously an early exit TLAB path, that > should not run the tracing code when not returning NULL, and a > mem_allocate call that should run the tracing code when not returning > NULL. However, these paths were joined in a virtual member function, > making them look the same to the tracing code, which caused the non-TLAB > tracing code to be run on TLAB allocations as well. > > The solution I propose is to move the TLAB tracing code into the new > virtual member function. It seems that whatever GC overrides this code, > should also decide what to do about the tracing code there anyway. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8204554 > > I have run failing tests locally to verify they have been fixed with the > proposed patch. I am now running hs-tier1-3 as well, but am waiting for > the results. This looks good to me. Thank you! Roman From bob.vandette at oracle.com Thu Jun 7 14:48:16 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 7 Jun 2018 10:48:16 -0400 Subject: ARM port consolidation In-Reply-To: <290c14ed-f7c7-00d5-41ff-a335b1c7bac3@oracle.com> References: <247c1b0c-a3f6-57e3-f00b-2d9a1488213e@oracle.com> <290c14ed-f7c7-00d5-41ff-a335b1c7bac3@oracle.com> Message-ID: <794BFAA0-6FCB-4730-9A89-B6B3F3D728BF@oracle.com> I agree with David. I do know that our implementation does run in AArch32 mode and it should be very easy to add dynamic AArch32 detection in order to make use of the few new AArch32 specific instructions such as the memory barrier instructions (LDAR/STLR). Since our current port already contains ARMv8 instruction pneumonics, we are already 1/2 way there. Bob. > On Jun 7, 2018, at 12:56 AM, David Holmes wrote: > > Hi Gil, > > On 7/06/2018 2:23 PM, Gil Tene wrote: >> This makes sense to me on the Aarch64 side. >> However, on the ARM32 side, I don't think the situation is as straightforward as >> what is being presented below, and I think more discussion and exploration of >> alternatives is needed. >> Much like with AArch64, there is an existing, active, community-developed and >> community-supported AArch32 port in OpenJDK that predates Oracle's open >> sourcing of their ARM32 version. 
That port is being used by multiple downstream >> builds and, at least for the past year+, it seems to have had more attention and >> ongoing engineering commitment around it than the Oracle variant. > > To clarify: > > "AArch32 is the 32-bit sub-architecture within the ARMv8 architecture. The port will be fully compatible with ARMv7 and may support ARMv6 depending on community interest." [1] > > whereas the 32-bit ARM port that Oracle contributed is for ARMv5, v6 and v7. There's obviously some overlap. If the Aarch32 project reaches a point (like Aarch64) where it is desirable to bring it into the mainline OpenJDK then that would seem like the opportune time to reevaluate the co-existence (or not) of the two ports. > > David > > [1] http://openjdk.java.net/projects/aarch32-port/ > >> Before making a choice of one AArch32 port vs the other (if such a choice >> even needs to be made), I would like to hear more about the resources being >> committed towards maintaining each, keeping each up to date, testing them on >> various platforms (e.g. including building, testing, and supporting the popular >> softfloat ABI variants imposed by some OS packages) and working on bug >> fixes as needs appear. >> ? Gil. >>> On Jun 4, 2018, at 6:24 PM, David Holmes wrote: >>> >>> Hi Bob, >>> >>> Looping in porters-dev, aarch32-port-dev and aarch64-port-dev. >>> >>> I think this is a good idea. >>> >>> Thanks, >>> David >>> >>> On 5/06/2018 6:34 AM, Bob Vandette wrote: >>>> During the JDK 9 time frame, Oracle open sourced its 32-bit and 64-bit >>>> ARM ports and contributed them to OpenJDK. These ports have been used for >>>> years in the embedded and mobile market, making them very stable and >>>> having the benefit of a single source base which can produce both 32 and >>>> 64-bit binaries. The downside of this contribution is that it resulted >>>> in two 64-bit ARM implementations being available in OpenJDK. >>>> I'd like to propose that we eliminate one of the 64-bit ARM ports and >>>> encourage everyone to enhance and support the remaining 32 and 64 bit >>>> ARM ports. This would avoid the creation of yet another port for these chip >>>> architectures. The reduction of competing ports will allow everyone >>>> to focus their attention on a single 64-bit port rather than diluting >>>> our efforts. This will result in a higher quality and a more performant >>>> implementation. >>>> The community at large (especially RedHat, BellSoft, Linaro and Cavium) >>>> have done a great job of enhancing and keeping the AArch64 port up to >>>> date with current and new Hotspot features. As a result, I propose that >>>> we standardize the 64-bit ARM implementation on this port. >>>> If there are no objections, I will file a JEP to remove the 64-bit ARM >>>> port sources that reside in jdk/open/src/hotspot/src/cpu/arm >>>> along with any build logic. This will leave the Oracle contributed >>>> 32-bit ARM port and the AArch64 64-bit ARM port. >>>> Let me know what you all think, >>>> Bob Vandette From erik.osterlund at oracle.com Thu Jun 7 14:57:14 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Jun 2018 16:57:14 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> References: <5B1940CC.6000402@oracle.com> <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> Message-ID: <5B1947CA.2010707@oracle.com> Hi Roman, Thanks for the review. 
StefanK thought it would be nicer to move the outside of TLAB path into a new function. So I did that. Hope you agree that is a good idea. If you want to override that behaviour you can still override obj_allocate_raw. Full webrev: cr.openjdk.java.net/~eosterlund/8204554/webrev.01/ Incremental webrev: cr.openjdk.java.net/~eosterlund/8204554/webrev.00_01/ Checked again that tests pass now, and it passes. Thanks, /Erik On 2018-06-07 16:41, Roman Kennke wrote: > Hi Erik, > >> The recent allocation path modularization (8202776) broke JFR TLAB >> sampling. This was discovered in tier 5 testing. >> >> The problem is that there was previously an early exit TLAB path, that >> should not run the tracing code when not returning NULL, and a >> mem_allocate call that should run the tracing code when not returning >> NULL. However, these paths were joined in a virtual member function, >> making them look the same to the tracing code, which caused the non-TLAB >> tracing code to be run on TLAB allocations as well. >> >> The solution I propose is to move the TLAB tracing code into the new >> virtual member function. It seems that whatever GC overrides this code, >> should also decide what to do about the tracing code there anyway. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8204554 >> >> I have run failing tests locally to verify they have been fixed with the >> proposed patch. I am now running hs-tier1-3 as well, but am waiting for >> the results. > This looks good to me. Thank you! > > Roman > From erik.osterlund at oracle.com Thu Jun 7 15:00:44 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Jun 2018 17:00:44 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <5B1947CA.2010707@oracle.com> References: <5B1940CC.6000402@oracle.com> <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> <5B1947CA.2010707@oracle.com> Message-ID: <5B19489C.8@oracle.com> Hi, StefanK says this looks good, but is unavailable for emailing right now. One more review to go. Thanks, /Erik On 2018-06-07 16:57, Erik ?sterlund wrote: > Hi Roman, > > Thanks for the review. > StefanK thought it would be nicer to move the outside of TLAB path > into a new function. So I did that. Hope you agree that is a good > idea. If you want to override that behaviour you can still override > obj_allocate_raw. > > Full webrev: > cr.openjdk.java.net/~eosterlund/8204554/webrev.01/ > > Incremental webrev: > cr.openjdk.java.net/~eosterlund/8204554/webrev.00_01/ > > Checked again that tests pass now, and it passes. > > Thanks, > /Erik > > On 2018-06-07 16:41, Roman Kennke wrote: >> Hi Erik, >> >>> The recent allocation path modularization (8202776) broke JFR TLAB >>> sampling. This was discovered in tier 5 testing. >>> >>> The problem is that there was previously an early exit TLAB path, that >>> should not run the tracing code when not returning NULL, and a >>> mem_allocate call that should run the tracing code when not returning >>> NULL. However, these paths were joined in a virtual member function, >>> making them look the same to the tracing code, which caused the >>> non-TLAB >>> tracing code to be run on TLAB allocations as well. >>> >>> The solution I propose is to move the TLAB tracing code into the new >>> virtual member function. It seems that whatever GC overrides this code, >>> should also decide what to do about the tracing code there anyway. 
>>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8204554 >>> >>> I have run failing tests locally to verify they have been fixed with >>> the >>> proposed patch. I am now running hs-tier1-3 as well, but am waiting for >>> the results. >> This looks good to me. Thank you! >> >> Roman >> > From rkennke at redhat.com Thu Jun 7 15:11:21 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 7 Jun 2018 17:11:21 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <5B1947CA.2010707@oracle.com> References: <5B1940CC.6000402@oracle.com> <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> <5B1947CA.2010707@oracle.com> Message-ID: <858e9add-804c-7d92-6445-eb0e21979811@redhat.com> Looks good. Go! Thanks, Roman > Hi Roman, > > Thanks for the review. > StefanK thought it would be nicer to move the outside of TLAB path into > a new function. So I did that. Hope you agree that is a good idea. If > you want to override that behaviour you can still override > obj_allocate_raw. > > Full webrev: > cr.openjdk.java.net/~eosterlund/8204554/webrev.01/ > > Incremental webrev: > cr.openjdk.java.net/~eosterlund/8204554/webrev.00_01/ > > Checked again that tests pass now, and it passes. > > Thanks, > /Erik > > On 2018-06-07 16:41, Roman Kennke wrote: >> Hi Erik, >> >>> The recent allocation path modularization (8202776) broke JFR TLAB >>> sampling. This was discovered in tier 5 testing. >>> >>> The problem is that there was previously an early exit TLAB path, that >>> should not run the tracing code when not returning NULL, and a >>> mem_allocate call that should run the tracing code when not returning >>> NULL. However, these paths were joined in a virtual member function, >>> making them look the same to the tracing code, which caused the non-TLAB >>> tracing code to be run on TLAB allocations as well. >>> >>> The solution I propose is to move the TLAB tracing code into the new >>> virtual member function. It seems that whatever GC overrides this code, >>> should also decide what to do about the tracing code there anyway. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8204554 >>> >>> I have run failing tests locally to verify they have been fixed with the >>> proposed patch. I am now running hs-tier1-3 as well, but am waiting for >>> the results. >> This looks good to me. Thank you! >> >> Roman >> > From erik.osterlund at oracle.com Thu Jun 7 15:20:25 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Jun 2018 17:20:25 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <858e9add-804c-7d92-6445-eb0e21979811@redhat.com> References: <5B1940CC.6000402@oracle.com> <0a160288-8438-c0a3-5764-fdb71d44fb39@redhat.com> <5B1947CA.2010707@oracle.com> <858e9add-804c-7d92-6445-eb0e21979811@redhat.com> Message-ID: <5B194D39.5060008@oracle.com> Hi Roman, Thanks for the review. /Erik On 2018-06-07 17:11, Roman Kennke wrote: > Looks good. Go! > > Thanks, > Roman > >> Hi Roman, >> >> Thanks for the review. >> StefanK thought it would be nicer to move the outside of TLAB path into >> a new function. So I did that. Hope you agree that is a good idea. If >> you want to override that behaviour you can still override >> obj_allocate_raw. 
>> >> Full webrev: >> cr.openjdk.java.net/~eosterlund/8204554/webrev.01/ >> >> Incremental webrev: >> cr.openjdk.java.net/~eosterlund/8204554/webrev.00_01/ >> >> Checked again that tests pass now, and it passes. >> >> Thanks, >> /Erik >> >> On 2018-06-07 16:41, Roman Kennke wrote: >>> Hi Erik, >>> >>>> The recent allocation path modularization (8202776) broke JFR TLAB >>>> sampling. This was discovered in tier 5 testing. >>>> >>>> The problem is that there was previously an early exit TLAB path, that >>>> should not run the tracing code when not returning NULL, and a >>>> mem_allocate call that should run the tracing code when not returning >>>> NULL. However, these paths were joined in a virtual member function, >>>> making them look the same to the tracing code, which caused the non-TLAB >>>> tracing code to be run on TLAB allocations as well. >>>> >>>> The solution I propose is to move the TLAB tracing code into the new >>>> virtual member function. It seems that whatever GC overrides this code, >>>> should also decide what to do about the tracing code there anyway. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8204554/webrev.00/ >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8204554 >>>> >>>> I have run failing tests locally to verify they have been fixed with the >>>> proposed patch. I am now running hs-tier1-3 as well, but am waiting for >>>> the results. >>> This looks good to me. Thank you! >>> >>> Roman >>> > From erik.osterlund at oracle.com Thu Jun 7 15:32:02 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Jun 2018 17:32:02 +0200 Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds Message-ID: <5B194FF2.3060103@oracle.com> Hi, Recent changes to arraycopying (8198285) broke slowdebug builds on windows and solaris. The problem is that the RawAccessBarrierArrayCopy::arraycopy function is expanded for a whole bunch of different new cases after JNI code started using this API. 
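To illustrate the mechanism with made-up names (a minimal sketch, not the real Access/RawAccess code): with the if-statement pattern the statically dead branch is still template expanded, so an unoptimized build wants a definition of the callee for every instantiation, whereas SFINAE-style overloads simply do not exist for the dead cases.

  #include <type_traits>
  #include <cstddef>

  // Made-up stand-in for a copy routine that is only defined for some types.
  template <typename T>
  void copy_atomic(T* src, T* dst, size_t len);

  // If-statement pattern: even when 'atomic' is statically false, the call to
  // copy_atomic is compiled into every expansion, so a slowdebug (-O0) build
  // needs a definition of copy_atomic<T> for every T this is used with.
  template <bool atomic, typename T>
  void copy_if_style(T* src, T* dst, size_t len) {
    if (atomic) {
      copy_atomic(src, dst, len);
    } else {
      for (size_t i = 0; i < len; i++) dst[i] = src[i];
    }
  }

  // SFINAE pattern: the atomic overload is not a viable candidate when
  // 'atomic' is false, so the dead case is never expanded and never reaches
  // the linker.
  template <bool atomic, typename T>
  typename std::enable_if<atomic>::type copy_sfinae(T* src, T* dst, size_t len) {
    copy_atomic(src, dst, len);
  }

  template <bool atomic, typename T>
  typename std::enable_if<!atomic>::type copy_sfinae(T* src, T* dst, size_t len) {
    for (size_t i = 0; i < len; i++) dst[i] = src[i];
  }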
The reported linking problems: void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o ...are all variants that the backend does not yet support as there are no current uses of it. And there really are no current uses of these still - these are all false positives. However, the code that currently chooses whether to use arrayof, conjoing, disjoint heap words, with possibly atomic variations, is all checked with if statements. But each case of the if statements are compiled in the template expansion despite being statically known to be dead code in a whole bunch of template expansions. The optimized code generation is clever enough to just ignore that dead code, while the slowdebug builds on windows and solaris complain that this dead code (that was only spuriously expanded by accident but is never called) does not exist. The solution I propose to this is to fold away the different cases the linker is complaining about using SFINAE instead of if statements. That way, they are never template expanded spuriously when it is statically known that they will not be called; they are never considered as valid overloads. I have verified on a Solaris x86 machine that it did not build before, but builds fine with this patch applied. Webrev: http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8204504 Thanks, /Erik From vladimir.kozlov at oracle.com Thu Jun 7 16:20:11 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 7 Jun 2018 09:20:11 -0700 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: <7340f2d0-7998-1158-aaf9-fa2da97106a9@oracle.com> Hi Ren?, Change look good except changes to main print where you added full_count. This output is used by several tests and they will fail. 
Can you move it into your new print statement instead? Thanks, Vladimir On 6/7/18 12:37 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204476 > Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/01/ > > This change adds the following: > > (1) In CodeCache::print_summary prints the code cache full count for > each code heap. > > (2) Adds additional counters to class CompileBroker: > _total_compiler_stopped_count: The number of times the compilation > has been stopped. > _total_compiler_restarted_count: The number of times the > compilation has been restarted. > This counters are also added to CodeCache::print_summary. > > > Thank you, > Rene > From bob.vandette at oracle.com Thu Jun 7 17:43:23 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 7 Jun 2018 13:43:23 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: Can I get one more reviewer for this RFE so I can integrate it? > http://cr.openjdk.java.net/~bobv/8203357/webrev.01 Mandy Chung has reviewed this change. I?ve run Mach5 hotspot and core lib tests. I?ve reviewed the tests which were written by Harsha Wardhana I filed a CSR for the command line change and it?s now approved and closed. Thanks, Bob. > On May 30, 2018, at 3:45 PM, Bob Vandette wrote: > > Please review the following RFE which adds an internal API, along with jtreg tests that provide > access to Docker container configuration data and metrics. In addition to the API which we hope to > take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional > option to -XshowSettings:system than dumps out the container or host cgroup confguration > information. See the sample output below: > > RFE: Container Metrics > > https://bugs.openjdk.java.net/browse/JDK-8203357 > > WEBREV: > > http://cr.openjdk.java.net/~bobv/8203357/webrev.01 > > > This commit will also include a fix for the following bug. 
> > BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails > > https://bugs.openjdk.java.net/browse/JDK-8203691 > > WEBREV: > > http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html > > SAMPLE USAGE and OUTPUT: > > docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash > ./java -XshowSettings:system > Operating System Metrics: > Provider: cgroupv1 > Effective CPU Count: 4 > CPU Period: 100000 > CPU Quota: -1 > CPU Shares: -1 > List of Processors, 4 total: > 4 5 6 7 > List of Effective Processors, 4 total: > 4 5 6 7 > List of Memory Nodes, 2 total: > 0 1 > List of Available Memory Nodes, 2 total: > 0 1 > CPUSet Memory Pressure Enabled: false > Memory Limit: 256.00M > Memory Soft Limit: Unlimited > Memory & Swap Limit: 512.00M > Kernel Memory Limit: Unlimited > TCP Memory Limit: Unlimited > Out Of Memory Killer Enabled: true > > TEST RESULTS: > > testing runtime container APIs > Directory "JTwork" not found: creating > Passed: runtime/containers/cgroup/PlainRead.java > Passed: runtime/containers/docker/DockerBasicTest.java > Passed: runtime/containers/docker/TestCPUAwareness.java > Passed: runtime/containers/docker/TestCPUSets.java > Passed: runtime/containers/docker/TestMemoryAwareness.java > Passed: runtime/containers/docker/TestMisc.java > Test results: passed: 6 > Results written to /export/users/bobv/jdk11/build/jtreg/JTwork > > testing jdk.internal.platform APIs > Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java > Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java > Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java > Passed: jdk/internal/platform/docker/TestSystemMetrics.java > Test results: passed: 4 > Results written to /export/users/bobv/jdk11/build/jtreg/JTwork > > testing -XshowSettings:system launcher option > Passed: tools/launcher/Settings.java > Test results: passed: 1 > > > Bob. > > From jesper.wilhelmsson at oracle.com Thu Jun 7 18:56:13 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 7 Jun 2018 20:56:13 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> Message-ID: > On 6 Jun 2018, at 06:17, David Holmes wrote: > > Hi Erik, Jesper, > > On 6/06/2018 2:59 AM, jesper.wilhelmsson at oracle.com wrote: >>> On 5 Jun 2018, at 08:10, David Holmes wrote: >>> >>> Sorry to be late to this party ... >>> >>> On 5/06/2018 6:10 AM, Erik Joelsson wrote: >>>> New webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ >>>> Renamed the new jvm variant to "hardened". >>> >>> As it is a hardened server build I'd prefer if that were somehow reflected in the name. Though really I don't see why this should be restricted this way ... to be honest I don't see hardened as a variant of server vs. client vs. zero etc at all, you should be able to harden any of those. >>> >>> So IIUC with this change we will: >>> - always build JDK native code "hardened" (if toolchain supports it) >>> - only build hotspot "hardened" if requested; and in that case >>> - jvm.cfg will list -server and -hardened with server as default >>> >>> Is that right? 
I can see that we may choose to always build Oracle JDK this way but it isn't clear to me that its suitable for OpenJDK. Nor why hotspot is selectable but JDK is not. ?? >> Sorry for the lack of information here. There has been a lot of off-list discussions behind this change, I've added the background to the bug now. >> The short version is that we see a ~25% regression in startup times if the JVM is compiled with the gcc flags to avoid speculative execution. We have not observed any performance regressions due to compiling the rest of the native libraries with these gcc flags, so there doesn't seem to be any reason to have different versions of other libraries. > > So "benevolent dictatorship"? ;-) > > My main concern is that the updated toolchains that support this have all been produced in a mad rush and quite frankly I expect them to be buggy. I don't think it is hard to enable the builder of OpenJDK to have full choice and control here. My assumption has been, and still is, that we're not the only ones that will use gcc 7.3.0 with these flags. If there were bugs in the new code they would most likely have been found already. The experience from our own work in this area is that the bugs are unlikely to be crashes due to the new code, but rather weird corner cases where the new code is not inserted where it was needed, leaving speculative execution unblocked in that single case. That said, I have no strong opinions on what is possible to configure in the build, as long as the Oracle OpenJDK builds comes with two JVM libraries and one copy of all other libraries. But that is of course a slightly different issue as long as it is possible to do. Thanks, /Jesper > > Cheers, > David > >> /Jesper >>> Sorry. >>> >>> David >>> ----- >>> >>>> /Erik >>>> On 2018-06-04 09:54, jesper.wilhelmsson at oracle.com wrote: >>>>>> On 4 Jun 2018, at 17:52, Erik Joelsson wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> On 2018-06-01 14:00, Aleksey Shipilev wrote: >>>>>>> On 06/01/2018 10:53 PM, Erik Joelsson wrote: >>>>>>>> This patch defines flags for disabling speculative execution for GCC and Visual Studio and applies >>>>>>>> them to all binaries except libjvm when available in the compiler. It defines a new jvm feature >>>>>>>> no-speculative-cti, which is used to control whether to use the flags for libjvm. It also defines a >>>>>>>> new jvm variant "altserver" which is the same as server, but with this new feature added. >>>>>>> I think the classic name for such product configuration is "hardened", no? >>>>>> I don't know. I'm open to suggestions on naming. >>>>> "hardened" sounds good to me. >>>>> >>>>> The change looks good as well. >>>>> /Jesper >>>>> >>>>>> /Erik >>>>>>> -Aleksey From thomas.stuefe at gmail.com Thu Jun 7 19:11:29 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 7 Jun 2018 21:11:29 +0200 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: On Thu, Jun 7, 2018 at 9:37 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204476 > Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/01/ > > This change adds the following: > > (1) In CodeCache::print_summary prints the code cache full count for > each code heap. > > (2) Adds additional counters to class CompileBroker: > _total_compiler_stopped_count: The number of times the compilation > has been stopped. 
> _total_compiler_restarted_count: The number of times the > compilation has been restarted. > This counters are also added to CodeCache::print_summary. > > > Thank you, > Rene Hi Rene, Apart from what Vladimir wrote: - small nit: _total_compiler_restarted_count += 1; -> _total_compiler_restarted_count ++; - Please either use %d for printing int or change the type to uint32_t. But seeing that the other counters are int, I would use %d. - More of a question to others: I am not familiar with compiler coding, but signed int as counters seem a bit small? Is there no danger of ever overflowing on long running VMs? Or does it not matter if they do? Thanks, Thomas From brent.christian at oracle.com Thu Jun 7 19:13:46 2018 From: brent.christian at oracle.com (Brent Christian) Date: Thu, 7 Jun 2018 12:13:46 -0700 Subject: RFR 8204565 : (spec) Document java.{vm.}?specification.version system properties' relation to $FEATURE Message-ID: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> Hi, Please review this doc-only change. From the bug report: 'With the integration of JEP 322 "Time-Based Release Versioning" into JDK 10, VERSION_FEATURE is used to set the value of the system properties "java.specification.version" [1] and "java.vm.specification.version" [2] (though the term "major" is still used in VM code, see JDK-8193719). We can update the System.getProperties() javadoc to be more specific about the value reported in these system properties.' Issue: https://bugs.openjdk.java.net/browse/JDK-8204565 Webrev: http://cr.openjdk.java.net/~bchristi/8204565/webrev/ Thanks, -Brent 1. http://hg.openjdk.java.net/jdk/jdk/rev/d2a837cf9ff1#l6.23 2. http://hg.openjdk.java.net/jdk/jdk/rev/d2a837cf9ff1#l15.17 From daniel.daugherty at oracle.com Thu Jun 7 19:16:27 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Thu, 7 Jun 2018 15:16:27 -0400 Subject: RFR: 8203885: ConcurrentLocksDump::dump_at_safepoint() should not allocate array in resource area In-Reply-To: <23bdf1ee-0e95-df0a-09e9-322195673f50@oracle.com> References: <99967815-2c84-7e09-7934-1b5c22639425@oracle.com> <08E2B22F-DDE0-4F20-9F87-B0B803721245@oracle.com> <1d42feb8-6e91-9b4b-ba6d-47dd82739cda@oracle.com> <23bdf1ee-0e95-df0a-09e9-322195673f50@oracle.com> Message-ID: On 5/29/18 7:48 AM, coleen.phillimore at oracle.com wrote: > > > On 5/29/18 2:20 AM, Per Liden wrote: >> Hi Kim, >> >> On 05/29/2018 08:09 AM, Kim Barrett wrote: >>>> On May 28, 2018, at 8:16 AM, Per Liden wrote: >>>> >>>> ConcurrentLocksDump::dump_at_safepoint() creates a GrowableArray, >>>> which gets allocated in a resource area. This array is than passed >>>> down a call chain, where it can't control that another ResourceMark >>>> isn't created. In the leaf of this call chain, a closure >>>> (FindInstanceClosure) is executed, which appends to the array, >>>> which means it might need to be resized. This doesn't work if a new >>>> ResourceMark has been created, since the array resize will happen >>>> in a nested ResourceArea context. As a result, the append operation >>>> fails in GenericGrowableArray::check_nesting(). >>>> >>>> This has so far gone unnoticed because >>>> CollectedHeap::object_iterate() in existing collectors typically >>>> don't create new ResourceMarks. This is not true for ZGC (and >>>> potentially other concurrent collectors), which needs to walk >>>> thread stacks, which in turn requires a ResourceMark. >>>> >>>> The proposed fix is to make this array C Heap allocated. 
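(For reference, a C-heap allocated GrowableArray along the lines proposed above looks roughly like the sketch below; the memory flag and exact constructor arguments are from memory and may not match the actual patch.)

  // Sketch only -- not the actual webrev.
  GrowableArray<oop>* aos_objects =
      new (ResourceObj::C_HEAP, mtInternal)
          GrowableArray<oop>(INITIAL_ARRAY_SIZE, true /* elements on C heap as well */);
  // ... aos_objects can now be appended to across nested ResourceMarks ...
  delete aos_objects;  // explicit cleanup; no ResourceMark reclaims this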
>>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203885 >>>> Webrev: http://cr.openjdk.java.net/~pliden/8203885/webrev.0 >>>> >>>> Testing: hs-tier{1,3} >>>> >>>> /Per >>> >>> Looks good. >> >> Thanks for reviewing! >> >>> >>> I was going to ask why the GrowableArray needs to be heap allocated? >>> Why not just >>> >>> ?? GrowableArray aos_objects(INITIAL_ARRAY_SIZE, true); >>> ?? ... s/aos_objects/&aos_objects/ ... >>> ?? // delete aos_objects no longer needed >>> >>> After some digging, probably because of this, from growableArray.hpp: >>> >>> 119??? assert(!on_C_heap() || allocated_on_C_heap(), "growable array >>> must be on C heap if elements are"); >> >> I had the same reaction and initially made it stack allocated, but >> noticed that GrowableArray doesn't allow that :( >> >>> >>> Conflating the allocation of the GrowableArray object with the >>> allocation of the underlying array.? That seems wrong, but not a >>> problem to be solved as part of this change. >>> >> >> I agree. I suspect someone wanted to protect against potential >> problems with having different life cycles of the GrowableArray and >> the backing array. But this seems overly strict. > > Yes, this is exactly the reason, and we did have bugs.? That's why we > have this check. If I remember the history correctly, you could have a GrowableArray object that was stack allocated and elements that were C-heap allocated. When the stack allocated GrowableArray object was freed, we lost track of the C-heap allocated elements... Yes, we could have changed the stack allocated GrowableArray object to free the C-heap allocated elements, but then the life-cycle started to get confusing... As in: why was my C-heap allocated data freed without my permission? Oh... I didn't C-heap allocate my GrowableArray object so my C-heap allocated data was freed... stuff like that... Dan Dan > > Coleen > >> >> /Per > From thomas.stuefe at gmail.com Thu Jun 7 19:34:40 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 7 Jun 2018 21:34:40 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: Hi Rene, Looks good overall. This is a useful addition. - 155 Atomic::inc(&Exceptions::_linkage_errors); you can loose the "Exceptions::" scope since we are in the Exceptions class. - Can you please add #include runtime/atomic.hpp to the file? It is missing that header. - Please make _linkage_errors class private. It is not directly accessed from outside. (_stack_overflow_errors on the other hand is, so it has to be public). If you fix these points, I do not need a new webrev. Best Regards, Thomas On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 > Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ > > This change counts linkage errors and prints the number of linkage > errors thrown in the Exceptions::print_exception_counts_on_error, > which is used when writing the hs_error file. > > Thank you, > Rene From thomas.stuefe at gmail.com Thu Jun 7 19:35:42 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 7 Jun 2018 21:35:42 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: On Thu, Jun 7, 2018 at 9:34 PM, Thomas St?fe wrote: > Hi Rene, > > Looks good overall. 
This is a useful addition. > > - 155 Atomic::inc(&Exceptions::_linkage_errors); > you can loose the "Exceptions::" scope since we are in the Exceptions class. > > - Can you please add #include runtime/atomic.hpp to the file? It is > missing that header. > (I mean exceptions.cpp) > - Please make _linkage_errors class private. It is not directly > accessed from outside. (_stack_overflow_errors on the other hand is, > so it has to be public). > > If you fix these points, I do not need a new webrev. > > Best Regards, Thomas > > > On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann > wrote: >> Hi, >> >> can I please get a review for the following change: >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 >> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ >> >> This change counts linkage errors and prints the number of linkage >> errors thrown in the Exceptions::print_exception_counts_on_error, >> which is used when writing the hs_error file. >> >> Thank you, >> Rene From erik.joelsson at oracle.com Thu Jun 7 20:11:55 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 7 Jun 2018 13:11:55 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> Message-ID: <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> On 2018-06-07 11:56, jesper.wilhelmsson at oracle.com wrote: >> On 6 Jun 2018, at 06:17, David Holmes > > wrote: >> >> Hi Erik, Jesper, >> >> >> So "benevolent dictatorship"? ?;-) >> >> My main concern is that the updated toolchains that support this have >> all been produced in a mad rush and quite frankly I expect them to be >> buggy. I don't think it is hard to enable the builder of OpenJDK to >> have full choice and control here. > > My assumption has been, and still is, that we're not the only ones > that will use gcc 7.3.0 with these flags. If there were bugs in the > new code they would most likely have been found already. The > experience from our own work in this area is that the bugs are > unlikely to be crashes due to the new code, but rather weird corner > cases where the new code is not inserted where it was needed, leaving > speculative execution unblocked in that single case. > > That said, I have no strong opinions on what is possible to configure > in the build, as long as the Oracle OpenJDK builds comes with two JVM > libraries and one copy of all other libraries. But that is of course a > slightly different issue as long as it is possible to do. > I just don't think the extra work is warranted or should be prioritized at this point. I also cannot think of a combination of options required for what you are suggesting that wouldn't be confusing to the user. If someone truly feels like these flags are forced on them and can't live with them, we or preferably that person can fix it then. I don't think that's dictatorship. OpenJDK is still open source and anyone can contribute. 
/Erik From mandy.chung at oracle.com Thu Jun 7 20:24:49 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 7 Jun 2018 13:24:49 -0700 Subject: RFR 8204565 : (spec) Document java.{vm.}?specification.version system properties' relation to $FEATURE In-Reply-To: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> References: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> Message-ID: <05f55a3a-a7d5-4b5d-899d-2f2fbfe1d2d8@oracle.com> Hi Brent, On 6/7/18 12:13 PM, Brent Christian wrote: > Hi, > > Please review this doc-only change.? From the bug report: > > 'With the integration of JEP 322 "Time-Based Release Versioning" > into JDK 10, VERSION_FEATURE is used to set the value of the system > properties "java.specification.version" [1] and > "java.vm.specification.version" [2] (though the term "major" is still > used in VM code, see JDK-8193719). > > We can update the System.getProperties() javadoc to be more specific > about the value reported in these system properties.' > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8204565 > > Webrev: > http://cr.openjdk.java.net/~bchristi/8204565/webrev/ Is there an existing test validating this? If not, we should add a test for this change? Mandy From mikhailo.seledtsov at oracle.com Thu Jun 7 21:30:24 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Thu, 07 Jun 2018 14:30:24 -0700 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: <5B19A3F0.1010804@oracle.com> Hi Bob, I looked at the tests. In general they look good. I am a bit concerned about the use of ERROR_MARGIN in one of the tests. We need to make sure that the tests are stable, and do not produce intermittent failures. Thank you, Misha On 6/7/18, 10:43 AM, Bob Vandette wrote: > Can I get one more reviewer for this RFE so I can integrate it? > >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 > Mandy Chung has reviewed this change. > > I?ve run Mach5 hotspot and core lib tests. > > I?ve reviewed the tests which were written by Harsha Wardhana > > I filed a CSR for the command line change and it?s now approved and closed. > > Thanks, > Bob. > > >> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >> >> Please review the following RFE which adds an internal API, along with jtreg tests that provide >> access to Docker container configuration data and metrics. In addition to the API which we hope to >> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >> option to -XshowSettings:system than dumps out the container or host cgroup confguration >> information. See the sample output below: >> >> RFE: Container Metrics >> >> https://bugs.openjdk.java.net/browse/JDK-8203357 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >> >> >> This commit will also include a fix for the following bug. 
>> >> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >> >> https://bugs.openjdk.java.net/browse/JDK-8203691 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >> >> SAMPLE USAGE and OUTPUT: >> >> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >> ./java -XshowSettings:system >> Operating System Metrics: >> Provider: cgroupv1 >> Effective CPU Count: 4 >> CPU Period: 100000 >> CPU Quota: -1 >> CPU Shares: -1 >> List of Processors, 4 total: >> 4 5 6 7 >> List of Effective Processors, 4 total: >> 4 5 6 7 >> List of Memory Nodes, 2 total: >> 0 1 >> List of Available Memory Nodes, 2 total: >> 0 1 >> CPUSet Memory Pressure Enabled: false >> Memory Limit: 256.00M >> Memory Soft Limit: Unlimited >> Memory& Swap Limit: 512.00M >> Kernel Memory Limit: Unlimited >> TCP Memory Limit: Unlimited >> Out Of Memory Killer Enabled: true >> >> TEST RESULTS: >> >> testing runtime container APIs >> Directory "JTwork" not found: creating >> Passed: runtime/containers/cgroup/PlainRead.java >> Passed: runtime/containers/docker/DockerBasicTest.java >> Passed: runtime/containers/docker/TestCPUAwareness.java >> Passed: runtime/containers/docker/TestCPUSets.java >> Passed: runtime/containers/docker/TestMemoryAwareness.java >> Passed: runtime/containers/docker/TestMisc.java >> Test results: passed: 6 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing jdk.internal.platform APIs >> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >> Test results: passed: 4 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing -XshowSettings:system launcher option >> Passed: tools/launcher/Settings.java >> Test results: passed: 1 >> >> >> Bob. >> >> From yumin.qi at gmail.com Thu Jun 7 23:13:49 2018 From: yumin.qi at gmail.com (yumin qi) Date: Thu, 7 Jun 2018 16:13:49 -0700 Subject: Question on SharedDictionary Message-ID: Hi, Ioi In set_shared_dictionary(..): void SystemDictionary::set_shared_dictionary(HashtableBucket* t, int length, int number_of_entries) { assert(length == _shared_dictionary_size * sizeof(HashtableBucket), "bad shared dictionary size."); _shared_dictionary = new Dictionary(ClassLoaderData::the_null_class_loader_data(), _shared_dictionary_size, t, number_of_entries); } I just wonder why not using SharedDictionary here? Searched workspace (11) for SharedDictionary, looks it is not used (as instance object). There is many cast from DictionaryEntry to SharedDictionaryEntry, there should be not many if _shared_dictionary is an instance of SharedDictionary, right? 
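For illustration, the variant being asked about would look roughly like this (untested sketch; it assumes SharedDictionary derives from Dictionary and has, or could be given, a matching constructor):

  // Untested sketch, not a tested proposal.
  _shared_dictionary = new SharedDictionary(ClassLoaderData::the_null_class_loader_data(),
                                            _shared_dictionary_size, t, number_of_entries);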
Thanks Yumin From david.holmes at oracle.com Fri Jun 8 00:30:20 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 8 Jun 2018 10:30:20 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> Message-ID: <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> On 8/06/2018 6:11 AM, Erik Joelsson wrote: > On 2018-06-07 11:56, jesper.wilhelmsson at oracle.com wrote: >>> On 6 Jun 2018, at 06:17, David Holmes >> > wrote: >>> >>> Hi Erik, Jesper, >>> >>> >>> So "benevolent dictatorship"? ?;-) >>> >>> My main concern is that the updated toolchains that support this have >>> all been produced in a mad rush and quite frankly I expect them to be >>> buggy. I don't think it is hard to enable the builder of OpenJDK to >>> have full choice and control here. >> >> My assumption has been, and still is, that we're not the only ones >> that will use gcc 7.3.0 with these flags. If there were bugs in the >> new code they would most likely have been found already. The >> experience from our own work in this area is that the bugs are >> unlikely to be crashes due to the new code, but rather weird corner >> cases where the new code is not inserted where it was needed, leaving >> speculative execution unblocked in that single case. >> >> That said, I have no strong opinions on what is possible to configure >> in the build, as long as the Oracle OpenJDK builds comes with two JVM >> libraries and one copy of all other libraries. But that is of course a >> slightly different issue as long as it is possible to do. >> > I just don't think the extra work is warranted or should be prioritized > at this point. I also cannot think of a combination of options required > for what you are suggesting that wouldn't be confusing to the user. If > someone truly feels like these flags are forced on them and can't live > with them, we or preferably that person can fix it then. I don't think > that's dictatorship. OpenJDK is still open source and anyone can contribute. I don't see why --enable-hardened-jdk and --enable-hardened-hotspot to add to the right flags would be either complicated or confusing. David > /Erik > From matthias.baesken at sap.com Fri Jun 8 08:04:05 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 8 Jun 2018 08:04:05 +0000 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux Message-ID: Hi could you please review this small Linux related change ? In linux os::print_os_info , I print additional info about a number of system parameters influencing thread creation on Linux. We noticed the influence of these parameters when looking into an application creating over 10.000 threads on Linux at the same time; there we got an OOM : unable to create new native thread which was caused by a failing pthread_create (error EAGAIN) . The machine had plenty of memory, so we looked into various kernel params and in the end noticed that /proc/sys/kernel/pid_max was too low. The other added parameters "threads-max" and "max_map_count" are also known to be related to problems when running with high thread numbers, so I add them too . 
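For context, the values in question come straight from procfs; a small stand-alone sketch that prints them (illustration only, not the webrev code):

  #include <stdio.h>

  // Print the kernel limits that matter when a process creates very many threads.
  static void print_proc_value(const char* path) {
    FILE* f = fopen(path, "r");
    if (f == NULL) { printf("%s: <unavailable>\n", path); return; }
    char buf[64];
    if (fgets(buf, sizeof(buf), f) != NULL) {
      printf("%s: %s", path, buf);  // the value read already ends with a newline
    }
    fclose(f);
  }

  int main() {
    print_proc_value("/proc/sys/kernel/pid_max");      // task id limit, hit by pthread_create (EAGAIN)
    print_proc_value("/proc/sys/kernel/threads-max");  // system-wide thread limit
    print_proc_value("/proc/sys/vm/max_map_count");    // mapping limit; each thread stack needs mappings
    return 0;
  }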
Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ Bug : https://bugs.openjdk.java.net/browse/JDK-8204598 Thanks, Matthias From tobias.hartmann at oracle.com Fri Jun 8 08:07:41 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 8 Jun 2018 10:07:41 +0200 Subject: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832 In-Reply-To: <254101d3fcac$7de91c80$79bb5580$@alibaba-inc.com> References: <254101d3fcac$7de91c80$79bb5580$@alibaba-inc.com> Message-ID: <8a9edb5e-f0c7-dbec-f07c-b6b188245e71@oracle.com> Hi Sanhong, thanks for the details, this answered all my questions! Thanks, Tobias On 05.06.2018 11:06, ???(??) wrote: > Hi Tobias, > Thanks for your questions, see my inline comments. > (As the formatting in my last mail was messed up, just resend it again.) > > Thanks! > Sanhong > -----????----- > ???: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] ?? Tobias Hartmann > ????: 2018?6?4? 15:30 > ???: yumin qi > ??: hotspot-dev at openjdk.java.net > ??: Re: JEP: https://bugs.openjdk.java.net/browse/JDK-8203832 > > Hi Yumin, > > thanks for the details! > > On 01.06.2018 05:01, yumin qi wrote: >> Thanks for your review/questions. First I would introduce some >> background of JWarmup application on use scenario and how we >> implement the interaction between application and scheduling (dispatch system, DS). >> >> The load of each application is controlled by DS. The profiling data >> is collected against real input data (so it mostly matches the >> application run in production environments, thus reduce the >> deoptimization chance). When run with profiling data, application gets >> notification from DS when compiling should start, application then >> calls API to notify JVM the hot methods recorded in file can be compiled, after the compilations, a message sent out to DS so DS will dispatch load into this application. > > Could you elaborate a bit more on how the communication between the DS and the application works? A generic user application should not be aware of the pre-compilation, right? Let's assume I run a little Hello World program, when/how is pre-compilation triggered? > > The user application will use API to tell JWarmup to kickoff pre-compilation at some appropriate point, generally after app initialization done, the basic workflow as follows: > - DS freezes incoming user requests. > - App does the necessary initialization. > - After initialization done, notify JWarmup to kickoff pre-compilation(via *API*). > - JWarmup does the compilation work > - The app gets notified after the compilation is done(via *API* in polling way) > - DS resumes the requests, the application now is ready for service. > > This is case how we use JWarmup, but we do believe the above process cloud be generalized and any app running inside cloud datacenter could benefit from the model by integrating java compilation with DS. > By this way, the java platform can provide flexible mechanism for cloud scheduling system to define compilation behavior according to load time. > > Do I understand correctly that the profile information is only used for a "standalone" compilations of a method or is it also used for inlining? For example, if we have profile information for method B and method A inlines method B, does it use the profile information available for B when there is no profile information available for A? > > It does support inlining. > Actually, in "recording" phase, JWarmup also records the "MethodData" information, which can be used for compilation in next run. 
> >> A: During run with pre-compiled methods, deoptimization is only seen >> with null-check elimination so it is not eliminated. The profile data >> is not updated and re-used. That is, after deoptimized, it starts from interpreter mode like freshly loaded. > > Why do you only see deoptimizations with null-check elimination? A pre-compiled method can still have uncommon traps for reasons like an out of bounds array access or some loop predicate that does not hold, right? > > We saw null-check elimination caused the de-optimization in most cases, that's the reason this has been disabled by default in JWarmup. > But you are correct, assumption might be made wrong in some other cases, that's the reason JWarmup provides the option to user to deoptimize the pre-compiled methods after peak load via -XX:CompilationWarmUpDeoptTime control flag, which allows user to choose a time roughly after the peak time to do the deoptimization. > > > Thanks, > Tobias > From david.holmes at oracle.com Fri Jun 8 08:09:55 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 8 Jun 2018 18:09:55 +1000 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: References: Message-ID: Hi Matthias, This seems okay. Not sure how useful max_map_count may be. Thanks, David On 8/06/2018 6:04 PM, Baesken, Matthias wrote: > Hi could you please review this small Linux related change ? > > In linux os::print_os_info , I print additional info about a number of system parameters influencing thread creation on Linux. > > We noticed the influence of these parameters when looking into an application creating over 10.000 threads on Linux at the same time; there we got an OOM : unable to create new native thread > which was caused by a failing pthread_create (error EAGAIN) . > > The machine had plenty of memory, so we looked into various kernel params and in the end noticed that /proc/sys/kernel/pid_max was too low. > The other added parameters "threads-max" and "max_map_count" are also known to be related to problems when running with high thread numbers, so I add them too . > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8204598 > > > Thanks, Matthias > From goetz.lindenmaier at sap.com Fri Jun 8 08:22:42 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 8 Jun 2018 08:22:42 +0000 Subject: RFR(XXS): 8204335: [ppc] Assembler::add_const_optimized incorrect for some inputs In-Reply-To: References: Message-ID: <23528da598cc483daf347ed93f2ef390@sap.com> Hi Volker, looks good. I will sponsor this for you. Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Volker Simonis > Sent: Dienstag, 5. Juni 2018 18:06 > To: HotSpot Open Source Developers > Subject: RFR(XXS): 8204335: [ppc] Assembler::add_const_optimized incorrect > for some inputs > > Hi, > > can I please have a review for this trivial, day-one, ppc-only fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204335/ > https://bugs.openjdk.java.net/browse/JDK-8204335 > > There's a typo in Assembler::add_const_optimized() which makes it > return incorrect results for some input values. The fix is trivial. 
> Repeated here for your convenience: > > diff -r 1d476feca3c9 src/hotspot/cpu/ppc/assembler_ppc.cpp > --- a/src/hotspot/cpu/ppc/assembler_ppc.cpp Mon Jun 04 11:19:54 2018 > +0200 > +++ b/src/hotspot/cpu/ppc/assembler_ppc.cpp Tue Jun 05 11:21:08 2018 > +0200 > @@ -486,7 +486,7 @@ > // Case 2: Can use addis. > if (xd == 0) { > short xc = rem & 0xFFFF; // 2nd 16-bit chunk. > - rem = (rem >> 16) + ((unsigned short)xd >> 15); > + rem = (rem >> 16) + ((unsigned short)xc >> 15); > if (rem == 0) { > addis(d, s, xc); > return 0; > > Thank you and best regards, > Volker From matthias.baesken at sap.com Fri Jun 8 08:30:51 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 8 Jun 2018 08:30:51 +0000 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: References: Message-ID: <896b79507e8f4dd9a840d4ed9598cbb2@sap.com> Thanks ! Can I please get a second review ? > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Freitag, 8. Juni 2018 10:10 > To: Baesken, Matthias ; 'hotspot- > dev at openjdk.java.net' > Cc: Stuefe, Thomas > Subject: Re: RFR [XS] : 8204598 : add more thread-related system settings > info to hs_error file on Linux > > Hi Matthias, > > This seems okay. > > Not sure how useful max_map_count may be. > > Thanks, > David > > On 8/06/2018 6:04 PM, Baesken, Matthias wrote: > > Hi could you please review this small Linux related change ? > > > > In linux os::print_os_info , I print additional info about a number of > system parameters influencing thread creation on Linux. > > > > We noticed the influence of these parameters when looking into an > application creating over 10.000 threads on Linux at the same time; there we > got an OOM : unable to create new native thread > > which was caused by a failing pthread_create (error EAGAIN) . > > > > The machine had plenty of memory, so we looked into various kernel > params and in the end noticed that /proc/sys/kernel/pid_max was too low. > > The other added parameters "threads-max" and "max_map_count" are > also known to be related to problems when running with high thread > numbers, so I add them too . > > > > > > > > Webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ > > > > Bug : > > > > https://bugs.openjdk.java.net/browse/JDK-8204598 > > > > > > Thanks, Matthias > > From thomas.stuefe at gmail.com Fri Jun 8 09:25:36 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 8 Jun 2018 11:25:36 +0200 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: References: Message-ID: Hi David, On Fri, Jun 8, 2018 at 10:09 AM, David Holmes wrote: > Hi Matthias, > > This seems okay. > > Not sure how useful max_map_count may be. > You'd be surprised :) Customers like to tweak this setting in the assumption that this would limit memory consumption. We saw limits as low as 1000 in the field. If you hit that limit, weird things happen. In 80% of all cases this means metaspace allocation will fail, since metaspace uses a lot of small mappings. But you also could fail e.g. to load a shared library. ..Thomas > Thanks, > David > > > On 8/06/2018 6:04 PM, Baesken, Matthias wrote: >> >> Hi could you please review this small Linux related change ? >> >> In linux os::print_os_info , I print additional info about a number of >> system parameters influencing thread creation on Linux. 
>> >> We noticed the influence of these parameters when looking into an >> application creating over 10.000 threads on Linux at the same time; there we >> got an OOM : unable to create new native thread >> which was caused by a failing pthread_create (error EAGAIN) . >> >> The machine had plenty of memory, so we looked into various kernel params >> and in the end noticed that /proc/sys/kernel/pid_max was too low. >> The other added parameters "threads-max" and "max_map_count" are also >> known to be related to problems when running with high thread numbers, so I >> add them too . >> >> >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8204598 >> >> >> Thanks, Matthias >> > From thomas.stuefe at gmail.com Fri Jun 8 09:42:44 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 8 Jun 2018 11:42:44 +0200 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: References: Message-ID: Hi Matthias, Thanks for that change, this is useful. -- I do not like lumping max_map_count together with the other variables in ".._thread_info" since it has nothing to do with threads. I would probably just rename the function to something different (e.g. "print_procs_sys_info" as in "print information taken from kernel variables in /proc/sys...") or spread them to other functions. -- I dislike the "out->print("\n....\n") style. Could you please reformulate like this: out->cr(); out->print_cr("..."); to make newlines more explicit? -- Can you please reformulate: "kernel system-wide limit on the number of threads" -> "system-wide limit on the number of kernel threads" or just "system-wide limit on the number of threads" "maximum number of unique process identifiers the system can support)" -> "system-wide limit on number of process identifiers" Thank you! Thomas On Fri, Jun 8, 2018 at 10:04 AM, Baesken, Matthias wrote: > Hi could you please review this small Linux related change ? > > In linux os::print_os_info , I print additional info about a number of system parameters influencing thread creation on Linux. > > We noticed the influence of these parameters when looking into an application creating over 10.000 threads on Linux at the same time; there we got an OOM : unable to create new native thread > which was caused by a failing pthread_create (error EAGAIN) . > > The machine had plenty of memory, so we looked into various kernel params and in the end noticed that /proc/sys/kernel/pid_max was too low. > The other added parameters "threads-max" and "max_map_count" are also known to be related to problems when running with high thread numbers, so I add them too . > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8204598 > > > Thanks, Matthias > From markus.gronlund at oracle.com Fri Jun 8 09:46:28 2018 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Fri, 8 Jun 2018 02:46:28 -0700 (PDT) Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds In-Reply-To: <5B194FF2.3060103@oracle.com> References: <5B194FF2.3060103@oracle.com> Message-ID: <06d38667-52d0-4df5-ae7d-e536d6cd750e@default> Hi Erik, I have successfully tested your patch for building Windows slowdebug. Looks good and thank you for quickly addressing this. 
Thanks Markus -----Original Message----- From: Erik ?sterlund Sent: den 7 juni 2018 17:32 To: hotspot-dev developers Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds Hi, Recent changes to arraycopying (8198285) broke slowdebug builds on windows and solaris. The problem is that the RawAccessBarrierArrayCopy::arraycopy function is expanded for a whole bunch of different new cases after JNI code started using this API. The reported linking problems: void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o void AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned long) /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o ...are all variants that the backend does not yet support as there are no current uses of it. And there really are no current uses of these still - these are all false positives. However, the code that currently chooses whether to use arrayof, conjoing, disjoint heap words, with possibly atomic variations, is all checked with if statements. But each case of the if statements are compiled in the template expansion despite being statically known to be dead code in a whole bunch of template expansions. The optimized code generation is clever enough to just ignore that dead code, while the slowdebug builds on windows and solaris complain that this dead code (that was only spuriously expanded by accident but is never called) does not exist. The solution I propose to this is to fold away the different cases the linker is complaining about using SFINAE instead of if statements. That way, they are never template expanded spuriously when it is statically known that they will not be called; they are never considered as valid overloads. I have verified on a Solaris x86 machine that it did not build before, but builds fine with this patch applied. 
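For readers less familiar with the technique, here is a generic sketch in plain standard C++ (C++11, not the actual Access API code; the function and type names are invented for the example). SFINAE moves the branch selection into overload resolution, so the variant that does not apply to a given element type is never instantiated at all:

#include <type_traits>
#include <cstring>
#include <cstddef>

// Variant considered only for pointer element types; its body is only
// instantiated when the enable_if condition holds.
template <typename T>
typename std::enable_if<std::is_pointer<T>::value>::type
copy_elements(T* dst, const T* src, size_t length) {
  for (size_t i = 0; i < length; i++) {
    dst[i] = src[i];   // element-wise copy, e.g. where a GC barrier is needed
  }
}

// Variant considered for all other (primitive) element types.
template <typename T>
typename std::enable_if<!std::is_pointer<T>::value>::type
copy_elements(T* dst, const T* src, size_t length) {
  memcpy(dst, src, length * sizeof(T));
}

int main() {
  int a[4] = {1, 2, 3, 4};
  int b[4];
  copy_elements(b, a, 4);  // only the memcpy variant is ever instantiated here
  return 0;
}

With a plain if (is_pointer) inside a single template, both branches would still be compiled for every instantiation, which is exactly the kind of spurious expansion the slowdebug linkers complain about.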
Webrev: http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8204504 Thanks, /Erik From erik.osterlund at oracle.com Fri Jun 8 09:52:21 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 8 Jun 2018 11:52:21 +0200 Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds In-Reply-To: <06d38667-52d0-4df5-ae7d-e536d6cd750e@default> References: <5B194FF2.3060103@oracle.com> <06d38667-52d0-4df5-ae7d-e536d6cd750e@default> Message-ID: <5B1A51D5.2030404@oracle.com> Hi Markus, Thanks for the review. /Erik On 2018-06-08 11:46, Markus Gronlund wrote: > Hi Erik, > > I have successfully tested your patch for building Windows slowdebug. > > Looks good and thank you for quickly addressing this. > > Thanks > Markus > > -----Original Message----- > From: Erik ?sterlund > Sent: den 7 juni 2018 17:32 > To: hotspot-dev developers > Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds > > Hi, > > Recent changes to arraycopying (8198285) broke slowdebug builds on windows and solaris. > > The problem is that the RawAccessBarrierArrayCopy::arraycopy > function is expanded for a whole bunch of different new cases after JNI code started using this API. The reported linking problems: > > void > AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void > AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void AccessInternal::arraycopy_conjoint_atomic short>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void AccessInternal::arraycopy_conjoint_atomic char>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void > AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void > AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void AccessInternal::arraycopy_arrayof_conjoint short>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > void AccessInternal::arraycopy_arrayof_conjoint char>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > > ...are all variants that the backend does not yet support as there are > no current uses of it. And there really are no current uses of these > still - these are all false positives. However, the code that currently > chooses whether to use arrayof, conjoing, disjoint heap words, with > possibly atomic variations, is all checked with if statements. 
But each > case of the if statements are compiled in the template expansion despite > being statically known to be dead code in a whole bunch of template > expansions. The optimized code generation is clever enough to just > ignore that dead code, while the slowdebug builds on windows and solaris > complain that this dead code (that was only spuriously expanded by > accident but is never called) does not exist. > > The solution I propose to this is to fold away the different cases the > linker is complaining about using SFINAE instead of if statements. That > way, they are never template expanded spuriously when it is statically > known that they will not be called; they are never considered as valid > overloads. > > I have verified on a Solaris x86 machine that it did not build before, > but builds fine with this patch applied. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8204504 > > Thanks, > /Erik From rkennke at redhat.com Fri Jun 8 09:58:26 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 8 Jun 2018 11:58:26 +0200 Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds In-Reply-To: <5B194FF2.3060103@oracle.com> References: <5B194FF2.3060103@oracle.com> Message-ID: <61343c0a-d84a-089f-1e22-20ce1bc0ee4e@redhat.com> Am 07.06.2018 um 17:32 schrieb Erik ?sterlund: > Hi, > > Recent changes to arraycopying (8198285) broke slowdebug builds on > windows and solaris. > > The problem is that the RawAccessBarrierArrayCopy::arraycopy > function is expanded for a whole bunch of different new cases after JNI > code started using this API. The reported linking problems: > > void > AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void > AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void AccessInternal::arraycopy_conjoint_atomic short>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void AccessInternal::arraycopy_conjoint_atomic char>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void > AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void > AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned > long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void AccessInternal::arraycopy_arrayof_conjoint short>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > void AccessInternal::arraycopy_arrayof_conjoint char>(__type_0*,__type_0*,unsigned long) > /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o > > > ...are all variants that the backend 
does not yet support as there are > no current uses of it. And there really are no current uses of these > still - these are all false positives. However, the code that currently > chooses whether to use arrayof, conjoing, disjoint heap words, with > possibly atomic variations, is all checked with if statements. But each > case of the if statements are compiled in the template expansion despite > being statically known to be dead code in a whole bunch of template > expansions. The optimized code generation is clever enough to just > ignore that dead code, while the slowdebug builds on windows and solaris > complain that this dead code (that was only spuriously expanded by > accident but is never called) does not exist. > > The solution I propose to this is to fold away the different cases the > linker is complaining about using SFINAE instead of if statements. That > way, they are never template expanded spuriously when it is statically > known that they will not be called; they are never considered as valid > overloads. > > I have verified on a Solaris x86 machine that it did not build before, > but builds fine with this patch applied. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8204504 > > Thanks, > /Erik OMG WTF templates voodoo. ;-) Patch is ok. Thanks for fixing this! Roman From erik.osterlund at oracle.com Fri Jun 8 10:07:54 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 8 Jun 2018 12:07:54 +0200 Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds In-Reply-To: <61343c0a-d84a-089f-1e22-20ce1bc0ee4e@redhat.com> References: <5B194FF2.3060103@oracle.com> <61343c0a-d84a-089f-1e22-20ce1bc0ee4e@redhat.com> Message-ID: <5B1A557A.1050109@oracle.com> Hi Roman, Thanks for the review. /Erik On 2018-06-08 11:58, Roman Kennke wrote: > Am 07.06.2018 um 17:32 schrieb Erik ?sterlund: >> Hi, >> >> Recent changes to arraycopying (8198285) broke slowdebug builds on >> windows and solaris. >> >> The problem is that the RawAccessBarrierArrayCopy::arraycopy >> function is expanded for a whole bunch of different new cases after JNI >> code started using this API. 
The reported linking problems: >> >> void >> AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned >> long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void >> AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned >> long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void AccessInternal::arraycopy_conjoint_atomic> short>(__type_0*,__type_0*,unsigned long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void AccessInternal::arraycopy_conjoint_atomic> char>(__type_0*,__type_0*,unsigned long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void >> AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned >> long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void >> AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned >> long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void AccessInternal::arraycopy_arrayof_conjoint> short>(__type_0*,__type_0*,unsigned long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> void AccessInternal::arraycopy_arrayof_conjoint> char>(__type_0*,__type_0*,unsigned long) >> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >> >> >> ...are all variants that the backend does not yet support as there are >> no current uses of it. And there really are no current uses of these >> still - these are all false positives. However, the code that currently >> chooses whether to use arrayof, conjoing, disjoint heap words, with >> possibly atomic variations, is all checked with if statements. But each >> case of the if statements are compiled in the template expansion despite >> being statically known to be dead code in a whole bunch of template >> expansions. The optimized code generation is clever enough to just >> ignore that dead code, while the slowdebug builds on windows and solaris >> complain that this dead code (that was only spuriously expanded by >> accident but is never called) does not exist. >> >> The solution I propose to this is to fold away the different cases the >> linker is complaining about using SFINAE instead of if statements. That >> way, they are never template expanded spuriously when it is statically >> known that they will not be called; they are never considered as valid >> overloads. >> >> I have verified on a Solaris x86 machine that it did not build before, >> but builds fine with this patch applied. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8204504 >> >> Thanks, >> /Erik > OMG WTF templates voodoo. ;-) > > Patch is ok. > > Thanks for fixing this! 
> > Roman > From zgu at redhat.com Fri Jun 8 12:06:22 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 8 Jun 2018 08:06:22 -0400 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: <02361438-b787-93c8-25b1-5f13f5a49a29@redhat.com> References: <02361438-b787-93c8-25b1-5f13f5a49a29@redhat.com> Message-ID: <269898c0-f2aa-dbab-1981-5d52758a5e7f@redhat.com> Ping! Could any G1 experts review this refactoring? Thanks, -Zhengyu On 06/01/2018 03:58 PM, Roman Kennke wrote: > Am 28.05.2018 um 23:11 schrieb Zhengyu Gu: >> Hi, >> >> Please review this refactoring of G1 string deduplication into shared >> directory, so that other GCs (such as Shenandoah) can advantage of >> existing infrastructure and plugin their own implementation. >> >> This refactoring preserves G1's String Deduplication infrastructure >> (please see the comments in stringDedup.hpp for details), so that there >> is no change to G1 outside of string deduplication code. >> >> Following changes are made to support different GCs: >> >> 1. Allows plugin new dedup queue implementation. >> ?? While it keeps G1's dedup queue static interface, queue itself now is >> a pure virtual class. Different GC can provide different implementation >> to fit its own enqueuing mechanism. >> ?? For example, G1 enqueues deduplication candidates during STW >> evacuate/mark pause, while Shenandoah implementation does it during >> concurrent mark. >> >> 2. Abstracted out generation related statistics out of StringDedupStat >> base class, cause not all GCs are generational. >> ?? G1StringDedupStat simply extends the base to add generational >> statistics. >> >> 3. Moved table and queue's parallel processing logic from closure >> (StringDedupUnlinkOrOopsDoClosure) to corresponding table and queue. >> This gives flexibility to construct closure to share among the workers >> (as G1 does), as well as private closure for each worker (as Shenandoah >> does). >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8203641 >> Webrev: http://cr.openjdk.java.net/~zgu/8203641/webrev.00/index.html >> >> Test: >> >> ? Submit test came back clean. >> > > This change looks good to me. Thank you! Should wait a bit for G1 > engineers to comment too. > > Roman > > From matthias.baesken at sap.com Fri Jun 8 12:22:49 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 8 Jun 2018 12:22:49 +0000 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: References: Message-ID: <3f87efe55bce4a07a1894d7c1ad0d8a2@sap.com> Hi Thomas / David , thanks for the reviews . Thomas, I created a second webrev (renamed the function to print_proc_sys_info and changed the output slightly ) : http://cr.openjdk.java.net/~mbaesken/webrevs/8204598.1/ Best regards, Matthias > -----Original Message----- > From: Thomas St?fe [mailto:thomas.stuefe at gmail.com] > Sent: Freitag, 8. Juni 2018 11:43 > To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR [XS] : 8204598 : add more thread-related system settings > info to hs_error file on Linux > > Hi Matthias, > > > Thanks for that change, this is useful. > > -- > > I do not like lumping max_map_count together with the other variables > in ".._thread_info" since it has nothing to do with threads. > > I would probably just rename the function to something different (e.g. > "print_procs_sys_info" as in "print information taken from kernel > variables in /proc/sys...") or spread them to other functions. 
> > -- > > I dislike the "out->print("\n....\n") style. Could you please > reformulate like this: > > out->cr(); > out->print_cr("..."); > > to make newlines more explicit? > > -- > > Can you please reformulate: > > "kernel system-wide limit on the number of threads" -> "system-wide > limit on the number of kernel threads" or just "system-wide limit on > the number of threads" > "maximum number of unique process identifiers the system can support)" > -> "system-wide limit on number of process identifiers" > > Thank you! > > Thomas > > > > > On Fri, Jun 8, 2018 at 10:04 AM, Baesken, Matthias > wrote: > > Hi could you please review this small Linux related change ? > > > > In linux os::print_os_info , I print additional info about a number of > system parameters influencing thread creation on Linux. > > > > We noticed the influence of these parameters when looking into an > application creating over 10.000 threads on Linux at the same time; there we > got an OOM : unable to create new native thread > > which was caused by a failing pthread_create (error EAGAIN) . > > > > The machine had plenty of memory, so we looked into various kernel > params and in the end noticed that /proc/sys/kernel/pid_max was too low. > > The other added parameters "threads-max" and "max_map_count" are > also known to be related to problems when running with high thread > numbers, so I add them too . > > > > > > > > Webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ > > > > Bug : > > > > https://bugs.openjdk.java.net/browse/JDK-8204598 > > > > > > Thanks, Matthias > > From thomas.stuefe at gmail.com Fri Jun 8 12:41:31 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 8 Jun 2018 14:41:31 +0200 Subject: RFR [XS] : 8204598 : add more thread-related system settings info to hs_error file on Linux In-Reply-To: <3f87efe55bce4a07a1894d7c1ad0d8a2@sap.com> References: <3f87efe55bce4a07a1894d7c1ad0d8a2@sap.com> Message-ID: Looks good to me. Thanks! .. Thomas On Fri, Jun 8, 2018, 14:22 Baesken, Matthias wrote: > Hi Thomas / David , thanks for the reviews . > > Thomas, I created a second webrev (renamed the function to > print_proc_sys_info and changed the output slightly ) : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598.1/ > > Best regards, Matthias > > > > -----Original Message----- > > From: Thomas St?fe [mailto:thomas.stuefe at gmail.com] > > Sent: Freitag, 8. Juni 2018 11:43 > > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net > > Subject: Re: RFR [XS] : 8204598 : add more thread-related system settings > > info to hs_error file on Linux > > > > Hi Matthias, > > > > > > Thanks for that change, this is useful. > > > > -- > > > > I do not like lumping max_map_count together with the other variables > > in ".._thread_info" since it has nothing to do with threads. > > > > I would probably just rename the function to something different (e.g. > > "print_procs_sys_info" as in "print information taken from kernel > > variables in /proc/sys...") or spread them to other functions. > > > > -- > > > > I dislike the "out->print("\n....\n") style. Could you please > > reformulate like this: > > > > out->cr(); > > out->print_cr("..."); > > > > to make newlines more explicit? 
> > > > -- > > > > Can you please reformulate: > > > > "kernel system-wide limit on the number of threads" -> "system-wide > > limit on the number of kernel threads" or just "system-wide limit on > > the number of threads" > > "maximum number of unique process identifiers the system can support)" > > -> "system-wide limit on number of process identifiers" > > > > Thank you! > > > > Thomas > > > > > > > > > > On Fri, Jun 8, 2018 at 10:04 AM, Baesken, Matthias > > wrote: > > > Hi could you please review this small Linux related change ? > > > > > > In linux os::print_os_info , I print additional info about a number > of > > system parameters influencing thread creation on Linux. > > > > > > We noticed the influence of these parameters when looking into an > > application creating over 10.000 threads on Linux at the same time; > there we > > got an OOM : unable to create new native thread > > > which was caused by a failing pthread_create (error EAGAIN) . > > > > > > The machine had plenty of memory, so we looked into various kernel > > params and in the end noticed that /proc/sys/kernel/pid_max was too low. > > > The other added parameters "threads-max" and "max_map_count" are > > also known to be related to problems when running with high thread > > numbers, so I add them too . > > > > > > > > > > > > Webrev : > > > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8204598/ > > > > > > Bug : > > > > > > https://bugs.openjdk.java.net/browse/JDK-8204598 > > > > > > > > > Thanks, Matthias > > > > From daniel.daugherty at oracle.com Fri Jun 8 14:36:13 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 8 Jun 2018 10:36:13 -0400 Subject: RFR: 8204504: Fix for 8198285 breaks slowdebug builds In-Reply-To: <5B1A557A.1050109@oracle.com> References: <5B194FF2.3060103@oracle.com> <61343c0a-d84a-089f-1e22-20ce1bc0ee4e@redhat.com> <5B1A557A.1050109@oracle.com> Message-ID: <8f0766c3-2985-5da6-5353-1be0c4db70f5@oracle.com> Just for the OpenJDK record: I also tested Erik's patch on my Solaris-X64 server yesterday and builds work again for 'release', 'fastdebug' and 'slowdebug'. Roman> OMG WTF templates voodoo. ;-) Yup. I now understand why I wasn't able to fix this 'simple build problem' quickly myself... :-) Dan On 6/8/18 6:07 AM, Erik ?sterlund wrote: > Hi Roman, > > Thanks for the review. > > /Erik > > On 2018-06-08 11:58, Roman Kennke wrote: >> Am 07.06.2018 um 17:32 schrieb Erik ?sterlund: >>> Hi, >>> >>> Recent changes to arraycopying (8198285) broke slowdebug builds on >>> windows and solaris. >>> >>> The problem is that the >>> RawAccessBarrierArrayCopy::arraycopy >>> function is expanded for a whole bunch of different new cases after JNI >>> code started using this API. 
The reported linking problems: >>> >>> void >>> AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned >>> >>> long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void >>> AccessInternal::arraycopy_arrayof_conjoint(__type_0*,__type_0*,unsigned >>> >>> long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void AccessInternal::arraycopy_conjoint_atomic>> short>(__type_0*,__type_0*,unsigned long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void AccessInternal::arraycopy_conjoint_atomic>> char>(__type_0*,__type_0*,unsigned long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void >>> AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned >>> >>> long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void >>> AccessInternal::arraycopy_conjoint_atomic(__type_0*,__type_0*,unsigned >>> >>> long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void AccessInternal::arraycopy_arrayof_conjoint>> short>(__type_0*,__type_0*,unsigned long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> void AccessInternal::arraycopy_arrayof_conjoint>> char>(__type_0*,__type_0*,unsigned long) >>> /work/shared/mirrors/src_clones/jdk/jdk_baseline/build/solaris-x86_64-normal-server-slowdebug/hotspot/variant-server/libjvm/objs/jni.o >>> >>> >>> >>> ...are all variants that the backend does not yet support as there are >>> no current uses of it. And there really are no current uses of these >>> still - these are all false positives. However, the code that currently >>> chooses whether to use arrayof, conjoing, disjoint heap words, with >>> possibly atomic variations, is all checked with if statements. But each >>> case of the if statements are compiled in the template expansion >>> despite >>> being statically known to be dead code in a whole bunch of template >>> expansions. The optimized code generation is clever enough to just >>> ignore that dead code, while the slowdebug builds on windows and >>> solaris >>> complain that this dead code (that was only spuriously expanded by >>> accident but is never called) does not exist. >>> >>> The solution I propose to this is to fold away the different cases the >>> linker is complaining about using SFINAE instead of if statements. That >>> way, they are never template expanded spuriously when it is statically >>> known that they will not be called; they are never considered as valid >>> overloads. >>> >>> I have verified on a Solaris x86 machine that it did not build before, >>> but builds fine with this patch applied. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8204504/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8204504 >>> >>> Thanks, >>> /Erik >> OMG WTF templates voodoo. ;-) >> >> Patch is ok. >> >> Thanks for fixing this! 
>> >> Roman >> > From bob.vandette at oracle.com Fri Jun 8 15:03:23 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Fri, 8 Jun 2018 11:03:23 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <78ad68bd-0020-836f-7511-2b7a49dc6d73@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <5B19A3F0.1010804@oracle.com> <78ad68bd-0020-836f-7511-2b7a49dc6d73@oracle.com> Message-ID: <4B4CBD97-1081-4DB6-871F-5BF292BF4DD0@oracle.com> I didn?t actually have any ERROR_MARGIN problems during testing. I had issues with the testCpuConsumption test in http://cr.openjdk.java.net/~bobv/8203357/webrev.01/test/lib/jdk/test/lib/containers/cgroup/MetricsTester.java.html I had to initialize the cpu usage values during setup rather than inside the test to ensure that sufficient cpu usage had occurred by the time the test was run. The original code executed and received the same values after attempting to exec a linux utility. My change uses the time taken to run several tests instead. This seems to have eliminated any intermittent failures. Bob. > On Jun 8, 2018, at 12:30 AM, Harsha Wardhana B wrote: > > [Replying to all mailing-lists] > Hi Misha, > > The ERROR_MARGIN in tests was introduced to make the tests stable. There are times where metric values (specifically CPU usage) can change drastically in between two reads. The metrics value got from the API and the cgroup file can be different and 0.1 ERROR_MARGIN should take care of that, though at times even that may not be enough. Hence the CPU usage related tests only print a warning if ERROR_MARGIN is exceeded. > > Thanks > Harsha > > On Friday 08 June 2018 03:00 AM, Mikhailo Seledtsov wrote: >> Hi Bob, >> >> I looked at the tests. In general they look good. I am a bit concerned about the use of ERROR_MARGIN in one of the tests. We need to make sure that the tests are stable, and do not produce intermittent failures. >> >> >> Thank you, >> Misha >> >> On 6/7/18, 10:43 AM, Bob Vandette wrote: >>> Can I get one more reviewer for this RFE so I can integrate it? >>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> Mandy Chung has reviewed this change. >>> >>> I?ve run Mach5 hotspot and core lib tests. >>> >>> I?ve reviewed the tests which were written by Harsha Wardhana >>> >>> I filed a CSR for the command line change and it?s now approved and closed. >>> >>> Thanks, >>> Bob. >>> >>> >>>> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >>>> >>>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>>> information. See the sample output below: >>>> >>>> RFE: Container Metrics >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>> >>>> >>>> This commit will also include a fix for the following bug. 
>>>> >>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>> >>>> SAMPLE USAGE and OUTPUT: >>>> >>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>> ./java -XshowSettings:system >>>> Operating System Metrics: >>>> Provider: cgroupv1 >>>> Effective CPU Count: 4 >>>> CPU Period: 100000 >>>> CPU Quota: -1 >>>> CPU Shares: -1 >>>> List of Processors, 4 total: >>>> 4 5 6 7 >>>> List of Effective Processors, 4 total: >>>> 4 5 6 7 >>>> List of Memory Nodes, 2 total: >>>> 0 1 >>>> List of Available Memory Nodes, 2 total: >>>> 0 1 >>>> CPUSet Memory Pressure Enabled: false >>>> Memory Limit: 256.00M >>>> Memory Soft Limit: Unlimited >>>> Memory& Swap Limit: 512.00M >>>> Kernel Memory Limit: Unlimited >>>> TCP Memory Limit: Unlimited >>>> Out Of Memory Killer Enabled: true >>>> >>>> TEST RESULTS: >>>> >>>> testing runtime container APIs >>>> Directory "JTwork" not found: creating >>>> Passed: runtime/containers/cgroup/PlainRead.java >>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>> Passed: runtime/containers/docker/TestCPUSets.java >>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>> Passed: runtime/containers/docker/TestMisc.java >>>> Test results: passed: 6 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing jdk.internal.platform APIs >>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>> Test results: passed: 4 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing -XshowSettings:system launcher option >>>> Passed: tools/launcher/Settings.java >>>> Test results: passed: 1 >>>> >>>> >>>> Bob. >>>> >>>> > From mikhailo.seledtsov at oracle.com Fri Jun 8 15:31:52 2018 From: mikhailo.seledtsov at oracle.com (mikhailo) Date: Fri, 8 Jun 2018 08:31:52 -0700 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <78ad68bd-0020-836f-7511-2b7a49dc6d73@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <5B19A3F0.1010804@oracle.com> <78ad68bd-0020-836f-7511-2b7a49dc6d73@oracle.com> Message-ID: <4f9ac7ba-5a81-c9e4-feaa-42023d9f518f@oracle.com> Hi Harsha, ? Thank you for the explanation, makes sense to me. Please be aware, if a specific test turns out to be unstable in CI testing, it should be problem listed until solution is found to make it more stable. If the test is highly intermittent (fails intermittently but rarely) then it should be tagged with intermittent keyword. Overall tests look good to me, Thank you, Misha On 06/07/2018 09:30 PM, Harsha Wardhana B wrote: > [Replying to all mailing-lists] > Hi Misha, > > The ERROR_MARGIN in tests was introduced to make the tests stable. > There are times where metric values (specifically CPU usage) can > change drastically in between two reads. The metrics value got from > the API and the cgroup file can be different and 0.1 ERROR_MARGIN > should take care of that, though at times even that may not be enough. > Hence the CPU usage related tests only print a warning if ERROR_MARGIN > is exceeded. 
> > Thanks > Harsha > > On Friday 08 June 2018 03:00 AM, Mikhailo Seledtsov wrote: >> Hi Bob, >> >> ? I looked at the tests. In general they look good. I am a bit >> concerned about the use of ERROR_MARGIN in one of the tests. We need >> to make sure that the tests are stable, and do not produce >> intermittent failures. >> >> >> Thank you, >> Misha >> >> On 6/7/18, 10:43 AM, Bob Vandette wrote: >>> Can I get one more reviewer for this RFE so I can integrate it? >>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> Mandy Chung has reviewed this change. >>> >>> I?ve run Mach5 hotspot and core lib tests. >>> >>> I?ve reviewed the tests which were written by Harsha Wardhana >>> >>> I filed a CSR for the command line change and it?s now approved and >>> closed. >>> >>> Thanks, >>> Bob. >>> >>> >>>> On May 30, 2018, at 3:45 PM, Bob Vandette? >>>> wrote: >>>> >>>> Please review the following RFE which adds an internal API, along >>>> with jtreg tests that provide >>>> access to Docker container configuration data and metrics. In >>>> addition to the API which we hope to >>>> take advantage of in the future with Java Flight Recorder and a JMX >>>> Mbean, I?ve added an additional >>>> option to -XshowSettings:system than dumps out the container or >>>> host cgroup confguration >>>> information.? See the sample output below: >>>> >>>> RFE: Container Metrics >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>> >>>> >>>> This commit will also include a fix for the following bug. >>>> >>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>> >>>> >>>> SAMPLE USAGE and OUTPUT: >>>> >>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>> ./java -XshowSettings:system >>>> Operating System Metrics: >>>> ??? Provider: cgroupv1 >>>> ??? Effective CPU Count: 4 >>>> ??? CPU Period: 100000 >>>> ??? CPU Quota: -1 >>>> ??? CPU Shares: -1 >>>> ??? List of Processors, 4 total: >>>> ??? 4 5 6 7 >>>> ??? List of Effective Processors, 4 total: >>>> ??? 4 5 6 7 >>>> ??? List of Memory Nodes, 2 total: >>>> ??? 0 1 >>>> ??? List of Available Memory Nodes, 2 total: >>>> ??? 0 1 >>>> ??? CPUSet Memory Pressure Enabled: false >>>> ??? Memory Limit: 256.00M >>>> ??? Memory Soft Limit: Unlimited >>>> ??? Memory&? Swap Limit: 512.00M >>>> ??? Kernel Memory Limit: Unlimited >>>> ??? TCP Memory Limit: Unlimited >>>> ??? 
Out Of Memory Killer Enabled: true >>>> >>>> TEST RESULTS: >>>> >>>> testing runtime container APIs >>>> Directory "JTwork" not found: creating >>>> Passed: runtime/containers/cgroup/PlainRead.java >>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>> Passed: runtime/containers/docker/TestCPUSets.java >>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>> Passed: runtime/containers/docker/TestMisc.java >>>> Test results: passed: 6 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing jdk.internal.platform APIs >>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>> Test results: passed: 4 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing -XshowSettings:system launcher option >>>> Passed: tools/launcher/Settings.java >>>> Test results: passed: 1 >>>> >>>> >>>> Bob. >>>> >>>> > From per.liden at oracle.com Fri Jun 8 18:20:12 2018 From: per.liden at oracle.com (Per Liden) Date: Fri, 8 Jun 2018 20:20:12 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> Message-ID: <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> Hi all, Here are updated webrevs, which address all the feedback and comments received. These webrevs are also rebased on today's jdk/jdk. We're looking for any final comments people might have, and if things go well we hope to be able to push this some time (preferably early) next week. These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and tier{1,2,3} on all other Oracle supported platforms. ZGC Master http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master ZGC Testing http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing Thanks! /Per & Stefan On 06/06/2018 12:48 AM, Per Liden wrote: > Hi all, > > Here are updated webrevs reflecting the feedback received so far. > > ZGC Master > Incremental: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master > Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master > > ZGC Testing > Incremental: > http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing > Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing > > Thanks! > > /Per > > On 06/01/2018 11:41 PM, Per Liden wrote: >> Hi, >> >> Please review the implementation of JEP 333: ZGC: A Scalable >> Low-Latency Garbage Collector (Experimental) >> >> Please see the JEP for more information about the project. The JEP is >> currently in state "Proposed to Target" for JDK 11. >> >> https://bugs.openjdk.java.net/browse/JDK-8197831 >> >> Additional information in can also be found on the ZGC project wiki. >> >> https://wiki.openjdk.java.net/display/zgc/Main >> >> >> Webrevs >> ------- >> >> To make this easier to review, we've divided the change into two webrevs. >> >> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >> >> This patch contains the actual ZGC implementation, the new unit >> tests and other changes needed in HotSpot. >> >> * ZGC Testing: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >> >> This patch contains changes to existing tests needed by ZGC. 
>> >> >> Overview of Changes >> ------------------- >> >> Below follows a list of the files we add/modify in the master patch, >> with a short summary describing each group. >> >> * Build support - Making ZGC an optional feature. >> >> make/autoconf/hotspot.m4 >> make/hotspot/lib/JvmFeatures.gmk >> src/hotspot/share/utilities/macros.hpp >> >> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >> does not currently offer a way to easily break this out). >> >> src/hotspot/cpu/x86/x86.ad >> src/hotspot/cpu/x86/x86_64.ad >> >> * C2 - Things that can't be easily abstracted out into ZGC specific >> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >> (UseZGC) condition. There should only be two logic changes (one in >> idealKit.cpp and one in node.cpp) that are still active when ZGC is >> disabled. We believe these are low risk changes and should not >> introduce any real change i behavior when using other GCs. >> >> src/hotspot/share/adlc/formssel.cpp >> src/hotspot/share/opto/* >> src/hotspot/share/compiler/compilerDirectives.hpp >> >> * General GC+Runtime - Registering ZGC as a collector. >> >> src/hotspot/share/gc/shared/* >> src/hotspot/share/runtime/vmStructs.cpp >> src/hotspot/share/runtime/vm_operations.hpp >> src/hotspot/share/prims/whitebox.cpp >> >> * GC thread local data - Increasing the size of data area by 32 bytes. >> >> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >> >> * ZGC - The collector itself. >> >> src/hotspot/share/gc/z/* >> src/hotspot/cpu/x86/gc/z/* >> src/hotspot/os_cpu/linux_x86/gc/z/* >> test/hotspot/gtest/gc/z/* >> >> * JFR - Adding new event types. >> >> src/hotspot/share/jfr/* >> src/jdk.jfr/share/conf/jfr/* >> >> * Logging - Adding new log tags. >> >> src/hotspot/share/logging/* >> >> * Metaspace - Adding a friend declaration. >> >> src/hotspot/share/memory/metaspace.hpp >> >> * InstanceRefKlass - Adjustments for concurrent reference processing. >> >> src/hotspot/share/oops/instanceRefKlass.inline.hpp >> >> * vmSymbol - Disabled clone intrinsic for ZGC. >> >> src/hotspot/share/classfile/vmSymbols.cpp >> >> * Oop Verification - In four cases we disabled oop verification >> because it do not makes sense or is not applicable to a GC using load >> barriers. >> >> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >> src/hotspot/share/compiler/oopMap.cpp >> src/hotspot/share/runtime/jniHandles.cpp >> >> * StackValue - Apply a load barrier in case of OSR. This is a bit of a >> hack. However, this will go away in the future, when we have the next >> iteration of C2's load barriers in place (aka "C2 late barrier >> insertion"). >> >> src/hotspot/share/runtime/stackValue.cpp >> >> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >> is changed in the future. >> >> src/hotspot/share/prims/jvmtiTagMap.cpp >> >> * Legal - Adding copyright/license for 3rd party hash function used in >> ZHash. >> >> src/java.base/share/legal/c-libutl.md >> >> * SA - Adding basic ZGC support. 
>> >> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >> >> >> Testing >> ------- >> >> * Unit testing >> >> A number of new ZGC specific gtests have been added, in >> test/hotspot/gtest/gc/z/ >> >> * Regression testing >> >> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >> >> * Stress testing >> >> We have been continuously been running a number stress tests >> throughout the development, these include: >> >> specjbb2000 >> specjbb2005 >> specjbb2015 >> specjvm98 >> specjvm2008 >> dacapo2009 >> test/hotspot/jtreg/gc/stress/gcold >> test/hotspot/jtreg/gc/stress/systemgc >> test/hotspot/jtreg/gc/stress/gclocker >> test/hotspot/jtreg/gc/stress/gcbasher >> test/hotspot/jtreg/gc/stress/finalizer >> Kitchensink >> >> >> Thanks! >> >> /Per, Stefan & the ZGC team From brent.christian at oracle.com Fri Jun 8 19:11:49 2018 From: brent.christian at oracle.com (Brent Christian) Date: Fri, 8 Jun 2018 12:11:49 -0700 Subject: RFR 8204565 : (spec) Document java.{vm.}?specification.version system properties' relation to $FEATURE In-Reply-To: <05f55a3a-a7d5-4b5d-899d-2f2fbfe1d2d8@oracle.com> References: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> <05f55a3a-a7d5-4b5d-899d-2f2fbfe1d2d8@oracle.com> Message-ID: <2786fbb3-4cff-427c-2f72-2d0dfbf37031@oracle.com> On 6/7/18 1:24 PM, mandy chung wrote: >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8204565 >> >> Webrev: >> http://cr.openjdk.java.net/~bchristi/8204565/webrev/ > > Is there an existing test validating this? Looks like there is (kind of), for libs and for hotspot. I've rev'ed the webrev in place with how the tests might be updated. (or I can leave the hotspot test to be updated along with 8193719 if that's preferred). Thanks, -Brent From mandy.chung at oracle.com Fri Jun 8 19:27:57 2018 From: mandy.chung at oracle.com (mandy chung) Date: Fri, 8 Jun 2018 12:27:57 -0700 Subject: RFR 8204565 : (spec) Document java.{vm.}?specification.version system properties' relation to $FEATURE In-Reply-To: <2786fbb3-4cff-427c-2f72-2d0dfbf37031@oracle.com> References: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> <05f55a3a-a7d5-4b5d-899d-2f2fbfe1d2d8@oracle.com> <2786fbb3-4cff-427c-2f72-2d0dfbf37031@oracle.com> Message-ID: On 6/8/18 12:11 PM, Brent Christian wrote: > On 6/7/18 1:24 PM, mandy chung wrote: >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8204565 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~bchristi/8204565/webrev/ >> >> Is there an existing test validating this? > > Looks like there is (kind of), for libs and for hotspot.? I've rev'ed > the webrev in place with how the tests might be updated. (or I can leave > the hotspot test to be updated along with 8193719 if that's preferred). test/jdk/java/lang/System/Versions.java it can also verify java.vm.specification.version. The hotspot test looks to me that it should expect the test be run with OpenJDK build and the vendor verification should consider that. That's a separate issue unrelated to this change. I suggest to remove the comment at line 36. Otherwise, looks good. 
Mandy From rkennke at redhat.com Fri Jun 8 20:17:51 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 8 Jun 2018 22:17:51 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> Message-ID: Am 06.06.2018 um 12:03 schrieb Andrew Haley: > On 06/05/2018 08:34 PM, Roman Kennke wrote: >> Ok, done here: >> >> Incremental: >> http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01/ >> >> Good now? > > It's be better to fix this up in LIR generation than to use jobject2reg: > > 1910 break; > 1911 case T_OBJECT: > 1912 case T_ARRAY: > 1913 jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); > 1914 __ cmpoop(reg1, rscratch1); > 1915 return; > Why is it better? And how would I do that? It sounds like a fairly complex undertaking for a special case. Notice that if the oop doesn't qualify as immediate operand (quite likely for an oop?) it used to be moved into rscratch1 anyway a few lines below. Roman From rkennke at redhat.com Fri Jun 8 20:19:54 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 8 Jun 2018 22:19:54 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: <0f6bc000-b1f9-ba23-7bb1-4397a0459db7@redhat.com> References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> <0f6bc000-b1f9-ba23-7bb1-4397a0459db7@redhat.com> Message-ID: Ping? > As mentioned in another thread, we in Shenandoah have decided to skip > JNI fast getfield stuff for now. We'll probably address it and implement > the extended range speculative PC thing later, in a separate RFE. I > ripped out the jniFastGetField changes from the patch: > > http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.02/ > > Is it good now to push? > > Roman > >> Hi Roman, >> >> On 2018-06-04 22:49, Roman Kennke wrote: >>> Am 04.06.2018 um 22:16 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> On 2018-06-04 21:42, Roman Kennke wrote: >>>>> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >>>>>> Hi Roman, >>>>>> >>>>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>>>> Ok, right. Very good catch! >>>>>>> >>>>>>> This should do it, right? Sorry, I couldn't easily make an >>>>>>> incremental >>>>>>> diff: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>>>> Unfortunately, I think there is one more problem for you. >>>>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>>>>> loads shot from the fantastic jni fast get field code. But it >>>>>> currently >>>>>> expects an exact PC match: >>>>>> >>>>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>>> ??? for (int i=0; i>>>>> ????? if (speculative_load_pclist[i] == pc) { >>>>>> ??????? return slowcase_entry_pclist[i]; >>>>>> ????? } >>>>>> ??? } >>>>>> ??? return (address)-1; >>>>>> } >>>>>> >>>>>> This means that the way this is written now, speculative_load_pclist >>>>>> registers the __ pc() right before the access_load_at call. 
This puts >>>>>> constraints on whatever is done inside of access_load_at to only >>>>>> speculatively load on the first assembled instruction. >>>>>> >>>>>> If you imagine a scenario where you have a GC with Brooks pointers >>>>>> that >>>>>> also uncommits memory (like Shenandoah I presume), then I imagine you >>>>>> would need something more here. If you start with a forwarding pointer >>>>>> load, then that can trap (which is probably caught by the exact PC >>>>>> match). But then there will be a subsequent load of the value in the >>>>>> to-space object, which will not be protected. But this is also loaded >>>>>> speculatively (as the subsequent safepoint counter check could >>>>>> invalidate the result), and could therefore crash the VM unless >>>>>> protected, as the signal handler code fails to recognize this is a >>>>>> speculative load from jni fast get field. >>>>>> >>>>>> I imagine the solution to this would be to let speculative_load_pclist >>>>>> specify a range for fuzzy SIGSEGV matching in the signal handler, >>>>>> rather >>>>>> than an exact PC (i.e. speculative_load_pclist_start and >>>>>> speculative_load_pclist_end). That would give you enough freedom to >>>>>> use >>>>>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>>>>> maintain jni fast get field is *really* worth it. >>>>> I are probably right in general. But I also think we are fine with >>>>> Shenandoah. Both the fwd ptr load and the field load are constructed >>>>> with the same base operand. If the oop is NULL (or invalid memory) it >>>>> will blow up on fwdptr load just the same as it would blow up on field >>>>> load. We maintain an invariant that the fwd ptr of a valid oop results >>>>> in a valid (and equivalent) oop. I therefore think we are fine for now. >>>>> Should a GC ever need anything else here, I'd worry about it then. >>>>> Until >>>>> this happens, let's just hope to never need to touch this code again >>>>> ;-) >>>> No I'm afraid that is not safe. After loading the forwarding pointer, >>>> the thread could be preempted, then any number of GC cycles could pass, >>>> which means that the address that the at some point read forwarding >>>> pointer points to, could be uncommitted memory. In fact it is unsafe >>>> even without uncommitted memory. Because after resolving the jobject to >>>> some address in the heap, the thread could get preempted, and any number >>>> of GC cycles could pass, causing the forwarding pointer to be read from >>>> some address in the heap that no longer is the forwarding pointer of an >>>> object, but rather a random integer. This causes the second load to blow >>>> up, even without uncommitting memory. 
>>>> >>>> Here is an attempt at showing different things that can go wrong: >>>> >>>> obj = *jobject >>>> // preempted for N GC cycles, meaning obj might 1) be a valid pointer to >>>> an object, or 2) be a random pointer inside of the heap or outside of >>>> the heap >>>> >>>> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >>>> pointer, no longer representing the forwarding pointer, or 3) read a >>>> consistent forwarding pointer >>>> >>>> // preempted for N GC cycles, causing forward_pointer to point at pretty >>>> much anything >>>> >>>> result = *(forward_pointer + offset) // may 1) read a valid primitive >>>> value, if previous two loads were not messed up, or 2) read some random >>>> value that no longer corresponds to the object field, or 3) crash >>>> because either the forwarding pointer did point at something valid that >>>> subsequently got relocated and uncommitted before the load hits, or >>>> because the forwarding pointer never pointed to anything valid in the >>>> first place, because the forwarding pointer load read a random pointer >>>> due to the object relocating after the jobject was resolved. >>>> >>>> The summary is that both loads need protection due to how the thread in >>>> native state runs freely without necessarily caring about the GC running >>>> any number of GC cycles concurrently, making the memory super slippery, >>>> which risks crashing the VM without the proper protection. >>> AWW WTF!? We are in native state in this code? >> >> Yes. This is one of the most dangerous code paths we have in the VM I >> think. >> >>> It might be easier to just call bsa->resolve_for_read() (which emits the >>> fwd ptr load), then issue another: >>> >>> speculative_load_pclist[count] = __ pc(); >>> >>> need to juggle with the counter and double-emit slowcase_entry_pclist, >>> and all this conditionally for Shenandoah. Gaa. >> >> I think that by just having the speculative load PC list take a range as >> opposed to a precise PC, and check that a given PC is in that range, and >> not just exactly equal to a PC, the problem is solved for everyone. >> >>> Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. >> >> Yeah, sometimes you wonder if it's really worth the maintenance to keep >> this thing. >> >>> Funny how we had this code in Shenandoah literally for years, and >>> nobody's ever tripped over it. >> >> Yeah it is a rather nasty race to detect. >> >>> It's one of those cases where I almost suspect it's been done in Java1.0 >>> when lots of JNI code was in use because some stuff couldn't be done in >>> fast in Java, but nowadays doesn't really make a difference. *Sigh* >> >> :) >> >>>>>>> Unfortunately, I cannot really test it because of: >>>>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>>>> >>>>>>> >>>>>>> >>>>>> That is unfortunate. If I were you, I would not dare to change >>>>>> anything >>>>>> in jni fast get field without testing it - it is very error prone. >>>>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>>>> else resolve it myself. >>>> Yeah. >>>> >>>>> Can I consider this change reviewed by you? >>>> I think we should agree about the safety of doing this for Shenandoah in >>>> particular first. I still think we need the PC range as opposed to exact >>>> PC to be caught in the signal handler for this to be safe for your GC >>>> algorithm. >>> >>> Yeah, I agree. I need to think this through a little bit. >> >> Yeah. 
Still think the PC range check solution should do the trick. >> >>> Thanks for pointing out this bug. I can already see nightly builds >>> suddenly starting to fail over it, now that it's known :-) >> >> No problem! >> >> Thanks, >> /Erik >> >>> Roman >>> >>> >> > > From erik.osterlund at oracle.com Fri Jun 8 20:48:38 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 8 Jun 2018 22:48:38 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> <0f6bc000-b1f9-ba23-7bb1-4397a0459db7@redhat.com> Message-ID: Hi Roman, Looks good. Thanks, /Erik On 2018-06-08 22:19, Roman Kennke wrote: > Ping? > >> As mentioned in another thread, we in Shenandoah have decided to skip >> JNI fast getfield stuff for now. We'll probably address it and implement >> the extended range speculative PC thing later, in a separate RFE. I >> ripped out the jniFastGetField changes from the patch: >> >> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.02/ >> >> Is it good now to push? >> >> Roman >> >>> Hi Roman, >>> >>> On 2018-06-04 22:49, Roman Kennke wrote: >>>> Am 04.06.2018 um 22:16 schrieb Erik ?sterlund: >>>>> Hi Roman, >>>>> >>>>> On 2018-06-04 21:42, Roman Kennke wrote: >>>>>> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >>>>>>> Hi Roman, >>>>>>> >>>>>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>>>>> Ok, right. Very good catch! >>>>>>>> >>>>>>>> This should do it, right? Sorry, I couldn't easily make an >>>>>>>> incremental >>>>>>>> diff: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>>>>> Unfortunately, I think there is one more problem for you. >>>>>>> The signal handler is supposed to catch SIGSEGV caused by speculative >>>>>>> loads shot from the fantastic jni fast get field code. But it >>>>>>> currently >>>>>>> expects an exact PC match: >>>>>>> >>>>>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>>>> ??? for (int i=0; i>>>>>> ????? if (speculative_load_pclist[i] == pc) { >>>>>>> ??????? return slowcase_entry_pclist[i]; >>>>>>> ????? } >>>>>>> ??? } >>>>>>> ??? return (address)-1; >>>>>>> } >>>>>>> >>>>>>> This means that the way this is written now, speculative_load_pclist >>>>>>> registers the __ pc() right before the access_load_at call. This puts >>>>>>> constraints on whatever is done inside of access_load_at to only >>>>>>> speculatively load on the first assembled instruction. >>>>>>> >>>>>>> If you imagine a scenario where you have a GC with Brooks pointers >>>>>>> that >>>>>>> also uncommits memory (like Shenandoah I presume), then I imagine you >>>>>>> would need something more here. If you start with a forwarding pointer >>>>>>> load, then that can trap (which is probably caught by the exact PC >>>>>>> match). But then there will be a subsequent load of the value in the >>>>>>> to-space object, which will not be protected. But this is also loaded >>>>>>> speculatively (as the subsequent safepoint counter check could >>>>>>> invalidate the result), and could therefore crash the VM unless >>>>>>> protected, as the signal handler code fails to recognize this is a >>>>>>> speculative load from jni fast get field. 
>>>>>>> >>>>>>> I imagine the solution to this would be to let speculative_load_pclist >>>>>>> specify a range for fuzzy SIGSEGV matching in the signal handler, >>>>>>> rather >>>>>>> than an exact PC (i.e. speculative_load_pclist_start and >>>>>>> speculative_load_pclist_end). That would give you enough freedom to >>>>>>> use >>>>>>> Brooks pointers in there. Sometimes I wonder if the lengths we go to >>>>>>> maintain jni fast get field is *really* worth it. >>>>>> I are probably right in general. But I also think we are fine with >>>>>> Shenandoah. Both the fwd ptr load and the field load are constructed >>>>>> with the same base operand. If the oop is NULL (or invalid memory) it >>>>>> will blow up on fwdptr load just the same as it would blow up on field >>>>>> load. We maintain an invariant that the fwd ptr of a valid oop results >>>>>> in a valid (and equivalent) oop. I therefore think we are fine for now. >>>>>> Should a GC ever need anything else here, I'd worry about it then. >>>>>> Until >>>>>> this happens, let's just hope to never need to touch this code again >>>>>> ;-) >>>>> No I'm afraid that is not safe. After loading the forwarding pointer, >>>>> the thread could be preempted, then any number of GC cycles could pass, >>>>> which means that the address that the at some point read forwarding >>>>> pointer points to, could be uncommitted memory. In fact it is unsafe >>>>> even without uncommitted memory. Because after resolving the jobject to >>>>> some address in the heap, the thread could get preempted, and any number >>>>> of GC cycles could pass, causing the forwarding pointer to be read from >>>>> some address in the heap that no longer is the forwarding pointer of an >>>>> object, but rather a random integer. This causes the second load to blow >>>>> up, even without uncommitting memory. >>>>> >>>>> Here is an attempt at showing different things that can go wrong: >>>>> >>>>> obj = *jobject >>>>> // preempted for N GC cycles, meaning obj might 1) be a valid pointer to >>>>> an object, or 2) be a random pointer inside of the heap or outside of >>>>> the heap >>>>> >>>>> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >>>>> pointer, no longer representing the forwarding pointer, or 3) read a >>>>> consistent forwarding pointer >>>>> >>>>> // preempted for N GC cycles, causing forward_pointer to point at pretty >>>>> much anything >>>>> >>>>> result = *(forward_pointer + offset) // may 1) read a valid primitive >>>>> value, if previous two loads were not messed up, or 2) read some random >>>>> value that no longer corresponds to the object field, or 3) crash >>>>> because either the forwarding pointer did point at something valid that >>>>> subsequently got relocated and uncommitted before the load hits, or >>>>> because the forwarding pointer never pointed to anything valid in the >>>>> first place, because the forwarding pointer load read a random pointer >>>>> due to the object relocating after the jobject was resolved. >>>>> >>>>> The summary is that both loads need protection due to how the thread in >>>>> native state runs freely without necessarily caring about the GC running >>>>> any number of GC cycles concurrently, making the memory super slippery, >>>>> which risks crashing the VM without the proper protection. >>>> AWW WTF!? We are in native state in this code? >>> Yes. This is one of the most dangerous code paths we have in the VM I >>> think. 
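[Editorial sketch, not part of any webrev in this thread: a minimal illustration of the range-based lookup Erik describes above. The speculative_load_pclist_start/_end arrays are hypothetical; the idea is that the stub assembler would record them around the whole speculative load sequence (forwarding-pointer load plus the field load), so a SIGSEGV on either load maps back to the slow-case entry instead of only an exact match on the first instruction. "count" and slowcase_entry_pclist are as in the quoted original.]

address JNI_FastGetField::find_slowcase_pc(address pc) {
  for (int i = 0; i < count; i++) {
    // Accept any PC inside the emitted speculative sequence, not just its
    // first instruction, so both loads are covered by the signal handler.
    if (pc >= speculative_load_pclist_start[i] &&
        pc <  speculative_load_pclist_end[i]) {
      return slowcase_entry_pclist[i];
    }
  }
  return (address)-1;
}
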
>>> >>>> It might be easier to just call bsa->resolve_for_read() (which emits the >>>> fwd ptr load), then issue another: >>>> >>>> speculative_load_pclist[count] = __ pc(); >>>> >>>> need to juggle with the counter and double-emit slowcase_entry_pclist, >>>> and all this conditionally for Shenandoah. Gaa. >>> I think that by just having the speculative load PC list take a range as >>> opposed to a precise PC, and check that a given PC is in that range, and >>> not just exactly equal to a PC, the problem is solved for everyone. >>> >>>> Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. >>> Yeah, sometimes you wonder if it's really worth the maintenance to keep >>> this thing. >>> >>>> Funny how we had this code in Shenandoah literally for years, and >>>> nobody's ever tripped over it. >>> Yeah it is a rather nasty race to detect. >>> >>>> It's one of those cases where I almost suspect it's been done in Java1.0 >>>> when lots of JNI code was in use because some stuff couldn't be done in >>>> fast in Java, but nowadays doesn't really make a difference. *Sigh* >>> :) >>> >>>>>>>> Unfortunately, I cannot really test it because of: >>>>>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> That is unfortunate. If I were you, I would not dare to change >>>>>>> anything >>>>>>> in jni fast get field without testing it - it is very error prone. >>>>>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>>>>> else resolve it myself. >>>>> Yeah. >>>>> >>>>>> Can I consider this change reviewed by you? >>>>> I think we should agree about the safety of doing this for Shenandoah in >>>>> particular first. I still think we need the PC range as opposed to exact >>>>> PC to be caught in the signal handler for this to be safe for your GC >>>>> algorithm. >>>> Yeah, I agree. I need to think this through a little bit. >>> Yeah. Still think the PC range check solution should do the trick. >>> >>>> Thanks for pointing out this bug. I can already see nightly builds >>>> suddenly starting to fail over it, now that it's known :-) >>> No problem! >>> >>> Thanks, >>> /Erik >>> >>>> Roman >>>> >>>> >> > From rkennke at redhat.com Fri Jun 8 21:29:55 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 8 Jun 2018 23:29:55 +0200 Subject: RFR: JDK-8203172: Primitive heap access for interpreter BarrierSetAssembler/aarch64 In-Reply-To: References: <5B152EA5.2060903@oracle.com> <23f0db5e-e2d2-a047-6111-d298c2234d28@redhat.com> <9e8589ca-5ac8-d7ee-a217-04048671366c@redhat.com> <36a33e42-1470-b153-dd7a-0ef26c89678b@redhat.com> <0f6bc000-b1f9-ba23-7bb1-4397a0459db7@redhat.com> Message-ID: <3ca080f3-5a4f-a0d0-bc61-14a94003c90d@redhat.com> Hi Erik, thanks for reviewing, and especially for pointing out the nasty jnifastgetfield bug :-) Cheers, Roman > Hi Roman, > > Looks good. > > Thanks, > /Erik > > On 2018-06-08 22:19, Roman Kennke wrote: >> Ping? >> >>> As mentioned in another thread, we in Shenandoah have decided to skip >>> JNI fast getfield stuff for now. We'll probably address it and implement >>> the extended range speculative PC thing later, in a separate RFE. I >>> ripped out the jniFastGetField changes from the patch: >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.02/ >>> >>> Is it good now to push? 
>>> >>> Roman >>> >>>> Hi Roman, >>>> >>>> On 2018-06-04 22:49, Roman Kennke wrote: >>>>> Am 04.06.2018 um 22:16 schrieb Erik ?sterlund: >>>>>> Hi Roman, >>>>>> >>>>>> On 2018-06-04 21:42, Roman Kennke wrote: >>>>>>> Am 04.06.2018 um 18:43 schrieb Erik ?sterlund: >>>>>>>> Hi Roman, >>>>>>>> >>>>>>>> On 2018-06-04 17:24, Roman Kennke wrote: >>>>>>>>> Ok, right. Very good catch! >>>>>>>>> >>>>>>>>> This should do it, right? Sorry, I couldn't easily make an >>>>>>>>> incremental >>>>>>>>> diff: >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~rkennke/JDK-8203172/webrev.01/ >>>>>>>> Unfortunately, I think there is one more problem for you. >>>>>>>> The signal handler is supposed to catch SIGSEGV caused by >>>>>>>> speculative >>>>>>>> loads shot from the fantastic jni fast get field code. But it >>>>>>>> currently >>>>>>>> expects an exact PC match: >>>>>>>> >>>>>>>> address JNI_FastGetField::find_slowcase_pc(address pc) { >>>>>>>> ???? for (int i=0; i>>>>>>> ?????? if (speculative_load_pclist[i] == pc) { >>>>>>>> ???????? return slowcase_entry_pclist[i]; >>>>>>>> ?????? } >>>>>>>> ???? } >>>>>>>> ???? return (address)-1; >>>>>>>> } >>>>>>>> >>>>>>>> This means that the way this is written now, >>>>>>>> speculative_load_pclist >>>>>>>> registers the __ pc() right before the access_load_at call. This >>>>>>>> puts >>>>>>>> constraints on whatever is done inside of access_load_at to only >>>>>>>> speculatively load on the first assembled instruction. >>>>>>>> >>>>>>>> If you imagine a scenario where you have a GC with Brooks pointers >>>>>>>> that >>>>>>>> also uncommits memory (like Shenandoah I presume), then I >>>>>>>> imagine you >>>>>>>> would need something more here. If you start with a forwarding >>>>>>>> pointer >>>>>>>> load, then that can trap (which is probably caught by the exact PC >>>>>>>> match). But then there will be a subsequent load of the value in >>>>>>>> the >>>>>>>> to-space object, which will not be protected. But this is also >>>>>>>> loaded >>>>>>>> speculatively (as the subsequent safepoint counter check could >>>>>>>> invalidate the result), and could therefore crash the VM unless >>>>>>>> protected, as the signal handler code fails to recognize this is a >>>>>>>> speculative load from jni fast get field. >>>>>>>> >>>>>>>> I imagine the solution to this would be to let >>>>>>>> speculative_load_pclist >>>>>>>> specify a range for fuzzy SIGSEGV matching in the signal handler, >>>>>>>> rather >>>>>>>> than an exact PC (i.e. speculative_load_pclist_start and >>>>>>>> speculative_load_pclist_end). That would give you enough freedom to >>>>>>>> use >>>>>>>> Brooks pointers in there. Sometimes I wonder if the lengths we >>>>>>>> go to >>>>>>>> maintain jni fast get field is *really* worth it. >>>>>>> I are probably right in general. But I also think we are fine with >>>>>>> Shenandoah. Both the fwd ptr load and the field load are constructed >>>>>>> with the same base operand. If the oop is NULL (or invalid >>>>>>> memory) it >>>>>>> will blow up on fwdptr load just the same as it would blow up on >>>>>>> field >>>>>>> load. We maintain an invariant that the fwd ptr of a valid oop >>>>>>> results >>>>>>> in a valid (and equivalent) oop. I therefore think we are fine >>>>>>> for now. >>>>>>> Should a GC ever need anything else here, I'd worry about it then. >>>>>>> Until >>>>>>> this happens, let's just hope to never need to touch this code again >>>>>>> ;-) >>>>>> No I'm afraid that is not safe. 
After loading the forwarding pointer, >>>>>> the thread could be preempted, then any number of GC cycles could >>>>>> pass, >>>>>> which means that the address that the at some point read forwarding >>>>>> pointer points to, could be uncommitted memory. In fact it is unsafe >>>>>> even without uncommitted memory. Because after resolving the >>>>>> jobject to >>>>>> some address in the heap, the thread could get preempted, and any >>>>>> number >>>>>> of GC cycles could pass, causing the forwarding pointer to be read >>>>>> from >>>>>> some address in the heap that no longer is the forwarding pointer >>>>>> of an >>>>>> object, but rather a random integer. This causes the second load >>>>>> to blow >>>>>> up, even without uncommitting memory. >>>>>> >>>>>> Here is an attempt at showing different things that can go wrong: >>>>>> >>>>>> obj = *jobject >>>>>> // preempted for N GC cycles, meaning obj might 1) be a valid >>>>>> pointer to >>>>>> an object, or 2) be a random pointer inside of the heap or outside of >>>>>> the heap >>>>>> >>>>>> forward_pointer = *obj // may 1) crash with SIGSEGV, 2) read a random >>>>>> pointer, no longer representing the forwarding pointer, or 3) read a >>>>>> consistent forwarding pointer >>>>>> >>>>>> // preempted for N GC cycles, causing forward_pointer to point at >>>>>> pretty >>>>>> much anything >>>>>> >>>>>> result = *(forward_pointer + offset) // may 1) read a valid primitive >>>>>> value, if previous two loads were not messed up, or 2) read some >>>>>> random >>>>>> value that no longer corresponds to the object field, or 3) crash >>>>>> because either the forwarding pointer did point at something valid >>>>>> that >>>>>> subsequently got relocated and uncommitted before the load hits, or >>>>>> because the forwarding pointer never pointed to anything valid in the >>>>>> first place, because the forwarding pointer load read a random >>>>>> pointer >>>>>> due to the object relocating after the jobject was resolved. >>>>>> >>>>>> The summary is that both loads need protection due to how the >>>>>> thread in >>>>>> native state runs freely without necessarily caring about the GC >>>>>> running >>>>>> any number of GC cycles concurrently, making the memory super >>>>>> slippery, >>>>>> which risks crashing the VM without the proper protection. >>>>> AWW WTF!? We are in native state in this code? >>>> Yes. This is one of the most dangerous code paths we have in the VM I >>>> think. >>>> >>>>> It might be easier to just call bsa->resolve_for_read() (which >>>>> emits the >>>>> fwd ptr load), then issue another: >>>>> >>>>> speculative_load_pclist[count] = __ pc(); >>>>> >>>>> need to juggle with the counter and double-emit slowcase_entry_pclist, >>>>> and all this conditionally for Shenandoah. Gaa. >>>> I think that by just having the speculative load PC list take a >>>> range as >>>> opposed to a precise PC, and check that a given PC is in that range, >>>> and >>>> not just exactly equal to a PC, the problem is solved for everyone. >>>> >>>>> Or just FLAG_SET_DEFAULT(UseFastJNIAccessors,false) in Shenandoah. >>>> Yeah, sometimes you wonder if it's really worth the maintenance to keep >>>> this thing. >>>> >>>>> Funny how we had this code in Shenandoah literally for years, and >>>>> nobody's ever tripped over it. >>>> Yeah it is a rather nasty race to detect. 
>>>> >>>>> It's one of those cases where I almost suspect it's been done in >>>>> Java1.0 >>>>> when lots of JNI code was in use because some stuff couldn't be >>>>> done in >>>>> fast in Java, but nowadays doesn't really make a difference. *Sigh* >>>> :) >>>> >>>>>>>>> Unfortunately, I cannot really test it because of: >>>>>>>>> http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2018-May/005843.html >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> That is unfortunate. If I were you, I would not dare to change >>>>>>>> anything >>>>>>>> in jni fast get field without testing it - it is very error prone. >>>>>>> Yeah. I guess I'll just wait with testing until this is resolved. Or >>>>>>> else resolve it myself. >>>>>> Yeah. >>>>>> >>>>>>> Can I consider this change reviewed by you? >>>>>> I think we should agree about the safety of doing this for >>>>>> Shenandoah in >>>>>> particular first. I still think we need the PC range as opposed to >>>>>> exact >>>>>> PC to be caught in the signal handler for this to be safe for your GC >>>>>> algorithm. >>>>> Yeah, I agree. I need to think this through a little bit. >>>> Yeah. Still think the PC range check solution should do the trick. >>>> >>>>> Thanks for pointing out this bug. I can already see nightly builds >>>>> suddenly starting to fail over it, now that it's known :-) >>>> No problem! >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> Roman >>>>> >>>>> >>> >> > From brent.christian at oracle.com Fri Jun 8 21:39:55 2018 From: brent.christian at oracle.com (Brent Christian) Date: Fri, 8 Jun 2018 14:39:55 -0700 Subject: RFR 8204565 : (spec) Document java.{vm.}?specification.version system properties' relation to $FEATURE In-Reply-To: References: <7ff4c005-bcb3-7715-554c-22bc5570b03a@oracle.com> <05f55a3a-a7d5-4b5d-899d-2f2fbfe1d2d8@oracle.com> <2786fbb3-4cff-427c-2f72-2d0dfbf37031@oracle.com> Message-ID: <3977f83a-72d8-bf3c-663f-2f469505362d@oracle.com> On 6/8/18 12:27 PM, mandy chung wrote: >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~bchristi/8204565/webrev/ >>> > > test/jdk/java/lang/System/Versions.java > ? it can also verify java.vm.specification.version. > > The hotspot test looks to me that it should expect the test be run > with OpenJDK build and the vendor verification should consider that. > That's a separate issue unrelated to this change.? I suggest to > remove the comment at line 36. > > Otherwise, looks good. OK, thanks - done. (Also updated the @bug, copyright year, and removed unused 'VMversion'. -Brent From erik.joelsson at oracle.com Fri Jun 8 21:50:31 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 8 Jun 2018 14:50:31 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: On 2018-06-07 17:30, David Holmes wrote: > On 8/06/2018 6:11 AM, Erik Joelsson wrote: >> I just don't think the extra work is warranted or should be >> prioritized at this point. I also cannot think of a combination of >> options required for what you are suggesting that wouldn't be >> confusing to the user. 
If someone truly feels like these flags are >> forced on them and can't live with them, we or preferably that person >> can fix it then. I don't think that's dictatorship. OpenJDK is still >> open source and anyone can contribute. > > I don't see why --enable-hardened-jdk and --enable-hardened-hotspot to > add to the right flags would be either complicated or confusing. > For me the confusion surrounds the difference between --enable-hardened-hotspot and --with-jvm-variants=server, hardened and making the user understand it. But sure, it is doable. Here is a new webrev with those two options as I interpret them. Here is the help text: ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk ????????????????????????? libraries (except the JVM), typically disabling ????????????????????????? speculative cti. [disabled] ?--enable-hardened-hotspot ????????????????????????? enable hardenening compiler flags for hotspot (all ????????????????????????? jvm variants), typically disabling speculative cti. ????????????????????????? To make hardening of hotspot a runtime choice, ????????????????????????? consider the "hardened" jvm variant instead of this ????????????????????????? option. [disabled] Note that this changes the default for jdk libraries to not enable hardening unless the user requests it. Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ /Erik From daniel.daugherty at oracle.com Fri Jun 8 21:53:19 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 8 Jun 2018 17:53:19 -0400 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: <6b0ae785-fa7c-56d6-35c4-de7dde391ef3@oracle.com> On 6/8/18 5:50 PM, Erik Joelsson wrote: > On 2018-06-07 17:30, David Holmes wrote: >> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>> I just don't think the extra work is warranted or should be >>> prioritized at this point. I also cannot think of a combination of >>> options required for what you are suggesting that wouldn't be >>> confusing to the user. If someone truly feels like these flags are >>> forced on them and can't live with them, we or preferably that >>> person can fix it then. I don't think that's dictatorship. OpenJDK >>> is still open source and anyone can contribute. >> >> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >> to add to the right flags would be either complicated or confusing. >> > For me the confusion surrounds the difference between > --enable-hardened-hotspot and --with-jvm-variants=server, hardened and > making the user understand it. But sure, it is doable. Here is a new > webrev with those two options as I interpret them. Here is the help text: > > ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk Typo: 'hardenening' -> 'hardening' > libraries (except the JVM), typically disabling > ????????????????????????? speculative cti. [disabled] > ?--enable-hardened-hotspot > ????????????????????????? enable hardenening compiler flags for > hotspot (all Typo: 'hardenening' -> 'hardening' > jvm variants), typically disabling speculative cti. > ????????????????????????? 
To make hardening of hotspot a runtime choice, > ????????????????????????? consider the "hardened" jvm variant instead > of this > ????????????????????????? option. [disabled] > > Note that this changes the default for jdk libraries to not enable > hardening unless the user requests it. > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ > > /Erik Dan From erik.joelsson at oracle.com Fri Jun 8 21:55:44 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 8 Jun 2018 14:55:44 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <6b0ae785-fa7c-56d6-35c4-de7dde391ef3@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> <6b0ae785-fa7c-56d6-35c4-de7dde391ef3@oracle.com> Message-ID: <33a3a9e0-3e48-a2f3-9ce2-1fc5aaeeb8c0@oracle.com> Doh, thanks, updated in place. /Erik On 2018-06-08 14:53, Daniel D. Daugherty wrote: > On 6/8/18 5:50 PM, Erik Joelsson wrote: >> On 2018-06-07 17:30, David Holmes wrote: >>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>> I just don't think the extra work is warranted or should be >>>> prioritized at this point. I also cannot think of a combination of >>>> options required for what you are suggesting that wouldn't be >>>> confusing to the user. If someone truly feels like these flags are >>>> forced on them and can't live with them, we or preferably that >>>> person can fix it then. I don't think that's dictatorship. OpenJDK >>>> is still open source and anyone can contribute. >>> >>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>> to add to the right flags would be either complicated or confusing. >>> >> For me the confusion surrounds the difference between >> --enable-hardened-hotspot and --with-jvm-variants=server, hardened >> and making the user understand it. But sure, it is doable. Here is a >> new webrev with those two options as I interpret them. Here is the >> help text: >> >> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk > > Typo: 'hardenening' -> 'hardening' > > >> libraries (except the JVM), typically disabling >> ????????????????????????? speculative cti. [disabled] >> ?--enable-hardened-hotspot >> ????????????????????????? enable hardenening compiler flags for >> hotspot (all > > Typo: 'hardenening' -> 'hardening' > > >> jvm variants), typically disabling speculative cti. >> ????????????????????????? To make hardening of hotspot a runtime choice, >> ????????????????????????? consider the "hardened" jvm variant instead >> of this >> ????????????????????????? option. [disabled] >> >> Note that this changes the default for jdk libraries to not enable >> hardening unless the user requests it. >> >> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >> >> /Erik > > Dan > From stuart.monteith at linaro.org Fri Jun 8 22:06:45 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Fri, 8 Jun 2018 23:06:45 +0100 Subject: RFR(S): 8204628 [AArch64] Assertion failure in BarrierSetAssembler::load_at Message-ID: Hello, Please review a patch for "8204628 [AArch64] Assertion failure in BarrierSetAssembler::load_at". 
Currently you won't be able to build aarch64 debug builds because of assertions tripping in load_at. JDK-8203353 refined the decorators somewhat, which worked fine on x86, but not aarch64. In this patch, I change the code to match x86's in that access_load_at is called instead of load_at, which means that AccessInternal::decorator_fixup is called, which fixes the expectations of the assertions in load_at and other places. The accesses of load_at and access_load_at (and store equivalents) now match x86. CR: https://bugs.openjdk.java.net/browse/JDK-8204628 webrev: http://cr.openjdk.java.net/~smonteith/8204628/webrev/ The assertions no longer trip when tested with C1 or C2. Many thanks, Stuart From Derek.White at cavium.com Fri Jun 8 22:22:46 2018 From: Derek.White at cavium.com (White, Derek) Date: Fri, 8 Jun 2018 22:22:46 +0000 Subject: [aarch64-port-dev ] RFR(S): 8204628 [AArch64] Assertion failure in BarrierSetAssembler::load_at In-Reply-To: References: Message-ID: Hi Stuart, Can you lose the declaration of BarrierSetAssembler bs at line 2116? This looks good otherwise. - Derek > -----Original Message----- > From: aarch64-port-dev [mailto:aarch64-port-dev- > bounces at openjdk.java.net] On Behalf Of Stuart Monteith > Sent: Friday, June 08, 2018 6:07 PM > To: aarch64-port-dev ; hotspot-dev > Source Developers ; Erik Osterlund > > Subject: [aarch64-port-dev ] RFR(S): 8204628 [AArch64] Assertion failure in > BarrierSetAssembler::load_at > > Hello, > Please review a patch for "8204628 [AArch64] Assertion failure in > BarrierSetAssembler::load_at". > > Currently you won't be able to build aarch64 debug builds because of > assertions tripping in load_at. > JDK-8203353 refined the decorators somewhat, which worked fine on x86, > but not aarch64. > > In this patch, I change the code to match x86's in that access_load_at is > called instead of load_at, which means that AccessInternal::decorator_fixup > is called, which fixes the expectations of the assertions in load_at and other > places. The accesses of load_at and access_load_at (and store equivalents) > now match x86. > > CR: https://bugs.openjdk.java.net/browse/JDK-8204628 > > webrev: http://cr.openjdk.java.net/~smonteith/8204628/webrev/ > > The assertions no longer trip when tested with C1 or C2. > > Many thanks, > Stuart From gromero at linux.vnet.ibm.com Fri Jun 8 23:36:42 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 8 Jun 2018 20:36:42 -0300 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> Message-ID: <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Hi Swati, Sorry, as usual I had to reserve a machine before trying it. I wanted to test it against a POWER9 with a NVIDIA Tesla V100 device attached. 
On such a machines numa nodes are quite sparse so I thought it would not be bad to check against them: available: 8 nodes (0,8,250-255) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 node 0 size: 261693 MB node 0 free: 233982 MB node 8 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 node 8 size: 261748 MB node 8 free: 257078 MB node 250 cpus: node 250 size: 0 MB node 250 free: 0 MB node 251 cpus: node 251 size: 0 MB node 251 free: 0 MB node 252 cpus: node 252 size: 15360 MB node 252 free: 15360 MB node 253 cpus: node 253 size: 0 MB node 253 free: 0 MB node 254 cpus: node 254 size: 0 MB node 254 free: 0 MB node 255 cpus: node 255 size: 15360 MB node 255 free: 15360 MB node distances: node 0 8 250 251 252 253 254 255 0: 10 40 80 80 80 80 80 80 8: 40 10 80 80 80 80 80 80 250: 80 80 10 80 80 80 80 80 251: 80 80 80 10 80 80 80 80 252: 80 80 80 80 10 80 80 80 253: 80 80 80 80 80 10 80 80 254: 80 80 80 80 80 80 10 80 255: 80 80 80 80 80 80 80 10 Please, find my comments below, inlined. On 06/01/2018 08:10 AM, Swati Sharma wrote: > I will fix the thread binding issue in a separate patch. I would like to address it in this change. I think it's not good to leave such a "dangling" behavior for the cpus once the memory bind issue is addressed. I suggest the following simple check to fix it (in accordance to what we've discussed previously, i.e. remap cpu/node considering configuration, bind, and distance in rebuild_cpu_to_node_map(): - if (!isnode_in_configured_nodes(nindex_to_node()->at(i))) { + if (!isnode_in_configured_nodes(nindex_to_node()->at(i)) || + !isnode_in_bound_nodes(nindex_to_node()->at(i))) { closest_distance = INT_MAX; ... for (size_t m = 0; m < node_num; m++) { - if (m != i && isnode_in_configured_nodes(nindex_to_node()->at(m))) { + if (m != i && + isnode_in_configured_nodes(nindex_to_node()->at(m)) && + isnode_in_bound_nodes(nindex_to_node()->at(m))) { I tested it against the aforementioned topology and against the following one: available: 4 nodes (0-3) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 node 0 size: 55685 MB node 0 free: 53196 MB node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 node 1 size: 53961 MB node 1 free: 49795 MB node 2 cpus: node 2 size: 21231 MB node 2 free: 21171 MB node 3 cpus: node 3 size: 22492 MB node 3 free: 22432 MB node distances: node 0 1 2 3 0: 10 20 40 40 1: 20 10 40 40 2: 40 40 10 20 3: 40 40 20 10 > Updated the previous patch by removing the structure and using the methods > provided by numa API.Here is the updated one with the changes(attached also). Thanks. > ========================PATCH========================= > diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp > --- a/src/hotspot/os/linux/os_linux.cpp > +++ b/src/hotspot/os/linux/os_linux.cpp ... > @@ -4962,8 +4972,9 @@ > if (!Linux::libnuma_init()) { > UseNUMA = false; > } else { > - if ((Linux::numa_max_node() < 1)) { > - // There's only one node(they start from 0), disable NUMA. 
> + if ((Linux::numa_max_node() < 1) || Linux::isbound_to_single_node()) { > + // If there's only one node(they start from 0) or if the process ^ let's fix this missing space ... > + // Check if bound to only one numa node. > + // Returns true if bound to a single numa node, otherwise returns false. > + static bool isbound_to_single_node() { > + int single_node = 0; > + struct bitmask* bmp = NULL; > + unsigned int node = 0; > + unsigned int max_number_of_nodes = 0; > + if (_numa_get_membind != NULL && _numa_bitmask_nbytes != NULL) { > + bmp = _numa_get_membind(); > + max_number_of_nodes = _numa_bitmask_nbytes(bmp) * 8; > + } else { > + return false; > + } > + for (node = 0; node < max_number_of_nodes; node++) { > + if (_numa_bitmask_isbitset(bmp, node)) { > + single_node++; > + if (single_node == 2) { > + return false; > + } > + } > + } > + if (single_node == 1) { > + return true; > + } else { > + return false; > + } > + } Now that numa_bitmask_isbitset() is being used (instead of the previous version that iterated through an array of longs, I suggest to tweak it a bit, removing the if (single_node == 2) check. I don't think removing it will hurt. In fact, numa_bitmask_nbytes() returns the total amount of bytes the bitmask can hold. However the total number of nodes in the system is usually much smaller than numa_bitmask_nbytes() * 8. So for a x86_64 system like that with only 2 numa nodes: available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 node 0 size: 131018 MB node 0 free: 101646 MB node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 node 1 size: 98304 MB node 1 free: 91692 MB node distances: node 0 1 0: 10 11 1: 11 10 numa_bitmask_nbytes(): 64 => max_number_of_node = 512 numa_max_node(): 1 => 1 + 1 iterations and the value returned by numa_bitmask_nbytes() does not change for different bind configurations. It's fixed. Another example is that on Power with 4 numa nodes: available: 4 nodes (0-1,16-17) node 0 cpus: 0 8 16 24 32 node 0 size: 130722 MB node 0 free: 71930 MB node 1 cpus: 40 48 56 64 72 node 1 size: 0 MB node 1 free: 0 MB node 16 cpus: 80 88 96 104 112 node 16 size: 130599 MB node 16 free: 75934 MB node 17 cpus: 120 128 136 144 152 node 17 size: 0 MB node 17 free: 0 MB node distances: node 0 1 16 17 0: 10 20 40 40 1: 20 10 40 40 16: 40 40 10 20 17: 40 40 20 10 numa_bitmask_nbytes(): 32 => max_number_of_node = 256 numa_max_node(): 17 => 17 + 1 iterations So I understand it's better to set the iteration over numa_max_node() instead of numa_bitmask_nbytes(). Even more for Intel (with contiguous nodes) than for Power. For the POWER9 with NVIDIA Tesla it would be a worst case: only 8 numa nodes but numa_max_node is 255! But I understand it's a very rare case and I'm fine with that. So what about: + if (_numa_get_membind != NULL && _numa_max_node != NULL) { + bmp = _numa_get_membind(); + highest_node_number = _numa_max_node(); + } else { + return false; + } + + for (node = 0; node <= highest_node_number; node++) { + if (_numa_bitmask_isbitset(bmp, node)) { + nodes++; + } + } + + if (nodes == 1) { + return true; + } else { + return false; + } For convenience, I hosted a patch with all the changes above here: http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch @Derek, could you please confirm that this change solves JDK-8189922? Swati, if Derek confirms it solves JDK-8189922? 
and you confirm it's fine for you I'll consider it's reviewed from my side and I can host that change for you so you can start a formal request for approval (remember I'm not a Reviewer, so you still need two additional reviews for the change). Finally, as a heads up, I could not find you (nor AMD?) in the OCA: http://www.oracle.com/technetwork/community/oca-486395.html#a If I'm not mistaken, you (individually) or AMD must sign it before contributing to OpenJDK. Best regards, Gustavo > ======================================================= > > Swati > > > > On Tue, May 29, 2018 at 6:53 PM, Gustavo Romero > wrote: > > > > Hi Swati, > > > > On 05/29/2018 06:12 AM, Swati Sharma wrote: > >> > >> I have incorporated some changes suggested by you. > >> > >> The use of struct bitmask's maskp for checking 64 bit in single iteration > >> is more optimized compared to numa_bitmask_isbitset() as by using this we > >> need to check each bit for 1024 times(SUSE case) and 64 times(Ubuntu Case). > >> If its fine to iterate at initialization time then I can change. > > > > > > Yes, I know, your version is more optimized. libnuma API should provide a > > ready-made solution for that... but that's another story. I'm curious to know > > what the time difference is on the worst case for both ways tho. Anyway, I > > just would like to point out that, regardless performance, it's possible to > > achieve the same result with current libnuma API. > > > > > >> For the answer to your question: > >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? > >> It can be checked based on numa_distance instead of picking up the lgrps randomly. > > > > > > That seems a good solution. You can do the checking very early, so > > lgrp_spaces()->find() does not even fail (return -1), i.e. by changing the CPU to > > node mapping on initialization (avoiding to change cas_allocate()). On that checking > > both numa distance and if the node is bound (or not) would be considered to generate > > the map. > > > > > > Best regards, > > Gustavo > > > >> Thanks, > >> Swati > >> > >> > >> > >> On Fri, May 25, 2018 at 4:54 AM, Gustavo Romero >> wrote: > >> > >> Hi Swati, > >> > >> > >> Thanks for CC:ing me. Sorry for the delay replying it, I had to reserve a few > >> specific machines before trying your patch :-) > >> > >> I think that UseNUMA's original task was to figure out the best binding > >> setup for the JVM automatically but I understand that it also has to be aware > >> that sometimes, for some (new) particular reasons, its binding task is > >> "modulated" by other external agents. Thanks for proposing a fix. > >> > >> I have just a question/concern on the proposal: how the JVM should behave if > >> CPUs are not bound in accordance to the bound memory nodes? 
For instance, what > >> happens if no '--cpunodebind' is passed and '--membind=0,1,16' is passed at > >> the same time on this numa topology: > >> > >> brianh at p215n12:~$ numactl -H > >> available: 4 nodes (0-1,16-17) > >> node 0 cpus: 0 1 2 3 8 9 10 11 16 17 18 19 24 25 26 27 32 33 34 35 > >> node 0 size: 65342 MB > >> node 0 free: 56902 MB > >> node 1 cpus: 40 41 42 43 48 49 50 51 56 57 58 59 64 65 66 67 72 73 74 75 > >> node 1 size: 65447 MB > >> node 1 free: 58322 MB > >> node 16 cpus: 80 81 82 83 88 89 90 91 96 97 98 99 104 105 106 107 112 113 114 115 > >> node 16 size: 65448 MB > >> node 16 free: 63096 MB > >> node 17 cpus: 120 121 122 123 128 129 130 131 136 137 138 139 144 145 146 147 152 153 154 155 > >> node 17 size: 65175 MB > >> node 17 free: 61522 MB > >> node distances: > >> node 0 1 16 17 > >> 0: 10 20 40 40 > >> 1: 20 10 40 40 > >> 16: 40 40 10 20 > >> 17: 40 40 20 10 > >> > >> > >> In that case JVM will spawn threads that will run on all CPUs, including those > >> CPUs in numa node 17. Then once in > >> src/hotspot/share/gc/parallel/mutableNUMASpace.cpp, in cas_allocate(): > >> > >> 834 // This version is lock-free. > >> 835 HeapWord* MutableNUMASpace::cas_allocate(size_t size) { > >> 836 Thread* thr = Thread::current(); > >> 837 int lgrp_id = thr->lgrp_id(); > >> 838 if (lgrp_id == -1 || !os::numa_has_group_homing()) { > >> 839 lgrp_id = os::numa_get_group_id(); > >> 840 thr->set_lgrp_id(lgrp_id); > >> 841 } > >> > >> a newly created thread will try to be mapped to a numa node given your CPU ID. > >> So if that CPU is in numa node 17 it will then not find it in: > >> > >> 843 int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals); > >> > >> and will fallback to a random map, picking up a random numa node among nodes > >> 0, 1, and 16: > >> > >> 846 if (i == -1) { > >> 847 i = os::random() % lgrp_spaces()->length(); > >> 848 } > >> > >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? > >> > >> I see that if one binds mem but leaves CPU unbound one has to know exactly what > >> she/he is doing, because it can be likely suboptimal. On the other hand, letting > >> the node being picked up randomly when there are memory nodes bound but no CPUs > >> seems even more suboptimal in some scenarios. Thus, should the JVM deal with it? > >> > >> @Zhengyu, do you have any opinion on that? > >> > >> Please find a few nits / comments inline. > >> > >> Note that I'm not a (R)eviewer so you still need two official reviews. > >> > >> > >> Best regards, > >> Gustavo > >> > >> On 05/21/2018 01:44 PM, Swati Sharma wrote: > >> > >> ======================PATCH============================== > >> diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp > >> --- a/src/hotspot/os/linux/os_linux.cpp > >> +++ b/src/hotspot/os/linux/os_linux.cpp > >> @@ -2832,14 +2832,42 @@ > >> // Map all node ids in which is possible to allocate memory. Also nodes are > >> // not always consecutively available, i.e. available from 0 to the highest > >> // node number. > >> + // If the nodes have been bound explicitly using numactl membind, then > >> + // allocate memory from those nodes only. > >> > >> > >> I think ok to place that comment on the same existing line, like: > >> > >> - // node number. > >> + // node number. If the nodes have been bound explicitly using numactl membind, > >> + // then allocate memory from these nodes only. 
> >> > >> > >> for (size_t node = 0; node <= highest_node_number; node++) { > >> - if (Linux::isnode_in_configured_nodes(node)) { > >> + if (Linux::isnode_in_bounded_nodes(node)) { > >> > >> ---------------------------------^ s/bounded/bound/ > >> > >> > >> ids[i++] = node; > >> } > >> } > >> return i; > >> } > >> +extern "C" struct bitmask { > >> + unsigned long size; /* number of bits in the map */ > >> + unsigned long *maskp; > >> +}; > >> > >> > >> I think it's possible to move the function below to os_linux.hpp with its > >> friends and cope with the forward declaration of 'struct bitmask*` by using the > >> functions from numa API, notably numa_bitmask_nbytes() and > >> numa_bitmask_isbitset() only, avoiding the member dereferecing issue and the > >> need to add the above struct explicitly. > >> > >> > >> +// Check if single memory node bound. > >> +// Returns true if single memory node bound. > >> > >> > >> I suggest a minuscule improvement, something like: > >> > >> +// Check if bound to only one numa node. > >> +// Returns true if bound to a single numa node, otherwise returns false. > >> > >> > >> +bool os::Linux::issingle_node_bound() { > >> > >> > >> What about s/issingle_node_bound/isbound_to_single_node/ ? > >> > >> > >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; > >> + if(!(bmp != NULL && bmp->maskp != NULL)) return false; > >> > >> -----^ > >> Are you sure this checking is necessary? I think if numa_get_membind succeed > >> bmp->maskp is always != NULL. > >> > >> Indentation here is odd. No space before 'if' and return on the same line. > >> > >> I would try to avoid lines over 80 chars. > >> > >> > >> + int issingle = 0; > >> + // System can have more than 64 nodes so check in all the elements of > >> + // unsigned long array > >> + for (unsigned long i = 0; i < (bmp->size / (8 * sizeof(unsigned long))); i++) { > >> + if (bmp->maskp[i] == 0) { > >> + continue; > >> + } else if ((bmp->maskp[i] & (bmp->maskp[i] - 1)) == 0) { > >> + issingle++; > >> + } else { > >> + return false; > >> + } > >> + } > >> + if (issingle == 1) > >> + return true; > >> + return false; > >> +} > >> + > >> > >> > >> As I mentioned, I think it could be moved to os_linux.hpp instead. 
Also, it > >> could be something like: > >> > >> +bool os::Linux::isbound_to_single_node(void) { > >> + struct bitmask* bmp; > >> + unsigned long mask; // a mask element in the mask array > >> + unsigned long max_num_masks; > >> + int single_node = 0; > >> + > >> + if (_numa_get_membind != NULL) { > >> + bmp = _numa_get_membind(); > >> + } else { > >> + return false; > >> + } > >> + > >> + max_num_masks = bmp->size / (8 * sizeof(unsigned long)); > >> + > >> + for (mask = 0; mask < max_num_masks; mask++) { > >> + if (bmp->maskp[mask] != 0) { // at least one numa node in the mask > >> + if (bmp->maskp[mask] & (bmp->maskp[mask] - 1) == 0) { > >> + single_node++; // a single numa node in the mask > >> + } else { > >> + return false; > >> + } > >> + } > >> + } > >> + > >> + if (single_node == 1) { > >> + return true; // only a single mask with a single numa node > >> + } else { > >> + return false; > >> + } > >> +} > >> > >> > >> bool os::get_page_info(char *start, page_info* info) { > >> return false; > >> } > >> @@ -2930,6 +2958,10 @@ > >> libnuma_dlsym(handle, "numa_bitmask_isbitset"))); > >> set_numa_distance(CAST_TO_FN_PTR(numa_distance_func_t, > >> libnuma_dlsym(handle, "numa_distance"))); > >> + set_numa_set_membind(CAST_TO_FN_PTR(numa_set_membind_func_t, > >> + libnuma_dlsym(handle, "numa_set_membind"))); > >> + set_numa_get_membind(CAST_TO_FN_PTR(numa_get_membind_func_t, > >> + libnuma_v2_dlsym(handle, "numa_get_membind"))); > >> if (numa_available() != -1) { > >> set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes")); > >> @@ -3054,6 +3086,8 @@ > >> os::Linux::numa_set_bind_policy_func_t os::Linux::_numa_set_bind_policy; > >> os::Linux::numa_bitmask_isbitset_func_t os::Linux::_numa_bitmask_isbitset; > >> os::Linux::numa_distance_func_t os::Linux::_numa_distance; > >> +os::Linux::numa_set_membind_func_t os::Linux::_numa_set_membind; > >> +os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind; > >> unsigned long* os::Linux::_numa_all_nodes; > >> struct bitmask* os::Linux::_numa_all_nodes_ptr; > >> struct bitmask* os::Linux::_numa_nodes_ptr; > >> @@ -4962,8 +4996,9 @@ > >> if (!Linux::libnuma_init()) { > >> UseNUMA = false; > >> } else { > >> - if ((Linux::numa_max_node() < 1)) { > >> - // There's only one node(they start from 0), disable NUMA. > >> + if ((Linux::numa_max_node() < 1) || Linux::issingle_node_bound()) { > >> + // If there's only one node(they start from 0) or if the process > >> + // is bound explicitly to a single node using membind, disable NUMA. 
> >> UseNUMA = false; > >> } > >> } > >> diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp > >> --- a/src/hotspot/os/linux/os_linux.hpp > >> +++ b/src/hotspot/os/linux/os_linux.hpp > >> @@ -228,6 +228,8 @@ > >> typedef int (*numa_tonode_memory_func_t)(void *start, size_t size, int node); > >> typedef void (*numa_interleave_memory_func_t)(void *start, size_t size, unsigned long *nodemask); > >> typedef void (*numa_interleave_memory_v2_func_t)(void *start, size_t size, struct bitmask* mask); > >> + typedef void (*numa_set_membind_func_t)(struct bitmask *mask); > >> + typedef struct bitmask* (*numa_get_membind_func_t)(void); > >> typedef void (*numa_set_bind_policy_func_t)(int policy); > >> typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n); > >> @@ -244,6 +246,8 @@ > >> static numa_set_bind_policy_func_t _numa_set_bind_policy; > >> static numa_bitmask_isbitset_func_t _numa_bitmask_isbitset; > >> static numa_distance_func_t _numa_distance; > >> + static numa_set_membind_func_t _numa_set_membind; > >> + static numa_get_membind_func_t _numa_get_membind; > >> static unsigned long* _numa_all_nodes; > >> static struct bitmask* _numa_all_nodes_ptr; > >> static struct bitmask* _numa_nodes_ptr; > >> @@ -259,6 +263,8 @@ > >> static void set_numa_set_bind_policy(numa_set_bind_policy_func_t func) { _numa_set_bind_policy = func; } > >> static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t func) { _numa_bitmask_isbitset = func; } > >> static void set_numa_distance(numa_distance_func_t func) { _numa_distance = func; } > >> + static void set_numa_set_membind(numa_set_membind_func_t func) { _numa_set_membind = func; } > >> + static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; } > >> static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; } > >> static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } > >> static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } > >> @@ -320,6 +326,15 @@ > >> } else > >> return 0; > >> } > >> + // Check if node in bounded nodes > >> > >> > >> + // Check if node is in bound node set. Maybe? > >> > >> > >> + static bool isnode_in_bounded_nodes(int node) { > >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; > >> + if (bmp != NULL && _numa_bitmask_isbitset != NULL && _numa_bitmask_isbitset(bmp, node)) { > >> + return true; > >> + } else > >> + return false; > >> + } > >> + static bool issingle_node_bound(); > >> > >> > >> Looks like it can be re-written like: > >> > >> + static bool isnode_in_bound_nodes(int node) { > >> + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != NULL) { > >> + return _numa_bitmask_isbitset(_numa_get_membind(), node); > >> + } else { > >> + return false; > >> + } > >> + } > >> > >> ? > >> > >> > >> }; > >> #endif // OS_LINUX_VM_OS_LINUX_HPP > >> > >> > >> > > From vladimir.kozlov at oracle.com Fri Jun 8 23:56:05 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 8 Jun 2018 16:56:05 -0700 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: On 6/7/18 12:11 PM, Thomas St?fe wrote: > On Thu, Jun 7, 2018 at 9:37 AM, Ren? 
Sch?nemann > wrote: >> Hi, >> >> can I please get a review for the following change: >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204476 >> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/01/ >> >> This change adds the following: >> >> (1) In CodeCache::print_summary prints the code cache full count for >> each code heap. >> >> (2) Adds additional counters to class CompileBroker: >> _total_compiler_stopped_count: The number of times the compilation >> has been stopped. >> _total_compiler_restarted_count: The number of times the >> compilation has been restarted. >> This counters are also added to CodeCache::print_summary. >> >> >> Thank you, >> Rene > > Hi Rene, > > Apart from what Vladimir wrote: > > - small nit: _total_compiler_restarted_count += 1; -> > _total_compiler_restarted_count ++; > > - Please either use %d for printing int or change the type to > uint32_t. But seeing that the other counters are int, I would use %d. Agree. I think Rene used existing code as example but we should use %d for signed integers. > > - More of a question to others: I am not familiar with compiler > coding, but signed int as counters seem a bit small? Is there no > danger of ever overflowing on long running VMs? Or does it not matter > if they do? I never observed compilations count over 1M (10^6). And there is no issue if they overflow - it is just number printed in statistic and logs. We have also use it as compilation task id and I think it is also safe. Note, in stable application case you should not have a lot of compilations. It should not be more than application's + system's hot methods. New counters in this change are much smaller - they count how many times CodeCache become full which should not happen in normal case. Regards, Vladimir > > Thanks, Thomas > From sangheon.kim at oracle.com Sat Jun 9 05:09:09 2018 From: sangheon.kim at oracle.com (sangheon.kim at oracle.com) Date: Fri, 8 Jun 2018 22:09:09 -0700 Subject: RFR(M) : 8202946 : [TESTBUG] Open source VM testbase OOM tests In-Reply-To: <2DD8F9C6-8471-4BF6-8573-0DA3F2B6C66B@oracle.com> References: <2DD8F9C6-8471-4BF6-8573-0DA3F2B6C66B@oracle.com> Message-ID: Hi Igor, On 5/15/18 4:16 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8202946/webrev.00/index.html >> 1619 lines changed: 1619 ins; 0 del; 0 mod; > Hi all, > > could you please review this patch which open sources OOM tests from VM testbase? these tests test OutOfMemoryError throwing in different scenarios. > > As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. > > JBS: https://bugs.openjdk.java.net/browse/JDK-8202946 > webrev: http://cr.openjdk.java.net/~iignatyev//8202946/webrev.00/index.html > testing: :vmTestbase_vm_oom test group Webrev.00 looks good to me but have minor nits. ------------------- test/hotspot/jtreg/TEST.groups 1164 # Test for OOME re-throwing after Java Heap exchausting - Typo: exchausting -> exhausting ------------------- test/hotspot/jtreg/vmTestbase/vm/oom/OOMTraceTest.java ? 68???? protected boolean isAlwaysOOM() { ? 69???????? return expectOOM; ? 70???? } - (optional) It is returning the variable of "expectOOM" but the name is "isAlwaysOOM" which makes me confused. 
If you prefer "isXXX" form of name, how about "isExpectingOOM()" etc..? Or you can defer this renaming, as you are planning to rework those tests. I don't need a new webrev for these. ------------------- Just random comment. - It would be better to use small fixed Java Heap size to trigger OOME for short test running time. Thanks, Sangheon > > Thanks, > -- Igor From erik.osterlund at oracle.com Sat Jun 9 06:47:29 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Sat, 9 Jun 2018 08:47:29 +0200 Subject: RFR(S): 8204628 [AArch64] Assertion failure in BarrierSetAssembler::load_at In-Reply-To: References: Message-ID: <2F2292FF-8DF8-4082-B1A1-804190993F7B@oracle.com> Hi Stuart, Looks good. Thanks, /Erik > On 9 Jun 2018, at 00:06, Stuart Monteith wrote: > > Hello, > Please review a patch for "8204628 [AArch64] Assertion failure in > BarrierSetAssembler::load_at". > > Currently you won't be able to build aarch64 debug builds because of > assertions tripping in load_at. > JDK-8203353 refined the decorators somewhat, which worked fine on x86, > but not aarch64. > > In this patch, I change the code to match x86's in that access_load_at > is called instead of load_at, which means that > AccessInternal::decorator_fixup is called, which fixes the > expectations of the assertions in load_at and other places. The > accesses of load_at and access_load_at (and store equivalents) now > match x86. > > CR: https://bugs.openjdk.java.net/browse/JDK-8204628 > > webrev: http://cr.openjdk.java.net/~smonteith/8204628/webrev/ > > The assertions no longer trip when tested with C1 or C2. > > Many thanks, > Stuart From thomas.stuefe at gmail.com Sat Jun 9 15:25:36 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Sat, 9 Jun 2018 17:25:36 +0200 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: >> >> - More of a question to others: I am not familiar with compiler >> coding, but signed int as counters seem a bit small? Is there no >> danger of ever overflowing on long running VMs? Or does it not matter >> if they do? > > > I never observed compilations count over 1M (10^6). And there is no issue if > they overflow - it is just number printed in statistic and logs. We have > also use it as compilation task id and I think it is also safe. > > Note, in stable application case you should not have a lot of compilations. > It should not be more than application's + system's hot methods. > > New counters in this change are much smaller - they count how many times > CodeCache become full which should not happen in normal case. > Ah, thanks for clarifying. 
Best Regards, Thomas > Regards, > Vladimir > >> >> Thanks, Thomas >> > From david.holmes at oracle.com Sun Jun 10 13:28:26 2018 From: david.holmes at oracle.com (David Holmes) Date: Sun, 10 Jun 2018 23:28:26 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: <66d3c352-01fc-a06c-c9e3-a5ccf317166b@oracle.com> On 9/06/2018 7:50 AM, Erik Joelsson wrote: > On 2018-06-07 17:30, David Holmes wrote: >> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>> I just don't think the extra work is warranted or should be >>> prioritized at this point. I also cannot think of a combination of >>> options required for what you are suggesting that wouldn't be >>> confusing to the user. If someone truly feels like these flags are >>> forced on them and can't live with them, we or preferably that person >>> can fix it then. I don't think that's dictatorship. OpenJDK is still >>> open source and anyone can contribute. >> >> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot to >> add to the right flags would be either complicated or confusing. >> > For me the confusion surrounds the difference between > --enable-hardened-hotspot and --with-jvm-variants=server, hardened and That's the problem: "hardened" is not a jvm-variant as we have always defined that! "hardened" is a variation in the same way as product vs fastdebug versus slow-debug versus (the old) optimized. It is _not_ at all the same kind of thing as server versus client versus zero etc. The desire to ship "hardened" in the same image as non-hardened is what is causing the semantic conflict here. It is like shipping a product and debug VM together. Sure you can do it, but that's not how we've categorised things in the past. I understand the need to make things work this way, so in that sense selecting jvm-variant=hardened should be seen as specifying "--enable-hardened-hotspot --enable-hardened-jdk". But jvm-variant=hardened is really jvm-variant=hardened-server. > making the user understand it. But sure, it is doable. Here is a new > webrev with those two options as I interpret them. Here is the help text: > > ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk > ????????????????????????? libraries (except the JVM), typically disabling > ????????????????????????? speculative cti. [disabled] > ?--enable-hardened-hotspot > ????????????????????????? enable hardenening compiler flags for hotspot > (all > ????????????????????????? jvm variants), typically disabling > speculative cti. > ????????????????????????? To make hardening of hotspot a runtime choice, > ????????????????????????? consider the "hardened" jvm variant instead > of this > ????????????????????????? option. [disabled] > > Note that this changes the default for jdk libraries to not enable > hardening unless the user requests it. That's your call. I don't care what the default is as long as the developer has control over it. 
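For concreteness, the two invocation styles being discussed would look roughly like this (only a sketch pieced together from the proposed help text and the webrevs in this thread; the option and variant names are still being debated, so treat them as placeholders rather than final syntax):

  # webrev.04 style: hardening as explicit configure switches
  bash configure --enable-hardened-hotspot --enable-hardened-jdk

  # webrev.02 style: hardening as an additional JVM variant built alongside
  # server, so the hardened vs. non-hardened choice is deferred to run time
  bash configure --with-jvm-variants=server,hardened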
Thanks, David > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ > > /Erik From magnus.ihse.bursie at oracle.com Mon Jun 11 07:10:23 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 11 Jun 2018 09:10:23 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: On 2018-06-08 23:50, Erik Joelsson wrote: > On 2018-06-07 17:30, David Holmes wrote: >> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>> I just don't think the extra work is warranted or should be >>> prioritized at this point. I also cannot think of a combination of >>> options required for what you are suggesting that wouldn't be >>> confusing to the user. If someone truly feels like these flags are >>> forced on them and can't live with them, we or preferably that >>> person can fix it then. I don't think that's dictatorship. OpenJDK >>> is still open source and anyone can contribute. >> >> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >> to add to the right flags would be either complicated or confusing. >> > For me the confusion surrounds the difference between > --enable-hardened-hotspot and --with-jvm-variants=server, hardened and > making the user understand it. But sure, it is doable. Here is a new > webrev with those two options as I interpret them. Here is the help text: > > ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk > ????????????????????????? libraries (except the JVM), typically disabling > ????????????????????????? speculative cti. [disabled] > ?--enable-hardened-hotspot > ????????????????????????? enable hardenening compiler flags for > hotspot (all > ????????????????????????? jvm variants), typically disabling > speculative cti. > ????????????????????????? To make hardening of hotspot a runtime choice, > ????????????????????????? consider the "hardened" jvm variant instead > of this > ????????????????????????? option. [disabled] > > Note that this changes the default for jdk libraries to not enable > hardening unless the user requests it. > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ Hold it, hold it! I'm not sure how we ended up here, but I don't like it at all. :-( I think Eriks initial patch is much better than this. Some arguments in random order to defend this position: 1) Why should we have a configure option to disable security relevant flags for the JDK, if there has been no measured negative effect? We don't do this for any other compiler flags, especially not security relevant ones! I've re-read the entire thread to see if I could understand what could possibly motivate this, but the only thing I can find is David Holmes vague fear that these flags would not be well-tested enough. Let me counter with my own vague guesses: I believe the spectre mitigation methods to have been fully and properly tested, since they are rolled-out massively on all products. And let me complement with my own fear: the PR catastrophe if OpenJDK were *not* built with spectre mitigations, and someone were to exploit that! 
In fact, I could even argue that "server" should be hardened *by default*, and that we should instead introduce a non-hardened JVM named something akin to "quick-but-dangerous-server" instead. But I realize that a 25% performance hit is hard to swallow, so I won't push this agenda. 2) It is by no means clear that "--enable-hardened-jdk" does not harden all aspects of the JDK! If we should keep the option (which I definitely do not think we should!) it should be renamed to "--enable-hardened-libraries", or something like that. And it should be on by default, so it should be a "--disabled-hardened-jdk-libraries". Also, the general-sounding name "hardened" sounds like it might encompass more things than it does. What if I disabled a hardened jdk build, should I still get stack banging protection? If so, you need to move a lot more security-related flags to this option. (And, just to be absolutely clear: I don't think you should do that.) 3) Having two completely different ways of turning on Spectre protection for hotspot is just utterly confusing! This was a perfect example of how to use the JVM features, just as in the original patch. If you want to have spectre mitigation enabled for both server and client, by default, you would just need to run "configure --with-jvm-variants=server,client --with-jvm-features=no-speculative-cti", which will enable that feature for all variants. That's not really hard *at all* for anyone building OpenJDK. And it's way clearer what will happen, than a --enable-hardened-hotspot. 4) If you are a downstream provider building OpenJDK and you are dead set on not including Spectre mitigations in the JDK libraries, despite being shown to have no negative effects, then you can do just as any other downstream user with highly specialized requirements, and patch the source. I have no sympathies for this; I can't stop it but I don't think there's any reason for us to complicate the code to support this unlikely case. So, to recap, I think the webrev as published in http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with "altserver" renamed to "hardened") is the way to go. /Magnus > > /Erik From magnus.ihse.bursie at oracle.com Mon Jun 11 07:21:23 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 11 Jun 2018 09:21:23 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <66d3c352-01fc-a06c-c9e3-a5ccf317166b@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> <66d3c352-01fc-a06c-c9e3-a5ccf317166b@oracle.com> Message-ID: <4c7694f3-6b65-270b-86a7-6464d43d1986@oracle.com> On 2018-06-10 15:28, David Holmes wrote: > On 9/06/2018 7:50 AM, Erik Joelsson wrote: >> On 2018-06-07 17:30, David Holmes wrote: >>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>> I just don't think the extra work is warranted or should be >>>> prioritized at this point. I also cannot think of a combination of >>>> options required for what you are suggesting that wouldn't be >>>> confusing to the user. If someone truly feels like these flags are >>>> forced on them and can't live with them, we or preferably that >>>> person can fix it then. I don't think that's dictatorship. 
OpenJDK >>>> is still open source and anyone can contribute. >>> >>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>> to add to the right flags would be either complicated or confusing. >>> >> For me the confusion surrounds the difference between >> --enable-hardened-hotspot and --with-jvm-variants=server, hardened and > > That's the problem: "hardened" is not a jvm-variant as we have always > defined that! "hardened" is a variation in the same way as product vs > fastdebug versus slow-debug versus (the old) optimized. It is _not_ at > all the same kind of thing as server versus client versus zero etc. > The desire to ship "hardened" in the same image as non-hardened is > what is causing the semantic conflict here. It is like shipping a > product and debug VM together. Sure you can do it, but that's not how > we've categorised things in the past. I disagree. The "no-speculative-cti" is a perfectly fine JVM feature, which can be applied to any JVM variant. It is not a JVM feature as a separate software component (like cmsgc or compiler1) that could be left in or kept out and that affects the functionality of hotspot. Instead, it is a JVM feature very much like the existing link-time-opt, in that it affects all aspects of the JVM; not the functionality, but the performance (and security). The way we bundle a certain set of JVM features as a named JVM variant has always been a bit, well, semantically odd, but it has served us okay in the past and serve us just as well for this fix. > > I understand the need to make things work this way, so in that sense > selecting jvm-variant=hardened should be seen as specifying > "--enable-hardened-hotspot --enable-hardened-jdk". But > jvm-variant=hardened is really jvm-variant=hardened-server. Yes, jvm-variant=hardened is actually hardned-server. Despite the longer name, it might be more clear to use that name. It ties in into a bit into Erik's original "altserver" proposal. I think the reason just "hardened" sounds like a reasonable alternative to the more proper but longer "hardened-server" is due to how "server" has become the mainstream variant, even for clients, and "client" feels like it's being put on death row. Nobody really believes that it will survive in the long term, and nowadays Oracle don't even build it anymore (we stopped doing that when we stopped doing 32-bit builds). So "server" is increasingly incorrectly named, and should really just be considered a legacy name for what should perhaps be "default" or so. /Magnus > >> making the user understand it. But sure, it is doable. Here is a new >> webrev with those two options as I interpret them. Here is the help >> text: >> >> ??--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >> ?????????????????????????? libraries (except the JVM), typically >> disabling >> ?????????????????????????? speculative cti. [disabled] >> ??--enable-hardened-hotspot >> ?????????????????????????? enable hardenening compiler flags for >> hotspot (all >> ?????????????????????????? jvm variants), typically disabling >> speculative cti. >> ?????????????????????????? To make hardening of hotspot a runtime >> choice, >> ?????????????????????????? consider the "hardened" jvm variant >> instead of this >> ?????????????????????????? option. [disabled] >> >> Note that this changes the default for jdk libraries to not enable >> hardening unless the user requests it. > > That's your call. I don't care what the default is as long as the > developer has control over it. 
> > Thanks, > David > >> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >> >> /Erik From david.holmes at oracle.com Mon Jun 11 07:38:14 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 Jun 2018 17:38:14 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: <28729cf3-2951-6557-fb00-28ddf52fd110@oracle.com> Hi Magnus, On 11/06/2018 5:10 PM, Magnus Ihse Bursie wrote: > On 2018-06-08 23:50, Erik Joelsson wrote: >> On 2018-06-07 17:30, David Holmes wrote: >>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>> I just don't think the extra work is warranted or should be >>>> prioritized at this point. I also cannot think of a combination of >>>> options required for what you are suggesting that wouldn't be >>>> confusing to the user. If someone truly feels like these flags are >>>> forced on them and can't live with them, we or preferably that >>>> person can fix it then. I don't think that's dictatorship. OpenJDK >>>> is still open source and anyone can contribute. >>> >>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>> to add to the right flags would be either complicated or confusing. >>> >> For me the confusion surrounds the difference between >> --enable-hardened-hotspot and --with-jvm-variants=server, hardened and >> making the user understand it. But sure, it is doable. Here is a new >> webrev with those two options as I interpret them. Here is the help text: >> >> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >> ????????????????????????? libraries (except the JVM), typically disabling >> ????????????????????????? speculative cti. [disabled] >> ?--enable-hardened-hotspot >> ????????????????????????? enable hardenening compiler flags for >> hotspot (all >> ????????????????????????? jvm variants), typically disabling >> speculative cti. >> ????????????????????????? To make hardening of hotspot a runtime choice, >> ????????????????????????? consider the "hardened" jvm variant instead >> of this >> ????????????????????????? option. [disabled] >> >> Note that this changes the default for jdk libraries to not enable >> hardening unless the user requests it. >> >> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ > > Hold it, hold it! I'm not sure how we ended up here, but I don't like it > at all. :-( > > I think Eriks initial patch is much better than this. Some arguments in > random order to defend this position: > > 1) Why should we have a configure option to disable security relevant > flags for the JDK, if there has been no measured negative effect? We > don't do this for any other compiler flags, especially not security > relevant ones! > > I've re-read the entire thread to see if I could understand what could > possibly motivate this, but the only thing I can find is David Holmes > vague fear that these flags would not be well-tested enough. Let me > counter with my own vague guesses: I believe the spectre mitigation > methods to have been fully and properly tested, since they are > rolled-out massively on all products. 
And let me complement with my own > fear: the PR catastrophe if OpenJDK were *not* built with spectre > mitigations, and someone were to exploit that! All I'm looking for is the ability to select whether you can build with or without this "hardening". The default OpenJDK build can of course churn out a "hardened" implementation. Anyone who opts out of that is on their own. I don't share your faith or confidence in the quality of any software rushed out in a fairly short space of time. Prudence, if nothing else, says you should be able to not build this way IMHO. > In fact, I could even argue that "server" should be hardened *by > default*, and that we should instead introduce a non-hardened JVM named > something akin to "quick-but-dangerous-server" instead. But I realize > that a 25% performance hit is hard to swallow, so I won't push this agenda. > > 2) It is by no means clear that "--enable-hardened-jdk" does not harden > all aspects of the JDK! If we should keep the option (which I definitely > do not think we should!) it should be renamed to > "--enable-hardened-libraries", or something like that. And it should be > on by default, so it should be a "--disabled-hardened-jdk-libraries". > > Also, the general-sounding name "hardened" sounds like it might > encompass more things than it does. What if I disabled a hardened jdk > build, should I still get stack banging protection? If so, you need to > move a lot more security-related flags to this option. (And, just to be > absolutely clear: I don't think you should do that.) > > 3) Having two completely different ways of turning on Spectre protection > for hotspot is just utterly confusing! This was a perfect example of how > to use the JVM features, just as in the original patch. Okay. I have had some confusion over "features" versus "variants" based on Eriks earlier comments. Erik's email from June 6 first states: "I agree, and you sort of can. By adding the jvm feature "no-speculative-cti" to any jvm variant, you get the flags." but then later said: "We don't see the point in giving the choice on the JDK libraries ..." by which I now think he meant not giving the choice at the VM variant level, but I mistook it for meaning at the "feature" level. Hence I came back with the two flags suggestion. If we can already select features arbitrarily at configure time then this is all addressed already. Apologies for the confusion. David ----- > If you want to have spectre mitigation enabled for both server and > client, by default, you would just need to run "configure > --with-jvm-variants=server,client > --with-jvm-features=no-speculative-cti", which will enable that feature > for all variants. That's not really hard *at all* for anyone building > OpenJDK. And it's way clearer what will happen, than a > --enable-hardened-hotspot. > > 4) If you are a downstream provider building OpenJDK and you are dead > set on not including Spectre mitigations in the JDK libraries, despite > being shown to have no negative effects, then you can do just as any > other downstream user with highly specialized requirements, and patch > the source. I have no sympathies for this; I can't stop it but I don't > think there's any reason for us to complicate the code to support this > unlikely case. > > So, to recap, I think the webrev as published in > http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with "altserver" > renamed to "hardened") is the way to go. 
> > /Magnus > > > >> >> /Erik > From robbin.ehn at oracle.com Mon Jun 11 08:07:07 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 11 Jun 2018 10:07:07 +0200 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: Hi Bob, On 06/07/2018 07:43 PM, Bob Vandette wrote: > Can I get one more reviewer for this RFE so I can integrate it? > >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 Seems okay. Metrics.java "Returns the length of the operating system time slice" Note that is is only true if you are using a batch scheduler. Otherwise this period may be split on multiple 'time slices'. In printSystemMetrics there is no units, maybe intentional? Do we have support now in mach5 for docker jtreg, or do we still run these separate? You can ship it. Thanks for fixing, and super thanks for fixing the bug in PlainRead also! /Robbin > > Mandy Chung has reviewed this change. > > I?ve run Mach5 hotspot and core lib tests. > > I?ve reviewed the tests which were written by Harsha Wardhana > > I filed a CSR for the command line change and it?s now approved and closed. > > Thanks, > Bob. > > >> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >> >> Please review the following RFE which adds an internal API, along with jtreg tests that provide >> access to Docker container configuration data and metrics. In addition to the API which we hope to >> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >> option to -XshowSettings:system than dumps out the container or host cgroup confguration >> information. See the sample output below: >> >> RFE: Container Metrics >> >> https://bugs.openjdk.java.net/browse/JDK-8203357 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >> >> >> This commit will also include a fix for the following bug. 
>> >> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >> >> https://bugs.openjdk.java.net/browse/JDK-8203691 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >> >> SAMPLE USAGE and OUTPUT: >> >> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >> ./java -XshowSettings:system >> Operating System Metrics: >> Provider: cgroupv1 >> Effective CPU Count: 4 >> CPU Period: 100000 >> CPU Quota: -1 >> CPU Shares: -1 >> List of Processors, 4 total: >> 4 5 6 7 >> List of Effective Processors, 4 total: >> 4 5 6 7 >> List of Memory Nodes, 2 total: >> 0 1 >> List of Available Memory Nodes, 2 total: >> 0 1 >> CPUSet Memory Pressure Enabled: false >> Memory Limit: 256.00M >> Memory Soft Limit: Unlimited >> Memory & Swap Limit: 512.00M >> Kernel Memory Limit: Unlimited >> TCP Memory Limit: Unlimited >> Out Of Memory Killer Enabled: true >> >> TEST RESULTS: >> >> testing runtime container APIs >> Directory "JTwork" not found: creating >> Passed: runtime/containers/cgroup/PlainRead.java >> Passed: runtime/containers/docker/DockerBasicTest.java >> Passed: runtime/containers/docker/TestCPUAwareness.java >> Passed: runtime/containers/docker/TestCPUSets.java >> Passed: runtime/containers/docker/TestMemoryAwareness.java >> Passed: runtime/containers/docker/TestMisc.java >> Test results: passed: 6 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing jdk.internal.platform APIs >> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >> Test results: passed: 4 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing -XshowSettings:system launcher option >> Passed: tools/launcher/Settings.java >> Test results: passed: 1 >> >> >> Bob. >> >> > From magnus.ihse.bursie at oracle.com Mon Jun 11 08:19:24 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 11 Jun 2018 10:19:24 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <28729cf3-2951-6557-fb00-28ddf52fd110@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> <28729cf3-2951-6557-fb00-28ddf52fd110@oracle.com> Message-ID: <37e8bd6d-b906-b9d1-6e41-2229236c9c8c@oracle.com> On 2018-06-11 09:38, David Holmes wrote: > Hi Magnus, > > On 11/06/2018 5:10 PM, Magnus Ihse Bursie wrote: >> On 2018-06-08 23:50, Erik Joelsson wrote: >>> On 2018-06-07 17:30, David Holmes wrote: >>>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>>> I just don't think the extra work is warranted or should be >>>>> prioritized at this point. I also cannot think of a combination of >>>>> options required for what you are suggesting that wouldn't be >>>>> confusing to the user. If someone truly feels like these flags are >>>>> forced on them and can't live with them, we or preferably that >>>>> person can fix it then. I don't think that's dictatorship. OpenJDK >>>>> is still open source and anyone can contribute. 
>>>> >>>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>>> to add to the right flags would be either complicated or confusing. >>>> >>> For me the confusion surrounds the difference between >>> --enable-hardened-hotspot and --with-jvm-variants=server, hardened >>> and making the user understand it. But sure, it is doable. Here is a >>> new webrev with those two options as I interpret them. Here is the >>> help text: >>> >>> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >>> ????????????????????????? libraries (except the JVM), typically >>> disabling >>> ????????????????????????? speculative cti. [disabled] >>> ?--enable-hardened-hotspot >>> ????????????????????????? enable hardenening compiler flags for >>> hotspot (all >>> ????????????????????????? jvm variants), typically disabling >>> speculative cti. >>> ????????????????????????? To make hardening of hotspot a runtime >>> choice, >>> ????????????????????????? consider the "hardened" jvm variant >>> instead of this >>> ????????????????????????? option. [disabled] >>> >>> Note that this changes the default for jdk libraries to not enable >>> hardening unless the user requests it. >>> >>> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >> >> Hold it, hold it! I'm not sure how we ended up here, but I don't like >> it at all. :-( >> >> I think Eriks initial patch is much better than this. Some arguments >> in random order to defend this position: >> >> 1) Why should we have a configure option to disable security relevant >> flags for the JDK, if there has been no measured negative effect? We >> don't do this for any other compiler flags, especially not security >> relevant ones! >> >> I've re-read the entire thread to see if I could understand what >> could possibly motivate this, but the only thing I can find is David >> Holmes vague fear that these flags would not be well-tested enough. >> Let me counter with my own vague guesses: I believe the spectre >> mitigation methods to have been fully and properly tested, since they >> are rolled-out massively on all products. And let me complement with >> my own fear: the PR catastrophe if OpenJDK were *not* built with >> spectre mitigations, and someone were to exploit that! > > All I'm looking for is the ability to select whether you can build > with or without this "hardening". The default OpenJDK build can of > course churn out a "hardened" implementation. Anyone who opts out of > that is on their own. With Erik's original proposal (webrev.02), you will, by default, get a hotspot "server" JVM variant that is identical to what you got without the patch. This should definitely cover your case. You will also get all the non-hotspot libraries built as hardened. You *can* get the JDK libraries built non-hardened, by removing the ${$2NO_SPECULATIVE_CTI_CFLAGS} from the line $1_CFLAGS_JDK="${$1_DEFINES_CPU_JDK} ${$1_CFLAGS_CPU} ${$1_CFLAGS_CPU_JDK} ${$1_TOOLCHAIN_CFLAGS} ${$2NO_SPECULATIVE_CTI_CFLAGS}". As I said, I believe this is enough to support that case. > > I don't share your faith or confidence in the quality of any software > rushed out in a fairly short space of time. Prudence, if nothing else, > says you should be able to not build this way IMHO. AFAIU, these compiler flags has received extensive testing inside Oracle. It is also part of a global, high-visibility project, where key players have had time to prepare for handling the issues ahead of the public awareness of the exploits. 
*And* it's been almost half a year since the Spectre exploit was made public. I have much more faith in enabling these flags than I'd have for e.g. upgrading to a newer version of Solaris Studio. :-) > >> In fact, I could even argue that "server" should be hardened *by >> default*, and that we should instead introduce a non-hardened JVM >> named something akin to "quick-but-dangerous-server" instead. But I >> realize that a 25% performance hit is hard to swallow, so I won't >> push this agenda. >> >> 2) It is by no means clear that "--enable-hardened-jdk" does not >> harden all aspects of the JDK! If we should keep the option (which I >> definitely do not think we should!) it should be renamed to >> "--enable-hardened-libraries", or something like that. And it should >> be on by default, so it should be a "--disabled-hardened-jdk-libraries". >> >> Also, the general-sounding name "hardened" sounds like it might >> encompass more things than it does. What if I disabled a hardened jdk >> build, should I still get stack banging protection? If so, you need >> to move a lot more security-related flags to this option. (And, just >> to be absolutely clear: I don't think you should do that.) >> >> 3) Having two completely different ways of turning on Spectre >> protection for hotspot is just utterly confusing! This was a perfect >> example of how to use the JVM features, just as in the original patch. > > Okay. I have had some confusion over "features" versus "variants" > based on Eriks earlier comments. Erik's email from June 6 first states: > > "I agree, and you sort of can. By adding the jvm feature > "no-speculative-cti" to any jvm variant, you get the flags." > > but then later said: > > "We don't see the point in giving the choice on the JDK libraries ..." > > by which I now think he meant not giving the choice at the VM variant > level, but I mistook it for meaning at the "feature" level. Hence I > came back with the two flags suggestion. If we can already select > features arbitrarily at configure time then this is all addressed > already. > > Apologies for the confusion. Well, now *I* am confused. ;-) Let's separarate two components: hotspot, and the rest of the native code (the "JDK libraries"). For hotspot, the following holds: * You can enable or disable no-speculative-cti as a feature on the configure command line, by "--with-jvm-features=no-speculative-cti" (to enable) or "--with-jvm-features=-no-speculative-cti" (to disable). This change applies to *all* JVM variants that are built; there is currently no command-line support for enabling or disabling features on a per-JVM-variant level. (There's no real hinderance to doing so; it's just not yet implemented). * Erik defines a new JVM variant, which is identical to server, but has also no-speculative-cti enabled. This is not built by default, but can be enabled by --with-jvm-variants=server,hardened. Oracle will build OpenJDK this way. If you do not want hardening, you just do not build the "hardened" variant, and you do not add the no-speculative-cti feature. If you want hardening all over the line (and no runtime user choice), you add --with-jvm-feature=no-speculative-cti. Alright? Erik's second comment "We don't see the point in giving the choice on the JDK libraries ..." applies not to Hotspot, but to the rest of the native libraries (the "JDK libraries"). Here, we will just add the Spectre mitigation flags, without a user runtime choice of using hardened or non-hardened libraries. 
The reason for this is that the hardening did not have a measurable performance impact. The builder of the JDK still has the ability to build the JDK libraries without the hardening flags, but that will require modifying the configure script, just as the case is today if the builder should wish to disable any other of all the flags we enable by default. /Magnus > > David > ----- > >> If you want to have spectre mitigation enabled for both server and >> client, by default, you would just need to run "configure >> --with-jvm-variants=server,client >> --with-jvm-features=no-speculative-cti", which will enable that >> feature for all variants. That's not really hard *at all* for anyone >> building OpenJDK. And it's way clearer what will happen, than a >> --enable-hardened-hotspot. >> >> 4) If you are a downstream provider building OpenJDK and you are dead >> set on not including Spectre mitigations in the JDK libraries, >> despite being shown to have no negative effects, then you can do just >> as any other downstream user with highly specialized requirements, >> and patch the source. I have no sympathies for this; I can't stop it >> but I don't think there's any reason for us to complicate the code to >> support this unlikely case. >> >> So, to recap, I think the webrev as published in >> http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with >> "altserver" renamed to "hardened") is the way to go. >> >> /Magnus >> >> >> >>> >>> /Erik >> From david.holmes at oracle.com Mon Jun 11 08:24:11 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 Jun 2018 18:24:11 +1000 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: <37e8bd6d-b906-b9d1-6e41-2229236c9c8c@oracle.com> References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> <28729cf3-2951-6557-fb00-28ddf52fd110@oracle.com> <37e8bd6d-b906-b9d1-6e41-2229236c9c8c@oracle.com> Message-ID: Sorry this is making my head spin. Doh! jvm-features only apply to the JVM. So I retract my last email - sorry. And with that I'm going to just bow out. You and Erik can figure it out. Thanks, David On 11/06/2018 6:19 PM, Magnus Ihse Bursie wrote: > > On 2018-06-11 09:38, David Holmes wrote: >> Hi Magnus, >> >> On 11/06/2018 5:10 PM, Magnus Ihse Bursie wrote: >>> On 2018-06-08 23:50, Erik Joelsson wrote: >>>> On 2018-06-07 17:30, David Holmes wrote: >>>>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>>>> I just don't think the extra work is warranted or should be >>>>>> prioritized at this point. I also cannot think of a combination of >>>>>> options required for what you are suggesting that wouldn't be >>>>>> confusing to the user. If someone truly feels like these flags are >>>>>> forced on them and can't live with them, we or preferably that >>>>>> person can fix it then. I don't think that's dictatorship. OpenJDK >>>>>> is still open source and anyone can contribute. >>>>> >>>>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>>>> to add to the right flags would be either complicated or confusing. >>>>> >>>> For me the confusion surrounds the difference between >>>> --enable-hardened-hotspot and --with-jvm-variants=server, hardened >>>> and making the user understand it. 
But sure, it is doable. Here is a >>>> new webrev with those two options as I interpret them. Here is the >>>> help text: >>>> >>>> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >>>> ????????????????????????? libraries (except the JVM), typically >>>> disabling >>>> ????????????????????????? speculative cti. [disabled] >>>> ?--enable-hardened-hotspot >>>> ????????????????????????? enable hardenening compiler flags for >>>> hotspot (all >>>> ????????????????????????? jvm variants), typically disabling >>>> speculative cti. >>>> ????????????????????????? To make hardening of hotspot a runtime >>>> choice, >>>> ????????????????????????? consider the "hardened" jvm variant >>>> instead of this >>>> ????????????????????????? option. [disabled] >>>> >>>> Note that this changes the default for jdk libraries to not enable >>>> hardening unless the user requests it. >>>> >>>> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >>> >>> Hold it, hold it! I'm not sure how we ended up here, but I don't like >>> it at all. :-( >>> >>> I think Eriks initial patch is much better than this. Some arguments >>> in random order to defend this position: >>> >>> 1) Why should we have a configure option to disable security relevant >>> flags for the JDK, if there has been no measured negative effect? We >>> don't do this for any other compiler flags, especially not security >>> relevant ones! >>> >>> I've re-read the entire thread to see if I could understand what >>> could possibly motivate this, but the only thing I can find is David >>> Holmes vague fear that these flags would not be well-tested enough. >>> Let me counter with my own vague guesses: I believe the spectre >>> mitigation methods to have been fully and properly tested, since they >>> are rolled-out massively on all products. And let me complement with >>> my own fear: the PR catastrophe if OpenJDK were *not* built with >>> spectre mitigations, and someone were to exploit that! >> >> All I'm looking for is the ability to select whether you can build >> with or without this "hardening". The default OpenJDK build can of >> course churn out a "hardened" implementation. Anyone who opts out of >> that is on their own. > > With Erik's original proposal (webrev.02), you will, by default, get a > hotspot "server" JVM variant that is identical to what you got without > the patch. This should definitely cover your case. > > You will also get all the non-hotspot libraries built as hardened. You > *can* get the JDK libraries built non-hardened, by removing the > ${$2NO_SPECULATIVE_CTI_CFLAGS} from the line > $1_CFLAGS_JDK="${$1_DEFINES_CPU_JDK} ${$1_CFLAGS_CPU} > ${$1_CFLAGS_CPU_JDK} ${$1_TOOLCHAIN_CFLAGS} > ${$2NO_SPECULATIVE_CTI_CFLAGS}". > > As I said, I believe this is enough to support that case. > >> >> I don't share your faith or confidence in the quality of any software >> rushed out in a fairly short space of time. Prudence, if nothing else, >> says you should be able to not build this way IMHO. > AFAIU, these compiler flags has received extensive testing inside > Oracle. It is also part of a global, high-visibility project, where key > players have had time to prepare for handling the issues ahead of the > public awareness of the exploits. *And* it's been almost half a year > since the Spectre exploit was made public. > > I have much more faith in enabling these flags than I'd have for e.g. > upgrading to a newer version of Solaris Studio. 
:-) > > >> >>> In fact, I could even argue that "server" should be hardened *by >>> default*, and that we should instead introduce a non-hardened JVM >>> named something akin to "quick-but-dangerous-server" instead. But I >>> realize that a 25% performance hit is hard to swallow, so I won't >>> push this agenda. >>> >>> 2) It is by no means clear that "--enable-hardened-jdk" does not >>> harden all aspects of the JDK! If we should keep the option (which I >>> definitely do not think we should!) it should be renamed to >>> "--enable-hardened-libraries", or something like that. And it should >>> be on by default, so it should be a "--disabled-hardened-jdk-libraries". >>> >>> Also, the general-sounding name "hardened" sounds like it might >>> encompass more things than it does. What if I disabled a hardened jdk >>> build, should I still get stack banging protection? If so, you need >>> to move a lot more security-related flags to this option. (And, just >>> to be absolutely clear: I don't think you should do that.) >>> >>> 3) Having two completely different ways of turning on Spectre >>> protection for hotspot is just utterly confusing! This was a perfect >>> example of how to use the JVM features, just as in the original patch. >> >> Okay. I have had some confusion over "features" versus "variants" >> based on Eriks earlier comments. Erik's email from June 6 first states: >> >> "I agree, and you sort of can. By adding the jvm feature >> "no-speculative-cti" to any jvm variant, you get the flags." >> >> but then later said: >> >> "We don't see the point in giving the choice on the JDK libraries ..." >> >> by which I now think he meant not giving the choice at the VM variant >> level, but I mistook it for meaning at the "feature" level. Hence I >> came back with the two flags suggestion. If we can already select >> features arbitrarily at configure time then this is all addressed >> already. >> >> Apologies for the confusion. > > Well, now *I* am confused. ;-) > > Let's separarate two components: hotspot, and the rest of the native > code (the "JDK libraries"). > > For hotspot, the following holds: > * You can enable or disable no-speculative-cti as a feature on the > configure command line, by "--with-jvm-features=no-speculative-cti" (to > enable) or "--with-jvm-features=-no-speculative-cti" (to disable). This > change applies to *all* JVM variants that are built; there is currently > no command-line support for enabling or disabling features on a > per-JVM-variant level. (There's no real hinderance to doing so; it's > just not yet implemented). > * Erik defines a new JVM variant, which is identical to server, but has > also no-speculative-cti enabled. This is not built by default, but can > be enabled by --with-jvm-variants=server,hardened. Oracle will build > OpenJDK this way. > > If you do not want hardening, you just do not build the "hardened" > variant, and you do not add the no-speculative-cti feature. > > If you want hardening all over the line (and no runtime user choice), > you add --with-jvm-feature=no-speculative-cti. > > Alright? > > Erik's second comment "We don't see the point in giving the choice on > the JDK libraries ..." applies not to Hotspot, but to the rest of the > native libraries (the "JDK libraries"). Here, we will just add the > Spectre mitigation flags, without a user runtime choice of using > hardened or non-hardened libraries. The reason for this is that the > hardening did not have a measurable performance impact. 
> > The builder of the JDK still has the ability to build the JDK libraries > without the hardening flags, but that will require modifying the > configure script, just as the case is today if the builder should wish > to disable any other of all the flags we enable by default. > > /Magnus > > > >> >> David >> ----- >> >>> If you want to have spectre mitigation enabled for both server and >>> client, by default, you would just need to run "configure >>> --with-jvm-variants=server,client >>> --with-jvm-features=no-speculative-cti", which will enable that >>> feature for all variants. That's not really hard *at all* for anyone >>> building OpenJDK. And it's way clearer what will happen, than a >>> --enable-hardened-hotspot. >>> >>> 4) If you are a downstream provider building OpenJDK and you are dead >>> set on not including Spectre mitigations in the JDK libraries, >>> despite being shown to have no negative effects, then you can do just >>> as any other downstream user with highly specialized requirements, >>> and patch the source. I have no sympathies for this; I can't stop it >>> but I don't think there's any reason for us to complicate the code to >>> support this unlikely case. >>> >>> So, to recap, I think the webrev as published in >>> http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with >>> "altserver" renamed to "hardened") is the way to go. >>> >>> /Magnus >>> >>> >>> >>>> >>>> /Erik >>> > From david.holmes at oracle.com Mon Jun 11 08:32:00 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 Jun 2018 18:32:00 +1000 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> Sorry Bob I haven't had a chance to look at this detail. For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. For getCpuPeriod() the term "operating system time slice" can be misconstrued as being related to the scheduler timeslice that may, or may not, exist, depending on the scheduler and scheduling policy etc. This "timeslice" is something specific to cgroups - no? David On 8/06/2018 3:43 AM, Bob Vandette wrote: > Can I get one more reviewer for this RFE so I can integrate it? > >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 > > Mandy Chung has reviewed this change. > > I?ve run Mach5 hotspot and core lib tests. > > I?ve reviewed the tests which were written by Harsha Wardhana > > I filed a CSR for the command line change and it?s now approved and closed. > > Thanks, > Bob. > > >> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >> >> Please review the following RFE which adds an internal API, along with jtreg tests that provide >> access to Docker container configuration data and metrics. In addition to the API which we hope to >> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >> option to -XshowSettings:system than dumps out the container or host cgroup confguration >> information. See the sample output below: >> >> RFE: Container Metrics >> >> https://bugs.openjdk.java.net/browse/JDK-8203357 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >> >> >> This commit will also include a fix for the following bug. 
>> >> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >> >> https://bugs.openjdk.java.net/browse/JDK-8203691 >> >> WEBREV: >> >> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >> >> SAMPLE USAGE and OUTPUT: >> >> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >> ./java -XshowSettings:system >> Operating System Metrics: >> Provider: cgroupv1 >> Effective CPU Count: 4 >> CPU Period: 100000 >> CPU Quota: -1 >> CPU Shares: -1 >> List of Processors, 4 total: >> 4 5 6 7 >> List of Effective Processors, 4 total: >> 4 5 6 7 >> List of Memory Nodes, 2 total: >> 0 1 >> List of Available Memory Nodes, 2 total: >> 0 1 >> CPUSet Memory Pressure Enabled: false >> Memory Limit: 256.00M >> Memory Soft Limit: Unlimited >> Memory & Swap Limit: 512.00M >> Kernel Memory Limit: Unlimited >> TCP Memory Limit: Unlimited >> Out Of Memory Killer Enabled: true >> >> TEST RESULTS: >> >> testing runtime container APIs >> Directory "JTwork" not found: creating >> Passed: runtime/containers/cgroup/PlainRead.java >> Passed: runtime/containers/docker/DockerBasicTest.java >> Passed: runtime/containers/docker/TestCPUAwareness.java >> Passed: runtime/containers/docker/TestCPUSets.java >> Passed: runtime/containers/docker/TestMemoryAwareness.java >> Passed: runtime/containers/docker/TestMisc.java >> Test results: passed: 6 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing jdk.internal.platform APIs >> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >> Test results: passed: 4 >> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >> >> testing -XshowSettings:system launcher option >> Passed: tools/launcher/Settings.java >> Test results: passed: 1 >> >> >> Bob. >> >> > From thomas.schatzl at oracle.com Mon Jun 11 09:34:10 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 11 Jun 2018 11:34:10 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <5B1940CC.6000402@oracle.com> References: <5B1940CC.6000402@oracle.com> Message-ID: <2777e599697a3393a46e130ba822d1f4c0fdd1c6.camel@oracle.com> Hi, On Thu, 2018-06-07 at 16:27 +0200, Erik ?sterlund wrote: > Hi, > > The recent allocation path modularization (8202776) broke JFR TLAB > sampling. This was discovered in tier 5 testing. > > The problem is that there was previously an early exit TLAB path, > that should not run the tracing code when not returning NULL, and a > mem_allocate call that should run the tracing code when not > returning NULL. However, these paths were joined in a virtual member > function, making them look the same to the tracing code, which caused > the non-TLAB tracing code to be run on TLAB allocations as well. > > The solution I propose is to move the TLAB tracing code into the new > virtual member function. It seems that whatever GC overrides this > code, should also decide what to do about the tracing code there > anyway. > looks good. 
Thomas From swatibits14 at gmail.com Mon Jun 11 10:00:56 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Mon, 11 Jun 2018 15:30:56 +0530 Subject: UseNUMA membind Issue in openJDK In-Reply-To: <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: Hi Gustavo, May be you can remove the method "numa_bitmask_nbytes" as it's not getting used. I am ok with the changes,If Derek confirms we can go ahead. My name is there on the page "Swati Sharma - OpenJDK" , I have already signed the OCA on individual basis. Thanks, Swati On Sat, Jun 9, 2018 at 5:06 AM, Gustavo Romero wrote: > Hi Swati, > > Sorry, as usual I had to reserve a machine before trying it. > > I wanted to test it against a POWER9 with a NVIDIA Tesla V100 device > attached. > > On such a machines numa nodes are quite sparse so I thought it would not > be bad > to check against them: > > available: 8 nodes (0,8,250-255) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 > 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 > 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 > node 0 size: 261693 MB > node 0 free: 233982 MB > node 8 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 > 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 > 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 > 126 127 > node 8 size: 261748 MB > node 8 free: 257078 MB > node 250 cpus: > node 250 size: 0 MB > node 250 free: 0 MB > node 251 cpus: > node 251 size: 0 MB > node 251 free: 0 MB > node 252 cpus: > node 252 size: 15360 MB > node 252 free: 15360 MB > node 253 cpus: > node 253 size: 0 MB > node 253 free: 0 MB > node 254 cpus: > node 254 size: 0 MB > node 254 free: 0 MB > node 255 cpus: > node 255 size: 15360 MB > node 255 free: 15360 MB > node distances: > node 0 8 250 251 252 253 254 255 > 0: 10 40 80 80 80 80 80 80 > 8: 40 10 80 80 80 80 80 80 > 250: 80 80 10 80 80 80 80 80 > 251: 80 80 80 10 80 80 80 80 > 252: 80 80 80 80 10 80 80 80 > 253: 80 80 80 80 80 10 80 80 > 254: 80 80 80 80 80 80 10 80 > 255: 80 80 80 80 80 80 80 10 > > > Please, find my comments below, inlined. > > On 06/01/2018 08:10 AM, Swati Sharma wrote: > >> I will fix the thread binding issue in a separate patch. >> > > I would like to address it in this change. I think it's not good to leave > such a > "dangling" behavior for the cpus once the memory bind issue is addressed. > > I suggest the following simple check to fix it (in accordance to what we've > discussed previously, i.e. remap cpu/node considering configuration, bind, > and > distance in rebuild_cpu_to_node_map(): > > - if (!isnode_in_configured_nodes(nindex_to_node()->at(i))) { > + if (!isnode_in_configured_nodes(nindex_to_node()->at(i)) || > + !isnode_in_bound_nodes(nindex_to_node()->at(i))) { > closest_distance = INT_MAX; > ... 
> for (size_t m = 0; m < node_num; m++) { > - if (m != i && isnode_in_configured_nodes(nindex_to_node()->at(m))) > { > + if (m != i && > + isnode_in_configured_nodes(nindex_to_node()->at(m)) && > + isnode_in_bound_nodes(nindex_to_node()->at(m))) { > > I tested it against the aforementioned topology and against the following > one: > > available: 4 nodes (0-3) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 > 24 25 26 27 28 29 30 31 > node 0 size: 55685 MB > node 0 free: 53196 MB > node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 > 52 53 54 55 56 57 58 59 60 61 62 63 > node 1 size: 53961 MB > node 1 free: 49795 MB > node 2 cpus: > node 2 size: 21231 MB > node 2 free: 21171 MB > node 3 cpus: > node 3 size: 22492 MB > node 3 free: 22432 MB > node distances: > node 0 1 2 3 > 0: 10 20 40 40 > 1: 20 10 40 40 > 2: 40 40 10 20 > 3: 40 40 20 10 > > > >> Updated the previous patch by removing the structure and using the methods >> provided by numa API.Here is the updated one with the changes(attached >> also). >> > > Thanks. > > > ========================PATCH========================= >> diff --git a/src/hotspot/os/linux/os_linux.cpp >> b/src/hotspot/os/linux/os_linux.cpp >> --- a/src/hotspot/os/linux/os_linux.cpp >> +++ b/src/hotspot/os/linux/os_linux.cpp >> > > ... > > @@ -4962,8 +4972,9 @@ >> if (!Linux::libnuma_init()) { >> UseNUMA = false; >> } else { >> - if ((Linux::numa_max_node() < 1)) { >> - // There's only one node(they start from 0), disable NUMA. >> + if ((Linux::numa_max_node() < 1) || Linux::isbound_to_single_node()) >> { >> + // If there's only one node(they start from 0) or if the process >> > ^ let's fix this missing space > > ... > > > + // Check if bound to only one numa node. >> + // Returns true if bound to a single numa node, otherwise returns >> false. >> + static bool isbound_to_single_node() { >> + int single_node = 0; >> + struct bitmask* bmp = NULL; >> + unsigned int node = 0; >> + unsigned int max_number_of_nodes = 0; >> + if (_numa_get_membind != NULL && _numa_bitmask_nbytes != NULL) { >> + bmp = _numa_get_membind(); >> + max_number_of_nodes = _numa_bitmask_nbytes(bmp) * 8; >> + } else { >> + return false; >> + } >> + for (node = 0; node < max_number_of_nodes; node++) { >> + if (_numa_bitmask_isbitset(bmp, node)) { >> + single_node++; >> + if (single_node == 2) { >> + return false; >> + } >> + } >> + } >> + if (single_node == 1) { >> + return true; >> + } else { >> + return false; >> + } >> + } >> > > Now that numa_bitmask_isbitset() is being used (instead of the previous > version > that iterated through an array of longs, I suggest to tweak it a bit, > removing > the if (single_node == 2) check. > > I don't think removing it will hurt. In fact, numa_bitmask_nbytes() > returns the > total amount of bytes the bitmask can hold. However the total number of > nodes in > the system is usually much smaller than numa_bitmask_nbytes() * 8. > > So for a x86_64 system like that with only 2 numa nodes: > > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 > node 0 size: 131018 MB > node 0 free: 101646 MB > node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 > node 1 size: 98304 MB > node 1 free: 91692 MB > node distances: > node 0 1 > 0: 10 11 > 1: 11 10 > > numa_bitmask_nbytes(): 64 => max_number_of_node = 512 > numa_max_node(): 1 => 1 + 1 iterations > > and the value returned by numa_bitmask_nbytes() does not change for > different > bind configurations. It's fixed. 
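For reference, the bound check discussed in this thread can also be written as a small stand-alone user-space sketch against the public libnuma API (numa_available(), numa_get_membind(), numa_max_node(), numa_bitmask_isbitset()). This is only an illustration of the approach, not code from the webrev or the patch above, and it iterates up to numa_max_node() rather than over numa_bitmask_nbytes() * 8 bits:

#include <numa.h>
#include <stdbool.h>

/* Sketch: report whether the process memory policy is bound to exactly
 * one NUMA node. It mirrors the isbound_to_single_node() idea from the
 * patch, but calls the public libnuma entry points directly instead of
 * the dlsym'd function pointers used inside HotSpot. */
static bool bound_to_single_node(void) {
  if (numa_available() == -1) {
    return false;  /* libnuma not usable on this system */
  }
  struct bitmask *bmp = numa_get_membind();
  int highest_node = numa_max_node();
  int nodes = 0;
  for (int node = 0; node <= highest_node; node++) {
    if (numa_bitmask_isbitset(bmp, node)) {
      nodes++;
    }
  }
  numa_bitmask_free(bmp);
  return nodes == 1;
}

Such a sketch builds with -lnuma; inside HotSpot the equivalent calls go through the function pointers set up via libnuma_dlsym in os_linux.cpp, as in the patch under review.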
Another example is that on Power with 4 > numa > nodes: > > available: 4 nodes (0-1,16-17) > node 0 cpus: 0 8 16 24 32 > node 0 size: 130722 MB > node 0 free: 71930 MB > node 1 cpus: 40 48 56 64 72 > node 1 size: 0 MB > node 1 free: 0 MB > node 16 cpus: 80 88 96 104 112 > node 16 size: 130599 MB > node 16 free: 75934 MB > node 17 cpus: 120 128 136 144 152 > node 17 size: 0 MB > node 17 free: 0 MB > node distances: > node 0 1 16 17 > 0: 10 20 40 40 > 1: 20 10 40 40 > 16: 40 40 10 20 > 17: 40 40 20 10 > > numa_bitmask_nbytes(): 32 => max_number_of_node = 256 > numa_max_node(): 17 => 17 + 1 iterations > > So I understand it's better to set the iteration over numa_max_node() > instead of > numa_bitmask_nbytes(). Even more for Intel (with contiguous nodes) than for > Power. > > For the POWER9 with NVIDIA Tesla it would be a worst case: only 8 numa > nodes but > numa_max_node is 255! But I understand it's a very rare case and I'm fine > with > that. > > So what about: > > + if (_numa_get_membind != NULL && _numa_max_node != NULL) { > + bmp = _numa_get_membind(); > + highest_node_number = _numa_max_node(); > + } else { > + return false; > + } > + > + for (node = 0; node <= highest_node_number; node++) { > + if (_numa_bitmask_isbitset(bmp, node)) { > + nodes++; > + } > + } > + > + if (nodes == 1) { > + return true; > + } else { > + return false; > + } > > For convenience, I hosted a patch with all the changes above here: > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > @Derek, could you please confirm that this change solves JDK-8189922? > > Swati, if Derek confirms it solves JDK-8189922? and you confirm it's fine > for > you I'll consider it's reviewed from my side and I can host that change > for you > so you can start a formal request for approval (remember I'm not a > Reviewer, so > you still need two additional reviews for the change). > > Finally, as a heads up, I could not find you (nor AMD?) in the OCA: > > http://www.oracle.com/technetwork/community/oca-486395.html#a > > If I'm not mistaken, you (individually) or AMD must sign it before > contributing > to OpenJDK. > > > Best regards, > Gustavo > > > ======================================================= >> >> Swati >> >> >> >> >> On Tue, May 29, 2018 at 6:53 PM, Gustavo Romero < >> gromero at linux.vnet.ibm.com > wrote: >> > >> > Hi Swati, >> > >> > On 05/29/2018 06:12 AM, Swati Sharma wrote: >> >> >> >> I have incorporated some changes suggested by you. >> >> >> >> The use of struct bitmask's maskp for checking 64 bit in single >> iteration >> >> is more optimized compared to numa_bitmask_isbitset() as by using >> this we >> >> need to check each bit for 1024 times(SUSE case) and 64 times(Ubuntu >> Case). >> >> If its fine to iterate at initialization time then I can change. >> > >> > >> > Yes, I know, your version is more optimized. libnuma API should >> provide a >> > ready-made solution for that... but that's another story. I'm curious >> to know >> > what the time difference is on the worst case for both ways tho. >> Anyway, I >> > just would like to point out that, regardless performance, it's >> possible to >> > achieve the same result with current libnuma API. >> > >> > >> >> For the answer to your question: >> >> If it picks up node 16, not so bad, but what if it picks up node 0 or >> 1? >> >> It can be checked based on numa_distance instead of picking up the >> lgrps randomly. >> > >> > >> > That seems a good solution. 
You can do the checking very early, so >> > lgrp_spaces()->find() does not even fail (return -1), i.e. by changing >> the CPU to >> > node mapping on initialization (avoiding to change cas_allocate()). On >> that checking >> > both numa distance and if the node is bound (or not) would be >> considered to generate >> > the map. >> > >> > >> > Best regards, >> > Gustavo >> > >> >> Thanks, >> >> Swati >> >> >> >> >> >> >> >> On Fri, May 25, 2018 at 4:54 AM, Gustavo Romero < >> gromero at linux.vnet.ibm.com > gromero at linux.vnet.ibm.com >> wrote: >> >> >> >> Hi Swati, >> >> >> >> >> >> Thanks for CC:ing me. Sorry for the delay replying it, I had to >> reserve a few >> >> specific machines before trying your patch :-) >> >> >> >> I think that UseNUMA's original task was to figure out the best >> binding >> >> setup for the JVM automatically but I understand that it also has >> to be aware >> >> that sometimes, for some (new) particular reasons, its binding >> task is >> >> "modulated" by other external agents. Thanks for proposing a fix. >> >> >> >> I have just a question/concern on the proposal: how the JVM >> should behave if >> >> CPUs are not bound in accordance to the bound memory nodes? For >> instance, what >> >> happens if no '--cpunodebind' is passed and '--membind=0,1,16' is >> passed at >> >> the same time on this numa topology: >> >> >> >> brianh at p215n12:~$ numactl -H >> >> available: 4 nodes (0-1,16-17) >> >> node 0 cpus: 0 1 2 3 8 9 10 11 16 17 18 19 24 25 26 27 32 33 34 35 >> >> node 0 size: 65342 MB >> >> node 0 free: 56902 MB >> >> node 1 cpus: 40 41 42 43 48 49 50 51 56 57 58 59 64 65 66 67 72 >> 73 74 75 >> >> node 1 size: 65447 MB >> >> node 1 free: 58322 MB >> >> node 16 cpus: 80 81 82 83 88 89 90 91 96 97 98 99 104 105 106 107 >> 112 113 114 115 >> >> node 16 size: 65448 MB >> >> node 16 free: 63096 MB >> >> node 17 cpus: 120 121 122 123 128 129 130 131 136 137 138 139 144 >> 145 146 147 152 153 154 155 >> >> node 17 size: 65175 MB >> >> node 17 free: 61522 MB >> >> node distances: >> >> node 0 1 16 17 >> >> 0: 10 20 40 40 >> >> 1: 20 10 40 40 >> >> 16: 40 40 10 20 >> >> 17: 40 40 20 10 >> >> >> >> >> >> In that case JVM will spawn threads that will run on all CPUs, >> including those >> >> CPUs in numa node 17. Then once in >> >> src/hotspot/share/gc/parallel/mutableNUMASpace.cpp, in >> cas_allocate(): >> >> >> >> 834 // This version is lock-free. >> >> 835 HeapWord* MutableNUMASpace::cas_allocate(size_t size) { >> >> 836 Thread* thr = Thread::current(); >> >> 837 int lgrp_id = thr->lgrp_id(); >> >> 838 if (lgrp_id == -1 || !os::numa_has_group_homing()) { >> >> 839 lgrp_id = os::numa_get_group_id(); >> >> 840 thr->set_lgrp_id(lgrp_id); >> >> 841 } >> >> >> >> a newly created thread will try to be mapped to a numa node given >> your CPU ID. >> >> So if that CPU is in numa node 17 it will then not find it in: >> >> >> >> 843 int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals); >> >> >> >> and will fallback to a random map, picking up a random numa node >> among nodes >> >> 0, 1, and 16: >> >> >> >> 846 if (i == -1) { >> >> 847 i = os::random() % lgrp_spaces()->length(); >> >> 848 } >> >> >> >> If it picks up node 16, not so bad, but what if it picks up node >> 0 or 1? >> >> >> >> I see that if one binds mem but leaves CPU unbound one has to >> know exactly what >> >> she/he is doing, because it can be likely suboptimal. 
On the >> other hand, letting >> >> the node being picked up randomly when there are memory nodes >> bound but no CPUs >> >> seems even more suboptimal in some scenarios. Thus, should the >> JVM deal with it? >> >> >> >> @Zhengyu, do you have any opinion on that? >> >> >> >> Please find a few nits / comments inline. >> >> >> >> Note that I'm not a (R)eviewer so you still need two official >> reviews. >> >> >> >> >> >> Best regards, >> >> Gustavo >> >> >> >> On 05/21/2018 01:44 PM, Swati Sharma wrote: >> >> >> >> ======================PATCH============================== >> >> diff --git a/src/hotspot/os/linux/os_linux.cpp >> b/src/hotspot/os/linux/os_linux.cpp >> >> --- a/src/hotspot/os/linux/os_linux.cpp >> >> +++ b/src/hotspot/os/linux/os_linux.cpp >> >> @@ -2832,14 +2832,42 @@ >> >> // Map all node ids in which is possible to allocate >> memory. Also nodes are >> >> // not always consecutively available, i.e. available >> from 0 to the highest >> >> // node number. >> >> + // If the nodes have been bound explicitly using numactl >> membind, then >> >> + // allocate memory from those nodes only. >> >> >> >> >> >> I think ok to place that comment on the same existing line, like: >> >> >> >> - // node number. >> >> + // node number. If the nodes have been bound explicitly using >> numactl membind, >> >> + // then allocate memory from these nodes only. >> >> >> >> >> >> for (size_t node = 0; node <= highest_node_number; >> node++) { >> >> - if (Linux::isnode_in_configured_nodes(node)) { >> >> + if (Linux::isnode_in_bounded_nodes(node)) { >> >> >> >> ---------------------------------^ s/bounded/bound/ >> >> >> >> >> >> ids[i++] = node; >> >> } >> >> } >> >> return i; >> >> } >> >> +extern "C" struct bitmask { >> >> + unsigned long size; /* number of bits in the map */ >> >> + unsigned long *maskp; >> >> +}; >> >> >> >> >> >> I think it's possible to move the function below to os_linux.hpp >> with its >> >> friends and cope with the forward declaration of 'struct >> bitmask*` by using the >> >> functions from numa API, notably numa_bitmask_nbytes() and >> >> numa_bitmask_isbitset() only, avoiding the member dereferecing >> issue and the >> >> need to add the above struct explicitly. >> >> >> >> >> >> +// Check if single memory node bound. >> >> +// Returns true if single memory node bound. >> >> >> >> >> >> I suggest a minuscule improvement, something like: >> >> >> >> +// Check if bound to only one numa node. >> >> +// Returns true if bound to a single numa node, otherwise >> returns false. >> >> >> >> >> >> +bool os::Linux::issingle_node_bound() { >> >> >> >> >> >> What about s/issingle_node_bound/isbound_to_single_node/ ? >> >> >> >> >> >> + struct bitmask* bmp = _numa_get_membind != NULL ? >> _numa_get_membind() : NULL; >> >> + if(!(bmp != NULL && bmp->maskp != NULL)) return false; >> >> >> >> -----^ >> >> Are you sure this checking is necessary? I think if >> numa_get_membind succeed >> >> bmp->maskp is always != NULL. >> >> >> >> Indentation here is odd. No space before 'if' and return on the >> same line. >> >> >> >> I would try to avoid lines over 80 chars. 
>> >> >> >> >> >> + int issingle = 0; >> >> + // System can have more than 64 nodes so check in all the >> elements of >> >> + // unsigned long array >> >> + for (unsigned long i = 0; i < (bmp->size / (8 * >> sizeof(unsigned long))); i++) { >> >> + if (bmp->maskp[i] == 0) { >> >> + continue; >> >> + } else if ((bmp->maskp[i] & (bmp->maskp[i] - 1)) == 0) { >> >> + issingle++; >> >> + } else { >> >> + return false; >> >> + } >> >> + } >> >> + if (issingle == 1) >> >> + return true; >> >> + return false; >> >> +} >> >> + >> >> >> >> >> >> As I mentioned, I think it could be moved to os_linux.hpp >> instead. Also, it >> >> could be something like: >> >> >> >> +bool os::Linux::isbound_to_single_node(void) { >> >> + struct bitmask* bmp; >> >> + unsigned long mask; // a mask element in the mask array >> >> + unsigned long max_num_masks; >> >> + int single_node = 0; >> >> + >> >> + if (_numa_get_membind != NULL) { >> >> + bmp = _numa_get_membind(); >> >> + } else { >> >> + return false; >> >> + } >> >> + >> >> + max_num_masks = bmp->size / (8 * sizeof(unsigned long)); >> >> + >> >> + for (mask = 0; mask < max_num_masks; mask++) { >> >> + if (bmp->maskp[mask] != 0) { // at least one numa node in >> the mask >> >> + if (bmp->maskp[mask] & (bmp->maskp[mask] - 1) == 0) { >> >> + single_node++; // a single numa node in the mask >> >> + } else { >> >> + return false; >> >> + } >> >> + } >> >> + } >> >> + >> >> + if (single_node == 1) { >> >> + return true; // only a single mask with a single numa node >> >> + } else { >> >> + return false; >> >> + } >> >> +} >> >> >> >> >> >> bool os::get_page_info(char *start, page_info* info) { >> >> return false; >> >> } >> >> @@ -2930,6 +2958,10 @@ >> >> >> libnuma_dlsym(handle, "numa_bitmask_isbitset"))); >> >> set_numa_distance(CAST_TO_FN_P >> TR(numa_distance_func_t, >> >> >> libnuma_dlsym(handle, "numa_distance"))); >> >> + set_numa_set_membind(CAST_TO_F >> N_PTR(numa_set_membind_func_t, >> >> + >> libnuma_dlsym(handle, "numa_set_membind"))); >> >> + set_numa_get_membind(CAST_TO_F >> N_PTR(numa_get_membind_func_t, >> >> + >> libnuma_v2_dlsym(handle, "numa_get_membind"))); >> >> if (numa_available() != -1) { >> >> set_numa_all_nodes((unsigned >> long*)libnuma_dlsym(handle, "numa_all_nodes")); >> >> @@ -3054,6 +3086,8 @@ >> >> os::Linux::numa_set_bind_policy_func_t >> os::Linux::_numa_set_bind_policy; >> >> os::Linux::numa_bitmask_isbitset_func_t >> os::Linux::_numa_bitmask_isbitset; >> >> os::Linux::numa_distance_func_t os::Linux::_numa_distance; >> >> +os::Linux::numa_set_membind_func_t >> os::Linux::_numa_set_membind; >> >> +os::Linux::numa_get_membind_func_t >> os::Linux::_numa_get_membind; >> >> unsigned long* os::Linux::_numa_all_nodes; >> >> struct bitmask* os::Linux::_numa_all_nodes_ptr; >> >> struct bitmask* os::Linux::_numa_nodes_ptr; >> >> @@ -4962,8 +4996,9 @@ >> >> if (!Linux::libnuma_init()) { >> >> UseNUMA = false; >> >> } else { >> >> - if ((Linux::numa_max_node() < 1)) { >> >> - // There's only one node(they start from 0), disable >> NUMA. >> >> + if ((Linux::numa_max_node() < 1) || >> Linux::issingle_node_bound()) { >> >> + // If there's only one node(they start from 0) or if >> the process >> >> + // is bound explicitly to a single node using >> membind, disable NUMA. 
>> >> UseNUMA = false; >> >> } >> >> } >> >> diff --git a/src/hotspot/os/linux/os_linux.hpp >> b/src/hotspot/os/linux/os_linux.hpp >> >> --- a/src/hotspot/os/linux/os_linux.hpp >> >> +++ b/src/hotspot/os/linux/os_linux.hpp >> >> @@ -228,6 +228,8 @@ >> >> typedef int (*numa_tonode_memory_func_t)(void *start, >> size_t size, int node); >> >> typedef void (*numa_interleave_memory_func_t)(void >> *start, size_t size, unsigned long *nodemask); >> >> typedef void (*numa_interleave_memory_v2_func_t)(void >> *start, size_t size, struct bitmask* mask); >> >> + typedef void (*numa_set_membind_func_t)(struct bitmask >> *mask); >> >> + typedef struct bitmask* (*numa_get_membind_func_t)(void); >> >> typedef void (*numa_set_bind_policy_func_t)(int policy); >> >> typedef int (*numa_bitmask_isbitset_func_t)(struct >> bitmask *bmp, unsigned int n); >> >> @@ -244,6 +246,8 @@ >> >> static numa_set_bind_policy_func_t _numa_set_bind_policy; >> >> static numa_bitmask_isbitset_func_t >> _numa_bitmask_isbitset; >> >> static numa_distance_func_t _numa_distance; >> >> + static numa_set_membind_func_t _numa_set_membind; >> >> + static numa_get_membind_func_t _numa_get_membind; >> >> static unsigned long* _numa_all_nodes; >> >> static struct bitmask* _numa_all_nodes_ptr; >> >> static struct bitmask* _numa_nodes_ptr; >> >> @@ -259,6 +263,8 @@ >> >> static void set_numa_set_bind_policy(numa_set_bind_policy_func_t >> func) { _numa_set_bind_policy = func; } >> >> static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t >> func) { _numa_bitmask_isbitset = func; } >> >> static void set_numa_distance(numa_distance_func_t >> func) { _numa_distance = func; } >> >> + static void set_numa_set_membind(numa_set_membind_func_t >> func) { _numa_set_membind = func; } >> >> + static void set_numa_get_membind(numa_get_membind_func_t >> func) { _numa_get_membind = func; } >> >> static void set_numa_all_nodes(unsigned long* ptr) { >> _numa_all_nodes = ptr; } >> >> static void set_numa_all_nodes_ptr(struct bitmask **ptr) >> { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> >> static void set_numa_nodes_ptr(struct bitmask **ptr) { >> _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> >> @@ -320,6 +326,15 @@ >> >> } else >> >> return 0; >> >> } >> >> + // Check if node in bounded nodes >> >> >> >> >> >> + // Check if node is in bound node set. Maybe? >> >> >> >> >> >> + static bool isnode_in_bounded_nodes(int node) { >> >> + struct bitmask* bmp = _numa_get_membind != NULL ? >> _numa_get_membind() : NULL; >> >> + if (bmp != NULL && _numa_bitmask_isbitset != NULL && >> _numa_bitmask_isbitset(bmp, node)) { >> >> + return true; >> >> + } else >> >> + return false; >> >> + } >> >> + static bool issingle_node_bound(); >> >> >> >> >> >> Looks like it can be re-written like: >> >> >> >> + static bool isnode_in_bound_nodes(int node) { >> >> + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != >> NULL) { >> >> + return _numa_bitmask_isbitset(_numa_get_membind(), node); >> >> + } else { >> >> + return false; >> >> + } >> >> + } >> >> >> >> ? 
>> >> >> >> >> >> }; >> >> #endif // OS_LINUX_VM_OS_LINUX_HPP >> >> >> >> >> >> >> > >> > > From erik.osterlund at oracle.com Mon Jun 11 10:19:06 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 11 Jun 2018 12:19:06 +0200 Subject: RFR: 8204554: JFR TLAB tracing broken after 8202776 In-Reply-To: <2777e599697a3393a46e130ba822d1f4c0fdd1c6.camel@oracle.com> References: <5B1940CC.6000402@oracle.com> <2777e599697a3393a46e130ba822d1f4c0fdd1c6.camel@oracle.com> Message-ID: <5B1E4C9A.7090908@oracle.com> Hi Thomas, Thanks for the review. /Erik On 2018-06-11 11:34, Thomas Schatzl wrote: > Hi, > > On Thu, 2018-06-07 at 16:27 +0200, Erik ?sterlund wrote: >> Hi, >> >> The recent allocation path modularization (8202776) broke JFR TLAB >> sampling. This was discovered in tier 5 testing. >> >> The problem is that there was previously an early exit TLAB path, >> that should not run the tracing code when not returning NULL, and a >> mem_allocate call that should run the tracing code when not >> returning NULL. However, these paths were joined in a virtual member >> function, making them look the same to the tracing code, which caused >> the non-TLAB tracing code to be run on TLAB allocations as well. >> >> The solution I propose is to move the TLAB tracing code into the new >> virtual member function. It seems that whatever GC overrides this >> code, should also decide what to do about the tracing code there >> anyway. >> > looks good. > > Thomas > From thomas.schatzl at oracle.com Mon Jun 11 10:42:19 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 11 Jun 2018 12:42:19 +0200 Subject: RFR: 8204097: Simplify OopStorage::AllocateList block entry access In-Reply-To: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> References: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> Message-ID: Hi, On Wed, 2018-05-30 at 16:19 -0400, Kim Barrett wrote: > Please review this simplification of OopStorage::AllocateList, > removing the no longer used support for blocks being in multiple > lists > simultaneously. There is now only one list of blocks, the > _allocate_list. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8204097 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8204097/open.00/ > > Testing: > Mach5 tier{1,2,3} > looks good. Thomas From rene.schuenemann at gmail.com Mon Jun 11 10:52:55 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Mon, 11 Jun 2018 12:52:55 +0200 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: Thank you Vladimir and Thomas! I have moved the output of code cache full count to a separated line. It now shows an accumulated number, but I think, that should be sufficient enough. Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/02/ Regards, Rene On Sat, Jun 9, 2018 at 5:25 PM, Thomas St?fe wrote: > >>> >>> - More of a question to others: I am not familiar with compiler >>> coding, but signed int as counters seem a bit small? Is there no >>> danger of ever overflowing on long running VMs? Or does it not matter >>> if they do? >> >> >> I never observed compilations count over 1M (10^6). And there is no issue if >> they overflow - it is just number printed in statistic and logs. We have >> also use it as compilation task id and I think it is also safe. >> >> Note, in stable application case you should not have a lot of compilations. >> It should not be more than application's + system's hot methods. 
>> >> New counters in this change are much smaller - they count how many times >> CodeCache become full which should not happen in normal case. >> > > Ah, thanks for clarifying. > > Best Regards, Thomas > >> Regards, >> Vladimir >> >>> >>> Thanks, Thomas >>> >> From rene.schuenemann at gmail.com Mon Jun 11 10:55:24 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Mon, 11 Jun 2018 12:55:24 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: Hi Thomas, thank you for your review. I have fixed your remarks. Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/02/ Regards, Rene On Thu, Jun 7, 2018 at 9:35 PM, Thomas St?fe wrote: > On Thu, Jun 7, 2018 at 9:34 PM, Thomas St?fe wrote: >> Hi Rene, >> >> Looks good overall. This is a useful addition. >> >> - 155 Atomic::inc(&Exceptions::_linkage_errors); >> you can loose the "Exceptions::" scope since we are in the Exceptions class. >> >> - Can you please add #include runtime/atomic.hpp to the file? It is >> missing that header. >> > > (I mean exceptions.cpp) > >> - Please make _linkage_errors class private. It is not directly >> accessed from outside. (_stack_overflow_errors on the other hand is, >> so it has to be public). >> >> If you fix these points, I do not need a new webrev. >> >> Best Regards, Thomas >> >> >> On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann >> wrote: >>> Hi, >>> >>> can I please get a review for the following change: >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 >>> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ >>> >>> This change counts linkage errors and prints the number of linkage >>> errors thrown in the Exceptions::print_exception_counts_on_error, >>> which is used when writing the hs_error file. >>> >>> Thank you, >>> Rene From thomas.stuefe at gmail.com Mon Jun 11 12:25:26 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 11 Jun 2018 14:25:26 +0200 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: Looks good to me. Thank you for fixing. ..Thomas On Mon, Jun 11, 2018 at 12:55 PM, Ren? Sch?nemann wrote: > Hi Thomas, > > thank you for your review. I have fixed your remarks. > > Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/02/ > > Regards, > Rene > > On Thu, Jun 7, 2018 at 9:35 PM, Thomas St?fe wrote: >> On Thu, Jun 7, 2018 at 9:34 PM, Thomas St?fe wrote: >>> Hi Rene, >>> >>> Looks good overall. This is a useful addition. >>> >>> - 155 Atomic::inc(&Exceptions::_linkage_errors); >>> you can loose the "Exceptions::" scope since we are in the Exceptions class. >>> >>> - Can you please add #include runtime/atomic.hpp to the file? It is >>> missing that header. >>> >> >> (I mean exceptions.cpp) >> >>> - Please make _linkage_errors class private. It is not directly >>> accessed from outside. (_stack_overflow_errors on the other hand is, >>> so it has to be public). >>> >>> If you fix these points, I do not need a new webrev. >>> >>> Best Regards, Thomas >>> >>> >>> On Thu, Jun 7, 2018 at 9:29 AM, Ren? 
Sch?nemann >>> wrote: >>>> Hi, >>>> >>>> can I please get a review for the following change: >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 >>>> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ >>>> >>>> This change counts linkage errors and prints the number of linkage >>>> errors thrown in the Exceptions::print_exception_counts_on_error, >>>> which is used when writing the hs_error file. >>>> >>>> Thank you, >>>> Rene From thomas.stuefe at gmail.com Mon Jun 11 12:27:02 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 11 Jun 2018 14:27:02 +0200 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: Looks good to me. Thanks, Thomas On Mon, Jun 11, 2018 at 12:52 PM, Ren? Sch?nemann wrote: > Thank you Vladimir and Thomas! > > I have moved the output of code cache full count to a separated line. > It now shows an accumulated number, but I think, that should be > sufficient enough. > > Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/02/ > > Regards, > Rene > > On Sat, Jun 9, 2018 at 5:25 PM, Thomas St?fe wrote: >> >>>> >>>> - More of a question to others: I am not familiar with compiler >>>> coding, but signed int as counters seem a bit small? Is there no >>>> danger of ever overflowing on long running VMs? Or does it not matter >>>> if they do? >>> >>> >>> I never observed compilations count over 1M (10^6). And there is no issue if >>> they overflow - it is just number printed in statistic and logs. We have >>> also use it as compilation task id and I think it is also safe. >>> >>> Note, in stable application case you should not have a lot of compilations. >>> It should not be more than application's + system's hot methods. >>> >>> New counters in this change are much smaller - they count how many times >>> CodeCache become full which should not happen in normal case. >>> >> >> Ah, thanks for clarifying. >> >> Best Regards, Thomas >> >>> Regards, >>> Vladimir >>> >>>> >>>> Thanks, Thomas >>>> >>> From bob.vandette at oracle.com Mon Jun 11 14:12:29 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 11 Jun 2018 10:12:29 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> Message-ID: > On Jun 11, 2018, at 4:32 AM, David Holmes wrote: > > Sorry Bob I haven't had a chance to look at this detail. > > For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. All methods do return zero length arrays except I missed the getPerCpuUsage. I?ll fix that one and correct the javadoc. > > For getCpuPeriod() the term "operating system time slice" can be misconstrued as being related to the scheduler timeslice that may, or may not, exist, depending on the scheduler and scheduling policy etc. This "timeslice" is something specific to cgroups - no? The comments reads: * Returns the length of the operating system time slice, in * milliseconds, for processes within the Isolation Group. The comment does infer that it?s process and cgroup (Isolation group) specific and not the generic os timeslice. Isn?t this sufficient? Thanks, Bob. > > David > > On 8/06/2018 3:43 AM, Bob Vandette wrote: >> Can I get one more reviewer for this RFE so I can integrate it? 
>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >> Mandy Chung has reviewed this change. >> I?ve run Mach5 hotspot and core lib tests. >> I?ve reviewed the tests which were written by Harsha Wardhana >> I filed a CSR for the command line change and it?s now approved and closed. >> Thanks, >> Bob. >>> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >>> >>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>> information. See the sample output below: >>> >>> RFE: Container Metrics >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> >>> >>> This commit will also include a fix for the following bug. >>> >>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>> >>> SAMPLE USAGE and OUTPUT: >>> >>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>> ./java -XshowSettings:system >>> Operating System Metrics: >>> Provider: cgroupv1 >>> Effective CPU Count: 4 >>> CPU Period: 100000 >>> CPU Quota: -1 >>> CPU Shares: -1 >>> List of Processors, 4 total: >>> 4 5 6 7 >>> List of Effective Processors, 4 total: >>> 4 5 6 7 >>> List of Memory Nodes, 2 total: >>> 0 1 >>> List of Available Memory Nodes, 2 total: >>> 0 1 >>> CPUSet Memory Pressure Enabled: false >>> Memory Limit: 256.00M >>> Memory Soft Limit: Unlimited >>> Memory & Swap Limit: 512.00M >>> Kernel Memory Limit: Unlimited >>> TCP Memory Limit: Unlimited >>> Out Of Memory Killer Enabled: true >>> >>> TEST RESULTS: >>> >>> testing runtime container APIs >>> Directory "JTwork" not found: creating >>> Passed: runtime/containers/cgroup/PlainRead.java >>> Passed: runtime/containers/docker/DockerBasicTest.java >>> Passed: runtime/containers/docker/TestCPUAwareness.java >>> Passed: runtime/containers/docker/TestCPUSets.java >>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>> Passed: runtime/containers/docker/TestMisc.java >>> Test results: passed: 6 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing jdk.internal.platform APIs >>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>> Test results: passed: 4 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing -XshowSettings:system launcher option >>> Passed: tools/launcher/Settings.java >>> Test results: passed: 1 >>> >>> >>> Bob. >>> >>> From aph at redhat.com Mon Jun 11 14:37:39 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Jun 2018 15:37:39 +0100 Subject: RFR: 8204680: Disassembly does not display code strings in stubs Message-ID: So last Friday I was looking at the code we generate for the runtime stubs and I noticed that there were no comments in the disassembly. Which is odd, because I'm sure it used to work. 
I found a bug which prevented it from working, fixed it, but there was still no output. What??! This led me down a rabbit hole from which I was to emerge several hours later. It turns out there are two separate bugs. When we disassemble, the code strings are found in the CodeBlob that contains the code. Unfortunately, when we use -XX:+PrintStubCode the disassembly is done from a CodeBuffer before the code strings have actually been copied to the code blob, so the disassembler finds no code strings. Also, the code strings are only copied into the CodeBlob if PrintStubCode is true, so "call disnm()" in the debugger doesn't print any code strings because they were lost when the CodeBlob was created. With both of these fixed, we have fully-commented disassembly in the stubs again. http://cr.openjdk.java.net/~aph/8204680/ OK? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From harsha.wardhana.b at oracle.com Fri Jun 8 04:30:39 2018 From: harsha.wardhana.b at oracle.com (Harsha Wardhana B) Date: Fri, 8 Jun 2018 10:00:39 +0530 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <5B19A3F0.1010804@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <5B19A3F0.1010804@oracle.com> Message-ID: <78ad68bd-0020-836f-7511-2b7a49dc6d73@oracle.com> [Replying to all mailing-lists] Hi Misha, The ERROR_MARGIN in tests was introduced to make the tests stable. There are times where metric values (specifically CPU usage) can change drastically in between two reads. The metrics value got from the API and the cgroup file can be different and 0.1 ERROR_MARGIN should take care of that, though at times even that may not be enough. Hence the CPU usage related tests only print a warning if ERROR_MARGIN is exceeded. Thanks Harsha On Friday 08 June 2018 03:00 AM, Mikhailo Seledtsov wrote: > Hi Bob, > > ? I looked at the tests. In general they look good. I am a bit > concerned about the use of ERROR_MARGIN in one of the tests. We need > to make sure that the tests are stable, and do not produce > intermittent failures. > > > Thank you, > Misha > > On 6/7/18, 10:43 AM, Bob Vandette wrote: >> Can I get one more reviewer for this RFE so I can integrate it? >> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >> Mandy Chung has reviewed this change. >> >> I?ve run Mach5 hotspot and core lib tests. >> >> I?ve reviewed the tests which were written by Harsha Wardhana >> >> I filed a CSR for the command line change and it?s now approved and >> closed. >> >> Thanks, >> Bob. >> >> >>> On May 30, 2018, at 3:45 PM, Bob Vandette? >>> wrote: >>> >>> Please review the following RFE which adds an internal API, along >>> with jtreg tests that provide >>> access to Docker container configuration data and metrics.? In >>> addition to the API which we hope to >>> take advantage of in the future with Java Flight Recorder and a JMX >>> Mbean, I?ve added an additional >>> option to -XshowSettings:system than dumps out the container or host >>> cgroup confguration >>> information.? See the sample output below: >>> >>> RFE: Container Metrics >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> >>> >>> This commit will also include a fix for the following bug. 
>>> >>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>> >>> >>> SAMPLE USAGE and OUTPUT: >>> >>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>> ./java -XshowSettings:system >>> Operating System Metrics: >>> ??? Provider: cgroupv1 >>> ??? Effective CPU Count: 4 >>> ??? CPU Period: 100000 >>> ??? CPU Quota: -1 >>> ??? CPU Shares: -1 >>> ??? List of Processors, 4 total: >>> ??? 4 5 6 7 >>> ??? List of Effective Processors, 4 total: >>> ??? 4 5 6 7 >>> ??? List of Memory Nodes, 2 total: >>> ??? 0 1 >>> ??? List of Available Memory Nodes, 2 total: >>> ??? 0 1 >>> ??? CPUSet Memory Pressure Enabled: false >>> ??? Memory Limit: 256.00M >>> ??? Memory Soft Limit: Unlimited >>> ??? Memory&? Swap Limit: 512.00M >>> ??? Kernel Memory Limit: Unlimited >>> ??? TCP Memory Limit: Unlimited >>> ??? Out Of Memory Killer Enabled: true >>> >>> TEST RESULTS: >>> >>> testing runtime container APIs >>> Directory "JTwork" not found: creating >>> Passed: runtime/containers/cgroup/PlainRead.java >>> Passed: runtime/containers/docker/DockerBasicTest.java >>> Passed: runtime/containers/docker/TestCPUAwareness.java >>> Passed: runtime/containers/docker/TestCPUSets.java >>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>> Passed: runtime/containers/docker/TestMisc.java >>> Test results: passed: 6 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing jdk.internal.platform APIs >>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>> Test results: passed: 4 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing -XshowSettings:system launcher option >>> Passed: tools/launcher/Settings.java >>> Test results: passed: 1 >>> >>> >>> Bob. >>> >>> From rkennke at redhat.com Mon Jun 11 15:25:17 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 11 Jun 2018 17:25:17 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> Message-ID: <65de0578-807a-dc47-986d-bcc9feb053c6@redhat.com> Am 08.06.2018 um 22:17 schrieb Roman Kennke: > Am 06.06.2018 um 12:03 schrieb Andrew Haley: >> On 06/05/2018 08:34 PM, Roman Kennke wrote: >>> Ok, done here: >>> >>> Incremental: >>> http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8203157/webrev.01/ >>> >>> Good now? >> >> It's be better to fix this up in LIR generation than to use jobject2reg: >> >> 1910 break; >> 1911 case T_OBJECT: >> 1912 case T_ARRAY: >> 1913 jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >> 1914 __ cmpoop(reg1, rscratch1); >> 1915 return; >> > > Why is it better? And how would I do that? It sounds like a fairly > complex undertaking for a special case. 
Notice that if the oop doesn't > qualify as immediate operand (quite likely for an oop?) it used to be > moved into rscratch1 anyway a few lines below. > Hi Andrew, Ping? Also, this is a very plaform-specific thing. Doing it up in LIR generation would either do it for all platforms, or require to move it into LIR_Generator_$PLATFORM.cpp which I'd rather avoid. Can you please explain or confirm that it's ok to push anyway? Thanks, Roman From aph at redhat.com Mon Jun 11 15:56:53 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Jun 2018 16:56:53 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> Message-ID: <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> On 06/08/2018 09:17 PM, Roman Kennke wrote: > Why is it better? And how would I do that? It sounds like a fairly > complex undertaking for a special case. Notice that if the oop doesn't > qualify as immediate operand (quite likely for an oop?) it used to be > moved into rscratch1 anyway a few lines below. Sorry for the slow reply. I'm looking now. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Mon Jun 11 17:11:29 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Jun 2018 18:11:29 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> Message-ID: <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> On 06/11/2018 04:56 PM, Andrew Haley wrote: > On 06/08/2018 09:17 PM, Roman Kennke wrote: >> Why is it better? And how would I do that? It sounds like a fairly >> complex undertaking for a special case. Notice that if the oop doesn't >> qualify as immediate operand (quite likely for an oop?) it used to be >> moved into rscratch1 anyway a few lines below. > > Sorry for the slow reply. I'm looking now. OK. The problem is that this is a very bad code smell: case T_ARRAY: jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); __ cmpoop(reg1, rscratch1); I can't tell that this is correct. rscratch1 is used by assembler macros, and I don't know if some other GC (e.g. ZGC) might need to use rscratch1 inside cmpoop. The risk here is obvious. The Right Thing to do IMO is to generate a scratch register for pointer comparisons. Unless, I guess, we know that cmpoop never ever needs a scratch register for any forseeable garbage collector. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From vladimir.kozlov at oracle.com Mon Jun 11 17:26:40 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 11 Jun 2018 10:26:40 -0700 Subject: Updated: [11] RFR(M) 8184349: There should be some verification that EnableJVMCI is disabled if a GC not supporting JVMCI is selected In-Reply-To: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> References: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> Message-ID: http://cr.openjdk.java.net/~kvn/8184349/webrev.04/ I updated changes made back in May. I pushed CMS tests changes in separate fix 8202611. The main fix is for JVMCI to check if GC is supported and exit VM with error if not [1]. It is called from Arguments::apply_ergo() after GC is selected in GCConfig::initialize(). In Arguments::check_vm_args_consistency() I added compiler flags reset in -Xint case when Interpreter only used. ScavengeRootsInCode code for JVMCI is removed because the same code is executed already always in Arguments::parse() [2]. Added new Arguments::set_compiler_flags() called from apply_ergo() to combine all compiler flags ergo settings. One test CheckCompileThresholdScaling.java was modified because scaling compiler threshold is skipped in Interpreter mode (-Xint). Tested tier1,tier2,tier3-graal Thanks, Vladimir [1] http://cr.openjdk.java.net/~kvn/8184349/webrev.04/src/hotspot/share/jvmci/jvmci_globals.cpp.udiff.html [2] http://hg.openjdk.java.net/jdk/jdk/file/54fcaffa8fac/src/hotspot/share/runtime/arguments.cpp#l4111 From erik.osterlund at oracle.com Mon Jun 11 17:49:40 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Mon, 11 Jun 2018 19:49:40 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> Message-ID: Hi, For the record, ZGC is to-space invariant and does not require any equals barriers. Thanks, /Erik > On 11 Jun 2018, at 19:11, Andrew Haley wrote: > >> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>> Why is it better? And how would I do that? It sounds like a fairly >>> complex undertaking for a special case. Notice that if the oop doesn't >>> qualify as immediate operand (quite likely for an oop?) it used to be >>> moved into rscratch1 anyway a few lines below. >> >> Sorry for the slow reply. I'm looking now. > > OK. The problem is that this is a very bad code smell: > > case T_ARRAY: > jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); > __ cmpoop(reg1, rscratch1); > > I can't tell that this is correct. rscratch1 is used by assembler > macros, and I don't know if some other GC (e.g. ZGC) might need to use > rscratch1 inside cmpoop. The risk here is obvious. The Right Thing > to do IMO is to generate a scratch register for pointer comparisons. > > Unless, I guess, we know that cmpoop never ever needs a scratch > register for any forseeable garbage collector. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Mon Jun 11 17:53:31 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Jun 2018 18:53:31 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> Message-ID: On 06/11/2018 06:49 PM, Erik Osterlund wrote: > For the record, ZGC is to-space invariant and does not require any equals barriers. OK. So does anyone know what the point of BarrierSet::obj_equals is? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From bob.vandette at oracle.com Mon Jun 11 18:28:05 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 11 Jun 2018 14:28:05 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> Message-ID: <54435D5C-5E6B-4174-8344-984CDDF5D46A@oracle.com> > On Jun 11, 2018, at 4:07 AM, Robbin Ehn wrote: > > Hi Bob, > > On 06/07/2018 07:43 PM, Bob Vandette wrote: >> Can I get one more reviewer for this RFE so I can integrate it? >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 > > Seems okay. > > Metrics.java > "Returns the length of the operating system time slice" > > Note that is is only true if you are using a batch scheduler. > Otherwise this period may be split on multiple 'time slices?. This is a cgroup metric which uses CFS not the OS time slice. 136 /** 137 * Returns the length of the operating system time slice, in 138 * milliseconds, for processes within the Isolation Group. > > In printSystemMetrics there is no units, maybe intentional? I?ll add ms for the quote/period output. The memory metrics do have units. > > Do we have support now in mach5 for docker jtreg, or do we still run these separate? > > You can ship it. Thanks! Bob. > > Thanks for fixing, and super thanks for fixing the bug in PlainRead also! > > /Robbin > >> Mandy Chung has reviewed this change. >> I?ve run Mach5 hotspot and core lib tests. >> I?ve reviewed the tests which were written by Harsha Wardhana >> I filed a CSR for the command line change and it?s now approved and closed. >> Thanks, >> Bob. >>> On May 30, 2018, at 3:45 PM, Bob Vandette wrote: >>> >>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>> information. See the sample output below: >>> >>> RFE: Container Metrics >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> >>> >>> This commit will also include a fix for the following bug. 
>>> >>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>> >>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>> >>> WEBREV: >>> >>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>> >>> SAMPLE USAGE and OUTPUT: >>> >>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>> ./java -XshowSettings:system >>> Operating System Metrics: >>> Provider: cgroupv1 >>> Effective CPU Count: 4 >>> CPU Period: 100000 >>> CPU Quota: -1 >>> CPU Shares: -1 >>> List of Processors, 4 total: >>> 4 5 6 7 >>> List of Effective Processors, 4 total: >>> 4 5 6 7 >>> List of Memory Nodes, 2 total: >>> 0 1 >>> List of Available Memory Nodes, 2 total: >>> 0 1 >>> CPUSet Memory Pressure Enabled: false >>> Memory Limit: 256.00M >>> Memory Soft Limit: Unlimited >>> Memory & Swap Limit: 512.00M >>> Kernel Memory Limit: Unlimited >>> TCP Memory Limit: Unlimited >>> Out Of Memory Killer Enabled: true >>> >>> TEST RESULTS: >>> >>> testing runtime container APIs >>> Directory "JTwork" not found: creating >>> Passed: runtime/containers/cgroup/PlainRead.java >>> Passed: runtime/containers/docker/DockerBasicTest.java >>> Passed: runtime/containers/docker/TestCPUAwareness.java >>> Passed: runtime/containers/docker/TestCPUSets.java >>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>> Passed: runtime/containers/docker/TestMisc.java >>> Test results: passed: 6 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing jdk.internal.platform APIs >>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>> Test results: passed: 4 >>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>> >>> testing -XshowSettings:system launcher option >>> Passed: tools/launcher/Settings.java >>> Test results: passed: 1 >>> >>> >>> Bob. >>> >>> From glaubitz at physik.fu-berlin.de Mon Jun 11 18:55:34 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 11 Jun 2018 20:55:34 +0200 Subject: x86_64 build broken - error: incompatible types: sun.print.DialogOwner Message-ID: <01733239-b1ba-7836-d42e-c35535fc2783@physik.fu-berlin.de> Hi! Just did a "hg pull && hg update --clean" and ran into this during server and zero build on Linux-x86_64: === Output from failing command(s) repeated here === /usr/bin/printf "* For target jdk_modules_java.desktop__the.java.desktop_batch:\n" * For target jdk_modules_java.desktop__the.java.desktop_batch: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log || true) | /usr/bin/head -n 12 /srv/glaubitz/openjdk/jdk/src/java.desktop/unix/classes/sun/print/IPPPrintService.java:1409: error: incompatible types: sun.print.DialogOwner cannot be converted to javax.print.attribute.standard.DialogOwner if (DialogOwnerAccessor.getID(owner) != 0) { ^ Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output 1 error if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs.\n" * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === Anyone seen this? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From paul.sandoz at oracle.com Mon Jun 11 19:00:01 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 11 Jun 2018 12:00:01 -0700 Subject: x86_64 build broken - error: incompatible types: sun.print.DialogOwner In-Reply-To: <01733239-b1ba-7836-d42e-c35535fc2783@physik.fu-berlin.de> References: <01733239-b1ba-7836-d42e-c35535fc2783@physik.fu-berlin.de> Message-ID: Can you try and clean the desktop module before rebuilding? e.g. use the make target "clean-java.desktop? if you don?t want to clean the whole build. Paul. > On Jun 11, 2018, at 11:55 AM, John Paul Adrian Glaubitz wrote: > > Hi! > > Just did a "hg pull && hg update --clean" and ran into this during server and > zero build on Linux-x86_64: > > === Output from failing command(s) repeated here === > /usr/bin/printf "* For target jdk_modules_java.desktop__the.java.desktop_batch:\n" > * For target jdk_modules_java.desktop__the.java.desktop_batch: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log || true) | /usr/bin/head -n 12 > /srv/glaubitz/openjdk/jdk/src/java.desktop/unix/classes/sun/print/IPPPrintService.java:1409: error: incompatible types: sun.print.DialogOwner cannot be converted to javax.print.attribute.standard.DialogOwner > if (DialogOwnerAccessor.getID(owner) != 0) { > ^ > Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output > 1 error > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs/jdk_modules_java.desktop__the.java.desktop_batch.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs.\n" > > * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-zero-release/make-support/failure-logs. > /usr/bin/printf "=== End of repeated output ===\n" > === End of repeated output === > > Anyone seen this? > > Adrian > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From rkennke at redhat.com Mon Jun 11 19:17:56 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 11 Jun 2018 21:17:56 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> Message-ID: <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> Am 11.06.2018 um 19:11 schrieb Andrew Haley: > On 06/11/2018 04:56 PM, Andrew Haley wrote: >> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>> Why is it better? And how would I do that? It sounds like a fairly >>> complex undertaking for a special case. Notice that if the oop doesn't >>> qualify as immediate operand (quite likely for an oop?) it used to be >>> moved into rscratch1 anyway a few lines below. >> >> Sorry for the slow reply. I'm looking now. > > OK. The problem is that this is a very bad code smell: > > case T_ARRAY: > jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); > __ cmpoop(reg1, rscratch1); > > I can't tell that this is correct. rscratch1 is used by assembler > macros, and I don't know if some other GC (e.g. ZGC) might need to use > rscratch1 inside cmpoop. The risk here is obvious. The Right Thing > to do IMO is to generate a scratch register for pointer comparisons. > > Unless, I guess, we know that cmpoop never ever needs a scratch > register for any forseeable garbage collector. > I do know that Shenandoah does not require a tmp reg. I also do know that no other collector currently needs equals-barriers at all. I cannot see into the future. I prefer to be pragmatic and solve existing problems. How about I add a comment to the obj_equals() API that says 'don't use tmp reg X, and if you really need to, push/pop it or let the compiler generate one for you' ? Roman From glaubitz at physik.fu-berlin.de Mon Jun 11 19:30:32 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 11 Jun 2018 21:30:32 +0200 Subject: x86_64 build broken - error: incompatible types: sun.print.DialogOwner In-Reply-To: References: <01733239-b1ba-7836-d42e-c35535fc2783@physik.fu-berlin.de> Message-ID: Hi Paul! On 06/11/2018 09:00 PM, Paul Sandoz wrote: > Can you try and clean the desktop module before rebuilding? e.g. use the make target "clean-java.desktop? if you don?t want to clean the whole build. Yes, this fixes the build for me. Thank you! Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From kim.barrett at oracle.com Mon Jun 11 19:36:31 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 15:36:31 -0400 Subject: RFC: 8204690: Simplify usage of Access API Message-ID: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> JDK-8204690 is an enhancement request for simplifing the usage of the Access API. 
This RFE comes out of some discussions within the Oracle runtime and GC teams about difficulties encountered when using the Access API. We now have a concrete set of changes to propose (rather than just vague complaints), described in that RFE, which I'm duplicating below for further discussion. Most of the proposed changes are technically straight-forward; many are just changes of nomenclature. However, because they are name changes, they end up touching a bunch of files, including various platform-specific files. So we'll be asking for help with testing. We want to move ahead with these changes ASAP, because of the impact they will have to backporting to JDK 11 if not included in that release. However, a few of the changes significantly intersect other changes that are soon to be pushed to JDK 11, so some amount of scheduling will be needed to minimize overall work. Here's the description from the RFE: ---------- Simplify usage of Access API With 6+ months of usage of the Access API, some usage issues have been noted. In particular, there are some issues around decorator names and semantics which have caused confusion and led to some long discussions. While the underlying strategy is sound, there are some changes that would simplify usage. This proposal is in part the result of attempting to create a guide for choosing the decorators for some use of the Access API. We currently have several categories of decorators, with some categories having entries with overlapping semantics. We'd like to have a set of categories from which one chooses exactly one entry, and it should be "obvious" which one to choose for a given access. The first step is to determine where the operand is located. We presently have the following decorators to indicate the Access location: IN_HEAP, IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of these overlap with or imply others; the goal is to have a disjoint set. IN_CONCURRENT_ROOT has generated much discussion about when and how it should be used. This might be better modelled as a Barrier Strength decorator, e.g. in the AS_ category. It was placed among the location decorators with the idea that some Access-roots would be identified as being fully processed during a safe-point (and so would not require GC barriers), while others (the "concurrent" roots) would require GC barriers. There was a question of whether we needed more fine-grained decorators, or whether just two categories that are the same for all collectors would be sufficient. So far, we've gotten along without introducing further granularity. But we've also found no significant need for the distinction at all. Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding behavior should be the default. Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. IN_HEAP_ARRAY is effectively an additional orthogonal property layered over IN_HEAP. It would be better to actually make it an orthogonal property. Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. (IS_ARRAY might only be valid in conjunction with IN_HEAP.) The use of "root" here differs from how that term is usually used in the context of GC. In particular, while GC-roots are Access-roots, not all Access-roots are GC-roots. This is a frequent source of confusion. Proposal 4: The use of "root" by Access should be replaced by "native". So IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. The second step is to determine the reference strength. 
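(For concreteness, before going on to reference strength, here is a rough sketch of what the first step looks like at call sites under the names proposed above. This is illustrative only and is not taken from the patch: the helper functions and offsets are made up, and the exact template signatures in access.hpp may differ slightly.)

    #include "oops/access.hpp"

    // Illustrative sketch only; the helpers below are invented for the
    // example, the interesting part is the shape of the Access call sites.

    // A plain store to an oop field of a heap object. The location is
    // expressed by the class (HeapAccess<>); everything else takes its
    // default: normal barriers, strong reference, unordered memory order.
    void example_field_store(oop holder, ptrdiff_t field_offset, oop value) {
      HeapAccess<>::oop_store_at(holder, field_offset, value);
    }

    // A load from an off-heap slot -- what this proposal calls IN_NATIVE
    // (today IN_ROOT / RootAccess<>), e.g. an oop held in VM-internal memory.
    oop example_native_load(oop* slot) {
      return NativeAccess<>::oop_load(slot);
    }

    // A store to an element of a heap array, using the proposed IS_ARRAY
    // flag in place of today's IN_HEAP_ARRAY location decorator. Further
    // decorators compose with '|' in the usual way.
    void example_array_store(oop array, ptrdiff_t element_offset, oop value) {
      HeapAccess<IS_ARRAY>::oop_store_at(array, element_offset, value);
    }

Back to the second step, reference strength.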
The current API has been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the default. No changes are being proposed in this area. Another step is to determine the barrier strength. We presently have the following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of place here, describing a property of the value rather than the access. It would be better to make it an orthogonal property. The existing name is also a little awkward, especially when turned into a variable and logically negated, e.g. bool is_dest_not_initialized = ...; ... !is_dest_not_initialized ... Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. The fourth step is to determine the memory order. The current API has been working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this area. In addition, we presently have OOP_NOT_NULL, all on its own in a separate category. There's no need for this separate category, and this can be renamed to be similar to other orthogonal properties proposed above. Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, IS_NOT_NULL, and IS_DEST_UNINITIALIZED. There are also decorators for annotating arraycopy. These are highly tied in to the code, and are not discussed here. With these changes, the process of selecting the decorators for an access consists of first selecting one decorator in each of the following categories: (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the other must be explicitly specified. However, rather than using the decorators directly, use the NativeAccess<> and HeapAccess<> classes. (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default is AS_NORMAL. When accessing a primitive (non-object) value, use AS_RAW. (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the access strength is AS_RAW. (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. Then, add any of the following "flag" decorators that are appropriate: IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that the flag is unset. Simplify usage of Access API With 6+ months of usage of the Access API, some usage issues have been noted. In particular, there are some issues around decorator names and semantics which have caused confusion and led to some long discussions. While the underlying strategy is sound, there are some changes that would simplify usage. This proposal is in part the result of attempting to create a guide for choosing the decorators for some use of the Access API. We currently have several categories of decorators, with some categories having entries with overlapping semantics. We'd like to have a set of categories from which one chooses exactly one entry, and it should be "obvious" which one to choose for a given access. The first step is to determine where the operand is located. We presently have the following decorators to indicate the Access location: IN_HEAP, IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. 
Some of these overlap with or imply others; the goal is to have a disjoint set. IN_CONCURRENT_ROOT has generated much discussion about when and how it should be used. This might be better modelled as a Barrier Strength decorator, e.g. in the AS_ category. It was placed among the location decorators with the idea that some Access-roots would be identified as being fully processed during a safe-point (and so would not require GC barriers), while others (the "concurrent" roots) would require GC barriers. There was a question of whether we needed more fine-grained decorators, or whether just two categories that are the same for all collectors would be sufficient. So far, we've gotten along without introducing further granularity. But we've also found no significant need for the distinction at all. Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding behavior should be the default. Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. IN_HEAP_ARRAY is effectively an additional orthogonal property layered over IN_HEAP. It would be better to actually make it an orthogonal property. Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. (IS_ARRAY might only be valid in conjunction with IN_HEAP.) The use of "root" here differs from how that term is usually used in the context of GC. In particular, while GC-roots are Access-roots, not all Access-roots are GC-roots. This is a frequent source of confusion. Proposal 4: The use of "root" by Access should be replaced by "native". So IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. The second step is to determine the reference strength. The current API has been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the default. No changes are being proposed in this area. Another step is to determine the barrier strength. We presently have the following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of place here, describing a property of the value rather than the access. It would be better to make it an orthogonal property. The existing name is also a little awkward, especially when turned into a variable and logically negated, e.g. bool is_dest_not_initialized = ...; ... !is_dest_not_initialized ... Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. The fourth step is to determine the memory order. The current API has been working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this area. In addition, we presently have OOP_NOT_NULL, all on its own in a separate category. There's no need for this separate category, and this can be renamed to be similar to other orthogonal properties proposed above. Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, IS_NOT_NULL, and IS_DEST_UNINITIALIZED. There are also decorators for annotating arraycopy. These are highly tied in to the code, and are not discussed here. With these changes, the process of selecting the decorators for an access consists of first selecting one decorator in each of the following categories: (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the other must be explicitly specified. 
However, rather than using the decorators directly, use the NativeAccess<> and HeapAccess<> classes. (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default is AS_NORMAL. When accessing a primitive (non-object) value, use AS_RAW. (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the access strength is AS_RAW. (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. Then, add any of the following "flag" decorators that are appropriate: IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that the flag is unset. From rkennke at redhat.com Mon Jun 11 20:10:46 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 11 Jun 2018 22:10:46 +0200 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: <78fb1b8a-f738-49e7-5fc8-0118bfe262b6@redhat.com> Hi Kim, I need to digest all this. But may I throw in another ambiguity: OOP_IS_NULL is, as far as I can tell, used to decorate an access where the *value* is known to be not null (for stores), or the value coming out of a load is known to be not null (for loads). This is useful for stuff like compressed oops, where a null-check can be elided if we know it's not null. However, at least when using this in Shenandoah land, it is also useful to know whether or not a target oop (the object being written to, or loaded from) is known to be not null, at least in compiled code. If it's known to be not null, we can elide a null-check on the read/write barrier around the memor access. I propose to disambiguate this by splitting the semantics into VALUE_NOT_NULL (or similar) and TARGET_NOT_NULL (or similar). Suggestions welcome! I'll dig more into the proposal and probably come up with more comments. Thanks!!! Roman > JDK-8204690 is an enhancement request for simplifing the usage of the > Access API. This RFE comes out of some discussions within the Oracle > runtime and GC teams about difficulties encountered when using the > Access API. We now have a concrete set of changes to propose (rather > than just vague complaints), described in that RFE, which I'm > duplicating below for further discussion. > > Most of the proposed changes are technically straight-forward; many > are just changes of nomenclature. However, because they are name > changes, they end up touching a bunch of files, including various > platform-specific files. So we'll be asking for help with testing. > > We want to move ahead with these changes ASAP, because of the impact > they will have to backporting to JDK 11 if not included in that > release. However, a few of the changes significantly intersect other > changes that are soon to be pushed to JDK 11, so some amount of > scheduling will be needed to minimize overall work. > > Here's the description from the RFE: > > ---------- > > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. 
This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. > > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. 
There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default > is AS_NORMAL. When accessing a primitive (non-object) value, use > AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that > the flag is unset. > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. > > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. 
> > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default is AS_NORMAL. When accessing a primitive (non-object) value, use AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that > the flag is unset. 
> From vladimir.kozlov at oracle.com Mon Jun 11 20:29:05 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 11 Jun 2018 13:29:05 -0700 Subject: RFR: 8204476: Add additional statistics to CodeCache::print_summary In-Reply-To: References: Message-ID: Looks good. Thanks, Vladimir On 6/11/18 3:52 AM, Ren? Sch?nemann wrote: > Thank you Vladimir and Thomas! > > I have moved the output of code cache full count to a separated line. > It now shows an accumulated number, but I think, that should be > sufficient enough. > > Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204476/02/ > > Regards, > Rene > > On Sat, Jun 9, 2018 at 5:25 PM, Thomas St?fe wrote: >> >>>> >>>> - More of a question to others: I am not familiar with compiler >>>> coding, but signed int as counters seem a bit small? Is there no >>>> danger of ever overflowing on long running VMs? Or does it not matter >>>> if they do? >>> >>> >>> I never observed compilations count over 1M (10^6). And there is no issue if >>> they overflow - it is just number printed in statistic and logs. We have >>> also use it as compilation task id and I think it is also safe. >>> >>> Note, in stable application case you should not have a lot of compilations. >>> It should not be more than application's + system's hot methods. >>> >>> New counters in this change are much smaller - they count how many times >>> CodeCache become full which should not happen in normal case. >>> >> >> Ah, thanks for clarifying. >> >> Best Regards, Thomas >> >>> Regards, >>> Vladimir >>> >>>> >>>> Thanks, Thomas >>>> >>> From vladimir.kozlov at oracle.com Mon Jun 11 20:35:02 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 11 Jun 2018 13:35:02 -0700 Subject: RFR: 8204680: Disassembly does not display code strings in stubs In-Reply-To: References: Message-ID: <262821a6-5de6-0bed-332c-bab06e64b43f@oracle.com> Looks fine to me. Thanks, Vladimir On 6/11/18 7:37 AM, Andrew Haley wrote: > So last Friday I was looking at the code we generate for the runtime > stubs and I noticed that there were no comments in the disassembly. > Which is odd, because I'm sure it used to work. I found a bug which > prevented it from working, fixed it, but there was still no output. > What??! This led me down a rabbit hole from which I was to emerge > several hours later. > > It turns out there are two separate bugs. > > When we disassemble, the code strings are found in the CodeBlob that > contains the code. Unfortunately, when we use -XX:+PrintStubCode the > disassembly is done from a CodeBuffer before the code strings have > actually been copied to the code blob, so the disassembler finds no > code strings. > > Also, the code strings are only copied into the CodeBlob if > PrintStubCode is true, so "call disnm()" in the debugger doesn't print > any code strings because they were lost when the CodeBlob was created. > > With both of these fixed, we have fully-commented disassembly in the > stubs again. > > http://cr.openjdk.java.net/~aph/8204680/ > > OK? > From kim.barrett at oracle.com Mon Jun 11 20:35:54 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 16:35:54 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <78fb1b8a-f738-49e7-5fc8-0118bfe262b6@redhat.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <78fb1b8a-f738-49e7-5fc8-0118bfe262b6@redhat.com> Message-ID: > On Jun 11, 2018, at 4:10 PM, Roman Kennke wrote: > > Hi Kim, > I need to digest all this. 
I understand; there?s a fair amount of detail here. > But may I throw in another ambiguity: > > OOP_IS_NULL is, as far as I can tell, used to decorate an access where > the *value* is known to be not null (for stores), or the value coming > out of a load is known to be not null (for loads). This is useful for > stuff like compressed oops, where a null-check can be elided if we know > it's not null. However, at least when using this in Shenandoah land, it > is also useful to know whether or not a target oop (the object being > written to, or loaded from) is known to be not null, at least in > compiled code. If it's known to be not null, we can elide a null-check > on the read/write barrier around the memor access. I propose to > disambiguate this by splitting the semantics into VALUE_NOT_NULL (or > similar) and TARGET_NOT_NULL (or similar). Suggestions welcome! I assume you meant s/OOP_IS_NULL/OOP_NOT_NULL/, which we?re proposing to rename IS_NOT_NULL. I think what you are describing should be IS_NULL, which would be a pure addition from where we are. Both IS_NULL and IS_NOT_NULL are properties of the value referred to by the access location. I thought about mentioning the possibility of IS_NULL in this RFE, but decided to limit to a transition from the current state to the (we think) more desirable naming and semantics, and not add new features as part of these changes. That feature might benefit G1 as well as Shenandoah. I?m not sure how much benefit there is though. It seems like it would mainly be useful for initialization stores. In the old pre-Access & associated compiler changes world, there was a mechanism for the compiler to elide the SATB pre-barrier for initialization stores, but I haven?t followed all the recent changes, so don?t know how that?s dealt with now. > I'll dig more into the proposal and probably come up with more comments. Good. From kim.barrett at oracle.com Mon Jun 11 20:37:42 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 16:37:42 -0400 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API Message-ID: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> Please review this change to the implementation of CDS support to use a new collector-based protocol for handling archived Java mirrors. Rather than using the IN_ARCHIVE_ROOT feature of the Access API, we instead use a new protocol provided by the collected heap (only G1CollectedHeap for now, since only G1 supports this feature). This allows IN_ARCHIVE_ROOT to be removed from the Access API. This changeset is based on work by Stefan Karlsson. CR: https://bugs.openjdk.java.net/browse/JDK-8204585 Webrev: http://cr.openjdk.java.net/~kbarrett/8204585/open.00/ Testing: mach5 tier1,2,3, hs-tier4,5. Local testing of hotspot_cds and hotspot_appcds. From erik.joelsson at oracle.com Mon Jun 11 20:42:49 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 11 Jun 2018 13:42:49 -0700 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: Hello, Based on the discussion here, I have reverted back to something more similar to webrev.02, but with a few changes. 
Mainly fixing a bug that caused JVM_FEATURES_hardened to not actually be the same as for server (if you have custom additions in configure). I also added a check so that configure fails if you try to enable either variant hardened or feature no-speculative-cti and the flags aren't available. Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.05/index.html /Erik On 2018-06-11 00:10, Magnus Ihse Bursie wrote: > On 2018-06-08 23:50, Erik Joelsson wrote: >> On 2018-06-07 17:30, David Holmes wrote: >>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>> I just don't think the extra work is warranted or should be >>>> prioritized at this point. I also cannot think of a combination of >>>> options required for what you are suggesting that wouldn't be >>>> confusing to the user. If someone truly feels like these flags are >>>> forced on them and can't live with them, we or preferably that >>>> person can fix it then. I don't think that's dictatorship. OpenJDK >>>> is still open source and anyone can contribute. >>> >>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>> to add to the right flags would be either complicated or confusing. >>> >> For me the confusion surrounds the difference between >> --enable-hardened-hotspot and --with-jvm-variants=server, hardened >> and making the user understand it. But sure, it is doable. Here is a >> new webrev with those two options as I interpret them. Here is the >> help text: >> >> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >> ????????????????????????? libraries (except the JVM), typically >> disabling >> ????????????????????????? speculative cti. [disabled] >> ?--enable-hardened-hotspot >> ????????????????????????? enable hardenening compiler flags for >> hotspot (all >> ????????????????????????? jvm variants), typically disabling >> speculative cti. >> ????????????????????????? To make hardening of hotspot a runtime choice, >> ????????????????????????? consider the "hardened" jvm variant instead >> of this >> ????????????????????????? option. [disabled] >> >> Note that this changes the default for jdk libraries to not enable >> hardening unless the user requests it. >> >> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ > > Hold it, hold it! I'm not sure how we ended up here, but I don't like > it at all. :-( > > I think Eriks initial patch is much better than this. Some arguments > in random order to defend this position: > > 1) Why should we have a configure option to disable security relevant > flags for the JDK, if there has been no measured negative effect? We > don't do this for any other compiler flags, especially not security > relevant ones! > > I've re-read the entire thread to see if I could understand what could > possibly motivate this, but the only thing I can find is David Holmes > vague fear that these flags would not be well-tested enough. Let me > counter with my own vague guesses: I believe the spectre mitigation > methods to have been fully and properly tested, since they are > rolled-out massively on all products. And let me complement with my > own fear: the PR catastrophe if OpenJDK were *not* built with spectre > mitigations, and someone were to exploit that! > > In fact, I could even argue that "server" should be hardened *by > default*, and that we should instead introduce a non-hardened JVM > named something akin to "quick-but-dangerous-server" instead. But I > realize that a 25% performance hit is hard to swallow, so I won't push > this agenda. 
> > 2) It is by no means clear that "--enable-hardened-jdk" does not > harden all aspects of the JDK! If we should keep the option (which I > definitely do not think we should!) it should be renamed to > "--enable-hardened-libraries", or something like that. And it should > be on by default, so it should be a "--disabled-hardened-jdk-libraries". > > Also, the general-sounding name "hardened" sounds like it might > encompass more things than it does. What if I disabled a hardened jdk > build, should I still get stack banging protection? If so, you need to > move a lot more security-related flags to this option. (And, just to > be absolutely clear: I don't think you should do that.) > > 3) Having two completely different ways of turning on Spectre > protection for hotspot is just utterly confusing! This was a perfect > example of how to use the JVM features, just as in the original patch. > > If you want to have spectre mitigation enabled for both server and > client, by default, you would just need to run "configure > --with-jvm-variants=server,client > --with-jvm-features=no-speculative-cti", which will enable that > feature for all variants. That's not really hard *at all* for anyone > building OpenJDK. And it's way clearer what will happen, than a > --enable-hardened-hotspot. > > 4) If you are a downstream provider building OpenJDK and you are dead > set on not including Spectre mitigations in the JDK libraries, despite > being shown to have no negative effects, then you can do just as any > other downstream user with highly specialized requirements, and patch > the source. I have no sympathies for this; I can't stop it but I don't > think there's any reason for us to complicate the code to support this > unlikely case. > > So, to recap, I think the webrev as published in > http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with "altserver" > renamed to "hardened") is the way to go. > > /Magnus > > > >> >> /Erik > From igor.ignatyev at oracle.com Mon Jun 11 20:44:06 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 11 Jun 2018 13:44:06 -0700 Subject: RFR(M) : 8202946 : [TESTBUG] Open source VM testbase OOM tests In-Reply-To: References: <2DD8F9C6-8471-4BF6-8573-0DA3F2B6C66B@oracle.com> Message-ID: <1D6C49A7-C5D8-402D-B559-7E3B7E8D3AAB@oracle.com> Hi Sangheon, thanks for your review, please see my answers inline. Cheers, -- Igor > On Jun 8, 2018, at 10:09 PM, sangheon.kim at oracle.com wrote: > > Hi Igor, > > On 5/15/18 4:16 PM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8202946/webrev.00/index.html >>> 1619 lines changed: 1619 ins; 0 del; 0 mod; >> Hi all, >> >> could you please review this patch which open sources OOM tests from VM testbase? these tests test OutOfMemoryError throwing in different scenarios. >> >> As usually w/ VM testbase code, these tests are old, they have been run in hotspot testing for a long period of time. Originally, these tests were run by a test harness different from jtreg and had different build and execution schemes, some parts couldn't be easily translated to jtreg, so tests might have actions or pieces of code which look weird. In a long term, we are planning to rework them. >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8202946 >> webrev: http://cr.openjdk.java.net/~iignatyev//8202946/webrev.00/index.html >> testing: :vmTestbase_vm_oom test group > Webrev.00 looks good to me but have minor nits. 
> > ------------------- > test/hotspot/jtreg/TEST.groups > 1164 # Test for OOME re-throwing after Java Heap exchausting > - Typo: exchausting -> exhausting will fix before pushing, thanks for spotting. > > ------------------- > test/hotspot/jtreg/vmTestbase/vm/oom/OOMTraceTest.java > 68 protected boolean isAlwaysOOM() { > 69 return expectOOM; > 70 } > - (optional) It is returning the variable of "expectOOM" but the name is "isAlwaysOOM" which makes me confused. If you prefer "isXXX" form of name, how about "isExpectingOOM()" etc..? Or you can defer this renaming, as you are planning to rework those tests. created JDK-8204697 for that. > > I don't need a new webrev for these. > > ------------------- > Just random comment. > - It would be better to use small fixed Java Heap size to trigger OOME for short test running time. thanks for suggestion, created JDK-8204698. > > Thanks, > Sangheon > > >> >> Thanks, >> -- Igor From rkennke at redhat.com Mon Jun 11 20:46:50 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 11 Jun 2018 22:46:50 +0200 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <78fb1b8a-f738-49e7-5fc8-0118bfe262b6@redhat.com> Message-ID: <37898c3b-d4b6-ba49-38a2-adc5a5e6c4c5@redhat.com> Am 11.06.2018 um 22:35 schrieb Kim Barrett: >> On Jun 11, 2018, at 4:10 PM, Roman Kennke wrote: >> >> Hi Kim, >> I need to digest all this. > > I understand; there?s a fair amount of detail here. > >> But may I throw in another ambiguity: >> >> OOP_IS_NULL is, as far as I can tell, used to decorate an access where >> the *value* is known to be not null (for stores), or the value coming >> out of a load is known to be not null (for loads). This is useful for >> stuff like compressed oops, where a null-check can be elided if we know >> it's not null. However, at least when using this in Shenandoah land, it >> is also useful to know whether or not a target oop (the object being >> written to, or loaded from) is known to be not null, at least in >> compiled code. If it's known to be not null, we can elide a null-check >> on the read/write barrier around the memor access. I propose to >> disambiguate this by splitting the semantics into VALUE_NOT_NULL (or >> similar) and TARGET_NOT_NULL (or similar). Suggestions welcome! > > I assume you meant s/OOP_IS_NULL/OOP_NOT_NULL/, which we?re > proposing to rename IS_NOT_NULL. Oops, yes. > I think what you are describing should be IS_NULL, which would be a pure > addition from where we are. No. What I meant is the distinction between the value (of a load or sore) that is known to be not null and the target oop (to which we store, from which we load) known to be not null. An IS_NULL property might be useful too, but as you say, I am not sure how much use it actually is. Thanks, Roman From coleen.phillimore at oracle.com Mon Jun 11 20:48:31 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 11 Jun 2018 16:48:31 -0400 Subject: RFR: 8204477: Count linkage errors and print in Exceptions::print_exception_counts_on_error In-Reply-To: References: Message-ID: <557bde38-396b-eb65-3d3b-56a36022355b@oracle.com> This looks good to me as well. Coleen On 6/11/18 6:55 AM, Ren? Sch?nemann wrote: > Hi Thomas, > > thank you for your review. I have fixed your remarks. 
> > Updated Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/02/ > > Regards, > Rene > > On Thu, Jun 7, 2018 at 9:35 PM, Thomas St?fe wrote: >> On Thu, Jun 7, 2018 at 9:34 PM, Thomas St?fe wrote: >>> Hi Rene, >>> >>> Looks good overall. This is a useful addition. >>> >>> - 155 Atomic::inc(&Exceptions::_linkage_errors); >>> you can loose the "Exceptions::" scope since we are in the Exceptions class. >>> >>> - Can you please add #include runtime/atomic.hpp to the file? It is >>> missing that header. >>> >> (I mean exceptions.cpp) >> >>> - Please make _linkage_errors class private. It is not directly >>> accessed from outside. (_stack_overflow_errors on the other hand is, >>> so it has to be public). >>> >>> If you fix these points, I do not need a new webrev. >>> >>> Best Regards, Thomas >>> >>> >>> On Thu, Jun 7, 2018 at 9:29 AM, Ren? Sch?nemann >>> wrote: >>>> Hi, >>>> >>>> can I please get a review for the following change: >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204477 >>>> Webrev: http://cr.openjdk.java.net/~goetz/wr18/rene/webrev_8204477/01/ >>>> >>>> This change counts linkage errors and prints the number of linkage >>>> errors thrown in the Exceptions::print_exception_counts_on_error, >>>> which is used when writing the hs_error file. >>>> >>>> Thank you, >>>> Rene From coleen.phillimore at oracle.com Mon Jun 11 20:57:35 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 11 Jun 2018 16:57:35 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: <120cd132-e9b6-5c01-7408-71757267ad91@oracle.com> On 6/11/18 3:36 PM, Kim Barrett wrote: > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. Yes. I like these changes. It appears that this is long because you pasted it in twice (or is the second version different. I hope not because I didn't find the difference). Thanks, Coleen From david.holmes at oracle.com Mon Jun 11 21:21:23 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 Jun 2018 07:21:23 +1000 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> Message-ID: <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> On 12/06/2018 12:12 AM, Bob Vandette wrote: > >> On Jun 11, 2018, at 4:32 AM, David Holmes > > wrote: >> >> Sorry Bob I haven't had a chance to look at this detail. >> >> For the Java code ... methods that return arrays should return >> zero-length arrays when something is not available rather than null. > > All methods do return zero length arrays except I missed the > getPerCpuUsage. ?I?ll fix that one and correct the javadoc. There are a few more too: 231 * @return An array of available CPUs or null if metric is not available. 232 * 233 */ 234 public int[] getCpuSetCpus(); 242 * @return An array of available and online CPUs or null if the metric 243 * is not available. 244 * 245 */ 246 public int[] getEffectiveCpuSetCpus(); 256 * @return An array of available memory nodes or null if metric is not available. 257 * 258 */ 259 public int[] getCpuSetMems(); 267 * @return An array of available and online nodes or null if the metric 268 * is not available. 
269 * 270 */ 271 public int[] getEffectiveCpuSetMems(); > >> >> For getCpuPeriod() the term "operating system time slice" can be >> misconstrued as being related to the scheduler timeslice that may, or >> may not, exist, depending on the scheduler and scheduling policy etc. >> This "timeslice" is something specific to cgroups - no? > > The comments reads: > > * Returns the length of the operating system time slice, in > * milliseconds, for processes within the Isolation Group. > > The comment does infer that it?s process and cgroup (Isolation group) > specific and not the generic os timeslice. > Isn?t this sufficient? The phrase "operating system" makes this sound like some kind of global timeslice notion - which it isn't. And I don't like to think of cpu periods/shares/quotas in terms of "time slice" anyway. I don't see the Docker or Cgroup documentation using "time slice" either. It suffices IMHO to just say for period: * Returns the length of the scheduling period, in * milliseconds, for processes within the Isolation Group. then for quota: * Returns the total available run-time allowed, in milliseconds, * during each scheduling period for all tasks in the Isolation Group. Thanks, David > > Thanks, > Bob. >> >> David >> >> On 8/06/2018 3:43 AM, Bob Vandette wrote: >>> Can I get one more reviewer for this RFE so I can integrate it? >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>> Mandy Chung has reviewed this change. >>> I?ve run Mach5 hotspot and core lib tests. >>> I?ve reviewed the tests which were written by Harsha Wardhana >>> I filed a CSR for the command line change and it?s now approved and >>> closed. >>> Thanks, >>> Bob. >>>> On May 30, 2018, at 3:45 PM, Bob Vandette >>> > wrote: >>>> >>>> Please review the following RFE which adds an internal API, along >>>> with jtreg tests that provide >>>> access to Docker container configuration data and metrics. ?In >>>> addition to the API which we hope to >>>> take advantage of in the future with Java Flight Recorder and a JMX >>>> Mbean, I?ve added an additional >>>> option to -XshowSettings:system than dumps out the container or host >>>> cgroup confguration >>>> information. ?See the sample output below: >>>> >>>> RFE: Container Metrics >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>> >>>> >>>> This commit will also include a fix for the following bug. 
>>>> >>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>> >>>> WEBREV: >>>> >>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>> >>>> SAMPLE USAGE and OUTPUT: >>>> >>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>> ./java -XshowSettings:system >>>> Operating System Metrics: >>>> ???Provider: cgroupv1 >>>> ???Effective CPU Count: 4 >>>> ???CPU Period: 100000 >>>> ???CPU Quota: -1 >>>> ???CPU Shares: -1 >>>> ???List of Processors, 4 total: >>>> ???4 5 6 7 >>>> ???List of Effective Processors, 4 total: >>>> ???4 5 6 7 >>>> ???List of Memory Nodes, 2 total: >>>> ???0 1 >>>> ???List of Available Memory Nodes, 2 total: >>>> ???0 1 >>>> ???CPUSet Memory Pressure Enabled: false >>>> ???Memory Limit: 256.00M >>>> ???Memory Soft Limit: Unlimited >>>> ???Memory & Swap Limit: 512.00M >>>> ???Kernel Memory Limit: Unlimited >>>> ???TCP Memory Limit: Unlimited >>>> ???Out Of Memory Killer Enabled: true >>>> >>>> TEST RESULTS: >>>> >>>> testing runtime container APIs >>>> Directory "JTwork" not found: creating >>>> Passed: runtime/containers/cgroup/PlainRead.java >>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>> Passed: runtime/containers/docker/TestCPUSets.java >>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>> Passed: runtime/containers/docker/TestMisc.java >>>> Test results: passed: 6 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing jdk.internal.platform APIs >>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>> Test results: passed: 4 >>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>> >>>> testing -XshowSettings:system launcher option >>>> Passed: tools/launcher/Settings.java >>>> Test results: passed: 1 >>>> >>>> >>>> Bob. >>>> >>>> > From david.holmes at oracle.com Mon Jun 11 21:29:17 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 Jun 2018 07:29:17 +1000 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: Hi Kim, I've avoided this area so far so can't comment on details as this is all foreign to me, however: > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. In what way is this "native"? And what would be a non-native access ?? Thanks, David On 12/06/2018 5:36 AM, Kim Barrett wrote: > JDK-8204690 is an enhancement request for simplifing the usage of the > Access API. This RFE comes out of some discussions within the Oracle > runtime and GC teams about difficulties encountered when using the > Access API. We now have a concrete set of changes to propose (rather > than just vague complaints), described in that RFE, which I'm > duplicating below for further discussion. > > Most of the proposed changes are technically straight-forward; many > are just changes of nomenclature. However, because they are name > changes, they end up touching a bunch of files, including various > platform-specific files. 
So we'll be asking for help with testing. > > We want to move ahead with these changes ASAP, because of the impact > they will have to backporting to JDK 11 if not included in that > release. However, a few of the changes significantly intersect other > changes that are soon to be pushed to JDK 11, so some amount of > scheduling will be needed to minimize overall work. > > Here's the description from the RFE: > > ---------- > > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. > > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. 
AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default > is AS_NORMAL. When accessing a primitive (non-object) value, use > AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that > the flag is unset. > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. 
It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. > > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default is AS_NORMAL. 
When accessing a primitive (non-object) value, use AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that > the flag is unset. > From jesper.wilhelmsson at oracle.com Mon Jun 11 21:31:43 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Mon, 11 Jun 2018 23:31:43 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: <59120A4F-7DD8-412B-9E4B-8898071A30D7@oracle.com> Looks good to me. /Jesper > On 11 Jun 2018, at 22:42, Erik Joelsson wrote: > > Hello, > > Based on the discussion here, I have reverted back to something more similar to webrev.02, but with a few changes. Mainly fixing a bug that caused JVM_FEATURES_hardened to not actually be the same as for server (if you have custom additions in configure). I also added a check so that configure fails if you try to enable either variant hardened or feature no-speculative-cti and the flags aren't available. > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.05/index.html > > /Erik > > On 2018-06-11 00:10, Magnus Ihse Bursie wrote: >> On 2018-06-08 23:50, Erik Joelsson wrote: >>> On 2018-06-07 17:30, David Holmes wrote: >>>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>>> I just don't think the extra work is warranted or should be prioritized at this point. I also cannot think of a combination of options required for what you are suggesting that wouldn't be confusing to the user. If someone truly feels like these flags are forced on them and can't live with them, we or preferably that person can fix it then. I don't think that's dictatorship. OpenJDK is still open source and anyone can contribute. >>>> >>>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot to add to the right flags would be either complicated or confusing. >>>> >>> For me the confusion surrounds the difference between --enable-hardened-hotspot and --with-jvm-variants=server, hardened and making the user understand it. But sure, it is doable. Here is a new webrev with those two options as I interpret them. Here is the help text: >>> >>> --enable-hardened-jdk enable hardenening compiler flags for all jdk >>> libraries (except the JVM), typically disabling >>> speculative cti. [disabled] >>> --enable-hardened-hotspot >>> enable hardenening compiler flags for hotspot (all >>> jvm variants), typically disabling speculative cti. >>> To make hardening of hotspot a runtime choice, >>> consider the "hardened" jvm variant instead of this >>> option. [disabled] >>> >>> Note that this changes the default for jdk libraries to not enable hardening unless the user requests it. 
>>> >>> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >> >> Hold it, hold it! I'm not sure how we ended up here, but I don't like it at all. :-( >> >> I think Eriks initial patch is much better than this. Some arguments in random order to defend this position: >> >> 1) Why should we have a configure option to disable security relevant flags for the JDK, if there has been no measured negative effect? We don't do this for any other compiler flags, especially not security relevant ones! >> >> I've re-read the entire thread to see if I could understand what could possibly motivate this, but the only thing I can find is David Holmes vague fear that these flags would not be well-tested enough. Let me counter with my own vague guesses: I believe the spectre mitigation methods to have been fully and properly tested, since they are rolled-out massively on all products. And let me complement with my own fear: the PR catastrophe if OpenJDK were *not* built with spectre mitigations, and someone were to exploit that! >> >> In fact, I could even argue that "server" should be hardened *by default*, and that we should instead introduce a non-hardened JVM named something akin to "quick-but-dangerous-server" instead. But I realize that a 25% performance hit is hard to swallow, so I won't push this agenda. >> >> 2) It is by no means clear that "--enable-hardened-jdk" does not harden all aspects of the JDK! If we should keep the option (which I definitely do not think we should!) it should be renamed to "--enable-hardened-libraries", or something like that. And it should be on by default, so it should be a "--disabled-hardened-jdk-libraries". >> >> Also, the general-sounding name "hardened" sounds like it might encompass more things than it does. What if I disabled a hardened jdk build, should I still get stack banging protection? If so, you need to move a lot more security-related flags to this option. (And, just to be absolutely clear: I don't think you should do that.) >> >> 3) Having two completely different ways of turning on Spectre protection for hotspot is just utterly confusing! This was a perfect example of how to use the JVM features, just as in the original patch. >> >> If you want to have spectre mitigation enabled for both server and client, by default, you would just need to run "configure --with-jvm-variants=server,client --with-jvm-features=no-speculative-cti", which will enable that feature for all variants. That's not really hard *at all* for anyone building OpenJDK. And it's way clearer what will happen, than a --enable-hardened-hotspot. >> >> 4) If you are a downstream provider building OpenJDK and you are dead set on not including Spectre mitigations in the JDK libraries, despite being shown to have no negative effects, then you can do just as any other downstream user with highly specialized requirements, and patch the source. I have no sympathies for this; I can't stop it but I don't think there's any reason for us to complicate the code to support this unlikely case. >> >> So, to recap, I think the webrev as published in http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with "altserver" renamed to "hardened") is the way to go. 
>> >> /Magnus >> >> >> >>> >>> /Erik >> > From kim.barrett at oracle.com Mon Jun 11 21:41:39 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 17:41:39 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: <536B96FA-6D97-4C52-97A7-2B2041EA2FBF@oracle.com> > On Jun 11, 2018, at 5:29 PM, David Holmes wrote: > > Hi Kim, > > I've avoided this area so far so can't comment on details as this is all foreign to me, however: > > > Proposal 4: The use of "root" by Access should be replaced by "native". So > > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > In what way is this "native"? And what would be a non-native access ?? "native" := off-heap, e.g. the location being accessed is in some C/C++ data structure (including global variables). So we have IN_NATIVE (for those) and IN_HEAP (for locations in the Java heap). From kim.barrett at oracle.com Mon Jun 11 21:44:14 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 17:44:14 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <120cd132-e9b6-5c01-7408-71757267ad91@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <120cd132-e9b6-5c01-7408-71757267ad91@oracle.com> Message-ID: <4E7854C2-BFEC-434E-B3CB-264313681F9D@oracle.com> > On Jun 11, 2018, at 4:57 PM, coleen.phillimore at oracle.com wrote: > > > > On 6/11/18 3:36 PM, Kim Barrett wrote: >> The use of "root" here differs from how that term is usually used in the >> context of GC. In particular, while GC-roots are Access-roots, not all >> Access-roots are GC-roots. This is a frequent source of confusion. > Yes. > > It appears that this is long because you pasted it in twice (or is the second version different? I hope not because I didn't find the difference). Ack! Sorry about that. I'll see if there are any differences and send an update with the proper version if needed. From kim.barrett at oracle.com Mon Jun 11 21:48:30 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 17:48:30 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4E7854C2-BFEC-434E-B3CB-264313681F9D@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <120cd132-e9b6-5c01-7408-71757267ad91@oracle.com> <4E7854C2-BFEC-434E-B3CB-264313681F9D@oracle.com> Message-ID: <92067ACD-FF0F-49CE-BD9A-746034214516@oracle.com> > On Jun 11, 2018, at 5:44 PM, Kim Barrett wrote: > >> On Jun 11, 2018, at 4:57 PM, coleen.phillimore at oracle.com wrote: >> It appears that this is long because you pasted it in twice (or is the second version different? I hope not because I didn't find the difference). >> >> Ack! Sorry about that. I'll see if there are any differences and send an update with the proper version if needed. > > The only difference is a couple of missing line breaks in the first "copy", in the paragraph starting with "(2) Access strength:". 
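As a rough illustration of the IN_NATIVE / IN_HEAP split being proposed here, call sites would read roughly as follows. This is a sketch only: the wrappers and the oop_load/oop_store_at entry points are paraphrased from HotSpot's oops/access.hpp, and IS_NOT_NULL is the proposed rename of today's OOP_NOT_NULL, so the exact spellings are illustrative rather than definitive.

  // "Native" (off-heap) location: an oop* living in a C/C++ data structure,
  // e.g. a handle slot or some other VM-internal root.
  oop o = NativeAccess<>::oop_load(addr);
  NativeAccess<>::oop_store(addr, value);

  // In-heap location: a field at a given offset inside a Java object.
  oop f = HeapAccess<ON_WEAK_OOP_REF>::oop_load_at(holder, offset);
  HeapAccess<IS_NOT_NULL>::oop_store_at(holder, offset, value);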
From kim.barrett at oracle.com Mon Jun 11 21:51:06 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 17:51:06 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <92067ACD-FF0F-49CE-BD9A-746034214516@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <120cd132-e9b6-5c01-7408-71757267ad91@oracle.com> <4E7854C2-BFEC-434E-B3CB-264313681F9D@oracle.com> <92067ACD-FF0F-49CE-BD9A-746034214516@oracle.com> Message-ID: > On Jun 11, 2018, at 5:48 PM, Kim Barrett wrote: > >> On Jun 11, 2018, at 5:44 PM, Kim Barrett wrote: >> >>> On Jun 11, 2018, at 4:57 PM, coleen.phillimore at oracle.com wrote: >>> It appears that this is long because you pasted it in twice (or is the second version different? I hope not because I didn't find the difference). >> >> Ack! Sorry about that. I'll see if there are any differences and send an update with the proper version if needed. > > The only difference is a couple of missing line breaks in the first "copy", in the paragraph starting with "(2) Access strength:". missing line breaks in the *second* copy! From kim.barrett at oracle.com Mon Jun 11 22:20:18 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 18:20:18 -0400 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <37898c3b-d4b6-ba49-38a2-adc5a5e6c4c5@redhat.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> <78fb1b8a-f738-49e7-5fc8-0118bfe262b6@redhat.com> <37898c3b-d4b6-ba49-38a2-adc5a5e6c4c5@redhat.com> Message-ID: > On Jun 11, 2018, at 4:46 PM, Roman Kennke wrote: > > On 11.06.2018 22:35, Kim Barrett wrote: >>> On Jun 11, 2018, at 4:10 PM, Roman Kennke wrote: >>> But may I throw in another ambiguity: >>> >>> OOP_IS_NULL is, as far as I can tell, used to decorate an access where >>> the *value* is known to be not null (for stores), or the value coming >>> out of a load is known to be not null (for loads). This is useful for >>> stuff like compressed oops, where a null-check can be elided if we know >>> it's not null. However, at least when using this in Shenandoah land, it >>> is also useful to know whether or not a target oop (the object being >>> written to, or loaded from) is known to be not null, at least in >>> compiled code. If it's known to be not null, we can elide a null-check >>> on the read/write barrier around the memory access. I propose to >>> disambiguate this by splitting the semantics into VALUE_NOT_NULL (or >>> similar) and TARGET_NOT_NULL (or similar). Suggestions welcome! >> I think what you are describing should be IS_NULL, which would be a pure >> addition from where we are. > > No. What I meant is the distinction between the value (of a load or > store) that is known to be not null and the target oop (to which we > store, from which we load) known to be not null. > > An IS_NULL property might be useful too, but as you say, I am not sure > how much use it actually is. Okay, I did misunderstand your suggestion. If I understand correctly, you want to be able to say obj != NULL in the following (via the Access decorators): Access<...>::oop_load_at(obj, offset) But when would that be useful? It seems to me that's an invariant that must be guaranteed by the caller, and that oop_load_at can just always make that assumption. Indeed, I have no idea what oop_load_at should do otherwise (other than segfault). 
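To make the distinction above concrete, here is an annotated store using the names floated in this thread; VALUE_NOT_NULL and TARGET_NOT_NULL are only proposals (today the single OOP_NOT_NULL decorator covers just the value case), so this is a sketch of the suggestion rather than existing API:

  // holder->field = value, expressed through the Access API.
  // VALUE_NOT_NULL (proposed): 'value' is known non-null, so the
  //   compressed-oops encoding can skip its null check.
  // TARGET_NOT_NULL (proposed): 'holder' is known non-null, so a GC
  //   read/write barrier wrapped around the access could skip its own
  //   null check on 'holder'; no current decorator expresses this.
  HeapAccess<VALUE_NOT_NULL | TARGET_NOT_NULL>::oop_store_at(holder, offset, value);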
From jiangli.zhou at oracle.com Mon Jun 11 22:26:21 2018 From: jiangli.zhou at oracle.com (Jiangli Zhou) Date: Mon, 11 Jun 2018 15:26:21 -0700 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> Message-ID: <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> Hi Kim, Both the changes and testing look good to me. Would it be better to rename MetaspaceShared::unarchive_heap_object() to MetaspaceShared::materialize_archived_object() to reflect the API in G1CollectedHeap? The use of "materialize" in the GC API looks very good. Thank you for continuing to improve the underlying GC support! Thanks! Jiangli > On Jun 11, 2018, at 1:37 PM, Kim Barrett wrote: > > Please review this change to the implementation of CDS support to use > a new collector-based protocol for handling archived Java mirrors. > Rather than using the IN_ARCHIVE_ROOT feature of the Access API, we > instead use a new protocol provided by the collected heap (only > G1CollectedHeap for now, since only G1 supports this feature). This > allows IN_ARCHIVE_ROOT to be removed from the Access API. > > This changeset is based on work by Stefan Karlsson. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8204585 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8204585/open.00/ > > Testing: > mach5 tier1,2,3, hs-tier4,5. > Local testing of hotspot_cds and hotspot_appcds. > From kim.barrett at oracle.com Mon Jun 11 22:58:57 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 18:58:57 -0400 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> Message-ID: <7BA7673E-CFD0-41D8-AC62-EBA619B56E31@oracle.com> > On Jun 11, 2018, at 6:26 PM, Jiangli Zhou wrote: > > Hi Kim, > > Both the changes and testing look good to me. Would it be better to rename MetaspaceShared::unarchive_heap_object() to MetaspaceShared::materialize_archived_object() to reflect the API in G1CollectedHeap? The use of "materialize" in the GC API looks very good. Thank you for continuing to improve the underlying GC support! I like that suggestion. I'll see what other folks think, but I'm inclined to do it. From coleen.phillimore at oracle.com Mon Jun 11 23:00:33 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 11 Jun 2018 19:00:33 -0400 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> Message-ID: <5e795f4d-43a7-5cf9-f0a1-3092746850c5@oracle.com> +1 This looks good. Coleen On 6/11/18 6:26 PM, Jiangli Zhou wrote: > Hi Kim, > > Both the changes and testing look good to me. Would it be better to rename MetaspaceShared::unarchive_heap_object() to MetaspaceShared::materialize_archived_object() to reflect the API in G1CollectedHeap? The use of "materialize" in the GC API looks very good. Thank you for continuing to improve the underlying GC support! > > Thanks! > > Jiangli > >> On Jun 11, 2018, at 1:37 PM, Kim Barrett wrote: >> >> Please review this change to the implementation of CDS support to use >> a new collector-based protocol for handling archived Java mirrors. 
>> Rather than using the IN_ARCHIVE_ROOT feature of the Access API, we >> instead use a new protocol provided by the collected heap (only >> G1CollectedHeap for now, since only G1 supports this feature). This >> allows IN_ARCHIVE_ROOT to be removed from the Access API. >> >> This changeset is based on work by Stefan Karlsson. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8204585 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8204585/open.00/ >> >> Testing: >> mach5 tier1,2,3, hs-tier4,5. >> Local testing of hotspot_cds and hotspot_appcds. >> From bob.vandette at oracle.com Mon Jun 11 23:30:04 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 11 Jun 2018 19:30:04 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> Message-ID: > On Jun 11, 2018, at 5:21 PM, David Holmes wrote: > > On 12/06/2018 12:12 AM, Bob Vandette wrote: >>> On Jun 11, 2018, at 4:32 AM, David Holmes > wrote: >>> >>> Sorry Bob I haven't had a chance to look at this detail. >>> >>> For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. >> All methods do return zero length arrays except I missed the getPerCpuUsage. I?ll fix that one and correct the javadoc. > > There are a few more too: > Those are covered by the function that converts the string range. > 231 * @return An array of available CPUs or null if metric is not available. > 232 * > 233 */ > 234 public int[] getCpuSetCpus(); > > 242 * @return An array of available and online CPUs or null if the metric > 243 * is not available. > 244 * > 245 */ > 246 public int[] getEffectiveCpuSetCpus(); > > 256 * @return An array of available memory nodes or null if metric is not available. > 257 * > 258 */ > 259 public int[] getCpuSetMems(); > > 267 * @return An array of available and online nodes or null if the metric > 268 * is not available. > 269 * > 270 */ > 271 public int[] getEffectiveCpuSetMems(); >>> >>> For getCpuPeriod() the term "operating system time slice" can be misconstrued as being related to the scheduler timeslice that may, or may not, exist, depending on the scheduler and scheduling policy etc. This "timeslice" is something specific to cgroups - no? >> The comments reads: >> * Returns the length of the operating system time slice, in >> * milliseconds, for processes within the Isolation Group. >> The comment does infer that it?s process and cgroup (Isolation group) specific and not the generic os timeslice. >> Isn?t this sufficient? > > The phrase "operating system" makes this sound like some kind of global timeslice notion - which it isn't. And I don't like to think of cpu periods/shares/quotas in terms of "time slice" anyway. I don't see the Docker or Cgroup documentation using "time slice" either. It suffices IMHO to just say for period: > > * Returns the length of the scheduling period, in > * milliseconds, for processes within the Isolation Group. > > then for quota: > > * Returns the total available run-time allowed, in milliseconds, > * during each scheduling period for all tasks in the Isolation Group. > Ok. I?ll update the docs. Bob > Thanks, > David > >> Thanks, >> Bob. >>> >>> David >>> >>>> On 8/06/2018 3:43 AM, Bob Vandette wrote: >>>> Can I get one more reviewer for this RFE so I can integrate it? 
>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>> Mandy Chung has reviewed this change. >>>> I?ve run Mach5 hotspot and core lib tests. >>>> I?ve reviewed the tests which were written by Harsha Wardhana >>>> I filed a CSR for the command line change and it?s now approved and closed. >>>> Thanks, >>>> Bob. >>>>> On May 30, 2018, at 3:45 PM, Bob Vandette > wrote: >>>>> >>>>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>>>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>>>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>>>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>>>> information. See the sample output below: >>>>> >>>>> RFE: Container Metrics >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>>> >>>>> WEBREV: >>>>> >>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>>> >>>>> >>>>> This commit will also include a fix for the following bug. >>>>> >>>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>>> >>>>> WEBREV: >>>>> >>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>>> >>>>> SAMPLE USAGE and OUTPUT: >>>>> >>>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>>> ./java -XshowSettings:system >>>>> Operating System Metrics: >>>>> Provider: cgroupv1 >>>>> Effective CPU Count: 4 >>>>> CPU Period: 100000 >>>>> CPU Quota: -1 >>>>> CPU Shares: -1 >>>>> List of Processors, 4 total: >>>>> 4 5 6 7 >>>>> List of Effective Processors, 4 total: >>>>> 4 5 6 7 >>>>> List of Memory Nodes, 2 total: >>>>> 0 1 >>>>> List of Available Memory Nodes, 2 total: >>>>> 0 1 >>>>> CPUSet Memory Pressure Enabled: false >>>>> Memory Limit: 256.00M >>>>> Memory Soft Limit: Unlimited >>>>> Memory & Swap Limit: 512.00M >>>>> Kernel Memory Limit: Unlimited >>>>> TCP Memory Limit: Unlimited >>>>> Out Of Memory Killer Enabled: true >>>>> >>>>> TEST RESULTS: >>>>> >>>>> testing runtime container APIs >>>>> Directory "JTwork" not found: creating >>>>> Passed: runtime/containers/cgroup/PlainRead.java >>>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>>> Passed: runtime/containers/docker/TestCPUSets.java >>>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>>> Passed: runtime/containers/docker/TestMisc.java >>>>> Test results: passed: 6 >>>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>> >>>>> testing jdk.internal.platform APIs >>>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>>> Test results: passed: 4 >>>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>> >>>>> testing -XshowSettings:system launcher option >>>>> Passed: tools/launcher/Settings.java >>>>> Test results: passed: 1 >>>>> >>>>> >>>>> Bob. 
>>>>> >>>>> From kim.barrett at oracle.com Tue Jun 12 00:45:17 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 20:45:17 -0400 Subject: RFR: 8204097: Simplify OopStorage::AllocateList block entry access In-Reply-To: References: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> Message-ID: > On Jun 11, 2018, at 6:42 AM, Thomas Schatzl wrote: > > Hi, > > On Wed, 2018-05-30 at 16:19 -0400, Kim Barrett wrote: >> Please review this simplification of OopStorage::AllocateList, >> removing the no longer used support for blocks being in multiple >> lists >> simultaneously. There is now only one list of blocks, the >> _allocate_list. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8204097 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8204097/open.00/ >> >> Testing: >> Mach5 tier{1,2,3} >> > > looks good. > > Thomas Thanks. From kim.barrett at oracle.com Tue Jun 12 00:54:37 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 20:54:37 -0400 Subject: RFR: 8204097: Simplify OopStorage::AllocateList block entry access In-Reply-To: References: <2A6B793E-AD54-430F-8C68-D92F964C0A37@oracle.com> Message-ID: <587DB4B1-0C00-4BDF-810F-D03AB89D5FF8@oracle.com> > On Jun 1, 2018, at 9:13 AM, coleen.phillimore at oracle.com wrote: > > > Hi Kim, This change looks fine, except these names caused me a lot of confusion. > > http://cr.openjdk.java.net/~kbarrett/8204097/open.00/src/hotspot/share/gc/shared/oopStorage.cpp.udiff.html > > + block.allocate_entry()._next = old; > + old->allocate_entry()._prev = &block; > > This allocate_entry() call doesn't actually allocate anything but gets the allocation list entry. Can these things be renamed in a subsequent RFE? Thanks for reviewing. https://bugs.openjdk.java.net/browse/JDK-8204834 > > thanks, > Coleen > > On 5/30/18 4:19 PM, Kim Barrett wrote: >> Please review this simplification of OopStorage::AllocateList, >> removing the no longer used support for blocks being in multiple lists >> simultaneously. There is now only one list of blocks, the >> _allocate_list. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8204097 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8204097/open.00/ >> >> Testing: >> Mach5 tier{1,2,3} From kim.barrett at oracle.com Tue Jun 12 01:09:48 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Jun 2018 21:09:48 -0400 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <5e795f4d-43a7-5cf9-f0a1-3092746850c5@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> <5e795f4d-43a7-5cf9-f0a1-3092746850c5@oracle.com> Message-ID: <1FA3FD8E-6407-4C9D-9A9E-1EA4EDDA9A92@oracle.com> > On Jun 11, 2018, at 7:00 PM, coleen.phillimore at oracle.com wrote: > > +1 > This looks good. > Coleen Thanks. > > On 6/11/18 6:26 PM, Jiangli Zhou wrote: >> Hi Kim, >> >> Both the changes and testing look good to me. Would it be better to rename MetaspaceShared::unarchive_heap_object() to MetaspaceShared::materialize_archived_object() to reflect the API in G1CollectedHeap? The use of "materialize" in the GC API looks very good. Thank you for continuing to improve the underlying GC support! >> >> Thanks! >> >> Jiangli >> >>> On Jun 11, 2018, at 1:37 PM, Kim Barrett wrote: >>> >>> Please review this change to the implementation of CDS support to use >>> a new collector-based protocol for handling archived Java mirrors. 
>>> Rather than using the IN_ARCHIVE_ROOT feature of the Access API, we >>> instead use a new protocol provided by the collected heap (only >>> G1CollectedHeap for now, since only G1 supports this feature). This >>> allows IN_ARCHIVE_ROOT to be removed from the Access API. >>> >>> This changeset is based on work by Stefan Karlsson. >>> >>> CR: >>> https://bugs.openjdk.java.net/browse/JDK-8204585 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~kbarrett/8204585/open.00/ >>> >>> Testing: >>> mach5 tier1,2,3, hs-tier4,5. >>> Local testing of hotspot_cds and hotspot_appcds. From david.holmes at oracle.com Tue Jun 12 05:12:54 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 Jun 2018 15:12:54 +1000 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> Message-ID: <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> On 12/06/2018 9:30 AM, Bob Vandette wrote: > > >> On Jun 11, 2018, at 5:21 PM, David Holmes wrote: >> >> On 12/06/2018 12:12 AM, Bob Vandette wrote: >>>> On Jun 11, 2018, at 4:32 AM, David Holmes > wrote: >>>> >>>> Sorry Bob I haven't had a chance to look at this detail. >>>> >>>> For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. >>> All methods do return zero length arrays except I missed the getPerCpuUsage. I?ll fix that one and correct the javadoc. >> >> There are a few more too: >> > > Those are covered by the function that converts the string range. ??? I have no idea what you mean. Java API design style is to return zero-length arrays rather than null. [Ref: Effective Java First Edition, Item 27]. Cheers, David ----- >> 231 * @return An array of available CPUs or null if metric is not available. >> 232 * >> 233 */ >> 234 public int[] getCpuSetCpus(); >> >> 242 * @return An array of available and online CPUs or null if the metric >> 243 * is not available. >> 244 * >> 245 */ >> 246 public int[] getEffectiveCpuSetCpus(); >> >> 256 * @return An array of available memory nodes or null if metric is not available. >> 257 * >> 258 */ >> 259 public int[] getCpuSetMems(); >> >> 267 * @return An array of available and online nodes or null if the metric >> 268 * is not available. >> 269 * >> 270 */ >> 271 public int[] getEffectiveCpuSetMems(); >>>> >>>> For getCpuPeriod() the term "operating system time slice" can be misconstrued as being related to the scheduler timeslice that may, or may not, exist, depending on the scheduler and scheduling policy etc. This "timeslice" is something specific to cgroups - no? >>> The comments reads: >>> * Returns the length of the operating system time slice, in >>> * milliseconds, for processes within the Isolation Group. >>> The comment does infer that it?s process and cgroup (Isolation group) specific and not the generic os timeslice. >>> Isn?t this sufficient? >> >> The phrase "operating system" makes this sound like some kind of global timeslice notion - which it isn't. And I don't like to think of cpu periods/shares/quotas in terms of "time slice" anyway. I don't see the Docker or Cgroup documentation using "time slice" either. It suffices IMHO to just say for period: >> >> * Returns the length of the scheduling period, in >> * milliseconds, for processes within the Isolation Group. 
>> >> then for quota: >> >> * Returns the total available run-time allowed, in milliseconds, >> * during each scheduling period for all tasks in the Isolation Group. >> > > Ok. I?ll update the docs. > Bob > >> Thanks, >> David >> >>> Thanks, >>> Bob. >>>> >>>> David >>>> >>>>> On 8/06/2018 3:43 AM, Bob Vandette wrote: >>>>> Can I get one more reviewer for this RFE so I can integrate it? >>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>>> Mandy Chung has reviewed this change. >>>>> I?ve run Mach5 hotspot and core lib tests. >>>>> I?ve reviewed the tests which were written by Harsha Wardhana >>>>> I filed a CSR for the command line change and it?s now approved and closed. >>>>> Thanks, >>>>> Bob. >>>>>> On May 30, 2018, at 3:45 PM, Bob Vandette > wrote: >>>>>> >>>>>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>>>>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>>>>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>>>>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>>>>> information. See the sample output below: >>>>>> >>>>>> RFE: Container Metrics >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>>>> >>>>>> WEBREV: >>>>>> >>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>>>> >>>>>> >>>>>> This commit will also include a fix for the following bug. >>>>>> >>>>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>>>> >>>>>> WEBREV: >>>>>> >>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>>>> >>>>>> SAMPLE USAGE and OUTPUT: >>>>>> >>>>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>>>> ./java -XshowSettings:system >>>>>> Operating System Metrics: >>>>>> Provider: cgroupv1 >>>>>> Effective CPU Count: 4 >>>>>> CPU Period: 100000 >>>>>> CPU Quota: -1 >>>>>> CPU Shares: -1 >>>>>> List of Processors, 4 total: >>>>>> 4 5 6 7 >>>>>> List of Effective Processors, 4 total: >>>>>> 4 5 6 7 >>>>>> List of Memory Nodes, 2 total: >>>>>> 0 1 >>>>>> List of Available Memory Nodes, 2 total: >>>>>> 0 1 >>>>>> CPUSet Memory Pressure Enabled: false >>>>>> Memory Limit: 256.00M >>>>>> Memory Soft Limit: Unlimited >>>>>> Memory & Swap Limit: 512.00M >>>>>> Kernel Memory Limit: Unlimited >>>>>> TCP Memory Limit: Unlimited >>>>>> Out Of Memory Killer Enabled: true >>>>>> >>>>>> TEST RESULTS: >>>>>> >>>>>> testing runtime container APIs >>>>>> Directory "JTwork" not found: creating >>>>>> Passed: runtime/containers/cgroup/PlainRead.java >>>>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>>>> Passed: runtime/containers/docker/TestCPUSets.java >>>>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>>>> Passed: runtime/containers/docker/TestMisc.java >>>>>> Test results: passed: 6 >>>>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>>> >>>>>> testing jdk.internal.platform APIs >>>>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>>>> Test results: passed: 4 >>>>>> Results 
written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>>> >>>>>> testing -XshowSettings:system launcher option >>>>>> Passed: tools/launcher/Settings.java >>>>>> Test results: passed: 1 >>>>>> >>>>>> >>>>>> Bob. >>>>>> >>>>>> > From mandy.chung at oracle.com Tue Jun 12 05:31:05 2018 From: mandy.chung at oracle.com (mandy chung) Date: Mon, 11 Jun 2018 22:31:05 -0700 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> Message-ID: On 6/11/18 10:12 PM, David Holmes wrote: >>>>> >>>>> For the Java code ... methods that return arrays should return >>>>> zero-length arrays when something is not available rather than null. >>>> All methods do return zero length arrays except I missed the >>>> getPerCpuUsage.? I?ll fix that one and correct the javadoc. >>> >>> There are a few more too: >>> >> >> Those are covered by the function that converts the string range. > > ??? I have no idea what you mean. I think the methods returning an array calls Subsystem::StringRangeToIntArray which returns an empty array. 171 public static int[] StringRangeToIntArray(String range) { 172 int[] ints = new int[0]; 173 174 if (range == null) return ints; Mandy From david.holmes at oracle.com Tue Jun 12 05:43:13 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 Jun 2018 15:43:13 +1000 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> Message-ID: <73dd06ec-544f-3cc9-00d5-c5c8c8ffdacc@oracle.com> On 12/06/2018 3:31 PM, mandy chung wrote: > On 6/11/18 10:12 PM, David Holmes wrote: >>>>>> >>>>>> For the Java code ... methods that return arrays should return >>>>>> zero-length arrays when something is not available rather than null. >>>>> All methods do return zero length arrays except I missed the >>>>> getPerCpuUsage.? I?ll fix that one and correct the javadoc. >>>> >>>> There are a few more too: >>>> >>> >>> Those are covered by the function that converts the string range. >> >> ??? I have no idea what you mean. > > > I think the methods returning an array calls > Subsystem::StringRangeToIntArray which returns an empty array. > > ?171???? public static int[] StringRangeToIntArray(String range) { > ?172???????? int[] ints = new int[0]; > ?173 > ?174???????? if (range == null) return ints; I'm commenting on the specification of the Metrics interface: http://cr.openjdk.java.net/~bobv/8203357/webrev.01/src/java.base/share/classes/jdk/internal/platform/Metrics.java.html not any implementation. Cheers, David > > Mandy From mandy.chung at oracle.com Tue Jun 12 06:00:57 2018 From: mandy.chung at oracle.com (mandy chung) Date: Mon, 11 Jun 2018 23:00:57 -0700 Subject: Ping!! 
Re: RFR: 8203357 Container Metrics In-Reply-To: <73dd06ec-544f-3cc9-00d5-c5c8c8ffdacc@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> <73dd06ec-544f-3cc9-00d5-c5c8c8ffdacc@oracle.com> Message-ID: <413aa29c-1a7c-8cd2-ca27-201dc6b53432@oracle.com> On 6/11/18 10:43 PM, David Holmes wrote: > On 12/06/2018 3:31 PM, mandy chung wrote: >> On 6/11/18 10:12 PM, David Holmes wrote: >>>>>>> >>>>>>> For the Java code ... methods that return arrays should return >>>>>>> zero-length arrays when something is not available rather than null. >>>>>> All methods do return zero length arrays except I missed the >>>>>> getPerCpuUsage.? I?ll fix that one and correct the javadoc. >>>>> >>>>> There are a few more too: >>>>> >>>> >>>> Those are covered by the function that converts the string range. >>> >>> ??? I have no idea what you mean. >> >> >> I think the methods returning an array calls >> Subsystem::StringRangeToIntArray which returns an empty array. >> >> ??171???? public static int[] StringRangeToIntArray(String range) { >> ??172???????? int[] ints = new int[0]; >> ??173 >> ??174???????? if (range == null) return ints; > > I'm commenting on the specification of the Metrics interface: > > http://cr.openjdk.java.net/~bobv/8203357/webrev.01/src/java.base/share/classes/jdk/internal/platform/Metrics.java.html > not any implementation. The implementation returns empty array, which is good. Yes the javadoc should be updated. Mandy From magnus.ihse.bursie at oracle.com Tue Jun 12 06:29:03 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 12 Jun 2018 08:29:03 +0200 Subject: RFR: JDK-8202384: Introduce altserver jvm variant with speculative execution disabled In-Reply-To: References: <970b43b8-e2fb-1ae7-b6a4-535f2844f3c4@redhat.com> <4FDD968D-4793-42B2-8A17-52BB75F66B28@oracle.com> <88a9fc61-cbde-a7f0-78b7-5f26ff66fcff@oracle.com> <80471236-7511-45fa-86d4-05a2500c5752@oracle.com> <2A71CDF2-A6F5-4985-9005-4886D1047F6C@oracle.com> <2e2ace1c-22ad-e6e3-5b05-e2688b5b1b5c@oracle.com> <965cfb76-1d06-14b5-7f1e-44481ef2c54d@oracle.com> Message-ID: <53309106-f59d-f086-e20a-a3d6eb544dda@oracle.com> On 2018-06-11 22:42, Erik Joelsson wrote: > Hello, > > Based on the discussion here, I have reverted back to something more > similar to webrev.02, but with a few changes. Mainly fixing a bug that > caused JVM_FEATURES_hardened to not actually be the same as for server > (if you have custom additions in configure). I also added a check so > that configure fails if you try to enable either variant hardened or > feature no-speculative-cti and the flags aren't available. > > Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.05/index.html ?Looks good to me. Thanks for all the effort! /Magnus > > /Erik > > On 2018-06-11 00:10, Magnus Ihse Bursie wrote: >> On 2018-06-08 23:50, Erik Joelsson wrote: >>> On 2018-06-07 17:30, David Holmes wrote: >>>> On 8/06/2018 6:11 AM, Erik Joelsson wrote: >>>>> I just don't think the extra work is warranted or should be >>>>> prioritized at this point. I also cannot think of a combination of >>>>> options required for what you are suggesting that wouldn't be >>>>> confusing to the user. If someone truly feels like these flags are >>>>> forced on them and can't live with them, we or preferably that >>>>> person can fix it then. I don't think that's dictatorship. 
OpenJDK >>>>> is still open source and anyone can contribute. >>>> >>>> I don't see why --enable-hardened-jdk and --enable-hardened-hotspot >>>> to add to the right flags would be either complicated or confusing. >>>> >>> For me the confusion surrounds the difference between >>> --enable-hardened-hotspot and --with-jvm-variants=server, hardened >>> and making the user understand it. But sure, it is doable. Here is a >>> new webrev with those two options as I interpret them. Here is the >>> help text: >>> >>> ?--enable-hardened-jdk?? enable hardenening compiler flags for all jdk >>> ????????????????????????? libraries (except the JVM), typically >>> disabling >>> ????????????????????????? speculative cti. [disabled] >>> ?--enable-hardened-hotspot >>> ????????????????????????? enable hardenening compiler flags for >>> hotspot (all >>> ????????????????????????? jvm variants), typically disabling >>> speculative cti. >>> ????????????????????????? To make hardening of hotspot a runtime >>> choice, >>> ????????????????????????? consider the "hardened" jvm variant >>> instead of this >>> ????????????????????????? option. [disabled] >>> >>> Note that this changes the default for jdk libraries to not enable >>> hardening unless the user requests it. >>> >>> Webrev: http://cr.openjdk.java.net/~erikj/8202384/webrev.04/ >> >> Hold it, hold it! I'm not sure how we ended up here, but I don't like >> it at all. :-( >> >> I think Eriks initial patch is much better than this. Some arguments >> in random order to defend this position: >> >> 1) Why should we have a configure option to disable security relevant >> flags for the JDK, if there has been no measured negative effect? We >> don't do this for any other compiler flags, especially not security >> relevant ones! >> >> I've re-read the entire thread to see if I could understand what >> could possibly motivate this, but the only thing I can find is David >> Holmes vague fear that these flags would not be well-tested enough. >> Let me counter with my own vague guesses: I believe the spectre >> mitigation methods to have been fully and properly tested, since they >> are rolled-out massively on all products. And let me complement with >> my own fear: the PR catastrophe if OpenJDK were *not* built with >> spectre mitigations, and someone were to exploit that! >> >> In fact, I could even argue that "server" should be hardened *by >> default*, and that we should instead introduce a non-hardened JVM >> named something akin to "quick-but-dangerous-server" instead. But I >> realize that a 25% performance hit is hard to swallow, so I won't >> push this agenda. >> >> 2) It is by no means clear that "--enable-hardened-jdk" does not >> harden all aspects of the JDK! If we should keep the option (which I >> definitely do not think we should!) it should be renamed to >> "--enable-hardened-libraries", or something like that. And it should >> be on by default, so it should be a "--disabled-hardened-jdk-libraries". >> >> Also, the general-sounding name "hardened" sounds like it might >> encompass more things than it does. What if I disabled a hardened jdk >> build, should I still get stack banging protection? If so, you need >> to move a lot more security-related flags to this option. (And, just >> to be absolutely clear: I don't think you should do that.) >> >> 3) Having two completely different ways of turning on Spectre >> protection for hotspot is just utterly confusing! This was a perfect >> example of how to use the JVM features, just as in the original patch. 
>> >> If you want to have spectre mitigation enabled for both server and >> client, by default, you would just need to run "configure >> --with-jvm-variants=server,client >> --with-jvm-features=no-speculative-cti", which will enable that >> feature for all variants. That's not really hard *at all* for anyone >> building OpenJDK. And it's way clearer what will happen, than a >> --enable-hardened-hotspot. >> >> 4) If you are a downstream provider building OpenJDK and you are dead >> set on not including Spectre mitigations in the JDK libraries, >> despite being shown to have no negative effects, then you can do just >> as any other downstream user with highly specialized requirements, >> and patch the source. I have no sympathies for this; I can't stop it >> but I don't think there's any reason for us to complicate the code to >> support this unlikely case. >> >> So, to recap, I think the webrev as published in >> http://cr.openjdk.java.net/~erikj/8202384/webrev.02/ (with >> "altserver" renamed to "hardened") is the way to go. >> >> /Magnus >> >> >> >>> >>> /Erik >> > From shade at redhat.com Tue Jun 12 06:39:50 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 12 Jun 2018 08:39:50 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> Message-ID: <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> (resending with proper To: field) On 06/12/2018 08:38 AM, Aleksey Shipilev wrote: > RFE: > https://bugs.openjdk.java.net/browse/JDK-8204850 > > In Epsilon, we have the call like: > > EpsilonBarrierSet::EpsilonBarrierSet() : BarrierSet( > make_barrier_set_assembler(), > make_barrier_set_c1(), > make_barrier_set_c2(), > BarrierSet::FakeRtti(BarrierSet::EpsilonBarrierSet)) {}; > > ...and some compilers (notably Mac OS builds) complain that: > > /Users/yosemite/jdk-sandbox/src/hotspot/share/gc/epsilon/epsilonBarrierSet.cpp:40:11: error: base > class 'BarrierSet' is uninitialized when used here to access > 'BarrierSet::make_barrier_set_assembler' [-Werror,-Wuninitialized] > make_barrier_set_assembler(), > ^ > > This warning is legit: we are calling instance method of BarrierSet before initializing it. But, > those methods are just factory methods, and they could be static, resolving the warning. 
> > Fix: > > diff -r 7f166e010af4 src/hotspot/share/gc/shared/barrierSet.hpp > --- a/src/hotspot/share/gc/shared/barrierSet.hpp Mon Jun 11 22:35:07 2018 -0400 > +++ b/src/hotspot/share/gc/shared/barrierSet.hpp Tue Jun 12 08:34:40 2018 +0200 > @@ -103,17 +103,17 @@ > ~BarrierSet() { } > > template > - BarrierSetAssembler* make_barrier_set_assembler() { > + static BarrierSetAssembler* make_barrier_set_assembler() { > return NOT_ZERO(new BarrierSetAssemblerT()) ZERO_ONLY(NULL); > } > > template > - BarrierSetC1* make_barrier_set_c1() { > + static BarrierSetC1* make_barrier_set_c1() { > return COMPILER1_PRESENT(new BarrierSetC1T()) NOT_COMPILER1(NULL); > } > > template > - BarrierSetC2* make_barrier_set_c2() { > + static BarrierSetC2* make_barrier_set_c2() { > return COMPILER2_PRESENT(new BarrierSetC2T()) NOT_COMPILER2(NULL); > } > > > Testing: x86_64 build, Epsilon MacOS build > > Thanks, > -Aleksey > From thomas.stuefe at gmail.com Tue Jun 12 06:44:53 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 12 Jun 2018 08:44:53 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> Message-ID: Seems fine. Regards, Thomas On Tue, Jun 12, 2018 at 8:39 AM, Aleksey Shipilev wrote: > (resending with proper To: field) > > On 06/12/2018 08:38 AM, Aleksey Shipilev wrote: >> RFE: >> https://bugs.openjdk.java.net/browse/JDK-8204850 >> >> In Epsilon, we have the call like: >> >> EpsilonBarrierSet::EpsilonBarrierSet() : BarrierSet( >> make_barrier_set_assembler(), >> make_barrier_set_c1(), >> make_barrier_set_c2(), >> BarrierSet::FakeRtti(BarrierSet::EpsilonBarrierSet)) {}; >> >> ...and some compilers (notably Mac OS builds) complain that: >> >> /Users/yosemite/jdk-sandbox/src/hotspot/share/gc/epsilon/epsilonBarrierSet.cpp:40:11: error: base >> class 'BarrierSet' is uninitialized when used here to access >> 'BarrierSet::make_barrier_set_assembler' [-Werror,-Wuninitialized] >> make_barrier_set_assembler(), >> ^ >> >> This warning is legit: we are calling instance method of BarrierSet before initializing it. But, >> those methods are just factory methods, and they could be static, resolving the warning. 
>> >> Fix: >> >> diff -r 7f166e010af4 src/hotspot/share/gc/shared/barrierSet.hpp >> --- a/src/hotspot/share/gc/shared/barrierSet.hpp Mon Jun 11 22:35:07 2018 -0400 >> +++ b/src/hotspot/share/gc/shared/barrierSet.hpp Tue Jun 12 08:34:40 2018 +0200 >> @@ -103,17 +103,17 @@ >> ~BarrierSet() { } >> >> template >> - BarrierSetAssembler* make_barrier_set_assembler() { >> + static BarrierSetAssembler* make_barrier_set_assembler() { >> return NOT_ZERO(new BarrierSetAssemblerT()) ZERO_ONLY(NULL); >> } >> >> template >> - BarrierSetC1* make_barrier_set_c1() { >> + static BarrierSetC1* make_barrier_set_c1() { >> return COMPILER1_PRESENT(new BarrierSetC1T()) NOT_COMPILER1(NULL); >> } >> >> template >> - BarrierSetC2* make_barrier_set_c2() { >> + static BarrierSetC2* make_barrier_set_c2() { >> return COMPILER2_PRESENT(new BarrierSetC2T()) NOT_COMPILER2(NULL); >> } >> >> >> Testing: x86_64 build, Epsilon MacOS build >> >> Thanks, >> -Aleksey >> > > From thomas.stuefe at gmail.com Tue Jun 12 06:49:57 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 12 Jun 2018 08:49:57 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> Message-ID: (and I wonder whether BarrierSet::_barrier_set_assembler/_barrier_set_c1/_barrier_set_c2 can be made * const since they are not changed after construction). On Tue, Jun 12, 2018 at 8:44 AM, Thomas St?fe wrote: > Seems fine. > > Regards, Thomas > > On Tue, Jun 12, 2018 at 8:39 AM, Aleksey Shipilev wrote: >> (resending with proper To: field) >> >> On 06/12/2018 08:38 AM, Aleksey Shipilev wrote: >>> RFE: >>> https://bugs.openjdk.java.net/browse/JDK-8204850 >>> >>> In Epsilon, we have the call like: >>> >>> EpsilonBarrierSet::EpsilonBarrierSet() : BarrierSet( >>> make_barrier_set_assembler(), >>> make_barrier_set_c1(), >>> make_barrier_set_c2(), >>> BarrierSet::FakeRtti(BarrierSet::EpsilonBarrierSet)) {}; >>> >>> ...and some compilers (notably Mac OS builds) complain that: >>> >>> /Users/yosemite/jdk-sandbox/src/hotspot/share/gc/epsilon/epsilonBarrierSet.cpp:40:11: error: base >>> class 'BarrierSet' is uninitialized when used here to access >>> 'BarrierSet::make_barrier_set_assembler' [-Werror,-Wuninitialized] >>> make_barrier_set_assembler(), >>> ^ >>> >>> This warning is legit: we are calling instance method of BarrierSet before initializing it. But, >>> those methods are just factory methods, and they could be static, resolving the warning. 
>>> >>> Fix: >>> >>> diff -r 7f166e010af4 src/hotspot/share/gc/shared/barrierSet.hpp >>> --- a/src/hotspot/share/gc/shared/barrierSet.hpp Mon Jun 11 22:35:07 2018 -0400 >>> +++ b/src/hotspot/share/gc/shared/barrierSet.hpp Tue Jun 12 08:34:40 2018 +0200 >>> @@ -103,17 +103,17 @@ >>> ~BarrierSet() { } >>> >>> template >>> - BarrierSetAssembler* make_barrier_set_assembler() { >>> + static BarrierSetAssembler* make_barrier_set_assembler() { >>> return NOT_ZERO(new BarrierSetAssemblerT()) ZERO_ONLY(NULL); >>> } >>> >>> template >>> - BarrierSetC1* make_barrier_set_c1() { >>> + static BarrierSetC1* make_barrier_set_c1() { >>> return COMPILER1_PRESENT(new BarrierSetC1T()) NOT_COMPILER1(NULL); >>> } >>> >>> template >>> - BarrierSetC2* make_barrier_set_c2() { >>> + static BarrierSetC2* make_barrier_set_c2() { >>> return COMPILER2_PRESENT(new BarrierSetC2T()) NOT_COMPILER2(NULL); >>> } >>> >>> >>> Testing: x86_64 build, Epsilon MacOS build >>> >>> Thanks, >>> -Aleksey >>> >> >> From thomas.schatzl at oracle.com Tue Jun 12 07:50:42 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 09:50:42 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> Message-ID: Hi, looks good. Thomas On Tue, 2018-06-12 at 08:39 +0200, Aleksey Shipilev wrote: > (resending with proper To: field) > > On 06/12/2018 08:38 AM, Aleksey Shipilev wrote: > > RFE: > > https://bugs.openjdk.java.net/browse/JDK-8204850 > > > > In Epsilon, we have the call like: > > > > EpsilonBarrierSet::EpsilonBarrierSet() : BarrierSet( > > make_barrier_set_assembler(), > > make_barrier_set_c1(), > > make_barrier_set_c2(), > > BarrierSet::FakeRtti(BarrierSet::EpsilonBarrierSet)) {}; > > > > ...and some compilers (notably Mac OS builds) complain that: > > > > /Users/yosemite/jdk- > > sandbox/src/hotspot/share/gc/epsilon/epsilonBarrierSet.cpp:40:11: > > error: base > > class 'BarrierSet' is uninitialized when used here to access > > 'BarrierSet::make_barrier_set_assembler' [- > > Werror,-Wuninitialized] > > make_barrier_set_assembler(), > > ^ > > > > This warning is legit: we are calling instance method of BarrierSet > > before initializing it. But, > > those methods are just factory methods, and they could be static, > > resolving the warning. 
> > > > Fix: > > > > diff -r 7f166e010af4 src/hotspot/share/gc/shared/barrierSet.hpp > > --- a/src/hotspot/share/gc/shared/barrierSet.hpp Mon Jun 11 > > 22:35:07 2018 -0400 > > +++ b/src/hotspot/share/gc/shared/barrierSet.hpp Tue Jun 12 > > 08:34:40 2018 +0200 > > @@ -103,17 +103,17 @@ > > ~BarrierSet() { } > > > > template > > - BarrierSetAssembler* make_barrier_set_assembler() { > > + static BarrierSetAssembler* make_barrier_set_assembler() { > > return NOT_ZERO(new BarrierSetAssemblerT()) ZERO_ONLY(NULL); > > } > > > > template > > - BarrierSetC1* make_barrier_set_c1() { > > + static BarrierSetC1* make_barrier_set_c1() { > > return COMPILER1_PRESENT(new BarrierSetC1T()) > > NOT_COMPILER1(NULL); > > } > > > > template > > - BarrierSetC2* make_barrier_set_c2() { > > + static BarrierSetC2* make_barrier_set_c2() { > > return COMPILER2_PRESENT(new BarrierSetC2T()) > > NOT_COMPILER2(NULL); > > } > > > > > > Testing: x86_64 build, Epsilon MacOS build > > > > Thanks, > > -Aleksey > > > > From thomas.schatzl at oracle.com Tue Jun 12 08:11:50 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 10:11:50 +0200 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <7BA7673E-CFD0-41D8-AC62-EBA619B56E31@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> <7BA7673E-CFD0-41D8-AC62-EBA619B56E31@oracle.com> Message-ID: <8bb0524fa8787a9a0fe1821d0449d1f70e370cd0.camel@oracle.com> Hi, On Mon, 2018-06-11 at 18:58 -0400, Kim Barrett wrote: > > On Jun 11, 2018, at 6:26 PM, Jiangli Zhou > > wrote: > > > > Hi Kim, > > > > Both the changes and testing look good to me. Would it be better to > > rename MetaspaceShared::unarchive_heap_object() to > > MetaspaceShared::materialize_archived_object() to reflect the API > > in G1CollectedHeap? The use of ?materialize? in the GC API looks > > very good. Thank you for continuing improving the GC underlying > > support! > > I like that suggestion. I?ll see what other folks think, but I?m > inclined to do it. > looks good. Agree with the suggested name change. Thanks, Thomas From jesper.wilhelmsson at oracle.com Tue Jun 12 08:59:37 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 12 Jun 2018 10:59:37 +0200 Subject: RFR: JDK-8203927 - Update version string to identify which VM is being used Message-ID: Hi, Please review this change to make the version string to identify which JVM is being used in the presence of a hardened JVM. This change relates to JDK-8202384 which is currently out for review as well. Testing: Local verification and tier 1. 
Bug: https://bugs.openjdk.java.net/browse/JDK-8203927 Webrev: http://cr.openjdk.java.net/~jwilhelm/8203927/webrev.00/ Thanks, /Jesper From aph at redhat.com Tue Jun 12 09:11:01 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Jun 2018 10:11:01 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> Message-ID: On 06/11/2018 08:17 PM, Roman Kennke wrote: > Am 11.06.2018 um 19:11 schrieb Andrew Haley: >> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>> Why is it better? And how would I do that? It sounds like a fairly >>>> complex undertaking for a special case. Notice that if the oop doesn't >>>> qualify as immediate operand (quite likely for an oop?) it used to be >>>> moved into rscratch1 anyway a few lines below. >>> >>> Sorry for the slow reply. I'm looking now. >> >> OK. The problem is that this is a very bad code smell: >> >> case T_ARRAY: >> jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >> __ cmpoop(reg1, rscratch1); >> >> I can't tell that this is correct. rscratch1 is used by assembler >> macros, and I don't know if some other GC (e.g. ZGC) might need to use >> rscratch1 inside cmpoop. The risk here is obvious. The Right Thing >> to do IMO is to generate a scratch register for pointer comparisons. >> >> Unless, I guess, we know that cmpoop never ever needs a scratch >> register for any forseeable garbage collector. >> > > I do know that Shenandoah does not require a tmp reg. I also do know > that no other collector currently needs equals-barriers at all. So cmpoop() is literally useless. It does nothing except add a layer of obfuscation in the name of some possible future collector. > I cannot see into the future. I prefer to be pragmatic and solve > existing problems. Perhaps, but this change you're asking me to review doesn't solve a problem, it creates one. This is how technical debt happens. > How about I add a comment to the obj_equals() API that says 'don't > use tmp reg X, and if you really need to, push/pop it or let the > compiler generate one for you' ? It's awkward, isn't it? I know that this is the wrong way to solve the problem, but I'm as vulnerable to social pressure as anyone else. Perhaps I should give up and choose an easy life. :-) -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Tue Jun 12 09:23:07 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 11:23:07 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> Message-ID: <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> Am 12.06.2018 um 11:11 schrieb Andrew Haley: > On 06/11/2018 08:17 PM, Roman Kennke wrote: >> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>> Why is it better? And how would I do that? It sounds like a fairly >>>>> complex undertaking for a special case. Notice that if the oop doesn't >>>>> qualify as immediate operand (quite likely for an oop?) it used to be >>>>> moved into rscratch1 anyway a few lines below. >>>> >>>> Sorry for the slow reply. I'm looking now. >>> >>> OK. The problem is that this is a very bad code smell: >>> >>> case T_ARRAY: >>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >>> __ cmpoop(reg1, rscratch1); >>> >>> I can't tell that this is correct. rscratch1 is used by assembler >>> macros, and I don't know if some other GC (e.g. ZGC) might need to use >>> rscratch1 inside cmpoop. The risk here is obvious. The Right Thing >>> to do IMO is to generate a scratch register for pointer comparisons. >>> >>> Unless, I guess, we know that cmpoop never ever needs a scratch >>> register for any forseeable garbage collector. >>> >> >> I do know that Shenandoah does not require a tmp reg. I also do know >> that no other collector currently needs equals-barriers at all. > > So cmpoop() is literally useless. It does nothing except add a layer > of obfuscation in the name of some possible future collector. The layer of abstraction is needed by Shenandoah. We need special handling for comparing oops. It is certainly not useless. Or are we talking about different issues? >> I cannot see into the future. I prefer to be pragmatic and solve >> existing problems. > > Perhaps, but this change you're asking me to review doesn't solve a > problem, it creates one. This is how technical debt happens. > >> How about I add a comment to the obj_equals() API that says 'don't >> use tmp reg X, and if you really need to, push/pop it or let the >> compiler generate one for you' ? > > It's awkward, isn't it? I know that this is the wrong way to solve > the problem, but I'm as vulnerable to social pressure as anyone else. > Perhaps I should give up and choose an easy life. :-) Haha. No, your review is very appreciated. Thank you, Roman From david.holmes at oracle.com Tue Jun 12 09:24:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 Jun 2018 19:24:36 +1000 Subject: RFR: JDK-8203927 - Update version string to identify which VM is being used In-Reply-To: References: Message-ID: <1254ccb6-4b6b-a6e1-7e03-721707f88d38@oracle.com> Hi Jesper, Looks fine. (I wish there was a better way to combine string literals to avoid the repetition - but I don't know of one.) 
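One generic C++ possibility (whether it fits the actual webrev is a separate question) is that adjacent string literals are concatenated at compile time, so a shared fragment can be factored into a macro instead of being repeated in every literal. A small standalone sketch with made-up names, not taken from the webrev:

#include <cstdio>

// Hypothetical shared fragment, purely for illustration.
#define VM_DESC_PREFIX "64-Bit Server VM"

static const char* desc_default  = VM_DESC_PREFIX;
static const char* desc_hardened = VM_DESC_PREFIX " (hardened)";

int main() {
  std::printf("%s\n%s\n", desc_default, desc_hardened);
  return 0;
}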
Thanks, David On 12/06/2018 6:59 PM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review this change to make the version string to identify which JVM is being used in the presence of a hardened JVM. This change relates to JDK-8202384 which is currently out for review as well. > > Testing: Local verification and tier 1. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203927 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8203927/webrev.00/ > > Thanks, > /Jesper > From thomas.schatzl at oracle.com Tue Jun 12 09:28:58 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 11:28:58 +0200 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: References: Message-ID: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> Hi, On Mon, 2018-05-28 at 17:11 -0400, Zhengyu Gu wrote: > Hi, > > Please review this refactoring of G1 string deduplication into > shared directory, so that other GCs (such as Shenandoah) can > advantage of existing infrastructure and plugin their own > implementation. > > This refactoring preserves G1's String Deduplication infrastructure > (please see the comments in stringDedup.hpp for details), so that > there is no change to G1 outside of string deduplication code. would it be possible to provide separate diffs for moving the files and applying the refactoring? (And do the minimal changes to make it compile). This would very likely decrease the amount of changes in the important change, the refactoring, significantly. Now everything is shown as "new" in diff tools, and we reviewers need to go through everything. It seems a bit of a stretch to call this "M" with 1800 lines of changed lines, both on the raw number of changes and the review complexity. Please, in the future use two CRs or provide two webrevs in one review in such a case. This would make the reviewing a lot less work for reviewers and turnaround a lot faster. Some initial comments anyway: - the change may not apply cleanly any more, sorry for the delay. At least it complains about "src/hotspot/share/gc/g1/g1StringDedup.[hc]pp is not empty after patch; not deleting". Maybe it is a limitation of the "patch" tool that incorrectly prints this. It builds though. Probably you first moved the files and then recreated them? - I am not sure why g1StringDedup.hpp still contains a general description of the mechanism at the top; that should probably move to the shared files. Also it duplicates the "Candidate selection" paragraphs apparently. Please avoid comment duplication. - the comment on G1StringDedupQueue does not describe the queue at all but seems to be some random implementation detail. Maybe put all the G1 specific considerations in g1StringDedup.hpp - and only these? (I saw that stringDedup.hpp refers to "gc specific" files, which is fine) - generally, if a definition of a method in a base class is commented, describing its contract, it is not necessary to duplicate it in the overriding methods. That just makes it prone to getting out of date. - maybe instead of a "queue_" prefix for the protected G1StringDedupQueue methods, use "_impl" as elsewhere. I am not sure that keeping the interface related to string deduplication all static and then use instance variables behind the scene makes it easily readable. Making everything static has to me been an implementation choice because there has only been one user (G1) before. I will need to bring this up with others in the (Oracle-)team what they think about this. 
Probably it's okay to keep this, and this could be done at another time. - in stringDedupStat.cpp remove commented out renmants of generational statistics (line 121+,152+) - some copyright years need to be updated I guess. - in StringDedupThread::do_deduplication the template parameter changes from "S" (in the definition) to "STAT". Not sure why; also we do not tend to use all-caps type names. I will run it through our testing infra with/without string dedup and then look through it some more. Thanks, Thomas From aph at redhat.com Tue Jun 12 10:11:31 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Jun 2018 11:11:31 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> Message-ID: <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> On 06/12/2018 10:23 AM, Roman Kennke wrote: > Am 12.06.2018 um 11:11 schrieb Andrew Haley: >> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>> Why is it better? And how would I do that? It sounds like a fairly >>>>>> complex undertaking for a special case. Notice that if the oop doesn't >>>>>> qualify as immediate operand (quite likely for an oop?) it used to be >>>>>> moved into rscratch1 anyway a few lines below. >>>>> >>>>> Sorry for the slow reply. I'm looking now. >>>> >>>> OK. The problem is that this is a very bad code smell: >>>> >>>> case T_ARRAY: >>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >>>> __ cmpoop(reg1, rscratch1); >>>> >>>> I can't tell that this is correct. rscratch1 is used by assembler >>>> macros, and I don't know if some other GC (e.g. ZGC) might need to use >>>> rscratch1 inside cmpoop. The risk here is obvious. The Right Thing >>>> to do IMO is to generate a scratch register for pointer comparisons. >>>> >>>> Unless, I guess, we know that cmpoop never ever needs a scratch >>>> register for any forseeable garbage collector. >>>> >>> >>> I do know that Shenandoah does not require a tmp reg. I also do know >>> that no other collector currently needs equals-barriers at all. >> >> So cmpoop() is literally useless. It does nothing except add a layer >> of obfuscation in the name of some possible future collector. > > The layer of abstraction is needed by Shenandoah. We need special > handling for comparing oops. It is certainly not useless. Or are we > talking about different issues? Ah, okay. I'm looking at ShenandoahBarrierSetAssembler::obj_equals() and I see that it actually has a side effect on its operands rather than using scratch registers. Ewww. I get it now. OK, I withdraw my objection. It's very confusing code to read, but it is what it is. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
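Reduced to a purely illustrative, self-contained C++ sketch (hypothetical FakeOop layout and function names; the real code is MacroAssembler-level and lives in the GC-specific BarrierSetAssembler), the idea is this: with a concurrent copying collector an object can be reachable both through its stale from-space address and through its to-space copy, so raw pointer inequality is not conclusive, and a Shenandoah-style obj_equals() resolves both operands and compares again. Overwriting the operands during that resolution is the side effect on the input registers, and also why no extra scratch register is needed.

#include <cstdio>

// Hypothetical object layout: a forwarding pointer to the to-space copy.
struct FakeOop {
  FakeOop* forwardee;   // NULL when the object has not been copied
  int payload;
};

// Canonicalize to the to-space copy if one exists.
static FakeOop* resolve_forwarding(FakeOop* obj) {
  return (obj != NULL && obj->forwardee != NULL) ? obj->forwardee : obj;
}

// Collectors without equals-barriers: a raw compare is conclusive.
static bool obj_equals_plain(FakeOop* a, FakeOop* b) { return a == b; }

// Shenandoah-style compare: equal raw pointers are conclusive, unequal
// ones may only mean "one side is the stale from-space address".
static bool obj_equals_gc_aware(FakeOop* a, FakeOop* b) {
  if (a == b) return true;
  a = resolve_forwarding(a);   // in generated code this overwrites the operand register
  b = resolve_forwarding(b);
  return a == b;
}

int main() {
  FakeOop to_space   = { NULL, 42 };
  FakeOop from_space = { &to_space, 42 };
  std::printf("plain:    %d\n", obj_equals_plain(&from_space, &to_space));     // 0
  std::printf("gc-aware: %d\n", obj_equals_gc_aware(&from_space, &to_space));  // 1
  return 0;
}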
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Tue Jun 12 10:12:30 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Jun 2018 11:12:30 +0100 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <46141938-e3a2-58e6-5f0f-c1964a06f0d4@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <46141938-e3a2-58e6-5f0f-c1964a06f0d4@redhat.com> Message-ID: <72ed0858-bcde-8bde-7522-0f42fdc98ff3@redhat.com> My mailer seems to have gone mad: it's sending half-finished mails. So, any incoherent replies are probably not my fault. :-) -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Tue Jun 12 10:12:46 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 12:12:46 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> Message-ID: <7bd3ec74-ca2c-f80d-89f9-34211bbdceb3@redhat.com> Am 12.06.2018 um 12:11 schrieb Andrew Haley: > On 06/12/2018 10:23 AM, Roman Kennke wrote: >> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>> Why is it better? And how would I do that? It sounds like a fairly >>>>>>> complex undertaking for a special case. Notice that if the oop doesn't >>>>>>> qualify as immediate operand (quite likely for an oop?) it used to be >>>>>>> moved into rscratch1 anyway a few lines below. >>>>>> >>>>>> Sorry for the slow reply. I'm looking now. >>>>> >>>>> OK. The problem is that this is a very bad code smell: >>>>> >>>>> case T_ARRAY: >>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >>>>> __ cmpoop(reg1, rscratch1); >>>>> >>>>> I can't tell that this is correct. rscratch1 is used by assembler >>>>> macros, and I don't know if some other GC (e.g. ZGC) might need to use >>>>> rscratch1 inside cmpoop. The risk here is obvious. The Right Thing >>>>> to do IMO is to generate a scratch register for pointer comparisons. >>>>> >>>>> Unless, I guess, we know that cmpoop never ever needs a scratch >>>>> register for any forseeable garbage collector. >>>>> >>>> >>>> I do know that Shenandoah does not require a tmp reg. I also do know >>>> that no other collector currently needs equals-barriers at all. 
>>> >>> So cmpoop() is literally useless. It does nothing except add a layer >>> of obfuscation in the name of some possible future collector. >> >> The layer of abstraction is needed by Shenandoah. We need special >> handling for comparing oops. It is certainly not useless. Or are we >> talking about different issues? > > Ah, okay. I'm looking at ShenandoahBarrierSetAssembler::obj_equals() > and I see that it actually has a side effect on its operands rather > than using scratch registers. Ewww. I get it now. > > OK, I withdraw my objection. It's very confusing code to read, but > it is what it is. > Thanks for reviewing! Roman From bob.vandette at oracle.com Tue Jun 12 10:56:33 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Tue, 12 Jun 2018 06:56:33 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> Message-ID: > On Jun 12, 2018, at 1:12 AM, David Holmes wrote: > > On 12/06/2018 9:30 AM, Bob Vandette wrote: >>> On Jun 11, 2018, at 5:21 PM, David Holmes wrote: >>> >>> On 12/06/2018 12:12 AM, Bob Vandette wrote: >>>>> On Jun 11, 2018, at 4:32 AM, David Holmes > wrote: >>>>> >>>>> Sorry Bob I haven't had a chance to look at this detail. >>>>> >>>>> For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. >>>> All methods do return zero length arrays except I missed the getPerCpuUsage. I?ll fix that one and correct the javadoc. >>> >>> There are a few more too: >>> >> Those are covered by the function that converts the string range. > > ??? I have no idea what you mean. > > Java API design style is to return zero-length arrays rather than null. [Ref: Effective Java First Edition, Item 27]. Look at line 174 in this file. http://cr.openjdk.java.net/~bobv/8203357/webrev.01/src/java.base/linux/classes/jdk/internal/platform/cgroupv1/SubSystem.java.html Bob > > Cheers, > David > ----- > >>> 231 * @return An array of available CPUs or null if metric is not available. >>> 232 * >>> 233 */ >>> 234 public int[] getCpuSetCpus(); >>> >>> 242 * @return An array of available and online CPUs or null if the metric >>> 243 * is not available. >>> 244 * >>> 245 */ >>> 246 public int[] getEffectiveCpuSetCpus(); >>> >>> 256 * @return An array of available memory nodes or null if metric is not available. >>> 257 * >>> 258 */ >>> 259 public int[] getCpuSetMems(); >>> >>> 267 * @return An array of available and online nodes or null if the metric >>> 268 * is not available. >>> 269 * >>> 270 */ >>> 271 public int[] getEffectiveCpuSetMems(); >>>>> >>>>> For getCpuPeriod() the term "operating system time slice" can be misconstrued as being related to the scheduler timeslice that may, or may not, exist, depending on the scheduler and scheduling policy etc. This "timeslice" is something specific to cgroups - no? >>>> The comments reads: >>>> * Returns the length of the operating system time slice, in >>>> * milliseconds, for processes within the Isolation Group. >>>> The comment does infer that it?s process and cgroup (Isolation group) specific and not the generic os timeslice. >>>> Isn?t this sufficient? >>> >>> The phrase "operating system" makes this sound like some kind of global timeslice notion - which it isn't. 
And I don't like to think of cpu periods/shares/quotas in terms of "time slice" anyway. I don't see the Docker or Cgroup documentation using "time slice" either. It suffices IMHO to just say for period: >>> >>> * Returns the length of the scheduling period, in >>> * milliseconds, for processes within the Isolation Group. >>> >>> then for quota: >>> >>> * Returns the total available run-time allowed, in milliseconds, >>> * during each scheduling period for all tasks in the Isolation Group. >>> >> Ok. I?ll update the docs. >> Bob >>> Thanks, >>> David >>> >>>> Thanks, >>>> Bob. >>>>> >>>>> David >>>>> >>>>>>> On 8/06/2018 3:43 AM, Bob Vandette wrote: >>>>>>> Can I get one more reviewer for this RFE so I can integrate it? >>>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>>>> Mandy Chung has reviewed this change. >>>>>> I?ve run Mach5 hotspot and core lib tests. >>>>>> I?ve reviewed the tests which were written by Harsha Wardhana >>>>>> I filed a CSR for the command line change and it?s now approved and closed. >>>>>> Thanks, >>>>>> Bob. >>>>>>> On May 30, 2018, at 3:45 PM, Bob Vandette > wrote: >>>>>>> >>>>>>> Please review the following RFE which adds an internal API, along with jtreg tests that provide >>>>>>> access to Docker container configuration data and metrics. In addition to the API which we hope to >>>>>>> take advantage of in the future with Java Flight Recorder and a JMX Mbean, I?ve added an additional >>>>>>> option to -XshowSettings:system than dumps out the container or host cgroup confguration >>>>>>> information. See the sample output below: >>>>>>> >>>>>>> RFE: Container Metrics >>>>>>> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203357 >>>>>>> >>>>>>> WEBREV: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.01 >>>>>>> >>>>>>> >>>>>>> This commit will also include a fix for the following bug. 
>>>>>>> >>>>>>> BUG: [TESTBUG] Test /runtime/containers/cgroup/PlainRead.java fails >>>>>>> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203691 >>>>>>> >>>>>>> WEBREV: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~bobv/8203357/webrev.00/test/hotspot/jtreg/runtime/containers/cgroup/PlainRead.java.sdiff.html >>>>>>> >>>>>>> SAMPLE USAGE and OUTPUT: >>>>>>> >>>>>>> docker run ?memory=256m --cpuset-cpus 4-7 -it ubuntu bash >>>>>>> ./java -XshowSettings:system >>>>>>> Operating System Metrics: >>>>>>> Provider: cgroupv1 >>>>>>> Effective CPU Count: 4 >>>>>>> CPU Period: 100000 >>>>>>> CPU Quota: -1 >>>>>>> CPU Shares: -1 >>>>>>> List of Processors, 4 total: >>>>>>> 4 5 6 7 >>>>>>> List of Effective Processors, 4 total: >>>>>>> 4 5 6 7 >>>>>>> List of Memory Nodes, 2 total: >>>>>>> 0 1 >>>>>>> List of Available Memory Nodes, 2 total: >>>>>>> 0 1 >>>>>>> CPUSet Memory Pressure Enabled: false >>>>>>> Memory Limit: 256.00M >>>>>>> Memory Soft Limit: Unlimited >>>>>>> Memory & Swap Limit: 512.00M >>>>>>> Kernel Memory Limit: Unlimited >>>>>>> TCP Memory Limit: Unlimited >>>>>>> Out Of Memory Killer Enabled: true >>>>>>> >>>>>>> TEST RESULTS: >>>>>>> >>>>>>> testing runtime container APIs >>>>>>> Directory "JTwork" not found: creating >>>>>>> Passed: runtime/containers/cgroup/PlainRead.java >>>>>>> Passed: runtime/containers/docker/DockerBasicTest.java >>>>>>> Passed: runtime/containers/docker/TestCPUAwareness.java >>>>>>> Passed: runtime/containers/docker/TestCPUSets.java >>>>>>> Passed: runtime/containers/docker/TestMemoryAwareness.java >>>>>>> Passed: runtime/containers/docker/TestMisc.java >>>>>>> Test results: passed: 6 >>>>>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>>>> >>>>>>> testing jdk.internal.platform APIs >>>>>>> Passed: jdk/internal/platform/cgroup/TestCgroupMetrics.java >>>>>>> Passed: jdk/internal/platform/docker/TestDockerCpuMetrics.java >>>>>>> Passed: jdk/internal/platform/docker/TestDockerMemoryMetrics.java >>>>>>> Passed: jdk/internal/platform/docker/TestSystemMetrics.java >>>>>>> Test results: passed: 4 >>>>>>> Results written to /export/users/bobv/jdk11/build/jtreg/JTwork >>>>>>> >>>>>>> testing -XshowSettings:system launcher option >>>>>>> Passed: tools/launcher/Settings.java >>>>>>> Test results: passed: 1 >>>>>>> >>>>>>> >>>>>>> Bob. >>>>>>> >>>>>>> From bob.vandette at oracle.com Tue Jun 12 10:59:23 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Tue, 12 Jun 2018 06:59:23 -0400 Subject: Ping!! Re: RFR: 8203357 Container Metrics In-Reply-To: <73dd06ec-544f-3cc9-00d5-c5c8c8ffdacc@oracle.com> References: <510E46BF-A220-40CC-8752-687C849D9114@oracle.com> <1c57d8ee-1178-2e9a-4b7d-aafbe885473c@oracle.com> <469a46b5-aa9c-c6fd-b270-6a4230b4e08a@oracle.com> <2b705a94-afe0-c9bb-4377-accabf73696e@oracle.com> <73dd06ec-544f-3cc9-00d5-c5c8c8ffdacc@oracle.com> Message-ID: > On Jun 12, 2018, at 1:43 AM, David Holmes wrote: > >> On 12/06/2018 3:31 PM, mandy chung wrote: >> On 6/11/18 10:12 PM, David Holmes wrote: >>>>>>> >>>>>>> For the Java code ... methods that return arrays should return zero-length arrays when something is not available rather than null. >>>>>> All methods do return zero length arrays except I missed the getPerCpuUsage. I?ll fix that one and correct the javadoc. >>>>> >>>>> There are a few more too: >>>>> >>>> >>>> Those are covered by the function that converts the string range. >>> >>> ??? I have no idea what you mean. 
>> I think the methods returning an array calls Subsystem::StringRangeToIntArray which returns an empty array. >> 171 public static int[] StringRangeToIntArray(String range) { >> 172 int[] ints = new int[0]; >> 173 >> 174 if (range == null) return ints; > > I'm commenting on the specification of the Metrics interface: > > http://cr.openjdk.java.net/~bobv/8203357/webrev.01/src/java.base/share/classes/jdk/internal/platform/Metrics.java.html > > not any implementation. Oh. I previously mentioned that I needed to correct the javadoc comments. I had corrected the implementation but hadn?t fixed the spec. Bob. > > Cheers, > David > >> Mandy From thomas.schatzl at oracle.com Tue Jun 12 11:51:37 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 13:51:37 +0200 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> References: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> Message-ID: Hi, testing showed that some error has been introduced. It can be reliably reproduced by running the test gc/stress/TestGCBasherWithG1.java with -XX:+UseStringDeduplication set. An example command line could be: jtreg -jdk: -vmoption:-XX:+UseStringDeduplication gc/stress/TestGCCBasherWithG1.java There were some other failed tests, but they all looked very similar. The crash does not occur without these changes. Please fix this issue. :) It would be nice if you ran through all jtreg tests with -XX:+UseStringDeduplication to check any other issues. There are not many tests that explicitly use -XX:+UseStringDeduplication, but of course a run of all tests with string dedup enabled should give a good coverage. Thanks, Thomas On Tue, 2018-06-12 at 11:28 +0200, Thomas Schatzl wrote: > Hi, > > On Mon, 2018-05-28 at 17:11 -0400, Zhengyu Gu wrote: > > Hi, > > > > Please review this refactoring of G1 string deduplication into > > shared directory, so that other GCs (such as Shenandoah) can > > advantage of existing infrastructure and plugin their own > > implementation. > > > > This refactoring preserves G1's String Deduplication > > infrastructure > > (please see the comments in stringDedup.hpp for details), so that > > there is no change to G1 outside of string deduplication code. > > would it be possible to provide separate diffs for moving the files > and applying the refactoring? (And do the minimal changes to make it > compile). > > This would very likely decrease the amount of changes in the > important > change, the refactoring, significantly. > > Now everything is shown as "new" in diff tools, and we reviewers need > to go through everything. It seems a bit of a stretch to call this > "M" > with 1800 lines of changed lines, both on the raw number of changes > and > the review complexity. > > Please, in the future use two CRs or provide two webrevs in one > review > in such a case. This would make the reviewing a lot less work for > reviewers and turnaround a lot faster. > > Some initial comments anyway: > > - the change may not apply cleanly any more, sorry for the delay. At > least it complains about > "src/hotspot/share/gc/g1/g1StringDedup.[hc]pp > is not empty after patch; not deleting". > Maybe it is a limitation of the "patch" tool that incorrectly prints > this. It builds though. > Probably you first moved the files and then recreated them? 
> > - I am not sure why g1StringDedup.hpp still contains a general > description of the mechanism at the top; that should probably move to > the shared files. > Also it duplicates the "Candidate selection" paragraphs apparently. > Please avoid comment duplication. > > - the comment on G1StringDedupQueue does not describe the queue at > all > but seems to be some random implementation detail. > Maybe put all the G1 specific considerations in g1StringDedup.hpp - > and > only these? > > (I saw that stringDedup.hpp refers to "gc specific" files, which is > fine) > > - generally, if a definition of a method in a base class is > commented, > describing its contract, it is not necessary to duplicate it in the > overriding methods. > That just makes it prone to getting out of date. > > - maybe instead of a "queue_" prefix for the protected > G1StringDedupQueue methods, use "_impl" as elsewhere. > > I am not sure that keeping the interface related to string > deduplication all static and then use instance variables behind the > scene makes it easily readable. > > Making everything static has to me been an implementation choice > because there has only been one user (G1) before. > > I will need to bring this up with others in the (Oracle-)team what > they > think about this. Probably it's okay to keep this, and this could be > done at another time. > > - in stringDedupStat.cpp remove commented out renmants of > generational > statistics (line 121+,152+) > > - some copyright years need to be updated I guess. > > - in StringDedupThread::do_deduplication the template parameter > changes > from "S" (in the definition) to "STAT". Not sure why; also we do not > tend to use all-caps type names. > > I will run it through our testing infra with/without string dedup and > then look through it some more. > > Thanks, > Thomas > > From shade at redhat.com Tue Jun 12 12:17:24 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 12 Jun 2018 14:17:24 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> Message-ID: <31e95c46-2182-434c-72a8-abfd9a7f0e3b@redhat.com> Thanks! Also is it trivial? I had it through jdk-submit as part of Epsilon change, and it seems fine. I am pushing it shortly. -Aleksey On 06/12/2018 09:50 AM, Thomas Schatzl wrote: > Hi, > > looks good. > > Thomas > > On Tue, 2018-06-12 at 08:39 +0200, Aleksey Shipilev wrote: >> (resending with proper To: field) >> >> On 06/12/2018 08:38 AM, Aleksey Shipilev wrote: >>> RFE: >>> https://bugs.openjdk.java.net/browse/JDK-8204850 >>> >>> In Epsilon, we have the call like: >>> >>> EpsilonBarrierSet::EpsilonBarrierSet() : BarrierSet( >>> make_barrier_set_assembler(), >>> make_barrier_set_c1(), >>> make_barrier_set_c2(), >>> BarrierSet::FakeRtti(BarrierSet::EpsilonBarrierSet)) {}; >>> >>> ...and some compilers (notably Mac OS builds) complain that: >>> >>> /Users/yosemite/jdk- >>> sandbox/src/hotspot/share/gc/epsilon/epsilonBarrierSet.cpp:40:11: >>> error: base >>> class 'BarrierSet' is uninitialized when used here to access >>> 'BarrierSet::make_barrier_set_assembler' [- >>> Werror,-Wuninitialized] >>> make_barrier_set_assembler(), >>> ^ >>> >>> This warning is legit: we are calling instance method of BarrierSet >>> before initializing it. But, >>> those methods are just factory methods, and they could be static, >>> resolving the warning. 
>>> >>> Fix: >>> >>> diff -r 7f166e010af4 src/hotspot/share/gc/shared/barrierSet.hpp >>> --- a/src/hotspot/share/gc/shared/barrierSet.hpp Mon Jun 11 >>> 22:35:07 2018 -0400 >>> +++ b/src/hotspot/share/gc/shared/barrierSet.hpp Tue Jun 12 >>> 08:34:40 2018 +0200 >>> @@ -103,17 +103,17 @@ >>> ~BarrierSet() { } >>> >>> template >>> - BarrierSetAssembler* make_barrier_set_assembler() { >>> + static BarrierSetAssembler* make_barrier_set_assembler() { >>> return NOT_ZERO(new BarrierSetAssemblerT()) ZERO_ONLY(NULL); >>> } >>> >>> template >>> - BarrierSetC1* make_barrier_set_c1() { >>> + static BarrierSetC1* make_barrier_set_c1() { >>> return COMPILER1_PRESENT(new BarrierSetC1T()) >>> NOT_COMPILER1(NULL); >>> } >>> >>> template >>> - BarrierSetC2* make_barrier_set_c2() { >>> + static BarrierSetC2* make_barrier_set_c2() { >>> return COMPILER2_PRESENT(new BarrierSetC2T()) >>> NOT_COMPILER2(NULL); >>> } >>> >>> >>> Testing: x86_64 build, Epsilon MacOS build >>> >>> Thanks, >>> -Aleksey >>> >> >> > From thomas.schatzl at oracle.com Tue Jun 12 12:20:30 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 14:20:30 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: <31e95c46-2182-434c-72a8-abfd9a7f0e3b@redhat.com> References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> <31e95c46-2182-434c-72a8-abfd9a7f0e3b@redhat.com> Message-ID: <09c4a91db62fae51521d09e9519fed3cdcbb9eee.camel@oracle.com> Hi, On Tue, 2018-06-12 at 14:17 +0200, Aleksey Shipilev wrote: > Thanks! Also is it trivial? I had it through jdk-submit as part of > Epsilon change, and it seems > fine. I am pushing it shortly. I think this could qualify as trival change. Thanks, Thomas From shade at redhat.com Tue Jun 12 12:24:46 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 12 Jun 2018 14:24:46 +0200 Subject: RFR 8204850: BarrierSet::make_* should be static In-Reply-To: <09c4a91db62fae51521d09e9519fed3cdcbb9eee.camel@oracle.com> References: <0b6b7efe-996c-afe6-d6b5-4510caa8e4bf@redhat.com> <4a75bce7-1ca2-bd06-e82c-bd7bcb456865@redhat.com> <31e95c46-2182-434c-72a8-abfd9a7f0e3b@redhat.com> <09c4a91db62fae51521d09e9519fed3cdcbb9eee.camel@oracle.com> Message-ID: <4c75175f-57bd-afc5-2fc2-6e9d2338dd50@redhat.com> On 06/12/2018 02:20 PM, Thomas Schatzl wrote: > On Tue, 2018-06-12 at 14:17 +0200, Aleksey Shipilev wrote: >> Thanks! Also is it trivial? I had it through jdk-submit as part of >> Epsilon change, and it seems >> fine. I am pushing it shortly. > > I think this could qualify as trival change. Thanks Thomas, pushed. 
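On the earlier side question in this thread about whether the _barrier_set_assembler/_barrier_set_c1/_barrier_set_c2 members could be made "* const": in plain C++ a pointer member that is only ever set in the constructor can be declared as a const pointer and initialized in the mem-initializer list, roughly like this (hypothetical names, not the actual BarrierSet fields):

struct Assembler {};

class Holder {
 private:
  Assembler* const _assembler;   // the pointer itself is immutable after construction
 public:
  explicit Holder(Assembler* assembler) : _assembler(assembler) {}
  Assembler* assembler() const { return _assembler; }
  // Any later "_assembler = ...;" would fail to compile.
};

int main() {
  Assembler a;
  Holder h(&a);
  return (h.assembler() == &a) ? 0 : 1;
}

Whether that is worth doing for BarrierSet is a separate call; a const data member, for instance, suppresses the implicitly generated copy-assignment operator.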
-Aleksey From rkennke at redhat.com Tue Jun 12 12:59:07 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 14:59:07 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> Message-ID: <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> jdk/submit came back with unstable (see below). Can somebody with access look what's going on? Thanks, Roman Build Details: 2018-06-12-1147472.roman.source 0 Failed Tests Mach5 Tasks Results Summary PASSED: 62 KILLED: 0 FAILED: 0 UNABLE_TO_RUN: 11 EXECUTED_WITH_FAILURE: 2 NA: 0 Build 2 Not run build_jdk_linux-linux-x64-debug-linux-x64-build-1 error while building, return value: 2 build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 error while building, return value: 2 Test 11 Not run tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64-debug-48 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64-debug-51 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 See all 11... > On 06/12/2018 10:23 AM, Roman Kennke wrote: >> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>> Why is it better? And how would I do that? It sounds like a fairly >>>>>>> complex undertaking for a special case. Notice that if the oop doesn't >>>>>>> qualify as immediate operand (quite likely for an oop?) it used to be >>>>>>> moved into rscratch1 anyway a few lines below. >>>>>> >>>>>> Sorry for the slow reply. 
I'm looking now. >>>>> >>>>> OK. The problem is that this is a very bad code smell: >>>>> >>>>> case T_ARRAY: >>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), rscratch1); >>>>> __ cmpoop(reg1, rscratch1); >>>>> >>>>> I can't tell that this is correct. rscratch1 is used by assembler >>>>> macros, and I don't know if some other GC (e.g. ZGC) might need to use >>>>> rscratch1 inside cmpoop. The risk here is obvious. The Right Thing >>>>> to do IMO is to generate a scratch register for pointer comparisons. >>>>> >>>>> Unless, I guess, we know that cmpoop never ever needs a scratch >>>>> register for any forseeable garbage collector. >>>>> >>>> >>>> I do know that Shenandoah does not require a tmp reg. I also do know >>>> that no other collector currently needs equals-barriers at all. >>> >>> So cmpoop() is literally useless. It does nothing except add a layer >>> of obfuscation in the name of some possible future collector. >> >> The layer of abstraction is needed by Shenandoah. We need special >> handling for comparing oops. It is certainly not useless. Or are we >> talking about different issues? > > Ah, okay. I'm looking at ShenandoahBarrierSetAssembler::obj_equals() > and I see that it actually has a side effect on its operands rather > than using scratch registers. Ewww. I get it now. > > OK, I withdraw my objection. It's very confusing code to read, but > it is what it is. > From thomas.schatzl at oracle.com Tue Jun 12 13:03:56 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Jun 2018 15:03:56 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> Message-ID: Hi, the issue is (or actually the fix) https://bugs.openjdk.java.net/brow se/JDK-8204861 . Thanks, Thomas On Tue, 2018-06-12 at 14:59 +0200, Roman Kennke wrote: > jdk/submit came back with unstable (see below). Can somebody with > access > look what's going on? 
> > Thanks, Roman > > Build Details: 2018-06-12-1147472.roman.source > 0 Failed Tests > Mach5 Tasks Results Summary > > PASSED: 62 > KILLED: 0 > FAILED: 0 > UNABLE_TO_RUN: 11 > EXECUTED_WITH_FAILURE: 2 > NA: 0 > Build > > 2 Not run > build_jdk_linux-linux-x64-debug-linux-x64-build-1 error > while building, return value: 2 > build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 > error > while building, return value: 2 > > Test > > 11 Not run > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug- > 24 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64- > debug-27 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64- > debug-30 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64- > debug-33 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp- > linux-x64-debug-36 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64- > debug-45 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64- > debug-48 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64- > debug-51 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > See all 11... > > > > On 06/12/2018 10:23 AM, Roman Kennke wrote: > > > Am 12.06.2018 um 11:11 schrieb Andrew Haley: > > > > On 06/11/2018 08:17 PM, Roman Kennke wrote: > > > > > Am 11.06.2018 um 19:11 schrieb Andrew Haley: > > > > > > On 06/11/2018 04:56 PM, Andrew Haley wrote: > > > > > > > On 06/08/2018 09:17 PM, Roman Kennke wrote: > > > > > > > > Why is it better? And how would I do that? It sounds > > > > > > > > like a fairly > > > > > > > > complex undertaking for a special case. Notice that if > > > > > > > > the oop doesn't > > > > > > > > qualify as immediate operand (quite likely for an oop?) > > > > > > > > it used to be > > > > > > > > moved into rscratch1 anyway a few lines below. > > > > > > > > > > > > > > Sorry for the slow reply. I'm looking now. > > > > > > > > > > > > OK. The problem is that this is a very bad code smell: > > > > > > > > > > > > case T_ARRAY: > > > > > > jobject2reg(opr2->as_constant_ptr()->as_jobject(), > > > > > > rscratch1); > > > > > > __ cmpoop(reg1, rscratch1); > > > > > > > > > > > > I can't tell that this is correct. rscratch1 is used by > > > > > > assembler > > > > > > macros, and I don't know if some other GC (e.g. ZGC) might > > > > > > need to use > > > > > > rscratch1 inside cmpoop. The risk here is obvious. The > > > > > > Right Thing > > > > > > to do IMO is to generate a scratch register for pointer > > > > > > comparisons. 
> > > > > > > > > > > > Unless, I guess, we know that cmpoop never ever needs a > > > > > > scratch > > > > > > register for any forseeable garbage collector. > > > > > > > > > > > > > > > > I do know that Shenandoah does not require a tmp reg. I also > > > > > do know > > > > > that no other collector currently needs equals-barriers at > > > > > all. > > > > > > > > So cmpoop() is literally useless. It does nothing except add a > > > > layer > > > > of obfuscation in the name of some possible future collector. > > > > > > The layer of abstraction is needed by Shenandoah. We need special > > > handling for comparing oops. It is certainly not useless. Or are > > > we > > > talking about different issues? > > > > Ah, okay. I'm looking at > > ShenandoahBarrierSetAssembler::obj_equals() > > and I see that it actually has a side effect on its operands rather > > than using scratch registers. Ewww. I get it now. > > > > OK, I withdraw my objection. It's very confusing code to read, but > > it is what it is. > > > > From rkennke at redhat.com Tue Jun 12 13:07:34 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 15:07:34 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> Message-ID: Ok, interesting. I have that patch, but it builds+tests fine for me. How has this slipped through mach5 (pre-commit) in the first place? Anyway, I'll wait for the fix, and then retry. Thanks, Roman > Hi, > > the issue is (or actually the fix) https://bugs.openjdk.java.net/brow > se/JDK-8204861 . > > Thanks, > Thomas > > On Tue, 2018-06-12 at 14:59 +0200, Roman Kennke wrote: >> jdk/submit came back with unstable (see below). Can somebody with >> access >> look what's going on? 
>> >> Thanks, Roman >> >> Build Details: 2018-06-12-1147472.roman.source >> 0 Failed Tests >> Mach5 Tasks Results Summary >> >> PASSED: 62 >> KILLED: 0 >> FAILED: 0 >> UNABLE_TO_RUN: 11 >> EXECUTED_WITH_FAILURE: 2 >> NA: 0 >> Build >> >> 2 Not run >> build_jdk_linux-linux-x64-debug-linux-x64-build-1 error >> while building, return value: 2 >> build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 >> error >> while building, return value: 2 >> >> Test >> >> 11 Not run >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug- >> 24 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64- >> debug-27 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64- >> debug-30 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64- >> debug-33 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp- >> linux-x64-debug-36 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64- >> debug-45 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64- >> debug-48 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64- >> debug-51 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> See all 11... >> >> >>> On 06/12/2018 10:23 AM, Roman Kennke wrote: >>>> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>>>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>>>> Why is it better? And how would I do that? It sounds >>>>>>>>> like a fairly >>>>>>>>> complex undertaking for a special case. Notice that if >>>>>>>>> the oop doesn't >>>>>>>>> qualify as immediate operand (quite likely for an oop?) >>>>>>>>> it used to be >>>>>>>>> moved into rscratch1 anyway a few lines below. >>>>>>>> >>>>>>>> Sorry for the slow reply. I'm looking now. >>>>>>> >>>>>>> OK. The problem is that this is a very bad code smell: >>>>>>> >>>>>>> case T_ARRAY: >>>>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), >>>>>>> rscratch1); >>>>>>> __ cmpoop(reg1, rscratch1); >>>>>>> >>>>>>> I can't tell that this is correct. rscratch1 is used by >>>>>>> assembler >>>>>>> macros, and I don't know if some other GC (e.g. ZGC) might >>>>>>> need to use >>>>>>> rscratch1 inside cmpoop. The risk here is obvious. The >>>>>>> Right Thing >>>>>>> to do IMO is to generate a scratch register for pointer >>>>>>> comparisons. 
>>>>>>> >>>>>>> Unless, I guess, we know that cmpoop never ever needs a >>>>>>> scratch >>>>>>> register for any forseeable garbage collector. >>>>>>> >>>>>> >>>>>> I do know that Shenandoah does not require a tmp reg. I also >>>>>> do know >>>>>> that no other collector currently needs equals-barriers at >>>>>> all. >>>>> >>>>> So cmpoop() is literally useless. It does nothing except add a >>>>> layer >>>>> of obfuscation in the name of some possible future collector. >>>> >>>> The layer of abstraction is needed by Shenandoah. We need special >>>> handling for comparing oops. It is certainly not useless. Or are >>>> we >>>> talking about different issues? >>> >>> Ah, okay. I'm looking at >>> ShenandoahBarrierSetAssembler::obj_equals() >>> and I see that it actually has a side effect on its operands rather >>> than using scratch registers. Ewww. I get it now. >>> >>> OK, I withdraw my objection. It's very confusing code to read, but >>> it is what it is. >>> >> >> > From glaubitz at physik.fu-berlin.de Tue Jun 12 13:37:51 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 12 Jun 2018 15:37:51 +0200 Subject: RFR: 8203301: Linux-sparc fails to build after JDK-8199712 (Flight Recorder) Message-ID: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> Hello! After fixing 8203787 (Hotspot build broken on linux-sparc after 8202377), the build on linux-sparc continues to be broken as a result of JDK-8199712 (Flight Recorder). I now had a closer look at the problem and looked at the necessary changes made for PowerPC and S390 on Linux [1]. I have adopted the changes for SPARC on Linux, the resulting changeset can be found in [2]. Please review! I am still in the process of doing a test build. The Sun Fire 2000 I am using at the moment is a tad slow, so the test build isn't finished yet :(. I wanted to put the changeset up for review anyway though and I will most likely follow up with a second revision. I hope we will have faster SPARC machines in Debian available in the near future. Thanks, Adrian > [1] http://hg.openjdk.java.net/jdk/jdk/rev/dc18db671651 > [2] http://cr.openjdk.java.net/~glaubitz/8203301/webrev.00/ -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Tue Jun 12 13:49:14 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 12 Jun 2018 15:49:14 +0200 Subject: OpenJDK wiki still points to old submit-hs repository Message-ID: <9df06789-808e-1f8f-935b-783e342aaecf@physik.fu-berlin.de> Hi! Just a heads-up: The wiki page for the submit repository is still pointing to the the old submit-hs repository. This should be "submit" nowadays, "submit-hs" is read-only anyway. See: https://wiki.openjdk.java.net/display/Build/Submit+Repo Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From jesper.wilhelmsson at oracle.com Tue Jun 12 14:18:07 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 12 Jun 2018 16:18:07 +0200 Subject: OpenJDK wiki still points to old submit-hs repository In-Reply-To: <9df06789-808e-1f8f-935b-783e342aaecf@physik.fu-berlin.de> References: <9df06789-808e-1f8f-935b-783e342aaecf@physik.fu-berlin.de> Message-ID: <01D84E66-C40B-4EFD-9929-B6DCFA98E9E0@oracle.com> Paging Christian. (I don't have write access to this page.) Thanks for reporting this Adrian! /Jesper > On 12 Jun 2018, at 15:49, John Paul Adrian Glaubitz wrote: > > Hi! > > Just a heads-up: The wiki page for the submit repository is still pointing > to the the old submit-hs repository. This should be "submit" nowadays, > "submit-hs" is read-only anyway. > > See: https://wiki.openjdk.java.net/display/Build/Submit+Repo > > Adrian > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From kim.barrett at oracle.com Tue Jun 12 14:46:51 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 12 Jun 2018 10:46:51 -0400 Subject: RFR: 8204585: Remove IN_ARCHIVE_ROOT from Access API In-Reply-To: <8bb0524fa8787a9a0fe1821d0449d1f70e370cd0.camel@oracle.com> References: <328F7DDB-D6B9-4865-84CF-E014D0108257@oracle.com> <13628F9C-6DE9-40FF-B58C-C90E607A9255@oracle.com> <7BA7673E-CFD0-41D8-AC62-EBA619B56E31@oracle.com> <8bb0524fa8787a9a0fe1821d0449d1f70e370cd0.camel@oracle.com> Message-ID: <0C5BB9BF-A1D1-41D9-95BD-B27D246DDCC8@oracle.com> > On Jun 12, 2018, at 4:11 AM, Thomas Schatzl wrote: > > Hi, > > On Mon, 2018-06-11 at 18:58 -0400, Kim Barrett wrote: >>> On Jun 11, 2018, at 6:26 PM, Jiangli Zhou >>> wrote: >>> >>> Hi Kim, >>> >>> Both the changes and testing look good to me. Would it be better to >>> rename MetaspaceShared::unarchive_heap_object() to >>> MetaspaceShared::materialize_archived_object() to reflect the API >>> in G1CollectedHeap? The use of ?materialize? in the GC API looks >>> very good. Thank you for continuing improving the GC underlying >>> support! >> >> I like that suggestion. I?ll see what other folks think, but I?m >> inclined to do it. >> > > looks good. Agree with the suggested name change. > > Thanks, > Thomas Thanks. I?ll make the suggested name change. From glaubitz at physik.fu-berlin.de Tue Jun 12 15:53:25 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 12 Jun 2018 17:53:25 +0200 Subject: RFR: 8203301: Linux-sparc fails to build after JDK-8199712 (Flight Recorder) In-Reply-To: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> References: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> Message-ID: <96fb699f-5c86-0f0b-036b-a9bd2e2aa30c@physik.fu-berlin.de> Hello! On 06/12/2018 03:37 PM, John Paul Adrian Glaubitz wrote: > I now had a closer look at the problem and looked at the necessary changes > made for PowerPC and S390 on Linux [1]. I have adopted the changes for > SPARC on Linux, the resulting changeset can be found in [2]. Please review! > > I am still in the process of doing a test build. The Sun Fire 2000 I am using > at the moment is a tad slow, so the test build isn't finished yet :(. 
I wanted > to put the changeset up for review anyway though and I will most likely follow > up with a second revision. First revision had a small mistake, it used a wrong number of arguments for frame(). Second revision here [1]. Adrian > [1] http://cr.openjdk.java.net/~glaubitz/8203301/webrev.01/ -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From vladimir.kozlov at oracle.com Tue Jun 12 16:00:54 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 12 Jun 2018 09:00:54 -0700 Subject: RFR: 8203301: Linux-sparc fails to build after JDK-8199712 (Flight Recorder) In-Reply-To: <96fb699f-5c86-0f0b-036b-a9bd2e2aa30c@physik.fu-berlin.de> References: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> <96fb699f-5c86-0f0b-036b-a9bd2e2aa30c@physik.fu-berlin.de> Message-ID: <9893f6f0-8cb5-bb58-db29-34a649a9d89e@oracle.com> Looks good to me. Thanks, Vladimir On 6/12/18 8:53 AM, John Paul Adrian Glaubitz wrote: > Hello! > > On 06/12/2018 03:37 PM, John Paul Adrian Glaubitz wrote: >> I now had a closer look at the problem and looked at the necessary changes >> made for PowerPC and S390 on Linux [1]. I have adopted the changes for >> SPARC on Linux, the resulting changeset can be found in [2]. Please review! >> >> I am still in the process of doing a test build. The Sun Fire 2000 I am using >> at the moment is a tad slow, so the test build isn't finished yet :(. I wanted >> to put the changeset up for review anyway though and I will most likely follow >> up with a second revision. > > First revision had a small mistake, it used a wrong number of arguments for > frame(). Second revision here [1]. > > Adrian > >> [1] http://cr.openjdk.java.net/~glaubitz/8203301/webrev.01/ > From christian.tornqvist at oracle.com Tue Jun 12 16:57:11 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 12 Jun 2018 09:57:11 -0700 Subject: OpenJDK wiki still points to old submit-hs repository In-Reply-To: <01D84E66-C40B-4EFD-9929-B6DCFA98E9E0@oracle.com> References: <9df06789-808e-1f8f-935b-783e342aaecf@physik.fu-berlin.de> <01D84E66-C40B-4EFD-9929-B6DCFA98E9E0@oracle.com> Message-ID: <955E9C4F-DBC0-40EE-BEC0-936C3995E32C@oracle.com> I?ve updated the page so that it now correctly points to the jdk/submit repo, thanks for noticing this! Thanks, Christian > On Jun 12, 2018, at 7:18 07AM, jesper.wilhelmsson at oracle.com wrote: > > Paging Christian. > (I don't have write access to this page.) > > Thanks for reporting this Adrian! > > /Jesper > >> On 12 Jun 2018, at 15:49, John Paul Adrian Glaubitz wrote: >> >> Hi! >> >> Just a heads-up: The wiki page for the submit repository is still pointing >> to the the old submit-hs repository. This should be "submit" nowadays, >> "submit-hs" is read-only anyway. >> >> See: https://wiki.openjdk.java.net/display/Build/Submit+Repo >> >> Adrian >> >> -- >> .''`. John Paul Adrian Glaubitz >> : :' : Debian Developer - glaubitz at debian.org >> `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de >> `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 > From per.liden at oracle.com Tue Jun 12 17:59:44 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 12 Jun 2018 19:59:44 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> Message-ID: <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> Hi, Just an updated to say that we've now pushed ZGC to jdk/jdk. cheers, Per On 06/08/2018 08:20 PM, Per Liden wrote: > Hi all, > > Here are updated webrevs, which address all the feedback and comments > received. These webrevs are also rebased on today's jdk/jdk. We're > looking for any final comments people might have, and if things go well > we hope to be able to push this some time (preferably early) next week. > > These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and > tier{1,2,3} on all other Oracle supported platforms. > > ZGC Master > ? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master > > ZGC Testing > ? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing > > Thanks! > > /Per & Stefan > > > On 06/06/2018 12:48 AM, Per Liden wrote: >> Hi all, >> >> Here are updated webrevs reflecting the feedback received so far. >> >> ZGC Master >> ?? Incremental: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >> >> ZGC Testing >> ?? Incremental: >> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >> >> Thanks! >> >> /Per >> >> On 06/01/2018 11:41 PM, Per Liden wrote: >>> Hi, >>> >>> Please review the implementation of JEP 333: ZGC: A Scalable >>> Low-Latency Garbage Collector (Experimental) >>> >>> Please see the JEP for more information about the project. The JEP is >>> currently in state "Proposed to Target" for JDK 11. >>> >>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>> >>> Additional information in can also be found on the ZGC project wiki. >>> >>> https://wiki.openjdk.java.net/display/zgc/Main >>> >>> >>> Webrevs >>> ------- >>> >>> To make this easier to review, we've divided the change into two >>> webrevs. >>> >>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>> >>> ?? This patch contains the actual ZGC implementation, the new unit >>> tests and other changes needed in HotSpot. >>> >>> * ZGC Testing: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>> >>> ?? This patch contains changes to existing tests needed by ZGC. >>> >>> >>> Overview of Changes >>> ------------------- >>> >>> Below follows a list of the files we add/modify in the master patch, >>> with a short summary describing each group. >>> >>> * Build support - Making ZGC an optional feature. >>> >>> ?? make/autoconf/hotspot.m4 >>> ?? make/hotspot/lib/JvmFeatures.gmk >>> ?? src/hotspot/share/utilities/macros.hpp >>> >>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>> does not currently offer a way to easily break this out). >>> >>> ?? src/hotspot/cpu/x86/x86.ad >>> ?? 
src/hotspot/cpu/x86/x86_64.ad >>> >>> * C2 - Things that can't be easily abstracted out into ZGC specific >>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>> (UseZGC) condition. There should only be two logic changes (one in >>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>> disabled. We believe these are low risk changes and should not >>> introduce any real change i behavior when using other GCs. >>> >>> ?? src/hotspot/share/adlc/formssel.cpp >>> ?? src/hotspot/share/opto/* >>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>> >>> * General GC+Runtime - Registering ZGC as a collector. >>> >>> ?? src/hotspot/share/gc/shared/* >>> ?? src/hotspot/share/runtime/vmStructs.cpp >>> ?? src/hotspot/share/runtime/vm_operations.hpp >>> ?? src/hotspot/share/prims/whitebox.cpp >>> >>> * GC thread local data - Increasing the size of data area by 32 bytes. >>> >>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>> >>> * ZGC - The collector itself. >>> >>> ?? src/hotspot/share/gc/z/* >>> ?? src/hotspot/cpu/x86/gc/z/* >>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>> ?? test/hotspot/gtest/gc/z/* >>> >>> * JFR - Adding new event types. >>> >>> ?? src/hotspot/share/jfr/* >>> ?? src/jdk.jfr/share/conf/jfr/* >>> >>> * Logging - Adding new log tags. >>> >>> ?? src/hotspot/share/logging/* >>> >>> * Metaspace - Adding a friend declaration. >>> >>> ?? src/hotspot/share/memory/metaspace.hpp >>> >>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>> >>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>> >>> * vmSymbol - Disabled clone intrinsic for ZGC. >>> >>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>> >>> * Oop Verification - In four cases we disabled oop verification >>> because it do not makes sense or is not applicable to a GC using load >>> barriers. >>> >>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>> ?? src/hotspot/share/compiler/oopMap.cpp >>> ?? src/hotspot/share/runtime/jniHandles.cpp >>> >>> * StackValue - Apply a load barrier in case of OSR. This is a bit of >>> a hack. However, this will go away in the future, when we have the >>> next iteration of C2's load barriers in place (aka "C2 late barrier >>> insertion"). >>> >>> ?? src/hotspot/share/runtime/stackValue.cpp >>> >>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>> is changed in the future. >>> >>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>> >>> * Legal - Adding copyright/license for 3rd party hash function used >>> in ZHash. >>> >>> ?? src/java.base/share/legal/c-libutl.md >>> >>> * SA - Adding basic ZGC support. >>> >>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>> >>> >>> Testing >>> ------- >>> >>> * Unit testing >>> >>> ?? A number of new ZGC specific gtests have been added, in >>> test/hotspot/gtest/gc/z/ >>> >>> * Regression testing >>> >>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>> >>> * Stress testing >>> >>> ?? We have been continuously been running a number stress tests >>> throughout the development, these include: >>> >>> ???? specjbb2000 >>> ???? specjbb2005 >>> ???? specjbb2015 >>> ???? specjvm98 >>> ???? specjvm2008 >>> ???? dacapo2009 >>> ???? test/hotspot/jtreg/gc/stress/gcold >>> ???? test/hotspot/jtreg/gc/stress/systemgc >>> ???? test/hotspot/jtreg/gc/stress/gclocker >>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>> ???? 
test/hotspot/jtreg/gc/stress/finalizer >>> ???? Kitchensink >>> >>> >>> Thanks! >>> >>> /Per, Stefan & the ZGC team From rkennke at redhat.com Tue Jun 12 18:16:06 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 20:16:06 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> Message-ID: Woohoo! Congratulations! Cheers, Roman > Hi, > > Just an updated to say that we've now pushed ZGC to jdk/jdk. > > cheers, > Per > > On 06/08/2018 08:20 PM, Per Liden wrote: >> Hi all, >> >> Here are updated webrevs, which address all the feedback and comments >> received. These webrevs are also rebased on today's jdk/jdk. We're >> looking for any final comments people might have, and if things go >> well we hope to be able to push this some time (preferably early) next >> week. >> >> These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and >> tier{1,2,3} on all other Oracle supported platforms. >> >> ZGC Master >> ?? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master >> >> ZGC Testing >> ?? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing >> >> Thanks! >> >> /Per & Stefan >> >> >> On 06/06/2018 12:48 AM, Per Liden wrote: >>> Hi all, >>> >>> Here are updated webrevs reflecting the feedback received so far. >>> >>> ZGC Master >>> ?? Incremental: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >>> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >>> >>> ZGC Testing >>> ?? Incremental: >>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >>> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >>> >>> Thanks! >>> >>> /Per >>> >>> On 06/01/2018 11:41 PM, Per Liden wrote: >>>> Hi, >>>> >>>> Please review the implementation of JEP 333: ZGC: A Scalable >>>> Low-Latency Garbage Collector (Experimental) >>>> >>>> Please see the JEP for more information about the project. The JEP >>>> is currently in state "Proposed to Target" for JDK 11. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>> >>>> Additional information in can also be found on the ZGC project wiki. >>>> >>>> https://wiki.openjdk.java.net/display/zgc/Main >>>> >>>> >>>> Webrevs >>>> ------- >>>> >>>> To make this easier to review, we've divided the change into two >>>> webrevs. >>>> >>>> * ZGC Master: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>> >>>> ?? This patch contains the actual ZGC implementation, the new unit >>>> tests and other changes needed in HotSpot. >>>> >>>> * ZGC Testing: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> ?? This patch contains changes to existing tests needed by ZGC. >>>> >>>> >>>> Overview of Changes >>>> ------------------- >>>> >>>> Below follows a list of the files we add/modify in the master patch, >>>> with a short summary describing each group. >>>> >>>> * Build support - Making ZGC an optional feature. >>>> >>>> ?? make/autoconf/hotspot.m4 >>>> ?? make/hotspot/lib/JvmFeatures.gmk >>>> ?? src/hotspot/share/utilities/macros.hpp >>>> >>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>>> does not currently offer a way to easily break this out). >>>> >>>> ?? src/hotspot/cpu/x86/x86.ad >>>> ?? 
src/hotspot/cpu/x86/x86_64.ad >>>> >>>> * C2 - Things that can't be easily abstracted out into ZGC specific >>>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>>> (UseZGC) condition. There should only be two logic changes (one in >>>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>>> disabled. We believe these are low risk changes and should not >>>> introduce any real change i behavior when using other GCs. >>>> >>>> ?? src/hotspot/share/adlc/formssel.cpp >>>> ?? src/hotspot/share/opto/* >>>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>>> >>>> * General GC+Runtime - Registering ZGC as a collector. >>>> >>>> ?? src/hotspot/share/gc/shared/* >>>> ?? src/hotspot/share/runtime/vmStructs.cpp >>>> ?? src/hotspot/share/runtime/vm_operations.hpp >>>> ?? src/hotspot/share/prims/whitebox.cpp >>>> >>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>> >>>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>> >>>> * ZGC - The collector itself. >>>> >>>> ?? src/hotspot/share/gc/z/* >>>> ?? src/hotspot/cpu/x86/gc/z/* >>>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>>> ?? test/hotspot/gtest/gc/z/* >>>> >>>> * JFR - Adding new event types. >>>> >>>> ?? src/hotspot/share/jfr/* >>>> ?? src/jdk.jfr/share/conf/jfr/* >>>> >>>> * Logging - Adding new log tags. >>>> >>>> ?? src/hotspot/share/logging/* >>>> >>>> * Metaspace - Adding a friend declaration. >>>> >>>> ?? src/hotspot/share/memory/metaspace.hpp >>>> >>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>> >>>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>> >>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>> >>>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>>> >>>> * Oop Verification - In four cases we disabled oop verification >>>> because it do not makes sense or is not applicable to a GC using >>>> load barriers. >>>> >>>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>> ?? src/hotspot/share/compiler/oopMap.cpp >>>> ?? src/hotspot/share/runtime/jniHandles.cpp >>>> >>>> * StackValue - Apply a load barrier in case of OSR. This is a bit of >>>> a hack. However, this will go away in the future, when we have the >>>> next iteration of C2's load barriers in place (aka "C2 late barrier >>>> insertion"). >>>> >>>> ?? src/hotspot/share/runtime/stackValue.cpp >>>> >>>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>>> is changed in the future. >>>> >>>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>>> >>>> * Legal - Adding copyright/license for 3rd party hash function used >>>> in ZHash. >>>> >>>> ?? src/java.base/share/legal/c-libutl.md >>>> >>>> * SA - Adding basic ZGC support. >>>> >>>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>> >>>> >>>> Testing >>>> ------- >>>> >>>> * Unit testing >>>> >>>> ?? A number of new ZGC specific gtests have been added, in >>>> test/hotspot/gtest/gc/z/ >>>> >>>> * Regression testing >>>> >>>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>> >>>> * Stress testing >>>> >>>> ?? We have been continuously been running a number stress tests >>>> throughout the development, these include: >>>> >>>> ???? specjbb2000 >>>> ???? specjbb2005 >>>> ???? specjbb2015 >>>> ???? specjvm98 >>>> ???? specjvm2008 >>>> ???? dacapo2009 >>>> ???? test/hotspot/jtreg/gc/stress/gcold >>>> ???? test/hotspot/jtreg/gc/stress/systemgc >>>> ???? 
test/hotspot/jtreg/gc/stress/gclocker >>>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>>> ???? test/hotspot/jtreg/gc/stress/finalizer >>>> ???? Kitchensink >>>> >>>> >>>> Thanks! >>>> >>>> /Per, Stefan & the ZGC team From jesper.wilhelmsson at oracle.com Tue Jun 12 18:21:43 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 12 Jun 2018 11:21:43 -0700 (PDT) Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> Message-ID: <00C3FB19-82E4-4121-ABAC-4AC86F90FB99@oracle.com> Awesome work!! Congratulations to this huge achievement! /Jesper > On 12 Jun 2018, at 19:59, Per Liden wrote: > > Hi, > > Just an updated to say that we've now pushed ZGC to jdk/jdk. > > cheers, > Per > > On 06/08/2018 08:20 PM, Per Liden wrote: >> Hi all, >> Here are updated webrevs, which address all the feedback and comments received. These webrevs are also rebased on today's jdk/jdk. We're looking for any final comments people might have, and if things go well we hope to be able to push this some time (preferably early) next week. >> These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and tier{1,2,3} on all other Oracle supported platforms. >> ZGC Master >> http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master >> ZGC Testing >> http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing >> Thanks! >> /Per & Stefan >> On 06/06/2018 12:48 AM, Per Liden wrote: >>> Hi all, >>> >>> Here are updated webrevs reflecting the feedback received so far. >>> >>> ZGC Master >>> Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >>> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >>> >>> ZGC Testing >>> Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >>> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >>> >>> Thanks! >>> >>> /Per >>> >>> On 06/01/2018 11:41 PM, Per Liden wrote: >>>> Hi, >>>> >>>> Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) >>>> >>>> Please see the JEP for more information about the project. The JEP is currently in state "Proposed to Target" for JDK 11. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>> >>>> Additional information in can also be found on the ZGC project wiki. >>>> >>>> https://wiki.openjdk.java.net/display/zgc/Main >>>> >>>> >>>> Webrevs >>>> ------- >>>> >>>> To make this easier to review, we've divided the change into two webrevs. >>>> >>>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>> >>>> This patch contains the actual ZGC implementation, the new unit tests and other changes needed in HotSpot. >>>> >>>> * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> This patch contains changes to existing tests needed by ZGC. >>>> >>>> >>>> Overview of Changes >>>> ------------------- >>>> >>>> Below follows a list of the files we add/modify in the master patch, with a short summary describing each group. >>>> >>>> * Build support - Making ZGC an optional feature. 
>>>> >>>> make/autoconf/hotspot.m4 >>>> make/hotspot/lib/JvmFeatures.gmk >>>> src/hotspot/share/utilities/macros.hpp >>>> >>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not currently offer a way to easily break this out). >>>> >>>> src/hotspot/cpu/x86/x86.ad >>>> src/hotspot/cpu/x86/x86_64.ad >>>> >>>> * C2 - Things that can't be easily abstracted out into ZGC specific code, most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) condition. There should only be two logic changes (one in idealKit.cpp and one in node.cpp) that are still active when ZGC is disabled. We believe these are low risk changes and should not introduce any real change i behavior when using other GCs. >>>> >>>> src/hotspot/share/adlc/formssel.cpp >>>> src/hotspot/share/opto/* >>>> src/hotspot/share/compiler/compilerDirectives.hpp >>>> >>>> * General GC+Runtime - Registering ZGC as a collector. >>>> >>>> src/hotspot/share/gc/shared/* >>>> src/hotspot/share/runtime/vmStructs.cpp >>>> src/hotspot/share/runtime/vm_operations.hpp >>>> src/hotspot/share/prims/whitebox.cpp >>>> >>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>> >>>> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>> >>>> * ZGC - The collector itself. >>>> >>>> src/hotspot/share/gc/z/* >>>> src/hotspot/cpu/x86/gc/z/* >>>> src/hotspot/os_cpu/linux_x86/gc/z/* >>>> test/hotspot/gtest/gc/z/* >>>> >>>> * JFR - Adding new event types. >>>> >>>> src/hotspot/share/jfr/* >>>> src/jdk.jfr/share/conf/jfr/* >>>> >>>> * Logging - Adding new log tags. >>>> >>>> src/hotspot/share/logging/* >>>> >>>> * Metaspace - Adding a friend declaration. >>>> >>>> src/hotspot/share/memory/metaspace.hpp >>>> >>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>> >>>> src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>> >>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>> >>>> src/hotspot/share/classfile/vmSymbols.cpp >>>> >>>> * Oop Verification - In four cases we disabled oop verification because it do not makes sense or is not applicable to a GC using load barriers. >>>> >>>> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>> src/hotspot/share/compiler/oopMap.cpp >>>> src/hotspot/share/runtime/jniHandles.cpp >>>> >>>> * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. However, this will go away in the future, when we have the next iteration of C2's load barriers in place (aka "C2 late barrier insertion"). >>>> >>>> src/hotspot/share/runtime/stackValue.cpp >>>> >>>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing is changed in the future. >>>> >>>> src/hotspot/share/prims/jvmtiTagMap.cpp >>>> >>>> * Legal - Adding copyright/license for 3rd party hash function used in ZHash. >>>> >>>> src/java.base/share/legal/c-libutl.md >>>> >>>> * SA - Adding basic ZGC support. 
>>>> >>>> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>> >>>> >>>> Testing >>>> ------- >>>> >>>> * Unit testing >>>> >>>> A number of new ZGC specific gtests have been added, in test/hotspot/gtest/gc/z/ >>>> >>>> * Regression testing >>>> >>>> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>> >>>> * Stress testing >>>> >>>> We have been continuously been running a number stress tests throughout the development, these include: >>>> >>>> specjbb2000 >>>> specjbb2005 >>>> specjbb2015 >>>> specjvm98 >>>> specjvm2008 >>>> dacapo2009 >>>> test/hotspot/jtreg/gc/stress/gcold >>>> test/hotspot/jtreg/gc/stress/systemgc >>>> test/hotspot/jtreg/gc/stress/gclocker >>>> test/hotspot/jtreg/gc/stress/gcbasher >>>> test/hotspot/jtreg/gc/stress/finalizer >>>> Kitchensink >>>> >>>> >>>> Thanks! >>>> >>>> /Per, Stefan & the ZGC team From kirk.pepperdine at gmail.com Tue Jun 12 18:46:06 2018 From: kirk.pepperdine at gmail.com (Kirk Pepperdine) Date: Tue, 12 Jun 2018 21:46:06 +0300 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> Message-ID: Well done! ? Kirk > On Jun 12, 2018, at 8:59 PM, Per Liden wrote: > > Hi, > > Just an updated to say that we've now pushed ZGC to jdk/jdk. > > cheers, > Per > > On 06/08/2018 08:20 PM, Per Liden wrote: >> Hi all, >> Here are updated webrevs, which address all the feedback and comments received. These webrevs are also rebased on today's jdk/jdk. We're looking for any final comments people might have, and if things go well we hope to be able to push this some time (preferably early) next week. >> These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and tier{1,2,3} on all other Oracle supported platforms. >> ZGC Master >> http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master >> ZGC Testing >> http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing >> Thanks! >> /Per & Stefan >> On 06/06/2018 12:48 AM, Per Liden wrote: >>> Hi all, >>> >>> Here are updated webrevs reflecting the feedback received so far. >>> >>> ZGC Master >>> Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >>> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >>> >>> ZGC Testing >>> Incremental: http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >>> Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >>> >>> Thanks! >>> >>> /Per >>> >>> On 06/01/2018 11:41 PM, Per Liden wrote: >>>> Hi, >>>> >>>> Please review the implementation of JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) >>>> >>>> Please see the JEP for more information about the project. The JEP is currently in state "Proposed to Target" for JDK 11. >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>> >>>> Additional information in can also be found on the ZGC project wiki. >>>> >>>> https://wiki.openjdk.java.net/display/zgc/Main >>>> >>>> >>>> Webrevs >>>> ------- >>>> >>>> To make this easier to review, we've divided the change into two webrevs. >>>> >>>> * ZGC Master: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>> >>>> This patch contains the actual ZGC implementation, the new unit tests and other changes needed in HotSpot. 
>>>> >>>> * ZGC Testing: http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>> >>>> This patch contains changes to existing tests needed by ZGC. >>>> >>>> >>>> Overview of Changes >>>> ------------------- >>>> >>>> Below follows a list of the files we add/modify in the master patch, with a short summary describing each group. >>>> >>>> * Build support - Making ZGC an optional feature. >>>> >>>> make/autoconf/hotspot.m4 >>>> make/hotspot/lib/JvmFeatures.gmk >>>> src/hotspot/share/utilities/macros.hpp >>>> >>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc does not currently offer a way to easily break this out). >>>> >>>> src/hotspot/cpu/x86/x86.ad >>>> src/hotspot/cpu/x86/x86_64.ad >>>> >>>> * C2 - Things that can't be easily abstracted out into ZGC specific code, most of which is guarded behind a #if INCLUDE_ZGC and/or if (UseZGC) condition. There should only be two logic changes (one in idealKit.cpp and one in node.cpp) that are still active when ZGC is disabled. We believe these are low risk changes and should not introduce any real change i behavior when using other GCs. >>>> >>>> src/hotspot/share/adlc/formssel.cpp >>>> src/hotspot/share/opto/* >>>> src/hotspot/share/compiler/compilerDirectives.hpp >>>> >>>> * General GC+Runtime - Registering ZGC as a collector. >>>> >>>> src/hotspot/share/gc/shared/* >>>> src/hotspot/share/runtime/vmStructs.cpp >>>> src/hotspot/share/runtime/vm_operations.hpp >>>> src/hotspot/share/prims/whitebox.cpp >>>> >>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>> >>>> src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>> >>>> * ZGC - The collector itself. >>>> >>>> src/hotspot/share/gc/z/* >>>> src/hotspot/cpu/x86/gc/z/* >>>> src/hotspot/os_cpu/linux_x86/gc/z/* >>>> test/hotspot/gtest/gc/z/* >>>> >>>> * JFR - Adding new event types. >>>> >>>> src/hotspot/share/jfr/* >>>> src/jdk.jfr/share/conf/jfr/* >>>> >>>> * Logging - Adding new log tags. >>>> >>>> src/hotspot/share/logging/* >>>> >>>> * Metaspace - Adding a friend declaration. >>>> >>>> src/hotspot/share/memory/metaspace.hpp >>>> >>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>> >>>> src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>> >>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>> >>>> src/hotspot/share/classfile/vmSymbols.cpp >>>> >>>> * Oop Verification - In four cases we disabled oop verification because it do not makes sense or is not applicable to a GC using load barriers. >>>> >>>> src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>> src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>> src/hotspot/share/compiler/oopMap.cpp >>>> src/hotspot/share/runtime/jniHandles.cpp >>>> >>>> * StackValue - Apply a load barrier in case of OSR. This is a bit of a hack. However, this will go away in the future, when we have the next iteration of C2's load barriers in place (aka "C2 late barrier insertion"). >>>> >>>> src/hotspot/share/runtime/stackValue.cpp >>>> >>>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing is changed in the future. >>>> >>>> src/hotspot/share/prims/jvmtiTagMap.cpp >>>> >>>> * Legal - Adding copyright/license for 3rd party hash function used in ZHash. >>>> >>>> src/java.base/share/legal/c-libutl.md >>>> >>>> * SA - Adding basic ZGC support. 
>>>> >>>> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>> >>>> >>>> Testing >>>> ------- >>>> >>>> * Unit testing >>>> >>>> A number of new ZGC specific gtests have been added, in test/hotspot/gtest/gc/z/ >>>> >>>> * Regression testing >>>> >>>> No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>> No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>> >>>> * Stress testing >>>> >>>> We have been continuously been running a number stress tests throughout the development, these include: >>>> >>>> specjbb2000 >>>> specjbb2005 >>>> specjbb2015 >>>> specjvm98 >>>> specjvm2008 >>>> dacapo2009 >>>> test/hotspot/jtreg/gc/stress/gcold >>>> test/hotspot/jtreg/gc/stress/systemgc >>>> test/hotspot/jtreg/gc/stress/gclocker >>>> test/hotspot/jtreg/gc/stress/gcbasher >>>> test/hotspot/jtreg/gc/stress/finalizer >>>> Kitchensink >>>> >>>> >>>> Thanks! >>>> >>>> /Per, Stefan & the ZGC team From rkennke at redhat.com Tue Jun 12 19:38:48 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 21:38:48 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> Message-ID: <6b5f16dd-da03-3fbd-f40f-aa9b7a274507@redhat.com> I merged in updates from default branch, but it still fails. See below. Are they any other known blockers? 
Thanks, Roman Build Details: 2018-06-12-1716282.roman.source 0 Failed Tests Mach5 Tasks Results Summary PASSED: 55 KILLED: 0 FAILED: 0 UNABLE_TO_RUN: 18 EXECUTED_WITH_FAILURE: 2 NA: 0 Build 3 Not run build_jdk_linux-linux-x64-linux-x64-build-0 Error while running 'jib configure', return value: 1 build_jdk_linux-linux-x64-debug-linux-x64-build-1 Error while running 'jib configure', return value: 1 linux-x64-install-linux-x64-build-14 Dependency task failed: mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 Test 17 Not run tier1-product-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-18 Dependency task failed: mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 tier1-product-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-21 Dependency task failed: mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 See all 17... > Hi, > > the issue is (or actually the fix) https://bugs.openjdk.java.net/brow > se/JDK-8204861 . > > Thanks, > Thomas > > On Tue, 2018-06-12 at 14:59 +0200, Roman Kennke wrote: >> jdk/submit came back with unstable (see below). Can somebody with >> access >> look what's going on? 
>> >> Thanks, Roman >> >> Build Details: 2018-06-12-1147472.roman.source >> 0 Failed Tests >> Mach5 Tasks Results Summary >> >> PASSED: 62 >> KILLED: 0 >> FAILED: 0 >> UNABLE_TO_RUN: 11 >> EXECUTED_WITH_FAILURE: 2 >> NA: 0 >> Build >> >> 2 Not run >> build_jdk_linux-linux-x64-debug-linux-x64-build-1 error >> while building, return value: 2 >> build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 >> error >> while building, return value: 2 >> >> Test >> >> 11 Not run >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug- >> 24 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64- >> debug-27 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64- >> debug-30 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64- >> debug-33 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp- >> linux-x64-debug-36 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64- >> debug-45 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64- >> debug-48 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64- >> debug-51 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> See all 11... >> >> >>> On 06/12/2018 10:23 AM, Roman Kennke wrote: >>>> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>>>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>>>> Why is it better? And how would I do that? It sounds >>>>>>>>> like a fairly >>>>>>>>> complex undertaking for a special case. Notice that if >>>>>>>>> the oop doesn't >>>>>>>>> qualify as immediate operand (quite likely for an oop?) >>>>>>>>> it used to be >>>>>>>>> moved into rscratch1 anyway a few lines below. >>>>>>>> >>>>>>>> Sorry for the slow reply. I'm looking now. >>>>>>> >>>>>>> OK. The problem is that this is a very bad code smell: >>>>>>> >>>>>>> case T_ARRAY: >>>>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), >>>>>>> rscratch1); >>>>>>> __ cmpoop(reg1, rscratch1); >>>>>>> >>>>>>> I can't tell that this is correct. rscratch1 is used by >>>>>>> assembler >>>>>>> macros, and I don't know if some other GC (e.g. ZGC) might >>>>>>> need to use >>>>>>> rscratch1 inside cmpoop. The risk here is obvious. The >>>>>>> Right Thing >>>>>>> to do IMO is to generate a scratch register for pointer >>>>>>> comparisons. 
>>>>>>> >>>>>>> Unless, I guess, we know that cmpoop never ever needs a >>>>>>> scratch >>>>>>> register for any forseeable garbage collector. >>>>>>> >>>>>> >>>>>> I do know that Shenandoah does not require a tmp reg. I also >>>>>> do know >>>>>> that no other collector currently needs equals-barriers at >>>>>> all. >>>>> >>>>> So cmpoop() is literally useless. It does nothing except add a >>>>> layer >>>>> of obfuscation in the name of some possible future collector. >>>> >>>> The layer of abstraction is needed by Shenandoah. We need special >>>> handling for comparing oops. It is certainly not useless. Or are >>>> we >>>> talking about different issues? >>> >>> Ah, okay. I'm looking at >>> ShenandoahBarrierSetAssembler::obj_equals() >>> and I see that it actually has a side effect on its operands rather >>> than using scratch registers. Ewww. I get it now. >>> >>> OK, I withdraw my objection. It's very confusing code to read, but >>> it is what it is. >>> >> >> > From jesper.wilhelmsson at oracle.com Tue Jun 12 20:07:30 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 12 Jun 2018 22:07:30 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: <6b5f16dd-da03-3fbd-f40f-aa9b7a274507@redhat.com> References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> <6b5f16dd-da03-3fbd-f40f-aa9b7a274507@redhat.com> Message-ID: Did you bring in ZGC into your branch? After the integration, the infrastructure expects it to be there. This is not specific to the ZGC integration, but an infrastructure issue with any change that contains changes to both open code and Oracle internal parts. /Jesper > On 12 Jun 2018, at 21:38, Roman Kennke wrote: > > I merged in updates from default branch, but it still fails. See below. > > Are they any other known blockers? 
> > Thanks, Roman > > > > Build Details: 2018-06-12-1716282.roman.source > 0 Failed Tests > Mach5 Tasks Results Summary > > PASSED: 55 > KILLED: 0 > FAILED: 0 > UNABLE_TO_RUN: 18 > EXECUTED_WITH_FAILURE: 2 > NA: 0 > Build > > 3 Not run > build_jdk_linux-linux-x64-linux-x64-build-0 Error while > running 'jib configure', return value: 1 > build_jdk_linux-linux-x64-debug-linux-x64-build-1 Error > while running 'jib configure', return value: 1 > linux-x64-install-linux-x64-build-14 Dependency task failed: > mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 > > Test > > 17 Not run > > tier1-product-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-18 > Dependency task failed: > mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > > tier1-product-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-21 > Dependency task failed: > mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 > See all 17... > > >> Hi, >> >> the issue is (or actually the fix) https://bugs.openjdk.java.net/brow >> se/JDK-8204861 . >> >> Thanks, >> Thomas >> >> On Tue, 2018-06-12 at 14:59 +0200, Roman Kennke wrote: >>> jdk/submit came back with unstable (see below). Can somebody with >>> access >>> look what's going on? 
>>> >>> Thanks, Roman >>> >>> Build Details: 2018-06-12-1147472.roman.source >>> 0 Failed Tests >>> Mach5 Tasks Results Summary >>> >>> PASSED: 62 >>> KILLED: 0 >>> FAILED: 0 >>> UNABLE_TO_RUN: 11 >>> EXECUTED_WITH_FAILURE: 2 >>> NA: 0 >>> Build >>> >>> 2 Not run >>> build_jdk_linux-linux-x64-debug-linux-x64-build-1 error >>> while building, return value: 2 >>> build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 >>> error >>> while building, return value: 2 >>> >>> Test >>> >>> 11 Not run >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug- >>> 24 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64- >>> debug-27 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64- >>> debug-30 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64- >>> debug-33 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp- >>> linux-x64-debug-36 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64- >>> debug-45 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64- >>> debug-48 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64- >>> debug-51 >>> Dependency task failed: >>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>> See all 11... >>> >>> >>>> On 06/12/2018 10:23 AM, Roman Kennke wrote: >>>>> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>>>>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>>>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>>>>> Why is it better? And how would I do that? It sounds >>>>>>>>>> like a fairly >>>>>>>>>> complex undertaking for a special case. Notice that if >>>>>>>>>> the oop doesn't >>>>>>>>>> qualify as immediate operand (quite likely for an oop?) >>>>>>>>>> it used to be >>>>>>>>>> moved into rscratch1 anyway a few lines below. >>>>>>>>> >>>>>>>>> Sorry for the slow reply. I'm looking now. >>>>>>>> >>>>>>>> OK. The problem is that this is a very bad code smell: >>>>>>>> >>>>>>>> case T_ARRAY: >>>>>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), >>>>>>>> rscratch1); >>>>>>>> __ cmpoop(reg1, rscratch1); >>>>>>>> >>>>>>>> I can't tell that this is correct. rscratch1 is used by >>>>>>>> assembler >>>>>>>> macros, and I don't know if some other GC (e.g. ZGC) might >>>>>>>> need to use >>>>>>>> rscratch1 inside cmpoop. The risk here is obvious. The >>>>>>>> Right Thing >>>>>>>> to do IMO is to generate a scratch register for pointer >>>>>>>> comparisons. 
>>>>>>>> >>>>>>>> Unless, I guess, we know that cmpoop never ever needs a >>>>>>>> scratch >>>>>>>> register for any forseeable garbage collector. >>>>>>>> >>>>>>> >>>>>>> I do know that Shenandoah does not require a tmp reg. I also >>>>>>> do know >>>>>>> that no other collector currently needs equals-barriers at >>>>>>> all. >>>>>> >>>>>> So cmpoop() is literally useless. It does nothing except add a >>>>>> layer >>>>>> of obfuscation in the name of some possible future collector. >>>>> >>>>> The layer of abstraction is needed by Shenandoah. We need special >>>>> handling for comparing oops. It is certainly not useless. Or are >>>>> we >>>>> talking about different issues? >>>> >>>> Ah, okay. I'm looking at >>>> ShenandoahBarrierSetAssembler::obj_equals() >>>> and I see that it actually has a side effect on its operands rather >>>> than using scratch registers. Ewww. I get it now. >>>> >>>> OK, I withdraw my objection. It's very confusing code to read, but >>>> it is what it is. >>>> >>> >>> >> > > From rkennke at redhat.com Tue Jun 12 20:16:53 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 12 Jun 2018 22:16:53 +0200 Subject: RFR: JDK-8203157: Object equals abstraction for BarrierSetAssembler In-Reply-To: References: <61fcbf38-ecf2-3d19-9f96-5331a1fb65ff@redhat.com> <5B1532C9.4070206@oracle.com> <222894ba-13c8-d204-5a8d-cc417a9517ae@redhat.com> <5B16A59D.7020609@oracle.com> <5B16A8C5.7040005@oracle.com> <6a6c8bfb-515f-1bfd-c90f-b41d8e2fd599@redhat.com> <48b8839a-8af7-e64c-a9c6-e9f67ea099e4@redhat.com> <5e67f600-dc99-4409-6c6d-c09d3786b215@redhat.com> <38797d40-8a06-dc91-7d52-cc9e51222683@redhat.com> <2a983bca-a0b6-d004-a7fa-16ccd9d7f809@redhat.com> <4c3d8867-ce8e-76dc-5d98-a751a0924fff@redhat.com> <127ae611-814c-1910-0d16-8c548963aa20@redhat.com> <5425d9a3-b8de-f8e9-c5e0-7fc8b462dd8e@redhat.com> <6b5f16dd-da03-3fbd-f40f-aa9b7a274507@redhat.com> Message-ID: <0054d54c-1be4-75d1-5710-290d8ad5d2ad@redhat.com> Aha. I merged from latest default branch just now and pushed again. Hopefully this cleans it up. Thanks! Roman > Did you bring in ZGC into your branch? After the integration, the infrastructure expects it to be there. This is not specific to the ZGC integration, but an infrastructure issue with any change that contains changes to both open code and Oracle internal parts. > /Jesper > > >> On 12 Jun 2018, at 21:38, Roman Kennke wrote: >> >> I merged in updates from default branch, but it still fails. See below. >> >> Are they any other known blockers? 
>> >> Thanks, Roman >> >> >> >> Build Details: 2018-06-12-1716282.roman.source >> 0 Failed Tests >> Mach5 Tasks Results Summary >> >> PASSED: 55 >> KILLED: 0 >> FAILED: 0 >> UNABLE_TO_RUN: 18 >> EXECUTED_WITH_FAILURE: 2 >> NA: 0 >> Build >> >> 3 Not run >> build_jdk_linux-linux-x64-linux-x64-build-0 Error while >> running 'jib configure', return value: 1 >> build_jdk_linux-linux-x64-debug-linux-x64-build-1 Error >> while running 'jib configure', return value: 1 >> linux-x64-install-linux-x64-build-14 Dependency task failed: >> mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 >> >> Test >> >> 17 Not run >> >> tier1-product-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-18 >> Dependency task failed: >> mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> >> tier1-product-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-21 >> Dependency task failed: >> mach5...5-build_jdk_linux-linux-x64-linux-x64-build-0 >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 >> Dependency task failed: >> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >> See all 17... >> >> >>> Hi, >>> >>> the issue is (or actually the fix) https://bugs.openjdk.java.net/brow >>> se/JDK-8204861 . >>> >>> Thanks, >>> Thomas >>> >>> On Tue, 2018-06-12 at 14:59 +0200, Roman Kennke wrote: >>>> jdk/submit came back with unstable (see below). Can somebody with >>>> access >>>> look what's going on? 
>>>> >>>> Thanks, Roman >>>> >>>> Build Details: 2018-06-12-1147472.roman.source >>>> 0 Failed Tests >>>> Mach5 Tasks Results Summary >>>> >>>> PASSED: 62 >>>> KILLED: 0 >>>> FAILED: 0 >>>> UNABLE_TO_RUN: 11 >>>> EXECUTED_WITH_FAILURE: 2 >>>> NA: 0 >>>> Build >>>> >>>> 2 Not run >>>> build_jdk_linux-linux-x64-debug-linux-x64-build-1 error >>>> while building, return value: 2 >>>> build_jdk_linux-linux-x64-open-debug-linux-x64-build-3 >>>> error >>>> while building, return value: 2 >>>> >>>> Test >>>> >>>> 11 Not run >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug- >>>> 24 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64- >>>> debug-27 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64- >>>> debug-30 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64- >>>> debug-33 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp- >>>> linux-x64-debug-36 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64- >>>> debug-45 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcold-linux-x64- >>>> debug-48 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> >>>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_runtime-linux-x64- >>>> debug-51 >>>> Dependency task failed: >>>> mach5...d_jdk_linux-linux-x64-debug-linux-x64-build-1 >>>> See all 11... >>>> >>>> >>>>> On 06/12/2018 10:23 AM, Roman Kennke wrote: >>>>>> Am 12.06.2018 um 11:11 schrieb Andrew Haley: >>>>>>> On 06/11/2018 08:17 PM, Roman Kennke wrote: >>>>>>>> Am 11.06.2018 um 19:11 schrieb Andrew Haley: >>>>>>>>> On 06/11/2018 04:56 PM, Andrew Haley wrote: >>>>>>>>>> On 06/08/2018 09:17 PM, Roman Kennke wrote: >>>>>>>>>>> Why is it better? And how would I do that? It sounds >>>>>>>>>>> like a fairly >>>>>>>>>>> complex undertaking for a special case. Notice that if >>>>>>>>>>> the oop doesn't >>>>>>>>>>> qualify as immediate operand (quite likely for an oop?) >>>>>>>>>>> it used to be >>>>>>>>>>> moved into rscratch1 anyway a few lines below. >>>>>>>>>> >>>>>>>>>> Sorry for the slow reply. I'm looking now. >>>>>>>>> >>>>>>>>> OK. The problem is that this is a very bad code smell: >>>>>>>>> >>>>>>>>> case T_ARRAY: >>>>>>>>> jobject2reg(opr2->as_constant_ptr()->as_jobject(), >>>>>>>>> rscratch1); >>>>>>>>> __ cmpoop(reg1, rscratch1); >>>>>>>>> >>>>>>>>> I can't tell that this is correct. rscratch1 is used by >>>>>>>>> assembler >>>>>>>>> macros, and I don't know if some other GC (e.g. ZGC) might >>>>>>>>> need to use >>>>>>>>> rscratch1 inside cmpoop. The risk here is obvious. 
The >>>>>>>>> Right Thing >>>>>>>>> to do IMO is to generate a scratch register for pointer >>>>>>>>> comparisons. >>>>>>>>> >>>>>>>>> Unless, I guess, we know that cmpoop never ever needs a >>>>>>>>> scratch >>>>>>>>> register for any forseeable garbage collector. >>>>>>>>> >>>>>>>> >>>>>>>> I do know that Shenandoah does not require a tmp reg. I also >>>>>>>> do know >>>>>>>> that no other collector currently needs equals-barriers at >>>>>>>> all. >>>>>>> >>>>>>> So cmpoop() is literally useless. It does nothing except add a >>>>>>> layer >>>>>>> of obfuscation in the name of some possible future collector. >>>>>> >>>>>> The layer of abstraction is needed by Shenandoah. We need special >>>>>> handling for comparing oops. It is certainly not useless. Or are >>>>>> we >>>>>> talking about different issues? >>>>> >>>>> Ah, okay. I'm looking at >>>>> ShenandoahBarrierSetAssembler::obj_equals() >>>>> and I see that it actually has a side effect on its operands rather >>>>> than using scratch registers. Ewww. I get it now. >>>>> >>>>> OK, I withdraw my objection. It's very confusing code to read, but >>>>> it is what it is. >>>>> >>>> >>>> >>> >> >> > From zgu at redhat.com Tue Jun 12 20:31:44 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Tue, 12 Jun 2018 16:31:44 -0400 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> References: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> Message-ID: Hi Thomas, Thanks for the reviewing. On 06/12/2018 05:28 AM, Thomas Schatzl wrote: > > This would very likely decrease the amount of changes in the important > change, the refactoring, significantly. > > Now everything is shown as "new" in diff tools, and we reviewers need > to go through everything. It seems a bit of a stretch to call this "M" > with 1800 lines of changed lines, both on the raw number of changes and > the review complexity. I reshuffled some moved files, seems following updated webrev to have better diff. http://cr.openjdk.java.net/~zgu/8203641/webrev.01/index.html > > Please, in the future use two CRs or provide two webrevs in one review > in such a case. This would make the reviewing a lot less work for > reviewers and turnaround a lot faster. Will do > > Some initial comments anyway: > > - the change may not apply cleanly any more, sorry for the delay. At > least it complains about "src/hotspot/share/gc/g1/g1StringDedup.[hc]pp > is not empty after patch; not deleting". > Maybe it is a limitation of the "patch" tool that incorrectly prints > this. It builds though. > Probably you first moved the files and then recreated them? Yes, it is due to reuse the same filenames. > > - I am not sure why g1StringDedup.hpp still contains a general > description of the mechanism at the top; that should probably move to > the shared files. > Also it duplicates the "Candidate selection" paragraphs apparently. > Please avoid comment duplication. Fixed. > > - the comment on G1StringDedupQueue does not describe the queue at all > but seems to be some random implementation detail. > Maybe put all the G1 specific considerations in g1StringDedup.hpp - and > only these? > > (I saw that stringDedup.hpp refers to "gc specific" files, which is > fine) You mean StringDedupQueue, right? I moved implementation related comments into g1StringDedup queue, while kept general mechanism in stringDedupQueue, ok? 
> > - generally, if a definition of a method in a base class is commented, > describing its contract, it is not necessary to duplicate it in the > overriding methods. > That just makes it prone to getting out of date. > Fixed > - maybe instead of a "queue_" prefix for the protected > G1StringDedupQueue methods, use "_impl" as elsewhere. Fixed. > > I am not sure that keeping the interface related to string > deduplication all static and then using instance variables behind the > scenes makes it easily readable. > > Making everything static has to me been an implementation choice > because there has only been one user (G1) before. I kept it this way to minimize changes in G1, especially outside of the string deduplication code. > > I will need to bring this up with others in the (Oracle-)team to hear what they > think about this. Probably it's okay to keep this, and this could be > done at another time. Please let me know what your decision is, or file an RFE for future cleanup. > > - in stringDedupStat.cpp remove commented out remnants of generational > statistics (line 121+,152+) Done. > > - some copyright years need to be updated I guess. > Done. > - in StringDedupThread::do_deduplication the template parameter changes > from "S" (in the definition) to "STAT". Not sure why; also we do not > tend to use all-caps type names. Fixed. Also, fixed the bug that caused the crashes you mentioned. Ran runtime_gc tests with -XX:+UseStringDeduplication on Linux x64 (fastdebug | release). Thanks, -Zhengyu > > I will run it through our testing infra with/without string dedup and > then look through it some more. > > Thanks, > Thomas > > From Derek.White at cavium.com Tue Jun 12 21:56:20 2018 From: Derek.White at cavium.com (White, Derek) Date: Tue, 12 Jun 2018 21:56:20 +0000 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: Hi Swati, Gustavo, I'm not the best qualified to review the change - I just reported the issue as a JDK bug! I'd be happy to test a fix but I'm having trouble following the patch. Did Gustavo post a patch to your patch, or is that a full independent patch? Also, if you or Gustavo have permissions to post a webrev to http://cr.openjdk.java.net/ that would make reviewing a little easier. I'd be happy to post a webrev for you if not. http://openjdk.java.net/guide/codeReview.html * Derek From: Swati Sharma [mailto:swatibits14 at gmail.com] Sent: Monday, June 11, 2018 6:01 AM To: Gustavo Romero Cc: White, Derek ; hotspot-dev at openjdk.java.net; zgu at redhat.com; David Holmes ; Prakash.Raghavendra at amd.com; Prasad.Vishwanath at amd.com Subject: Re: UseNUMA membind Issue in openJDK Hi Gustavo, Maybe you can remove the method "numa_bitmask_nbytes" as it's not getting used. I am OK with the changes; if Derek confirms, we can go ahead. My name is there on the page "Swati Sharma - OpenJDK", I have already signed the OCA on an individual basis. Thanks, Swati On Sat, Jun 9, 2018 at 5:06 AM, Gustavo Romero > wrote: Hi Swati, Sorry, as usual I had to reserve a machine before trying it. I wanted to test it against a POWER9 with a NVIDIA Tesla V100 device attached.
On such a machines numa nodes are quite sparse so I thought it would not be bad to check against them: available: 8 nodes (0,8,250-255) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 node 0 size: 261693 MB node 0 free: 233982 MB node 8 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 node 8 size: 261748 MB node 8 free: 257078 MB node 250 cpus: node 250 size: 0 MB node 250 free: 0 MB node 251 cpus: node 251 size: 0 MB node 251 free: 0 MB node 252 cpus: node 252 size: 15360 MB node 252 free: 15360 MB node 253 cpus: node 253 size: 0 MB node 253 free: 0 MB node 254 cpus: node 254 size: 0 MB node 254 free: 0 MB node 255 cpus: node 255 size: 15360 MB node 255 free: 15360 MB node distances: node 0 8 250 251 252 253 254 255 0: 10 40 80 80 80 80 80 80 8: 40 10 80 80 80 80 80 80 250: 80 80 10 80 80 80 80 80 251: 80 80 80 10 80 80 80 80 252: 80 80 80 80 10 80 80 80 253: 80 80 80 80 80 10 80 80 254: 80 80 80 80 80 80 10 80 255: 80 80 80 80 80 80 80 10 Please, find my comments below, inlined. On 06/01/2018 08:10 AM, Swati Sharma wrote: I will fix the thread binding issue in a separate patch. I would like to address it in this change. I think it's not good to leave such a "dangling" behavior for the cpus once the memory bind issue is addressed. I suggest the following simple check to fix it (in accordance to what we've discussed previously, i.e. remap cpu/node considering configuration, bind, and distance in rebuild_cpu_to_node_map(): - if (!isnode_in_configured_nodes(nindex_to_node()->at(i))) { + if (!isnode_in_configured_nodes(nindex_to_node()->at(i)) || + !isnode_in_bound_nodes(nindex_to_node()->at(i))) { closest_distance = INT_MAX; ... for (size_t m = 0; m < node_num; m++) { - if (m != i && isnode_in_configured_nodes(nindex_to_node()->at(m))) { + if (m != i && + isnode_in_configured_nodes(nindex_to_node()->at(m)) && + isnode_in_bound_nodes(nindex_to_node()->at(m))) { I tested it against the aforementioned topology and against the following one: available: 4 nodes (0-3) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 node 0 size: 55685 MB node 0 free: 53196 MB node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 node 1 size: 53961 MB node 1 free: 49795 MB node 2 cpus: node 2 size: 21231 MB node 2 free: 21171 MB node 3 cpus: node 3 size: 22492 MB node 3 free: 22432 MB node distances: node 0 1 2 3 0: 10 20 40 40 1: 20 10 40 40 2: 40 40 10 20 3: 40 40 20 10 Updated the previous patch by removing the structure and using the methods provided by numa API.Here is the updated one with the changes(attached also). Thanks. ========================PATCH========================= diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp --- a/src/hotspot/os/linux/os_linux.cpp +++ b/src/hotspot/os/linux/os_linux.cpp ... @@ -4962,8 +4972,9 @@ if (!Linux::libnuma_init()) { UseNUMA = false; } else { - if ((Linux::numa_max_node() < 1)) { - // There's only one node(they start from 0), disable NUMA. + if ((Linux::numa_max_node() < 1) || Linux::isbound_to_single_node()) { + // If there's only one node(they start from 0) or if the process ^ let's fix this missing space ... 
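A standalone sketch (hypothetical file name, not part of the patch) makes the two bounds easy to compare on any Linux box; it uses only the libnuma calls already mentioned: numa_get_membind(), numa_bitmask_nbytes(), numa_max_node() and numa_bitmask_isbitset().

  // membind_check.cpp - hypothetical standalone example, not part of the patch.
  // Build: g++ membind_check.cpp -lnuma
  #include <numa.h>
  #include <stdio.h>

  int main() {
    if (numa_available() == -1) {
      printf("libnuma not available on this system\n");
      return 1;
    }
    struct bitmask* bmp = numa_get_membind();

    // Bound derived from the mask storage size: typically 512 or 1024 bits,
    // far larger than the number of nodes that actually exist.
    unsigned int bits_in_mask = numa_bitmask_nbytes(bmp) * 8;

    // Bound derived from the highest node number known to the system.
    int highest_node = numa_max_node();

    int bound_nodes = 0;
    for (int node = 0; node <= highest_node; node++) {
      if (numa_bitmask_isbitset(bmp, node)) {
        bound_nodes++;
      }
    }
    printf("mask holds %u bits, highest node is %d, %d node(s) bound\n",
           bits_in_mask, highest_node, bound_nodes);
    printf("bound to a single node: %s\n", bound_nodes == 1 ? "yes" : "no");
    return 0;
  }

Running it under different numactl --membind=... settings shows the bound-node count changing while the mask size stays fixed.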
+ // Check if bound to only one numa node. + // Returns true if bound to a single numa node, otherwise returns false. + static bool isbound_to_single_node() { + int single_node = 0; + struct bitmask* bmp = NULL; + unsigned int node = 0; + unsigned int max_number_of_nodes = 0; + if (_numa_get_membind != NULL && _numa_bitmask_nbytes != NULL) { + bmp = _numa_get_membind(); + max_number_of_nodes = _numa_bitmask_nbytes(bmp) * 8; + } else { + return false; + } + for (node = 0; node < max_number_of_nodes; node++) { + if (_numa_bitmask_isbitset(bmp, node)) { + single_node++; + if (single_node == 2) { + return false; + } + } + } + if (single_node == 1) { + return true; + } else { + return false; + } + } Now that numa_bitmask_isbitset() is being used (instead of the previous version that iterated through an array of longs, I suggest to tweak it a bit, removing the if (single_node == 2) check. I don't think removing it will hurt. In fact, numa_bitmask_nbytes() returns the total amount of bytes the bitmask can hold. However the total number of nodes in the system is usually much smaller than numa_bitmask_nbytes() * 8. So for a x86_64 system like that with only 2 numa nodes: available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 node 0 size: 131018 MB node 0 free: 101646 MB node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 node 1 size: 98304 MB node 1 free: 91692 MB node distances: node 0 1 0: 10 11 1: 11 10 numa_bitmask_nbytes(): 64 => max_number_of_node = 512 numa_max_node(): 1 => 1 + 1 iterations and the value returned by numa_bitmask_nbytes() does not change for different bind configurations. It's fixed. Another example is that on Power with 4 numa nodes: available: 4 nodes (0-1,16-17) node 0 cpus: 0 8 16 24 32 node 0 size: 130722 MB node 0 free: 71930 MB node 1 cpus: 40 48 56 64 72 node 1 size: 0 MB node 1 free: 0 MB node 16 cpus: 80 88 96 104 112 node 16 size: 130599 MB node 16 free: 75934 MB node 17 cpus: 120 128 136 144 152 node 17 size: 0 MB node 17 free: 0 MB node distances: node 0 1 16 17 0: 10 20 40 40 1: 20 10 40 40 16: 40 40 10 20 17: 40 40 20 10 numa_bitmask_nbytes(): 32 => max_number_of_node = 256 numa_max_node(): 17 => 17 + 1 iterations So I understand it's better to set the iteration over numa_max_node() instead of numa_bitmask_nbytes(). Even more for Intel (with contiguous nodes) than for Power. For the POWER9 with NVIDIA Tesla it would be a worst case: only 8 numa nodes but numa_max_node is 255! But I understand it's a very rare case and I'm fine with that. So what about: + if (_numa_get_membind != NULL && _numa_max_node != NULL) { + bmp = _numa_get_membind(); + highest_node_number = _numa_max_node(); + } else { + return false; + } + + for (node = 0; node <= highest_node_number; node++) { + if (_numa_bitmask_isbitset(bmp, node)) { + nodes++; + } + } + + if (nodes == 1) { + return true; + } else { + return false; + } For convenience, I hosted a patch with all the changes above here: http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch @Derek, could you please confirm that this change solves JDK-8189922? Swati, if Derek confirms it solves JDK-8189922? and you confirm it's fine for you I'll consider it's reviewed from my side and I can host that change for you so you can start a formal request for approval (remember I'm not a Reviewer, so you still need two additional reviews for the change). Finally, as a heads up, I could not find you (nor AMD?) 
in the OCA: http://www.oracle.com/technetwork/community/oca-486395.html#a If I'm not mistaken, you (individually) or AMD must sign it before contributing to OpenJDK. Best regards, Gustavo ======================================================= Swati On Tue, May 29, 2018 at 6:53 PM, Gustavo Romero >> wrote: > > Hi Swati, > > On 05/29/2018 06:12 AM, Swati Sharma wrote: >> >> I have incorporated some changes suggested by you. >> >> The use of struct bitmask's maskp for checking 64 bit in single iteration >> is more optimized compared to numa_bitmask_isbitset() as by using this we >> need to check each bit for 1024 times(SUSE case) and 64 times(Ubuntu Case). >> If its fine to iterate at initialization time then I can change. > > > Yes, I know, your version is more optimized. libnuma API should provide a > ready-made solution for that... but that's another story. I'm curious to know > what the time difference is on the worst case for both ways tho. Anyway, I > just would like to point out that, regardless performance, it's possible to > achieve the same result with current libnuma API. > > >> For the answer to your question: >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? >> It can be checked based on numa_distance instead of picking up the lgrps randomly. > > > That seems a good solution. You can do the checking very early, so > lgrp_spaces()->find() does not even fail (return -1), i.e. by changing the CPU to > node mapping on initialization (avoiding to change cas_allocate()). On that checking > both numa distance and if the node is bound (or not) would be considered to generate > the map. > > > Best regards, > Gustavo > >> Thanks, >> Swati >> >> >> >> On Fri, May 25, 2018 at 4:54 AM, Gustavo Romero > >>> wrote: >> >> Hi Swati, >> >> >> Thanks for CC:ing me. Sorry for the delay replying it, I had to reserve a few >> specific machines before trying your patch :-) >> >> I think that UseNUMA's original task was to figure out the best binding >> setup for the JVM automatically but I understand that it also has to be aware >> that sometimes, for some (new) particular reasons, its binding task is >> "modulated" by other external agents. Thanks for proposing a fix. >> >> I have just a question/concern on the proposal: how the JVM should behave if >> CPUs are not bound in accordance to the bound memory nodes? For instance, what >> happens if no '--cpunodebind' is passed and '--membind=0,1,16' is passed at >> the same time on this numa topology: >> >> brianh at p215n12:~$ numactl -H >> available: 4 nodes (0-1,16-17) >> node 0 cpus: 0 1 2 3 8 9 10 11 16 17 18 19 24 25 26 27 32 33 34 35 >> node 0 size: 65342 MB >> node 0 free: 56902 MB >> node 1 cpus: 40 41 42 43 48 49 50 51 56 57 58 59 64 65 66 67 72 73 74 75 >> node 1 size: 65447 MB >> node 1 free: 58322 MB >> node 16 cpus: 80 81 82 83 88 89 90 91 96 97 98 99 104 105 106 107 112 113 114 115 >> node 16 size: 65448 MB >> node 16 free: 63096 MB >> node 17 cpus: 120 121 122 123 128 129 130 131 136 137 138 139 144 145 146 147 152 153 154 155 >> node 17 size: 65175 MB >> node 17 free: 61522 MB >> node distances: >> node 0 1 16 17 >> 0: 10 20 40 40 >> 1: 20 10 40 40 >> 16: 40 40 10 20 >> 17: 40 40 20 10 >> >> >> In that case JVM will spawn threads that will run on all CPUs, including those >> CPUs in numa node 17. Then once in >> src/hotspot/share/gc/parallel/mutableNUMASpace.cpp, in cas_allocate(): >> >> 834 // This version is lock-free. 
>> 835 HeapWord* MutableNUMASpace::cas_allocate(size_t size) { >> 836 Thread* thr = Thread::current(); >> 837 int lgrp_id = thr->lgrp_id(); >> 838 if (lgrp_id == -1 || !os::numa_has_group_homing()) { >> 839 lgrp_id = os::numa_get_group_id(); >> 840 thr->set_lgrp_id(lgrp_id); >> 841 } >> >> a newly created thread will try to be mapped to a numa node given your CPU ID. >> So if that CPU is in numa node 17 it will then not find it in: >> >> 843 int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals); >> >> and will fallback to a random map, picking up a random numa node among nodes >> 0, 1, and 16: >> >> 846 if (i == -1) { >> 847 i = os::random() % lgrp_spaces()->length(); >> 848 } >> >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? >> >> I see that if one binds mem but leaves CPU unbound one has to know exactly what >> she/he is doing, because it can be likely suboptimal. On the other hand, letting >> the node being picked up randomly when there are memory nodes bound but no CPUs >> seems even more suboptimal in some scenarios. Thus, should the JVM deal with it? >> >> @Zhengyu, do you have any opinion on that? >> >> Please find a few nits / comments inline. >> >> Note that I'm not a (R)eviewer so you still need two official reviews. >> >> >> Best regards, >> Gustavo >> >> On 05/21/2018 01:44 PM, Swati Sharma wrote: >> >> ======================PATCH============================== >> diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp >> --- a/src/hotspot/os/linux/os_linux.cpp >> +++ b/src/hotspot/os/linux/os_linux.cpp >> @@ -2832,14 +2832,42 @@ >> // Map all node ids in which is possible to allocate memory. Also nodes are >> // not always consecutively available, i.e. available from 0 to the highest >> // node number. >> + // If the nodes have been bound explicitly using numactl membind, then >> + // allocate memory from those nodes only. >> >> >> I think ok to place that comment on the same existing line, like: >> >> - // node number. >> + // node number. If the nodes have been bound explicitly using numactl membind, >> + // then allocate memory from these nodes only. >> >> >> for (size_t node = 0; node <= highest_node_number; node++) { >> - if (Linux::isnode_in_configured_nodes(node)) { >> + if (Linux::isnode_in_bounded_nodes(node)) { >> >> ---------------------------------^ s/bounded/bound/ >> >> >> ids[i++] = node; >> } >> } >> return i; >> } >> +extern "C" struct bitmask { >> + unsigned long size; /* number of bits in the map */ >> + unsigned long *maskp; >> +}; >> >> >> I think it's possible to move the function below to os_linux.hpp with its >> friends and cope with the forward declaration of 'struct bitmask*` by using the >> functions from numa API, notably numa_bitmask_nbytes() and >> numa_bitmask_isbitset() only, avoiding the member dereferecing issue and the >> need to add the above struct explicitly. >> >> >> +// Check if single memory node bound. >> +// Returns true if single memory node bound. >> >> >> I suggest a minuscule improvement, something like: >> >> +// Check if bound to only one numa node. >> +// Returns true if bound to a single numa node, otherwise returns false. >> >> >> +bool os::Linux::issingle_node_bound() { >> >> >> What about s/issingle_node_bound/isbound_to_single_node/ ? >> >> >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; >> + if(!(bmp != NULL && bmp->maskp != NULL)) return false; >> >> -----^ >> Are you sure this checking is necessary? 
I think if numa_get_membind succeed >> bmp->maskp is always != NULL. >> >> Indentation here is odd. No space before 'if' and return on the same line. >> >> I would try to avoid lines over 80 chars. >> >> >> + int issingle = 0; >> + // System can have more than 64 nodes so check in all the elements of >> + // unsigned long array >> + for (unsigned long i = 0; i < (bmp->size / (8 * sizeof(unsigned long))); i++) { >> + if (bmp->maskp[i] == 0) { >> + continue; >> + } else if ((bmp->maskp[i] & (bmp->maskp[i] - 1)) == 0) { >> + issingle++; >> + } else { >> + return false; >> + } >> + } >> + if (issingle == 1) >> + return true; >> + return false; >> +} >> + >> >> >> As I mentioned, I think it could be moved to os_linux.hpp instead. Also, it >> could be something like: >> >> +bool os::Linux::isbound_to_single_node(void) { >> + struct bitmask* bmp; >> + unsigned long mask; // a mask element in the mask array >> + unsigned long max_num_masks; >> + int single_node = 0; >> + >> + if (_numa_get_membind != NULL) { >> + bmp = _numa_get_membind(); >> + } else { >> + return false; >> + } >> + >> + max_num_masks = bmp->size / (8 * sizeof(unsigned long)); >> + >> + for (mask = 0; mask < max_num_masks; mask++) { >> + if (bmp->maskp[mask] != 0) { // at least one numa node in the mask >> + if (bmp->maskp[mask] & (bmp->maskp[mask] - 1) == 0) { >> + single_node++; // a single numa node in the mask >> + } else { >> + return false; >> + } >> + } >> + } >> + >> + if (single_node == 1) { >> + return true; // only a single mask with a single numa node >> + } else { >> + return false; >> + } >> +} >> >> >> bool os::get_page_info(char *start, page_info* info) { >> return false; >> } >> @@ -2930,6 +2958,10 @@ >> libnuma_dlsym(handle, "numa_bitmask_isbitset"))); >> set_numa_distance(CAST_TO_FN_PTR(numa_distance_func_t, >> libnuma_dlsym(handle, "numa_distance"))); >> + set_numa_set_membind(CAST_TO_FN_PTR(numa_set_membind_func_t, >> + libnuma_dlsym(handle, "numa_set_membind"))); >> + set_numa_get_membind(CAST_TO_FN_PTR(numa_get_membind_func_t, >> + libnuma_v2_dlsym(handle, "numa_get_membind"))); >> if (numa_available() != -1) { >> set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes")); >> @@ -3054,6 +3086,8 @@ >> os::Linux::numa_set_bind_policy_func_t os::Linux::_numa_set_bind_policy; >> os::Linux::numa_bitmask_isbitset_func_t os::Linux::_numa_bitmask_isbitset; >> os::Linux::numa_distance_func_t os::Linux::_numa_distance; >> +os::Linux::numa_set_membind_func_t os::Linux::_numa_set_membind; >> +os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind; >> unsigned long* os::Linux::_numa_all_nodes; >> struct bitmask* os::Linux::_numa_all_nodes_ptr; >> struct bitmask* os::Linux::_numa_nodes_ptr; >> @@ -4962,8 +4996,9 @@ >> if (!Linux::libnuma_init()) { >> UseNUMA = false; >> } else { >> - if ((Linux::numa_max_node() < 1)) { >> - // There's only one node(they start from 0), disable NUMA. >> + if ((Linux::numa_max_node() < 1) || Linux::issingle_node_bound()) { >> + // If there's only one node(they start from 0) or if the process >> + // is bound explicitly to a single node using membind, disable NUMA. 
>> UseNUMA = false; >> } >> } >> diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp >> --- a/src/hotspot/os/linux/os_linux.hpp >> +++ b/src/hotspot/os/linux/os_linux.hpp >> @@ -228,6 +228,8 @@ >> typedef int (*numa_tonode_memory_func_t)(void *start, size_t size, int node); >> typedef void (*numa_interleave_memory_func_t)(void *start, size_t size, unsigned long *nodemask); >> typedef void (*numa_interleave_memory_v2_func_t)(void *start, size_t size, struct bitmask* mask); >> + typedef void (*numa_set_membind_func_t)(struct bitmask *mask); >> + typedef struct bitmask* (*numa_get_membind_func_t)(void); >> typedef void (*numa_set_bind_policy_func_t)(int policy); >> typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n); >> @@ -244,6 +246,8 @@ >> static numa_set_bind_policy_func_t _numa_set_bind_policy; >> static numa_bitmask_isbitset_func_t _numa_bitmask_isbitset; >> static numa_distance_func_t _numa_distance; >> + static numa_set_membind_func_t _numa_set_membind; >> + static numa_get_membind_func_t _numa_get_membind; >> static unsigned long* _numa_all_nodes; >> static struct bitmask* _numa_all_nodes_ptr; >> static struct bitmask* _numa_nodes_ptr; >> @@ -259,6 +263,8 @@ >> static void set_numa_set_bind_policy(numa_set_bind_policy_func_t func) { _numa_set_bind_policy = func; } >> static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t func) { _numa_bitmask_isbitset = func; } >> static void set_numa_distance(numa_distance_func_t func) { _numa_distance = func; } >> + static void set_numa_set_membind(numa_set_membind_func_t func) { _numa_set_membind = func; } >> + static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; } >> static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; } >> static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } >> @@ -320,6 +326,15 @@ >> } else >> return 0; >> } >> + // Check if node in bounded nodes >> >> >> + // Check if node is in bound node set. Maybe? >> >> >> + static bool isnode_in_bounded_nodes(int node) { >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; >> + if (bmp != NULL && _numa_bitmask_isbitset != NULL && _numa_bitmask_isbitset(bmp, node)) { >> + return true; >> + } else >> + return false; >> + } >> + static bool issingle_node_bound(); >> >> >> Looks like it can be re-written like: >> >> + static bool isnode_in_bound_nodes(int node) { >> + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != NULL) { >> + return _numa_bitmask_isbitset(_numa_get_membind(), node); >> + } else { >> + return false; >> + } >> + } >> >> ? >> >> >> }; >> #endif // OS_LINUX_VM_OS_LINUX_HPP >> >> >> > From gromero at linux.vnet.ibm.com Tue Jun 12 22:25:20 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Tue, 12 Jun 2018 19:25:20 -0300 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: Hi Derek, On 06/12/2018 06:56 PM, White, Derek wrote: > Hi Swati, Gustavo, > > I?m not the best qualified to review the change ? I just reported the issue as a JDK bug! 
> > I?d be happy to test a fix but I?m having trouble following the patch. Did Gustavo post a patch to your patch, or is that a full independent patch? Yes, the idea was that you could help on testing it against JDK-8189922. Swati's initial report on this thread was accompanied with a simple way to test the issue he reported. You said it was related to bug JDK-8189922 but I can't see a simple way to test it as you reported. Besides that I assumed that you tested it on arm64, so I can't test it myself (I don't have such a hardware). Btw, if you could provide some numactl -H information I would be glad. I consider the patch I pointed out as the fourth version of Swati's original proposal, it evolved from the reviews so far: http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > Also, if you or Gustavo have permissions to post a webrev to http://cr.openjdk.java.net/ that would make reviewing a little easier. I?d be happy to post a webrev for you if not. I was planing to host the webrev after your comments, but feel free to host it. Thank you. Regards, Gustavo > http://openjdk.java.net/guide/codeReview.html > > * Derek > > *From:* Swati Sharma [mailto:swatibits14 at gmail.com] > *Sent:* Monday, June 11, 2018 6:01 AM > *To:* Gustavo Romero > *Cc:* White, Derek ; hotspot-dev at openjdk.java.net; zgu at redhat.com; David Holmes ; Prakash.Raghavendra at amd.com; Prasad.Vishwanath at amd.com > *Subject:* Re: UseNUMA membind Issue in openJDK > > Hi Gustavo, > > May be you can remove the method "numa_bitmask_nbytes" as it's not getting used. > > I am ok with the changes,If Derek confirms we can go ahead. > > My name is there on the page "Swati Sharma - OpenJDK" , I have already signed the OCA on individual basis. > > Thanks, > > Swati > > On Sat, Jun 9, 2018 at 5:06 AM, Gustavo Romero > wrote: > > Hi Swati, > > Sorry, as usual I had to reserve a machine before trying it. > > I wanted to test it against a POWER9 with a NVIDIA Tesla V100 device attached. > > On such a machines numa nodes are quite sparse so I thought it would not be bad > to check against them: > > available: 8 nodes (0,8,250-255) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 > node 0 size: 261693 MB > node 0 free: 233982 MB > node 8 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 > node 8 size: 261748 MB > node 8 free: 257078 MB > node 250 cpus: > node 250 size: 0 MB > node 250 free: 0 MB > node 251 cpus: > node 251 size: 0 MB > node 251 free: 0 MB > node 252 cpus: > node 252 size: 15360 MB > node 252 free: 15360 MB > node 253 cpus: > node 253 size: 0 MB > node 253 free: 0 MB > node 254 cpus: > node 254 size: 0 MB > node 254 free: 0 MB > node 255 cpus: > node 255 size: 15360 MB > node 255 free: 15360 MB > node distances: > node 0 8 250 251 252 253 254 255 > 0: 10 40 80 80 80 80 80 80 > 8: 40 10 80 80 80 80 80 80 > 250: 80 80 10 80 80 80 80 80 > 251: 80 80 80 10 80 80 80 80 > 252: 80 80 80 80 10 80 80 80 > 253: 80 80 80 80 80 10 80 80 > 254: 80 80 80 80 80 80 10 80 > 255: 80 80 80 80 80 80 80 10 > > > Please, find my comments below, inlined. > > On 06/01/2018 08:10 AM, Swati Sharma wrote: > > I will fix the thread binding issue in a separate patch. > > > I would like to address it in this change. 
I think it's not good to leave such a > "dangling" behavior for the cpus once the memory bind issue is addressed. > > I suggest the following simple check to fix it (in accordance to what we've > discussed previously, i.e. remap cpu/node considering configuration, bind, and > distance in rebuild_cpu_to_node_map(): > > - if (!isnode_in_configured_nodes(nindex_to_node()->at(i))) { > + if (!isnode_in_configured_nodes(nindex_to_node()->at(i)) || > + !isnode_in_bound_nodes(nindex_to_node()->at(i))) { > closest_distance = INT_MAX; > ... > for (size_t m = 0; m < node_num; m++) { > - if (m != i && isnode_in_configured_nodes(nindex_to_node()->at(m))) { > + if (m != i && > + isnode_in_configured_nodes(nindex_to_node()->at(m)) && > + isnode_in_bound_nodes(nindex_to_node()->at(m))) { > > I tested it against the aforementioned topology and against the following one: > > available: 4 nodes (0-3) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 > node 0 size: 55685 MB > node 0 free: 53196 MB > node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 > node 1 size: 53961 MB > node 1 free: 49795 MB > node 2 cpus: > node 2 size: 21231 MB > node 2 free: 21171 MB > node 3 cpus: > node 3 size: 22492 MB > node 3 free: 22432 MB > node distances: > node 0 1 2 3 > 0: 10 20 40 40 > 1: 20 10 40 40 > 2: 40 40 10 20 > 3: 40 40 20 10 > > Updated the previous patch by removing the structure and using the methods > provided by numa API.Here is the updated one with the changes(attached also). > > > Thanks. > > ========================PATCH========================= > diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp > --- a/src/hotspot/os/linux/os_linux.cpp > +++ b/src/hotspot/os/linux/os_linux.cpp > > > ... > > @@ -4962,8 +4972,9 @@ > if (!Linux::libnuma_init()) { > UseNUMA = false; > } else { > - if ((Linux::numa_max_node() < 1)) { > - // There's only one node(they start from 0), disable NUMA. > + if ((Linux::numa_max_node() < 1) || Linux::isbound_to_single_node()) { > + // If there's only one node(they start from 0) or if the process > > ^ let's fix this missing space > > ... > > + // Check if bound to only one numa node. > + // Returns true if bound to a single numa node, otherwise returns false. > + static bool isbound_to_single_node() { > + int single_node = 0; > + struct bitmask* bmp = NULL; > + unsigned int node = 0; > + unsigned int max_number_of_nodes = 0; > + if (_numa_get_membind != NULL && _numa_bitmask_nbytes != NULL) { > + bmp = _numa_get_membind(); > + max_number_of_nodes = _numa_bitmask_nbytes(bmp) * 8; > + } else { > + return false; > + } > + for (node = 0; node < max_number_of_nodes; node++) { > + if (_numa_bitmask_isbitset(bmp, node)) { > + single_node++; > + if (single_node == 2) { > + return false; > + } > + } > + } > + if (single_node == 1) { > + return true; > + } else { > + return false; > + } > + } > > Now that numa_bitmask_isbitset() is being used (instead of the previous version > that iterated through an array of longs, I suggest to tweak it a bit, removing > the if (single_node == 2) check. > > I don't think removing it will hurt. In fact, numa_bitmask_nbytes() returns the > total amount of bytes the bitmask can hold. However the total number of nodes in > the system is usually much smaller than numa_bitmask_nbytes() * 8. 
> > So for a x86_64 system like that with only 2 numa nodes: > > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 > node 0 size: 131018 MB > node 0 free: 101646 MB > node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 > node 1 size: 98304 MB > node 1 free: 91692 MB > node distances: > node 0 1 > 0: 10 11 > 1: 11 10 > > numa_bitmask_nbytes(): 64 => max_number_of_node = 512 > numa_max_node(): 1 => 1 + 1 iterations > > and the value returned by numa_bitmask_nbytes() does not change for different > bind configurations. It's fixed. Another example is that on Power with 4 numa > nodes: > > available: 4 nodes (0-1,16-17) > node 0 cpus: 0 8 16 24 32 > node 0 size: 130722 MB > node 0 free: 71930 MB > node 1 cpus: 40 48 56 64 72 > node 1 size: 0 MB > node 1 free: 0 MB > node 16 cpus: 80 88 96 104 112 > node 16 size: 130599 MB > node 16 free: 75934 MB > node 17 cpus: 120 128 136 144 152 > node 17 size: 0 MB > node 17 free: 0 MB > node distances: > node 0 1 16 17 > 0: 10 20 40 40 > 1: 20 10 40 40 > 16: 40 40 10 20 > 17: 40 40 20 10 > > numa_bitmask_nbytes(): 32 => max_number_of_node = 256 > numa_max_node(): 17 => 17 + 1 iterations > > So I understand it's better to set the iteration over numa_max_node() instead of > numa_bitmask_nbytes(). Even more for Intel (with contiguous nodes) than for > Power. > > For the POWER9 with NVIDIA Tesla it would be a worst case: only 8 numa nodes but > numa_max_node is 255! But I understand it's a very rare case and I'm fine with > that. > > So what about: > > + if (_numa_get_membind != NULL && _numa_max_node != NULL) { > + bmp = _numa_get_membind(); > + highest_node_number = _numa_max_node(); > + } else { > + return false; > + } > + > + for (node = 0; node <= highest_node_number; node++) { > + if (_numa_bitmask_isbitset(bmp, node)) { > + nodes++; > + } > + } > + > + if (nodes == 1) { > + return true; > + } else { > + return false; > + } > > For convenience, I hosted a patch with all the changes above here: > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > @Derek, could you please confirm that this change solves JDK-8189922? > > Swati, if Derek confirms it solves JDK-8189922? and you confirm it's fine for > you I'll consider it's reviewed from my side and I can host that change for you > so you can start a formal request for approval (remember I'm not a Reviewer, so > you still need two additional reviews for the change). > > Finally, as a heads up, I could not find you (nor AMD?) in the OCA: > > http://www.oracle.com/technetwork/community/oca-486395.html#a > > If I'm not mistaken, you (individually) or AMD must sign it before contributing > to OpenJDK. > > > Best regards, > Gustavo > > ======================================================= > > Swati > > > > > > On Tue, May 29, 2018 at 6:53 PM, Gustavo Romero >> wrote: > > > > Hi Swati, > > > > On 05/29/2018 06:12 AM, Swati Sharma wrote: > >> > >> I have incorporated some changes suggested by you. > >> > >> The use of struct bitmask's maskp for checking 64 bit in single iteration > >> is more optimized compared to numa_bitmask_isbitset() as by using this we > >> need to check each bit for 1024 times(SUSE case) and 64 times(Ubuntu Case). > >> If its fine to iterate at initialization time then I can change. > > > > > > Yes, I know, your version is more optimized. libnuma API should provide a > > ready-made solution for that... but that's another story. I'm curious to know > > what the time difference is on the worst case for both ways tho. 
Anyway, I > > just would like to point out that, regardless performance, it's possible to > > achieve the same result with current libnuma API. > > > > > >> For the answer to your question: > >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? > >> It can be checked based on numa_distance instead of picking up the lgrps randomly. > > > > > > That seems a good solution. You can do the checking very early, so > > lgrp_spaces()->find() does not even fail (return -1), i.e. by changing the CPU to > > node mapping on initialization (avoiding to change cas_allocate()). On that checking > > both numa distance and if the node is bound (or not) would be considered to generate > > the map. > > > > > > Best regards, > > Gustavo > > > >> Thanks, > >> Swati > >> > >> > >> > > >> On Fri, May 25, 2018 at 4:54 AM, Gustavo Romero > >>> wrote: > >> > >> Hi Swati, > >> > >> > >> Thanks for CC:ing me. Sorry for the delay replying it, I had to reserve a few > >> specific machines before trying your patch :-) > >> > >> I think that UseNUMA's original task was to figure out the best binding > >> setup for the JVM automatically but I understand that it also has to be aware > >> that sometimes, for some (new) particular reasons, its binding task is > >> "modulated" by other external agents. Thanks for proposing a fix. > >> > >> I have just a question/concern on the proposal: how the JVM should behave if > >> CPUs are not bound in accordance to the bound memory nodes? For instance, what > >> happens if no '--cpunodebind' is passed and '--membind=0,1,16' is passed at > >> the same time on this numa topology: > >> > >> brianh at p215n12:~$ numactl -H > >> available: 4 nodes (0-1,16-17) > >> node 0 cpus: 0 1 2 3 8 9 10 11 16 17 18 19 24 25 26 27 32 33 34 35 > >> node 0 size: 65342 MB > >> node 0 free: 56902 MB > >> node 1 cpus: 40 41 42 43 48 49 50 51 56 57 58 59 64 65 66 67 72 73 74 75 > >> node 1 size: 65447 MB > >> node 1 free: 58322 MB > >> node 16 cpus: 80 81 82 83 88 89 90 91 96 97 98 99 104 105 106 107 112 113 114 115 > >> node 16 size: 65448 MB > >> node 16 free: 63096 MB > >> node 17 cpus: 120 121 122 123 128 129 130 131 136 137 138 139 144 145 146 147 152 153 154 155 > >> node 17 size: 65175 MB > >> node 17 free: 61522 MB > >> node distances: > >> node 0 1 16 17 > >> 0: 10 20 40 40 > >> 1: 20 10 40 40 > >> 16: 40 40 10 20 > >> 17: 40 40 20 10 > >> > >> > >> In that case JVM will spawn threads that will run on all CPUs, including those > >> CPUs in numa node 17. Then once in > >> src/hotspot/share/gc/parallel/mutableNUMASpace.cpp, in cas_allocate(): > >> > >> 834 // This version is lock-free. > >> 835 HeapWord* MutableNUMASpace::cas_allocate(size_t size) { > >> 836 Thread* thr = Thread::current(); > >> 837 int lgrp_id = thr->lgrp_id(); > >> 838 if (lgrp_id == -1 || !os::numa_has_group_homing()) { > >> 839 lgrp_id = os::numa_get_group_id(); > >> 840 thr->set_lgrp_id(lgrp_id); > >> 841 } > >> > >> a newly created thread will try to be mapped to a numa node given your CPU ID. > >> So if that CPU is in numa node 17 it will then not find it in: > >> > >> 843 int i = lgrp_spaces()->find(&lgrp_id, LGRPSpace::equals); > >> > >> and will fallback to a random map, picking up a random numa node among nodes > >> 0, 1, and 16: > >> > >> 846 if (i == -1) { > >> 847 i = os::random() % lgrp_spaces()->length(); > >> 848 } > >> > >> If it picks up node 16, not so bad, but what if it picks up node 0 or 1? 
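As a rough illustration of the numa_distance() based selection suggested above, a standalone sketch (hypothetical file name, not the actual HotSpot change; it only assumes the stock libnuma API) could look like this:

  // closest_bound_node.cpp - hypothetical standalone sketch, not HotSpot code.
  // Build: g++ closest_bound_node.cpp -lnuma
  // Picks the bound node with the smallest numa_distance() to the node of the
  // CPU the caller runs on, instead of falling back to a random lgrp.
  #include <numa.h>
  #include <sched.h>
  #include <stdio.h>

  int main() {
    if (numa_available() == -1) {
      return 1;
    }
    int cpu = sched_getcpu();              // CPU the calling thread runs on
    int my_node = numa_node_of_cpu(cpu);   // its home NUMA node
    struct bitmask* bound = numa_get_membind();

    int best_node = -1;
    int best_dist = 0;
    for (int node = 0; node <= numa_max_node(); node++) {
      if (!numa_bitmask_isbitset(bound, node)) {
        continue;                          // skip nodes memory is not bound to
      }
      int dist = numa_distance(my_node, node);
      if (best_node == -1 || dist < best_dist) {
        best_node = node;
        best_dist = dist;
      }
    }
    printf("cpu %d (node %d) -> closest bound node %d (distance %d)\n",
           cpu, my_node, best_node, best_dist);
    return 0;
  }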
> >> > >> I see that if one binds mem but leaves CPU unbound one has to know exactly what > >> she/he is doing, because it can be likely suboptimal. On the other hand, letting > >> the node being picked up randomly when there are memory nodes bound but no CPUs > >> seems even more suboptimal in some scenarios. Thus, should the JVM deal with it? > >> > >> @Zhengyu, do you have any opinion on that? > >> > >> Please find a few nits / comments inline. > >> > >> Note that I'm not a (R)eviewer so you still need two official reviews. > >> > >> > >> Best regards, > >> Gustavo > >> > >> On 05/21/2018 01:44 PM, Swati Sharma wrote: > >> > >> ======================PATCH============================== > >> diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp > >> --- a/src/hotspot/os/linux/os_linux.cpp > >> +++ b/src/hotspot/os/linux/os_linux.cpp > >> @@ -2832,14 +2832,42 @@ > >> // Map all node ids in which is possible to allocate memory. Also nodes are > >> // not always consecutively available, i.e. available from 0 to the highest > >> // node number. > >> + // If the nodes have been bound explicitly using numactl membind, then > >> + // allocate memory from those nodes only. > >> > >> > >> I think ok to place that comment on the same existing line, like: > >> > >> - // node number. > >> + // node number. If the nodes have been bound explicitly using numactl membind, > >> + // then allocate memory from these nodes only. > >> > >> > >> for (size_t node = 0; node <= highest_node_number; node++) { > >> - if (Linux::isnode_in_configured_nodes(node)) { > >> + if (Linux::isnode_in_bounded_nodes(node)) { > >> > >> ---------------------------------^ s/bounded/bound/ > >> > >> > >> ids[i++] = node; > >> } > >> } > >> return i; > >> } > >> +extern "C" struct bitmask { > >> + unsigned long size; /* number of bits in the map */ > >> + unsigned long *maskp; > >> +}; > >> > >> > >> I think it's possible to move the function below to os_linux.hpp with its > >> friends and cope with the forward declaration of 'struct bitmask*` by using the > >> functions from numa API, notably numa_bitmask_nbytes() and > >> numa_bitmask_isbitset() only, avoiding the member dereferecing issue and the > >> need to add the above struct explicitly. > >> > >> > >> +// Check if single memory node bound. > >> +// Returns true if single memory node bound. > >> > >> > >> I suggest a minuscule improvement, something like: > >> > >> +// Check if bound to only one numa node. > >> +// Returns true if bound to a single numa node, otherwise returns false. > >> > >> > >> +bool os::Linux::issingle_node_bound() { > >> > >> > >> What about s/issingle_node_bound/isbound_to_single_node/ ? > >> > >> > >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; > >> + if(!(bmp != NULL && bmp->maskp != NULL)) return false; > >> > >> -----^ > >> Are you sure this checking is necessary? I think if numa_get_membind succeed > >> bmp->maskp is always != NULL. > >> > >> Indentation here is odd. No space before 'if' and return on the same line. > >> > >> I would try to avoid lines over 80 chars. 
> >> > >> > >> + int issingle = 0; > >> + // System can have more than 64 nodes so check in all the elements of > >> + // unsigned long array > >> + for (unsigned long i = 0; i < (bmp->size / (8 * sizeof(unsigned long))); i++) { > >> + if (bmp->maskp[i] == 0) { > >> + continue; > >> + } else if ((bmp->maskp[i] & (bmp->maskp[i] - 1)) == 0) { > >> + issingle++; > >> + } else { > >> + return false; > >> + } > >> + } > >> + if (issingle == 1) > >> + return true; > >> + return false; > >> +} > >> + > >> > >> > >> As I mentioned, I think it could be moved to os_linux.hpp instead. Also, it > >> could be something like: > >> > >> +bool os::Linux::isbound_to_single_node(void) { > >> + struct bitmask* bmp; > >> + unsigned long mask; // a mask element in the mask array > >> + unsigned long max_num_masks; > >> + int single_node = 0; > >> + > >> + if (_numa_get_membind != NULL) { > >> + bmp = _numa_get_membind(); > >> + } else { > >> + return false; > >> + } > >> + > >> + max_num_masks = bmp->size / (8 * sizeof(unsigned long)); > >> + > >> + for (mask = 0; mask < max_num_masks; mask++) { > >> + if (bmp->maskp[mask] != 0) { // at least one numa node in the mask > >> + if (bmp->maskp[mask] & (bmp->maskp[mask] - 1) == 0) { > >> + single_node++; // a single numa node in the mask > >> + } else { > >> + return false; > >> + } > >> + } > >> + } > >> + > >> + if (single_node == 1) { > >> + return true; // only a single mask with a single numa node > >> + } else { > >> + return false; > >> + } > >> +} > >> > >> > >> bool os::get_page_info(char *start, page_info* info) { > >> return false; > >> } > >> @@ -2930,6 +2958,10 @@ > >> libnuma_dlsym(handle, "numa_bitmask_isbitset"))); > >> set_numa_distance(CAST_TO_FN_PTR(numa_distance_func_t, > >> libnuma_dlsym(handle, "numa_distance"))); > >> + set_numa_set_membind(CAST_TO_FN_PTR(numa_set_membind_func_t, > >> + libnuma_dlsym(handle, "numa_set_membind"))); > >> + set_numa_get_membind(CAST_TO_FN_PTR(numa_get_membind_func_t, > >> + libnuma_v2_dlsym(handle, "numa_get_membind"))); > >> if (numa_available() != -1) { > >> set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes")); > >> @@ -3054,6 +3086,8 @@ > >> os::Linux::numa_set_bind_policy_func_t os::Linux::_numa_set_bind_policy; > >> os::Linux::numa_bitmask_isbitset_func_t os::Linux::_numa_bitmask_isbitset; > >> os::Linux::numa_distance_func_t os::Linux::_numa_distance; > >> +os::Linux::numa_set_membind_func_t os::Linux::_numa_set_membind; > >> +os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind; > >> unsigned long* os::Linux::_numa_all_nodes; > >> struct bitmask* os::Linux::_numa_all_nodes_ptr; > >> struct bitmask* os::Linux::_numa_nodes_ptr; > >> @@ -4962,8 +4996,9 @@ > >> if (!Linux::libnuma_init()) { > >> UseNUMA = false; > >> } else { > >> - if ((Linux::numa_max_node() < 1)) { > >> - // There's only one node(they start from 0), disable NUMA. > >> + if ((Linux::numa_max_node() < 1) || Linux::issingle_node_bound()) { > >> + // If there's only one node(they start from 0) or if the process > >> + // is bound explicitly to a single node using membind, disable NUMA. 
> >> UseNUMA = false; > >> } > >> } > >> diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp > >> --- a/src/hotspot/os/linux/os_linux.hpp > >> +++ b/src/hotspot/os/linux/os_linux.hpp > >> @@ -228,6 +228,8 @@ > >> typedef int (*numa_tonode_memory_func_t)(void *start, size_t size, int node); > >> typedef void (*numa_interleave_memory_func_t)(void *start, size_t size, unsigned long *nodemask); > >> typedef void (*numa_interleave_memory_v2_func_t)(void *start, size_t size, struct bitmask* mask); > >> + typedef void (*numa_set_membind_func_t)(struct bitmask *mask); > >> + typedef struct bitmask* (*numa_get_membind_func_t)(void); > >> typedef void (*numa_set_bind_policy_func_t)(int policy); > >> typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n); > >> @@ -244,6 +246,8 @@ > >> static numa_set_bind_policy_func_t _numa_set_bind_policy; > >> static numa_bitmask_isbitset_func_t _numa_bitmask_isbitset; > >> static numa_distance_func_t _numa_distance; > >> + static numa_set_membind_func_t _numa_set_membind; > >> + static numa_get_membind_func_t _numa_get_membind; > >> static unsigned long* _numa_all_nodes; > >> static struct bitmask* _numa_all_nodes_ptr; > >> static struct bitmask* _numa_nodes_ptr; > >> @@ -259,6 +263,8 @@ > >> static void set_numa_set_bind_policy(numa_set_bind_policy_func_t func) { _numa_set_bind_policy = func; } > >> static void set_numa_bitmask_isbitset(numa_bitmask_isbitset_func_t func) { _numa_bitmask_isbitset = func; } > >> static void set_numa_distance(numa_distance_func_t func) { _numa_distance = func; } > >> + static void set_numa_set_membind(numa_set_membind_func_t func) { _numa_set_membind = func; } > >> + static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; } > >> static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; } > >> static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); } > >> static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); } > >> @@ -320,6 +326,15 @@ > >> } else > >> return 0; > >> } > >> + // Check if node in bounded nodes > >> > >> > >> + // Check if node is in bound node set. Maybe? > >> > >> > >> + static bool isnode_in_bounded_nodes(int node) { > >> + struct bitmask* bmp = _numa_get_membind != NULL ? _numa_get_membind() : NULL; > >> + if (bmp != NULL && _numa_bitmask_isbitset != NULL && _numa_bitmask_isbitset(bmp, node)) { > >> + return true; > >> + } else > >> + return false; > >> + } > >> + static bool issingle_node_bound(); > >> > >> > >> Looks like it can be re-written like: > >> > >> + static bool isnode_in_bound_nodes(int node) { > >> + if (_numa_get_membind != NULL && _numa_bitmask_isbitset != NULL) { > >> + return _numa_bitmask_isbitset(_numa_get_membind(), node); > >> + } else { > >> + return false; > >> + } > >> + } > >> > >> ? > >> > >> > >> }; > >> #endif // OS_LINUX_VM_OS_LINUX_HPP > >> > >> > >> > > > From per.liden at oracle.com Wed Jun 13 07:00:08 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 Jun 2018 09:00:08 +0200 Subject: RFR: 8204210: Implementation: JEP 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental) In-Reply-To: References: <69590d4c-d675-9910-035d-eabe8be9fdfc@oracle.com> <47ea4414-fd84-7c9d-807e-a0bbdba23860@oracle.com> <81c1b28b-6d5a-674e-fa74-465ceccd6d3f@oracle.com> Message-ID: <227b98e3-737a-b0e1-a42c-03af65e1e396@oracle.com> Thank you all! 
/Per On 06/12/2018 08:16 PM, Roman Kennke wrote: > Woohoo! Congratulations! > Cheers, > Roman > >> Hi, >> >> Just an updated to say that we've now pushed ZGC to jdk/jdk. >> >> cheers, >> Per >> >> On 06/08/2018 08:20 PM, Per Liden wrote: >>> Hi all, >>> >>> Here are updated webrevs, which address all the feedback and comments >>> received. These webrevs are also rebased on today's jdk/jdk. We're >>> looking for any final comments people might have, and if things go >>> well we hope to be able to push this some time (preferably early) next >>> week. >>> >>> These webrevs have passed tier{1,2,3,4,5,6} on Linux-x64, and >>> tier{1,2,3} on all other Oracle supported platforms. >>> >>> ZGC Master >>> ?? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-master >>> >>> ZGC Testing >>> ?? http://cr.openjdk.java.net/~pliden/8204210/webrev.2-testing >>> >>> Thanks! >>> >>> /Per & Stefan >>> >>> >>> On 06/06/2018 12:48 AM, Per Liden wrote: >>>> Hi all, >>>> >>>> Here are updated webrevs reflecting the feedback received so far. >>>> >>>> ZGC Master >>>> ?? Incremental: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-master >>>> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-master >>>> >>>> ZGC Testing >>>> ?? Incremental: >>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0vs1-testing >>>> ?? Full: http://cr.openjdk.java.net/~pliden/8204210/webrev.1-testing >>>> >>>> Thanks! >>>> >>>> /Per >>>> >>>> On 06/01/2018 11:41 PM, Per Liden wrote: >>>>> Hi, >>>>> >>>>> Please review the implementation of JEP 333: ZGC: A Scalable >>>>> Low-Latency Garbage Collector (Experimental) >>>>> >>>>> Please see the JEP for more information about the project. The JEP >>>>> is currently in state "Proposed to Target" for JDK 11. >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8197831 >>>>> >>>>> Additional information in can also be found on the ZGC project wiki. >>>>> >>>>> https://wiki.openjdk.java.net/display/zgc/Main >>>>> >>>>> >>>>> Webrevs >>>>> ------- >>>>> >>>>> To make this easier to review, we've divided the change into two >>>>> webrevs. >>>>> >>>>> * ZGC Master: >>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-master >>>>> >>>>> ?? This patch contains the actual ZGC implementation, the new unit >>>>> tests and other changes needed in HotSpot. >>>>> >>>>> * ZGC Testing: >>>>> http://cr.openjdk.java.net/~pliden/8204210/webrev.0-testing >>>>> >>>>> ?? This patch contains changes to existing tests needed by ZGC. >>>>> >>>>> >>>>> Overview of Changes >>>>> ------------------- >>>>> >>>>> Below follows a list of the files we add/modify in the master patch, >>>>> with a short summary describing each group. >>>>> >>>>> * Build support - Making ZGC an optional feature. >>>>> >>>>> ?? make/autoconf/hotspot.m4 >>>>> ?? make/hotspot/lib/JvmFeatures.gmk >>>>> ?? src/hotspot/share/utilities/macros.hpp >>>>> >>>>> * C2 AD file - Additions needed to generate ZGC load barriers (adlc >>>>> does not currently offer a way to easily break this out). >>>>> >>>>> ?? src/hotspot/cpu/x86/x86.ad >>>>> ?? src/hotspot/cpu/x86/x86_64.ad >>>>> >>>>> * C2 - Things that can't be easily abstracted out into ZGC specific >>>>> code, most of which is guarded behind a #if INCLUDE_ZGC and/or if >>>>> (UseZGC) condition. There should only be two logic changes (one in >>>>> idealKit.cpp and one in node.cpp) that are still active when ZGC is >>>>> disabled. We believe these are low risk changes and should not >>>>> introduce any real change i behavior when using other GCs. >>>>> >>>>> ?? 
src/hotspot/share/adlc/formssel.cpp >>>>> ?? src/hotspot/share/opto/* >>>>> ?? src/hotspot/share/compiler/compilerDirectives.hpp >>>>> >>>>> * General GC+Runtime - Registering ZGC as a collector. >>>>> >>>>> ?? src/hotspot/share/gc/shared/* >>>>> ?? src/hotspot/share/runtime/vmStructs.cpp >>>>> ?? src/hotspot/share/runtime/vm_operations.hpp >>>>> ?? src/hotspot/share/prims/whitebox.cpp >>>>> >>>>> * GC thread local data - Increasing the size of data area by 32 bytes. >>>>> >>>>> ?? src/hotspot/share/gc/shared/gcThreadLocalData.hpp >>>>> >>>>> * ZGC - The collector itself. >>>>> >>>>> ?? src/hotspot/share/gc/z/* >>>>> ?? src/hotspot/cpu/x86/gc/z/* >>>>> ?? src/hotspot/os_cpu/linux_x86/gc/z/* >>>>> ?? test/hotspot/gtest/gc/z/* >>>>> >>>>> * JFR - Adding new event types. >>>>> >>>>> ?? src/hotspot/share/jfr/* >>>>> ?? src/jdk.jfr/share/conf/jfr/* >>>>> >>>>> * Logging - Adding new log tags. >>>>> >>>>> ?? src/hotspot/share/logging/* >>>>> >>>>> * Metaspace - Adding a friend declaration. >>>>> >>>>> ?? src/hotspot/share/memory/metaspace.hpp >>>>> >>>>> * InstanceRefKlass - Adjustments for concurrent reference processing. >>>>> >>>>> ?? src/hotspot/share/oops/instanceRefKlass.inline.hpp >>>>> >>>>> * vmSymbol - Disabled clone intrinsic for ZGC. >>>>> >>>>> ?? src/hotspot/share/classfile/vmSymbols.cpp >>>>> >>>>> * Oop Verification - In four cases we disabled oop verification >>>>> because it do not makes sense or is not applicable to a GC using >>>>> load barriers. >>>>> >>>>> ?? src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp >>>>> ?? src/hotspot/cpu/x86/stubGenerator_x86_64.cpp >>>>> ?? src/hotspot/share/compiler/oopMap.cpp >>>>> ?? src/hotspot/share/runtime/jniHandles.cpp >>>>> >>>>> * StackValue - Apply a load barrier in case of OSR. This is a bit of >>>>> a hack. However, this will go away in the future, when we have the >>>>> next iteration of C2's load barriers in place (aka "C2 late barrier >>>>> insertion"). >>>>> >>>>> ?? src/hotspot/share/runtime/stackValue.cpp >>>>> >>>>> * JVMTI - Adding an assert() to catch problems if the tagmap hashing >>>>> is changed in the future. >>>>> >>>>> ?? src/hotspot/share/prims/jvmtiTagMap.cpp >>>>> >>>>> * Legal - Adding copyright/license for 3rd party hash function used >>>>> in ZHash. >>>>> >>>>> ?? src/java.base/share/legal/c-libutl.md >>>>> >>>>> * SA - Adding basic ZGC support. >>>>> >>>>> ?? src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/* >>>>> >>>>> >>>>> Testing >>>>> ------- >>>>> >>>>> * Unit testing >>>>> >>>>> ?? A number of new ZGC specific gtests have been added, in >>>>> test/hotspot/gtest/gc/z/ >>>>> >>>>> * Regression testing >>>>> >>>>> ?? No new failures in Mach5, with ZGC enabled, tier{1,2,3,4,5,6} >>>>> ?? No new failures in Mach5, with ZGC disabled, tier{1,2,3} >>>>> >>>>> * Stress testing >>>>> >>>>> ?? We have been continuously been running a number stress tests >>>>> throughout the development, these include: >>>>> >>>>> ???? specjbb2000 >>>>> ???? specjbb2005 >>>>> ???? specjbb2015 >>>>> ???? specjvm98 >>>>> ???? specjvm2008 >>>>> ???? dacapo2009 >>>>> ???? test/hotspot/jtreg/gc/stress/gcold >>>>> ???? test/hotspot/jtreg/gc/stress/systemgc >>>>> ???? test/hotspot/jtreg/gc/stress/gclocker >>>>> ???? test/hotspot/jtreg/gc/stress/gcbasher >>>>> ???? test/hotspot/jtreg/gc/stress/finalizer >>>>> ???? Kitchensink >>>>> >>>>> >>>>> Thanks! 
>>>>> >>>>> /Per, Stefan & the ZGC team > > From glaubitz at physik.fu-berlin.de Wed Jun 13 07:08:01 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 13 Jun 2018 09:08:01 +0200 Subject: RFR: 8203301: Linux-sparc fails to build after JDK-8199712 (Flight Recorder) In-Reply-To: <9893f6f0-8cb5-bb58-db29-34a649a9d89e@oracle.com> References: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> <96fb699f-5c86-0f0b-036b-a9bd2e2aa30c@physik.fu-berlin.de> <9893f6f0-8cb5-bb58-db29-34a649a9d89e@oracle.com> Message-ID: <25814b23-007a-8007-92f0-995515280a1a@physik.fu-berlin.de> On 06/12/2018 06:00 PM, Vladimir Kozlov wrote: > Looks good to me. Thanks! Build in the submit repository passed as well. Can I get a second review? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From tobias.hartmann at oracle.com Wed Jun 13 07:28:41 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 13 Jun 2018 09:28:41 +0200 Subject: [11] RFR(L) 8184349: There should be some verification that EnableJVMCI is disabled if a GC not supporting JVMCI is selected In-Reply-To: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> References: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> Message-ID: <8e787deb-8535-e986-de0e-252d7839a6fe@oracle.com> Hi Vladimir, this looks good to me (nice refactoring!) but I would suggest to wait for another review. Thanks, Tobias On 04.05.2018 00:12, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8184349/webrev.02/ > https://bugs.openjdk.java.net/browse/JDK-8184349 > > Recent testing problem after Graal dropped CMS (throw exception) made this RFE more urgent. I > decided to fix it. > > The main fix is for JVMCI to check if GC is supported and exit VM with error if not [1]. It is > called from Arguments::apply_ergo() after GC is selected in GCConfig::initialize(). > > Main changes are refactoring. > > I used this opportunity (inspired by GCConfig) to move compiler related code from arguments.cpp file > into compilerDefinitions.* files. And renamed it to compilerConfig.*. > > Two new CompilerConfig methods check_comp_args_consistency() and CompilerConfig::ergo_initialize() > are called from arguments.cpp. > > The rest are test fixing. Mostly to not run CMS GC with Graal JIT. > > One test CheckCompileThresholdScaling.java was modified because I skipped scaling compiler threshold > in Interpreter mode (CompileThreshold = 0). > > For tests which use CMS I added @requires !vm.graal.enabled. > Unfortunately I did not fix all tests which use CMS. Some tests have several @run commands for each > GC. And some tests fork new process to test different GCs. Changes for those tests are more > complicated and I filed follow up bug 8202611 [2] I will fix after this. 
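For readers who have not opened the webrev, the main fix described above amounts to something like the following. This is a hedged sketch only: the helper name is illustrative, not the actual patch, and it assumes Serial, Parallel and G1 are the collectors Graal currently handles.

    // Illustrative sketch, not the actual webrev. The idea: once
    // GCConfig::initialize() has selected a GC, argument processing rejects
    // combinations that JVMCI/Graal cannot handle and exits with an error.
    static void check_jvmci_supported_gc() {
      if (EnableJVMCI && !(UseSerialGC || UseParallelGC || UseG1GC)) {
        vm_exit_during_initialization("JVMCI Compiler does not support selected GC", NULL);
      }
    }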
> > Tested tier1,tier2,tier2-graal > From tobias.hartmann at oracle.com Wed Jun 13 07:35:26 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 13 Jun 2018 09:35:26 +0200 Subject: RFR: 8203301: Linux-sparc fails to build after JDK-8199712 (Flight Recorder) In-Reply-To: <25814b23-007a-8007-92f0-995515280a1a@physik.fu-berlin.de> References: <189933f3-2dd5-d70e-364b-794f375ec430@physik.fu-berlin.de> <96fb699f-5c86-0f0b-036b-a9bd2e2aa30c@physik.fu-berlin.de> <9893f6f0-8cb5-bb58-db29-34a649a9d89e@oracle.com> <25814b23-007a-8007-92f0-995515280a1a@physik.fu-berlin.de> Message-ID: Hi Adrian, On 13.06.2018 09:08, John Paul Adrian Glaubitz wrote: > Can I get a second review? Looks good to me as well. Thanks, Tobias From kim.barrett at oracle.com Wed Jun 13 08:49:52 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Jun 2018 04:49:52 -0400 Subject: RFR: 8204939: Change Access nomenclature: root to native Message-ID: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> Please review this change of nomenclature in the Access API. Rather than using the word "root" we now use "native". A "native" access has an off-heap location, such as in a C/C++ data structure or global variable. This involves renaming the RootAccess class template to NativeAccess, and renaming the IN_ROOT access decorator to IN_NATIVE. Note that we are not renaming the IN_CONCURRENT_ROOT access decorator as part of this change. As discussed in JDK-8204690, we intend to instead eliminate that decorator, as part of a later change. This change consists of several sets of renamings and other minor adjustments, all performed completely mechanically, e.g. by applying a short sequence of bash commands to the repository being modified. To aid in reviewing, in addition to a webrev containing the full set of changes, there is also a sequence of 4 webrevs that combined make up that same complete set, along with the commands to produce them. CR: https://bugs.openjdk.java.net/browse/JDK-8204939 Webrevs: (1) Rename RootAccess to NativeAccess http://cr.openjdk.java.net/~kbarrett/8204939/1.rename_RootAccess/ hg qnew rename_RootAccess find . -type f -name "*.[ch]pp" \ -exec grep -q RootAccess {} \; -print \ | xargs sed -i 's/RootAccess/NativeAccess/' hg qrefresh ----- (2) Rename IN_ROOT to IN_NATIVE http://cr.openjdk.java.net/~kbarrett/8204939/2.rename_IN_ROOT/ hg qnew rename_IN_ROOT find . -type f -name "*.[ch]pp" \ -exec egrep -q " IN_ROOT \s*=" {} \; -print \ | xargs sed -i 's/ IN_ROOT / IN_NATIVE /' find . -type f -name "*.[ch]pp" \ -exec grep -q IN_ROOT {} \; -print \ | xargs sed -i 's/IN_ROOT/IN_NATIVE/' hg qrefresh ----- (3) Rename some local variables named on_root and in_root to in_native, for consistency. http://cr.openjdk.java.net/~kbarrett/8204939/3.rename_on_root/ hg qnew rename_on_root find . -type f -name "*.[ch]pp" \ -exec egrep -q "[^[:alnum:]_]on_root[^[:alnum:]_]" {} \; -print \ | xargs sed -i 's/on_root/in_native/' find . -type f -name "*.[ch]pp" \ -exec egrep -q "[^[:alnum:]_]in_root[^[:alnum:]_]" {} \; -print \ | xargs sed -i 's/in_root/in_native/' find . -type f -name "*.[ch]pp" \ -exec egrep -q " in_native =" {} \; -print \ | xargs sed -i 's/ in_native =/ in_native =/' hg qrefresh ----- (4) Rename some local variables named on_heap, for consistency. http://cr.openjdk.java.net/~kbarrett/8204939/4.rename_on_heap/ hg qnew rename_on_heap find . -type f -name "*.[ch]pp" \ -exec egrep -q "[^[:alnum:]_]on_heap[^[:alnum:]_]" {} \; -print \ | xargs sed -i 's/on_heap/in_heap/' find . 
-type f -name "*.[ch]pp" \ -exec egrep -q " in_heap =" {} \; -print \ | xargs sed -i 's/ in_heap =/ in_heap =/' hg qrefresh ----- (5) All changes http://cr.openjdk.java.net/~kbarrett/8204939/open.00/ Testing: Local build and minimal testing of each of the partial webrevs. Mach5 tier1,2,3 for the full change. From per.liden at oracle.com Wed Jun 13 08:55:07 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 Jun 2018 10:55:07 +0200 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: +1 /Per On 06/11/2018 09:36 PM, Kim Barrett wrote: > JDK-8204690 is an enhancement request for simplifing the usage of the > Access API. This RFE comes out of some discussions within the Oracle > runtime and GC teams about difficulties encountered when using the > Access API. We now have a concrete set of changes to propose (rather > than just vague complaints), described in that RFE, which I'm > duplicating below for further discussion. > > Most of the proposed changes are technically straight-forward; many > are just changes of nomenclature. However, because they are name > changes, they end up touching a bunch of files, including various > platform-specific files. So we'll be asking for help with testing. > > We want to move ahead with these changes ASAP, because of the impact > they will have to backporting to JDK 11 if not included in that > release. However, a few of the changes significantly intersect other > changes that are soon to be pushed to JDK 11, so some amount of > scheduling will be needed to minimize overall work. > > Here's the description from the RFE: > > ---------- > > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. 
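Before the individual proposals below, it may help to see what a call site reads like once all of the choices have been made. A sketch under the proposed naming (the variable names are placeholders, and IN_NATIVE/NativeAccess and IS_ARRAY are the spellings proposed here, not the ones in the current tree):

    // Off-heap (native) oop slot, phantom strength, without keeping the
    // referent alive:
    oop referent = NativeAccess<ON_PHANTOM_OOP_REF | AS_NO_KEEPALIVE>::oop_load(referent_addr);

    // Element store into an in-heap object array; IS_ARRAY takes over the
    // role of the old IN_HEAP_ARRAY location decorator:
    HeapAccess<IS_ARRAY>::oop_store_at(array, element_offset, value);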
> > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default > is AS_NORMAL. When accessing a primitive (non-object) value, use > AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. 
The default for these is that > the flag is unset. >
From per.liden at oracle.com Wed Jun 13 09:04:39 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 Jun 2018 11:04:39 +0200 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> Message-ID: <0c4a4b8f-bfbf-9476-5104-6a5623e8851b@oracle.com> On 06/13/2018 10:49 AM, Kim Barrett wrote: > Please review this change of nomenclature in the Access API. Rather > than using the word "root" we now use "native". A "native" access has > an off-heap location, such as in a C/C++ data structure or global > variable. This involves renaming the RootAccess class template to > NativeAccess, and renaming the IN_ROOT access decorator to IN_NATIVE. > > Note that we are not renaming the IN_CONCURRENT_ROOT access decorator > as part of this change. As discussed in JDK-8204690, we intend to > instead eliminate that decorator, as part of a later change. > > This change consists of several sets of renamings and other minor > adjustments, all performed completely mechanically, e.g. by applying a > short sequence of bash commands to the repository being modified. To > aid in reviewing, in addition to a webrev containing the full set of > changes, there is also a sequence of 4 webrevs that combined make up > that same complete set, along with the commands to produce them. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8204939 > [...] > (5) All changes > > http://cr.openjdk.java.net/~kbarrett/8204939/open.00/ Looks good!
/Per > > Testing: > Local build and minimal testing of each of the partial webrevs. > Mach5 tier1,2,3 for the full change. > From kim.barrett at oracle.com Wed Jun 13 09:05:50 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Jun 2018 05:05:50 -0400 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: <0c4a4b8f-bfbf-9476-5104-6a5623e8851b@oracle.com> References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> <0c4a4b8f-bfbf-9476-5104-6a5623e8851b@oracle.com> Message-ID: > On Jun 13, 2018, at 5:04 AM, Per Liden wrote: > > On 06/13/2018 10:49 AM, Kim Barrett wrote: >> Please review this change of nomenclature in the Access API. Rather >> than using the word "root" we now use "native". A "native" access has >> an off-heap location, such as in a C/C++ data structure or global >> variable. This involves renaming the RootAccess class template to >> NativeAccess, and renaming the IN_ROOT access decorator to IN_NATIVE. >> Note that we are not renaming the IN_CONCURRENT_ROOT access decorator >> as part of this change. As discussed in JDK-8204690, we intend to >> instead eliminate that decorator, as part of a later change. >> This change consists of several sets of renamings and other minor >> adjustments, all performed completely mechanically, e.g. by applying a >> short sequence of bash commands to the repository being modified. To >> aid in reviewing, in addition to a webrev containing the full set of >> changes, there is also a sequence of 4 webrevs that combined make up >> that same complete set, along with the commands to produce them. >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8204939 > [...] >> (5) All changes >> http://cr.openjdk.java.net/~kbarrett/8204939/open.00/ > > Looks good! > > /Per Thanks. > >> Testing: >> Local build and minimal testing of each of the partial webrevs. >> Mach5 tier1,2,3 for the full change. From stefan.karlsson at oracle.com Wed Jun 13 10:29:09 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 13 Jun 2018 12:29:09 +0200 Subject: RFC: 8204690: Simplify usage of Access API In-Reply-To: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> References: <4F2B66C2-7571-41D1-9780-9AA568067B01@oracle.com> Message-ID: Sounds good to me. Thanks for taking care about this. StefanK On 2018-06-11 21:36, Kim Barrett wrote: > JDK-8204690 is an enhancement request for simplifing the usage of the > Access API. This RFE comes out of some discussions within the Oracle > runtime and GC teams about difficulties encountered when using the > Access API. We now have a concrete set of changes to propose (rather > than just vague complaints), described in that RFE, which I'm > duplicating below for further discussion. > > Most of the proposed changes are technically straight-forward; many > are just changes of nomenclature. However, because they are name > changes, they end up touching a bunch of files, including various > platform-specific files. So we'll be asking for help with testing. > > We want to move ahead with these changes ASAP, because of the impact > they will have to backporting to JDK 11 if not included in that > release. However, a few of the changes significantly intersect other > changes that are soon to be pushed to JDK 11, so some amount of > scheduling will be needed to minimize overall work. > > Here's the description from the RFE: > > ---------- > > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. 
In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. > > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. 
We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default > is AS_NORMAL. When accessing a primitive (non-object) value, use > AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. The default for these is that > the flag is unset. > Simplify usage of Access API > > With 6+ months of usage of the Access API, some usage issues have been > noted. In particular, there are some issues around decorator names and > semantics which have caused confusion and led to some long discussions. > While the underlying strategy is sound, there are some changes that would > simplify usage. This proposal is in part the result of attempting to create > a guide for choosing the decorators for some use of the Access API. > > We currently have several categories of decorators, with some categories > having entries with overlapping semantics. We'd like to have a set of > categories from which one chooses exactly one entry, and it should be > "obvious" which one to choose for a given access. > > The first step is to determine where the operand is located. We presently > have the following decorators to indicate the Access location: IN_HEAP, > IN_HEAP_ARRAY, IN_ROOT, IN_CONCURRENT_ROOT, and IN_ARCHIVE_ROOT. Some of > these overlap with or imply others; the goal is to have a disjoint set. > > IN_CONCURRENT_ROOT has generated much discussion about when and how it > should be used. This might be better modelled as a Barrier Strength > decorator, e.g. in the AS_ category. It was placed among the location > decorators with the idea that some Access-roots would be identified as being > fully processed during a safe-point (and so would not require GC barriers), > while others (the "concurrent" roots) would require GC barriers. There was a > question of whether we needed more fine-grained decorators, or whether just > two categories that are the same for all collectors would be sufficient. So > far, we've gotten along without introducing further granularity. But we've > also found no significant need for the distinction at all. 
> > Proposal 1: IN_CONCURRENT_ROOT should be eliminated, and the corresponding > behavior should be the default. > > Proposal 2: IN_ARCHIVE_ROOT should be eliminated; see JDK-8204585. > > IN_HEAP_ARRAY is effectively an additional orthogonal property layered over > IN_HEAP. It would be better to actually make it an orthogonal property. > > Proposal 3: Remove IN_HEAP_ARRAY and add IS_ARRAY with similar semantics. > (IS_ARRAY might only be valid in conjunction with IN_HEAP.) > > The use of "root" here differs from how that term is usually used in the > context of GC. In particular, while GC-roots are Access-roots, not all > Access-roots are GC-roots. This is a frequent source of confusion. > > Proposal 4: The use of "root" by Access should be replaced by "native". So > IN_ROOT => IN_NATIVE, and RootAccess<> => NativeAccess<>. > > The second step is to determine the reference strength. The current API has > been working well here. We have ON_STRONG_OOP_REF, ON_WEAK_OOP_REF, > ON_PHANTOM_OOP_REF, and ON_UNKNOWN_OOP_REF, with ON_STRONG_OOP_REF being the > default. No changes are being proposed in this area. > > Another step is to determine the barrier strength. We presently have the > following decorators for this: AS_RAW, AS_DEST_NOT_INITIALIZED, > AS_NO_KEEPALIVE, and AS_NORMAL. AS_DEST_NOT_INITIALIZED is somewhat out of > place here, describing a property of the value rather than the access. It > would be better to make it an orthogonal property. The existing name is also > a little awkward, especially when turned into a variable and logically > negated, e.g. > > bool is_dest_not_initialized = ...; > ... !is_dest_not_initialized ... > > Proposal 5: Rename AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED. > > The fourth step is to determine the memory order. The current API has been > working well here. We have MO_UNORDERED, MO_VOLATILE, MO_RELAXED, > MO_ACQUIRE, MO_RELEASE, MO_SEQ_CST. No changes are being proposed in this > area. > > In addition, we presently have OOP_NOT_NULL, all on its own in a separate > category. There's no need for this separate category, and this can be > renamed to be similar to other orthogonal properties proposed above. > > Proposal 6: Rename OOP_NOT_NULL to IS_NOT_NULL. Remove OOP_DECORATOR_MASK. > > Proposal 7: Add IS_DECORATOR_MASK, containing the values for IS_ARRAY, > IS_NOT_NULL, and IS_DEST_UNINITIALIZED. > > There are also decorators for annotating arraycopy. These are highly tied in > to the code, and are not discussed here. > > With these changes, the process of selecting the decorators for an access > consists of first selecting one decorator in each of the following > categories: > > (1) Operand location: IN_NATIVE, IN_HEAP. There is no default; one or the > other must be explicitly specified. However, rather than using the > decorators directly, use the NativeAccess<> and HeapAccess<> classes. > > (2) Access strength: AS_NORMAL, AS_RAW, AS_NO_KEEPALIVE. The default is AS_NORMAL. When accessing a primitive (non-object) value, use AS_RAW. > > (3) Reference strength (if not raw access): ON_STRONG_OOP_REF, > ON_WEAK_OOP_REF, ON_PHANTOM_OOP_REF, ON_UNKNOWN_OOP_REF. The default is > ON_STRONG_OOP_REF. This decorator is ignored and should be left empty if the > access strength is AS_RAW. > > (4) Memory ordering: MO_UNORDERED, MO_VOLATILE, MO_RELAXED, MO_ACQUIRE, > MO_RELEASE, MO_SEQ_CST. The default is MO_UNORDERED. > > Then, add any of the following "flag" decorators that are appropriate: > IS_ARRAY, IS_NOT_NULL, IS_DEST_UNINITIALIZED. 
The default for these is that > the flag is unset. > From thomas.stuefe at gmail.com Wed Jun 13 10:37:28 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 13 Jun 2018 12:37:28 +0200 Subject: OpenJDK wiki still points to old submit-hs repository In-Reply-To: <955E9C4F-DBC0-40EE-BEC0-936C3995E32C@oracle.com> References: <9df06789-808e-1f8f-935b-783e342aaecf@physik.fu-berlin.de> <01D84E66-C40B-4EFD-9929-B6DCFA98E9E0@oracle.com> <955E9C4F-DBC0-40EE-BEC0-936C3995E32C@oracle.com> Message-ID: Hi Christian, some small remarks: - the sections "Should I close my branch?" and "How do I update my branch with the latest upstream changes?" both miss "hg push" at the end. - also, as a suggested add: to get an overview about how many test heads one has still open and maybe forgotten to close, one can use "hg log -r "heads(all()) and not closed() and user('') and not branch(default)" Best Regards, Thomas On Tue, Jun 12, 2018 at 6:57 PM, Christian Tornqvist wrote: > I?ve updated the page so that it now correctly points to the jdk/submit repo, thanks for noticing this! > > Thanks, > Christian > >> On Jun 12, 2018, at 7:18 07AM, jesper.wilhelmsson at oracle.com wrote: >> >> Paging Christian. >> (I don't have write access to this page.) >> >> Thanks for reporting this Adrian! >> >> /Jesper >> >>> On 12 Jun 2018, at 15:49, John Paul Adrian Glaubitz wrote: >>> >>> Hi! >>> >>> Just a heads-up: The wiki page for the submit repository is still pointing >>> to the the old submit-hs repository. This should be "submit" nowadays, >>> "submit-hs" is read-only anyway. >>> >>> See: https://wiki.openjdk.java.net/display/Build/Submit+Repo >>> >>> Adrian >>> >>> -- >>> .''`. John Paul Adrian Glaubitz >>> : :' : Debian Developer - glaubitz at debian.org >>> `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de >>> `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 >> > From stefan.karlsson at oracle.com Wed Jun 13 10:38:11 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 13 Jun 2018 12:38:11 +0200 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> Message-ID: <26bdd23c-11f3-29f9-1d86-9203c5b11785@oracle.com> Looks good. StefanK On 2018-06-13 10:49, Kim Barrett wrote: > Please review this change of nomenclature in the Access API. Rather > than using the word "root" we now use "native". A "native" access has > an off-heap location, such as in a C/C++ data structure or global > variable. This involves renaming the RootAccess class template to > NativeAccess, and renaming the IN_ROOT access decorator to IN_NATIVE. > > Note that we are not renaming the IN_CONCURRENT_ROOT access decorator > as part of this change. As discussed in JDK-8204690, we intend to > instead eliminate that decorator, as part of a later change. > > This change consists of several sets of renamings and other minor > adjustments, all performed completely mechanically, e.g. by applying a > short sequence of bash commands to the repository being modified. To > aid in reviewing, in addition to a webrev containing the full set of > changes, there is also a sequence of 4 webrevs that combined make up > that same complete set, along with the commands to produce them. 
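As a concrete illustration of what the mechanical rename does at a call site (sketch only; addr and value stand in for whatever off-heap oop slot and value are involved):

    // Before:
    RootAccess<>::oop_store(addr, value);    // store to an off-heap oop slot
    // After (same semantics, new nomenclature); the IN_ROOT decorator
    // likewise becomes IN_NATIVE:
    NativeAccess<>::oop_store(addr, value);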
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8204939 > > Webrevs: > > (1) Rename RootAccess to NativeAccess > > http://cr.openjdk.java.net/~kbarrett/8204939/1.rename_RootAccess/ > > hg qnew rename_RootAccess > > find . -type f -name "*.[ch]pp" \ > -exec grep -q RootAccess {} \; -print \ > | xargs sed -i 's/RootAccess/NativeAccess/' > > hg qrefresh > > ----- > (2) Rename IN_ROOT to IN_NATIVE > > http://cr.openjdk.java.net/~kbarrett/8204939/2.rename_IN_ROOT/ > > hg qnew rename_IN_ROOT > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " IN_ROOT \s*=" {} \; -print \ > | xargs sed -i 's/ IN_ROOT / IN_NATIVE /' > > find . -type f -name "*.[ch]pp" \ > -exec grep -q IN_ROOT {} \; -print \ > | xargs sed -i 's/IN_ROOT/IN_NATIVE/' > > hg qrefresh > > ----- > (3) Rename some local variables named on_root and in_root to > in_native, for consistency. > > http://cr.openjdk.java.net/~kbarrett/8204939/3.rename_on_root/ > > hg qnew rename_on_root > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]on_root[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/on_root/in_native/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]in_root[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/in_root/in_native/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " in_native =" {} \; -print \ > | xargs sed -i 's/ in_native =/ in_native =/' > > hg qrefresh > > ----- > (4) Rename some local variables named on_heap, for consistency. > > http://cr.openjdk.java.net/~kbarrett/8204939/4.rename_on_heap/ > > hg qnew rename_on_heap > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]on_heap[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/on_heap/in_heap/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " in_heap =" {} \; -print \ > | xargs sed -i 's/ in_heap =/ in_heap =/' > > hg qrefresh > > ----- > (5) All changes > > http://cr.openjdk.java.net/~kbarrett/8204939/open.00/ > > Testing: > Local build and minimal testing of each of the partial webrevs. > Mach5 tier1,2,3 for the full change. > From roshanmangal at gmail.com Wed Jun 13 11:23:43 2018 From: roshanmangal at gmail.com (roshan mangal) Date: Wed, 13 Jun 2018 16:53:43 +0530 Subject: 8006742: Initial TLAB sizing heuristics might provoke premature GCs Message-ID: Hi Everyone, This is my first patch as a new member of OpenJDK community. I have looked into minor bug https://bugs.openjdk.java.net/browse/JDK-8006742 ( Initial TLAB sizing heuristics might provoke premature GCs ) Issue: - The issue is due to late update of average threads count "global_stats()->allocating_threads_avg". The method "global_stats()->allocating_threads_avg()" always returns 1 until first young GC happens. ThreadLocalAllocBuffer::initial_desired_size() returns "init_sz= tlab_capacity/ allocating_threads_avg *target_refills" i.e init_sz = tlab_capacity/1*50. Due to above calculation young GC happens before creating first 50 threads. Issue happens with below command in jdk11 :- $java -Xmn3520m -Xms3584m -Xmx3584m -XX:+PrintGC -XX:+UseParallelOldGC -XX:+UseParallelGC Threads 64 [0.001s][warning][gc] -XX:+PrintGC is deprecated. Will use -Xlog:gc instead. [0.004s][info ][gc] Using Parallel [0.209s][info ][gc] GC(0) Pause Young (Allocation Failure) 2640M->1M(3144M) 14.863ms Proposed Solution: The variable "GlobalTLABStats:: _allocating_threads" should be updated with each thread creation. 
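To make the arithmetic behind the premature GC concrete, here is a simplified, self-contained model of the sizing heuristic (not the real HotSpot code; the real version lives in ThreadLocalAllocBuffer::initial_desired_size() and divides the young-gen TLAB capacity by allocating_threads_avg * target_refills, with the numbers below following the command line above):

    #include <cstddef>
    #include <cstdio>

    // Simplified model of the initial TLAB sizing heuristic.
    static size_t initial_desired_size(size_t tlab_capacity_words,
                                       size_t allocating_threads_avg,
                                       size_t target_refills) {
      return tlab_capacity_words / (allocating_threads_avg * target_refills);
    }

    int main() {
      const size_t young_gen_words = 3520UL * 1024 * 1024 / 8;  // -Xmn3520m, 8-byte words
      // Before the first GC the average is still 1, so each new thread asks
      // for roughly capacity/50; with 64 threads the combined requests exceed
      // the young generation, which triggers the early GC shown in the log.
      printf("per-thread TLAB with avg=1:  %zu words\n", initial_desired_size(young_gen_words, 1, 50));
      printf("per-thread TLAB with avg=64: %zu words\n", initial_desired_size(young_gen_words, 64, 50));
      return 0;
    }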
So incremented "GlobalTLABStats:: _allocating_threads" inside ThreadLocalAllocBuffer::initialize ( call stack:- Thread -> initialize_tlab() -> tlab().initialize() ) . Please find the patch below. ======================== PATCH ========================================== diff -r d12828b7cd64 src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp --- a/src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp Wed Jun 13 10:15:35 2018 +0200 +++ b/src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp Wed Jun 13 05:08:01 2018 -0500 @@ -192,7 +192,8 @@ initialize(NULL, // start NULL, // top NULL); // end - + global_stats()->update_allocating_threads(); + global_stats()->publish(); set_desired_size(initial_desired_size()); // Following check is needed because at startup the main ======================================================================== Thanks, Roshan Mangal MTS Software Engineer at AMD From rkennke at redhat.com Wed Jun 13 11:53:42 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 13 Jun 2018 13:53:42 +0200 Subject: RFR: JDK-8204941: Refactor TemplateTable::_new to use MacroAssembler helpers for tlab and eden Message-ID: <445338d5-43a9-363c-cc61-e9c66195b5ff@redhat.com> TemplateTable::_new (in x86) currently has its own implementation of tlab and eden allocation paths, which are basically identical to the ones in MacroAssembler::tlab_allocate() and MacroAssembler::eden_allocate(). TemplateTable should use the MacroAssembler helpers to avoid duplication. The MacroAssembler version of eden_allocate() features an additional bounds check to prevent wraparound of obj-end. I am not sure if/how that can ever happen and if/how this could be exploited, but it might be relevant. In any case, I think it's a good thing to include it in the interpreter too. The refactoring can be taken further: fold incr_allocated_bytes() into eden_allocate() (they always come in pairs), probably fold tlab_allocate() and eden_allocate() into a single helper (they also seem to come in pairs mostly), also fold initialize_object/initialize_header sections too, but 1. I wanted to keep this manageable and 2. I also want to factor the tlab_allocate/eden_allocate paths into BarrierSetAssembler as next step (which should also include at least some of the mentioned unifications). http://cr.openjdk.java.net/~rkennke/JDK-8204941/webrev.00/ Passes tier1_hotspot Can I please get a review? Roman From coleen.phillimore at oracle.com Wed Jun 13 14:25:47 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 13 Jun 2018 10:25:47 -0400 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> Message-ID: Yes, this looks good.? Thank you for posting the bash scripts. Coleen On 6/13/18 4:49 AM, Kim Barrett wrote: > Please review this change of nomenclature in the Access API. Rather > than using the word "root" we now use "native". A "native" access has > an off-heap location, such as in a C/C++ data structure or global > variable. This involves renaming the RootAccess class template to > NativeAccess, and renaming the IN_ROOT access decorator to IN_NATIVE. > > Note that we are not renaming the IN_CONCURRENT_ROOT access decorator > as part of this change. As discussed in JDK-8204690, we intend to > instead eliminate that decorator, as part of a later change. 
> > This change consists of several sets of renamings and other minor > adjustments, all performed completely mechanically, e.g. by applying a > short sequence of bash commands to the repository being modified. To > aid in reviewing, in addition to a webrev containing the full set of > changes, there is also a sequence of 4 webrevs that combined make up > that same complete set, along with the commands to produce them. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8204939 > > Webrevs: > > (1) Rename RootAccess to NativeAccess > > http://cr.openjdk.java.net/~kbarrett/8204939/1.rename_RootAccess/ > > hg qnew rename_RootAccess > > find . -type f -name "*.[ch]pp" \ > -exec grep -q RootAccess {} \; -print \ > | xargs sed -i 's/RootAccess/NativeAccess/' > > hg qrefresh > > ----- > (2) Rename IN_ROOT to IN_NATIVE > > http://cr.openjdk.java.net/~kbarrett/8204939/2.rename_IN_ROOT/ > > hg qnew rename_IN_ROOT > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " IN_ROOT \s*=" {} \; -print \ > | xargs sed -i 's/ IN_ROOT / IN_NATIVE /' > > find . -type f -name "*.[ch]pp" \ > -exec grep -q IN_ROOT {} \; -print \ > | xargs sed -i 's/IN_ROOT/IN_NATIVE/' > > hg qrefresh > > ----- > (3) Rename some local variables named on_root and in_root to > in_native, for consistency. > > http://cr.openjdk.java.net/~kbarrett/8204939/3.rename_on_root/ > > hg qnew rename_on_root > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]on_root[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/on_root/in_native/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]in_root[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/in_root/in_native/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " in_native =" {} \; -print \ > | xargs sed -i 's/ in_native =/ in_native =/' > > hg qrefresh > > ----- > (4) Rename some local variables named on_heap, for consistency. > > http://cr.openjdk.java.net/~kbarrett/8204939/4.rename_on_heap/ > > hg qnew rename_on_heap > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q "[^[:alnum:]_]on_heap[^[:alnum:]_]" {} \; -print \ > | xargs sed -i 's/on_heap/in_heap/' > > find . -type f -name "*.[ch]pp" \ > -exec egrep -q " in_heap =" {} \; -print \ > | xargs sed -i 's/ in_heap =/ in_heap =/' > > hg qrefresh > > ----- > (5) All changes > > http://cr.openjdk.java.net/~kbarrett/8204939/open.00/ > > Testing: > Local build and minimal testing of each of the partial webrevs. > Mach5 tier1,2,3 for the full change. > From jesper.wilhelmsson at oracle.com Wed Jun 13 18:33:28 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 13 Jun 2018 20:33:28 +0200 Subject: RFR: JDK-8203927 - Update version string to identify which VM is being used In-Reply-To: <1254ccb6-4b6b-a6e1-7e03-721707f88d38@oracle.com> References: <1254ccb6-4b6b-a6e1-7e03-721707f88d38@oracle.com> Message-ID: Thank you for the review David! /Jesper > On 12 Jun 2018, at 11:24, David Holmes wrote: > > Hi Jesper, > > Looks fine. > > (I wish there was a better way to combine string literals to avoid the repetition - but I don't know of one.) > > Thanks, > David > > On 12/06/2018 6:59 PM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> Please review this change to make the version string to identify which JVM is being used in the presence of a hardened JVM. This change relates to JDK-8202384 which is currently out for review as well. >> Testing: Local verification and tier 1. 
>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203927 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8203927/webrev.00/ >> Thanks, >> /Jesper From thomas.stuefe at gmail.com Wed Jun 13 18:44:51 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 13 Jun 2018 20:44:51 +0200 Subject: RFR: JDK-8203927 - Update version string to identify which VM is being used In-Reply-To: References: <1254ccb6-4b6b-a6e1-7e03-721707f88d38@oracle.com> Message-ID: Looks fine to me too. I was surprised to see though that the vm info string is actually dependent on runtime settings. Thanks, Thomas On Wed, Jun 13, 2018 at 8:33 PM, wrote: > Thank you for the review David! > /Jesper > >> On 12 Jun 2018, at 11:24, David Holmes wrote: >> >> Hi Jesper, >> >> Looks fine. >> >> (I wish there was a better way to combine string literals to avoid the repetition - but I don't know of one.) >> >> Thanks, >> David >> >> On 12/06/2018 6:59 PM, jesper.wilhelmsson at oracle.com wrote: >>> Hi, >>> Please review this change to make the version string to identify which JVM is being used in the presence of a hardened JVM. This change relates to JDK-8202384 which is currently out for review as well. >>> Testing: Local verification and tier 1. >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203927 >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8203927/webrev.00/ >>> Thanks, >>> /Jesper > From jesper.wilhelmsson at oracle.com Wed Jun 13 18:49:46 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 13 Jun 2018 20:49:46 +0200 Subject: RFR: JDK-8203927 - Update version string to identify which VM is being used In-Reply-To: References: <1254ccb6-4b6b-a6e1-7e03-721707f88d38@oracle.com> Message-ID: Thank you Thomas! Yes, the runtime dependent part is very useful when debugging installations where you have limited knowledge about the configuration. Thanks, /Jesper > On 13 Jun 2018, at 20:44, Thomas St?fe wrote: > > Looks fine to me too. > > I was surprised to see though that the vm info string is actually > dependent on runtime settings. > > Thanks, Thomas > > On Wed, Jun 13, 2018 at 8:33 PM, wrote: >> Thank you for the review David! >> /Jesper >> >>> On 12 Jun 2018, at 11:24, David Holmes wrote: >>> >>> Hi Jesper, >>> >>> Looks fine. >>> >>> (I wish there was a better way to combine string literals to avoid the repetition - but I don't know of one.) >>> >>> Thanks, >>> David >>> >>> On 12/06/2018 6:59 PM, jesper.wilhelmsson at oracle.com wrote: >>>> Hi, >>>> Please review this change to make the version string to identify which JVM is being used in the presence of a hardened JVM. This change relates to JDK-8202384 which is currently out for review as well. >>>> Testing: Local verification and tier 1. 
>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203927 >>>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8203927/webrev.00/ >>>> Thanks, >>>> /Jesper >> From vladimir.kozlov at oracle.com Wed Jun 13 19:24:55 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 13 Jun 2018 12:24:55 -0700 Subject: [11] RFR(L) 8184349: There should be some verification that EnableJVMCI is disabled if a GC not supporting JVMCI is selected In-Reply-To: <8e787deb-8535-e986-de0e-252d7839a6fe@oracle.com> References: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> <8e787deb-8535-e986-de0e-252d7839a6fe@oracle.com> Message-ID: <7db789f2-818a-14a9-d8a2-81ee45885d45@oracle.com> Thank you, Tobias Vladimir On 6/13/18 12:28 AM, Tobias Hartmann wrote: > Hi Vladimir, > > this looks good to me (nice refactoring!) but I would suggest to wait for another review. > > Thanks, > Tobias > > On 04.05.2018 00:12, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8184349/webrev.02/ >> https://bugs.openjdk.java.net/browse/JDK-8184349 >> >> Recent testing problem after Graal dropped CMS (throw exception) made this RFE more urgent. I >> decided to fix it. >> >> The main fix is for JVMCI to check if GC is supported and exit VM with error if not [1]. It is >> called from Arguments::apply_ergo() after GC is selected in GCConfig::initialize(). >> >> Main changes are refactoring. >> >> I used this opportunity (inspired by GCConfig) to move compiler related code from arguments.cpp file >> into compilerDefinitions.* files. And renamed it to compilerConfig.*. >> >> Two new CompilerConfig methods check_comp_args_consistency() and CompilerConfig::ergo_initialize() >> are called from arguments.cpp. >> >> The rest are test fixing. Mostly to not run CMS GC with Graal JIT. >> >> One test CheckCompileThresholdScaling.java was modified because I skipped scaling compiler threshold >> in Interpreter mode (CompileThreshold = 0). >> >> For tests which use CMS I added @requires !vm.graal.enabled. >> Unfortunately I did not fix all tests which use CMS. Some tests have several @run commands for each >> GC. And some tests fork new process to test different GCs. Changes for those tests are more >> complicated and I filed follow up bug 8202611 [2] I will fix after this. >> >> Tested tier1,tier2,tier2-graal >> From kim.barrett at oracle.com Wed Jun 13 20:01:18 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Jun 2018 16:01:18 -0400 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: <26bdd23c-11f3-29f9-1d86-9203c5b11785@oracle.com> References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> <26bdd23c-11f3-29f9-1d86-9203c5b11785@oracle.com> Message-ID: <914FA037-9658-442C-8D8F-CA26B135AB8E@oracle.com> > On Jun 13, 2018, at 6:38 AM, Stefan Karlsson wrote: > > Looks good. > > StefanK Thanks. From kim.barrett at oracle.com Wed Jun 13 20:01:34 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Jun 2018 16:01:34 -0400 Subject: RFR: 8204939: Change Access nomenclature: root to native In-Reply-To: References: <445FC510-972F-4857-85A7-CCBB240A03BD@oracle.com> Message-ID: <3973283E-2D94-4169-992A-541716696AD1@oracle.com> > On Jun 13, 2018, at 10:25 AM, coleen.phillimore at oracle.com wrote: > > > Yes, this looks good. Thank you for posting the bash scripts. > Coleen Thanks. 
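Returning to the 8184349 thread (Vladimir's RFR quoted above, Igor's review below): the shape of the check being reviewed, verify the selected GC before JVMCI may be enabled and exit with an error otherwise, can be pictured with a small standalone C++ sketch. This is an illustration only: the enum, the support table and the error message are placeholders, not the actual jvmci_globals.cpp code from the webrev.

    #include <cstdio>
    #include <cstdlib>

    // Placeholder GC identifiers; HotSpot uses its own GC configuration classes.
    enum class SelectedGC { Serial, Parallel, G1, CMS };

    // Placeholder support table standing in for "does the JVMCI compiler support this GC".
    static bool gc_supported_by_jvmci(SelectedGC gc) {
      switch (gc) {
        case SelectedGC::Serial:
        case SelectedGC::Parallel:
        case SelectedGC::G1:
          return true;
        default:
          return false;
      }
    }

    // Mirrors the ordering described in the RFR: run after the GC has been
    // selected (GCConfig::initialize()) but while arguments are still being
    // applied (Arguments::apply_ergo()), so a bad combination fails fast.
    static void check_jvmci_supported_gc(bool enable_jvmci, SelectedGC gc, const char* gc_name) {
      if (enable_jvmci && !gc_supported_by_jvmci(gc)) {
        std::fprintf(stderr, "Error: EnableJVMCI is not supported with the %s collector\n", gc_name);
        std::exit(1);
      }
    }

    int main() {
      check_jvmci_supported_gc(true, SelectedGC::G1, "G1");   // accepted
      check_jvmci_supported_gc(true, SelectedGC::CMS, "CMS"); // exits with an error
      return 0;
    }

The point is only the ordering and the fail-fast behaviour; which collectors a given JVMCI compiler actually supports is up to that compiler, which is exactly the follow-up question Igor raises in the next message.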
From igor.veresov at oracle.com Wed Jun 13 20:06:44 2018 From: igor.veresov at oracle.com (Igor Veresov) Date: Wed, 13 Jun 2018 13:06:44 -0700 Subject: [11] RFR(L) 8184349: There should be some verification that EnableJVMCI is disabled if a GC not supporting JVMCI is selected In-Reply-To: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> References: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> Message-ID: <70480A57-3E23-473B-A9FF-5D941C4B106B@oracle.com> Looks good. I think in the future it would make more sense to be able to ask a JVMCI compiler which GCs it supports. But since there is just one implementation of a JVMCI compiler I guess it doesn't matter much for now. igor > On May 3, 2018, at 3:12 PM, Vladimir Kozlov wrote: > > http://cr.openjdk.java.net/~kvn/8184349/webrev.02/ > https://bugs.openjdk.java.net/browse/JDK-8184349 > > Recent testing problem after Graal dropped CMS (throw exception) made this RFE more urgent. I decided to fix it. > > The main fix is for JVMCI to check if GC is supported and exit VM with error if not [1]. It is called from Arguments::apply_ergo() after GC is selected in GCConfig::initialize(). > > Main changes are refactoring. > > I used this opportunity (inspired by GCConfig) to move compiler related code from arguments.cpp file into compilerDefinitions.* files. And renamed it to compilerConfig.*. > > Two new CompilerConfig methods check_comp_args_consistency() and CompilerConfig::ergo_initialize() are called from arguments.cpp. > > The rest are test fixing. Mostly to not run CMS GC with Graal JIT. > > One test CheckCompileThresholdScaling.java was modified because I skipped scaling compiler threshold in Interpreter mode (CompileThreshold = 0). > > For tests which use CMS I added @requires !vm.graal.enabled. > Unfortunately I did not fix all tests which use CMS. Some tests have several @run commands for each GC. And some tests fork new process to test different GCs. Changes for those tests are more complicated and I filed follow up bug 8202611 [2] I will fix after this. > > Tested tier1,tier2,tier2-graal > > -- > Thanks, > Vladimir > > [1] http://cr.openjdk.java.net/~kvn/8184349/webrev.02/src/hotspot/share/jvmci/jvmci_globals.cpp.udiff.html > [2] https://bugs.openjdk.java.net/browse/JDK-8202611 From david.griffiths at gmail.com Tue Jun 12 20:23:37 2018 From: david.griffiths at gmail.com (David Griffiths) Date: Tue, 12 Jun 2018 21:23:37 +0100 Subject: Compiler deoptimization behaviour Message-ID: I wrote a simple little test to better understand the compiler frame layout but it exhibits strange behaviour in that it starts off very fast and then becomes a million times slower. From running with -XX:+PrintCompilation I _think_ it is something to do with one of the bottom level methods getting deoptimized - this message appears just as the performance falls off the cliff:

115 48 3 TestFrames::add2 (31 bytes) made not entrant

but I don't understand why the more optimized version doesn't then kick in. I've tried various things like breaking out of all the loops and starting again when my monitor thread wakes up but it still stays slow. The slowdown occurs randomly when "i" is typically between 150 and 500.
public class TestFrames {

    long var;

    public static void main(String[] args) {
        new TestFrames();
    }

    TestFrames() {
        // monitor thread
        new Thread() {
            public void run() {
                for (;;) {
                    try { Thread.sleep(5000); } catch (Exception e) {}
                    System.out.println("var = " + var);
                }
            }
        }.start();
        for (int i = 0; i < 100000; i++) {
            System.out.println("i = " + i + " var = " + var);
            add1(1);
        }
    }

    private void add1(int n) { var += n; for (int i = 0; i < 10; i++) add2(2); }
    private void add2(int n) { var += n; for (int i = 0; i < 10; i++) add3(3); }
    private void add3(int n) { var += n; for (int i = 0; i < 10; i++) add4(4); }
    private void add4(int n) { var += n; for (int i = 0; i < 10; i++) add5(5); }
    private void add5(int n) { var += n; for (int i = 0; i < 10; i++) add6(6); }
    private void add6(int n) { var += n; for (int i = 0; i < 10; i++) add7(7); }
    private void add7(int n) { var += n; for (int i = 0; i < 10; i++) add8(8); }
    private void add8(int n) { var += n; for (int i = 0; i < 10; i++) add9(9); }
    private void add9(int n) { var += n; for (int i = 0; i < 10; i++) add10(10); }
    private void add10(int n) { var += n; for (int i = 0; i < 10; i++) add11(11); }
    private void add11(int n) { var += n; for (int i = 0; i < 10; i++) add12(12); }
    private void add12(int n) { var += n; for (int i = 0; i < 10; i++) add13(13); }
    private void add13(int n) { var += n; }
}

Is there any tweak I can make to the program such that the optimized method will get picked up (if that is indeed the problem)? Thanks, David From vladimir.kozlov at oracle.com Wed Jun 13 20:57:08 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 13 Jun 2018 13:57:08 -0700 Subject: [11] RFR(L) 8184349: There should be some verification that EnableJVMCI is disabled if a GC not supporting JVMCI is selected In-Reply-To: <70480A57-3E23-473B-A9FF-5D941C4B106B@oracle.com> References: <8b33f96f-48c5-6df3-5efe-f77dd19961c5@oracle.com> <70480A57-3E23-473B-A9FF-5D941C4B106B@oracle.com> Message-ID: Thank you, Igor On 6/13/18 1:06 PM, Igor Veresov wrote: > Looks good. > > I think in the future it would make more sense to be able to ask a JVMCI compiler which GCs it supports. But since there is just one implementation of a JVMCI compiler I guess it doesn't matter much for now. I think it could be difficult to implement and too late during startup because JVMCI compiler is Java code and GC will already be used when it is executed and could cause problems. Unless it has C++ code which checks GCs which Hotspot can call. Thanks, Vladimir > > igor > >> On May 3, 2018, at 3:12 PM, Vladimir Kozlov wrote: >> >> http://cr.openjdk.java.net/~kvn/8184349/webrev.02/ >> https://bugs.openjdk.java.net/browse/JDK-8184349 >> >> Recent testing problem after Graal dropped CMS (throw exception) made this RFE more urgent. I decided to fix it. >> >> The main fix is for JVMCI to check if GC is supported and exit VM with error if not [1]. It is called from Arguments::apply_ergo() after GC is selected in GCConfig::initialize(). >> >> Main changes are refactoring. >> >> I used this opportunity (inspired by GCConfig) to move compiler related code from arguments.cpp file into compilerDefinitions.* files. And renamed it to compilerConfig.*. >> >> Two new CompilerConfig methods check_comp_args_consistency() and CompilerConfig::ergo_initialize() >> are called from arguments.cpp. >> >> The rest are test fixing. Mostly to not run CMS GC with Graal JIT.
>> >> One test CheckCompileThresholdScaling.java was modified because I skipped scaling compiler threshold in Interpreter mode (CompileThreshold = 0). >> >> For tests which use CMS I added @requires !vm.graal.enabled. >> Unfortunately I did not fix all tests which use CMS. Some tests have several @run commands for each GC. And some tests fork new process to test different GCs. Changes for those tests are more complicated and I filed follow up bug 8202611 [2] I will fix after this. >> >> Tested tier1,tier2,tier2-graal >> >> -- >> Thanks, >> Vladimir >> >> [1] http://cr.openjdk.java.net/~kvn/8184349/webrev.02/src/hotspot/share/jvmci/jvmci_globals.cpp.udiff.html >> [2] https://bugs.openjdk.java.net/browse/JDK-8202611 > From Derek.White at cavium.com Wed Jun 13 21:53:50 2018 From: Derek.White at cavium.com (White, Derek) Date: Wed, 13 Jun 2018 21:53:50 +0000 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: See inline: > -----Original Message----- > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] ... > Hi Derek, > > On 06/12/2018 06:56 PM, White, Derek wrote: > > Hi Swati, Gustavo, > > > > I?m not the best qualified to review the change ? I just reported the issue > as a JDK bug! > > > > I?d be happy to test a fix but I?m having trouble following the patch. Did > Gustavo post a patch to your patch, or is that a full independent patch? > > Yes, the idea was that you could help on testing it against JDK-8189922. > Swati's initial report on this thread was accompanied with a simple way to > test the issue he reported. You said it was related to bug JDK-8189922 but I > can't see a simple way to test it as you reported. Besides that I assumed that > you tested it on arm64, so I can't test it myself (I don't have such a > hardware). Btw, if you could provide some numactl -H information I would > be glad. OK, here's a test case: $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version Before patch, failed output shows 1/2 of Eden being wasted for threads from node that will never allocate: ... [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used [0x0000000580100000,0x0000000581580260,0x00000005a0180000) [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used [0x0000000580100000,0x0000000581580260,0x0000000590140000) [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used [0x0000000590140000,0x0000000590140000,0x00000005a0180000) ... After patch, passed output: ... [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) ... (no lgrps) Open questions - still a bug? 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that it would act as if membind is set. But I'm not an expert. - What do containers do under the hood? Would they ever bind cpus and NOT memory? 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. 
- numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. FYI - numactl -H: available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 node 0 size: 128924 MB node 0 free: 8499 MB node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 node 1 size: 129011 MB node 1 free: 7964 MB node distances: node 0 1 0: 10 20 1: 20 10 > I consider the patch I pointed out as the fourth version of Swati's original > proposal, it evolved from the reviews so far: > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > > > Also, if you or Gustavo have permissions to post a webrev to > http://cr.openjdk.java.net/ that would make reviewing a little easier. I?d be > happy to post a webrev for you if not. > > I was planing to host the webrev after your comments, but feel free to host > it. No, you have it covered well, I'll stay out of it. - Derek From lois.foltan at oracle.com Wed Jun 13 22:58:42 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 13 Jun 2018 18:58:42 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name Message-ID: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> Please review this change to standardize on how to obtain a class loader's name within the VM.? SystemDictionary::loader_name() methods have been removed in favor of ClassLoaderData::loader_name(). Since the loader name is largely used in the VM for display purposes (error messages, logging, jcmd, JFR) this change also adopts a new format to append to a class loader's name its identityHashCode and if the loader has not been explicitly named it's qualified class name is used instead. 391 /** 392 * If the defining loader has a name explicitly set then 393 * '' @ 394 * If the defining loader has no name then 395 * @ 396 * If it's built-in loader then omit `@` as there is only one instance. 397 */ The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 Testing: hs-tier(1-2), jdk-tier(1-2) complete ?????????????? hs-tier(3-5), jdk-tier(3) in progress Thanks, Lois From rene.schuenemann at gmail.com Thu Jun 14 07:41:22 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Thu, 14 Jun 2018 09:41:22 +0200 Subject: RFR: 8204955: Extend ClassCastException message Message-ID: Hi, can I please get a review for the following change: Bug: https://bugs.openjdk.java.net/browse/JDK-8204955 Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204955 This change adds additional details to the ClassCastException message when the class cast failed due to non-matching class loaders. Example: "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyClass. Loaded by OtherLoader, but needed class loader MyLoader." It is now also checked whether the target class is an extended interface or super class of the caster class and casting failed due to non-matching class loaders. Example: "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyInterface. 
Found matching interface OtherLoader/m/MyInterface loaded by OtherLoader but needed class loader MyLoader." I have added the test "jdk/test/hotspot/jtreg/runtime/exceptionMsgs/ClassCastException/ClassCastExceptionTest.java" for the new exception message. Thank you, Rene From robbin.ehn at oracle.com Thu Jun 14 10:11:30 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Thu, 14 Jun 2018 12:11:30 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. Message-ID: Hi all, please review. Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ The root cause of this failure is a bug in the posix semaphores: https://sourceware.org/bugzilla/show_bug.cgi?id=12674

Thread a:
  sem_post(my_sem);

Thread b:
  sem_wait(my_sem);
  sem_destroy(my_sem);

Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). If Thread b starts executing directly after the increment in post, but before Thread a leaves the call to post, and manages to destroy the semaphore, Thread a _can_ get EINVAL from sem_post! This is fixed in newer glibc (2.21). Note that mutexes have had the same issue on some platforms: https://sourceware.org/bugzilla/show_bug.cgi?id=13690 Fixed in 2.23. Since we only have one handshake operation running at any time (safepoints and handshakes are also mutually exclusive, both run on the VM Thread) we can actually always use the same semaphore. This patch changes the _done semaphore to be static instead, thus avoiding the post<->destroy race. The patch also contains some small changes which remove dead code, remove unneeded state, handle cases which we can't easily say will never happen, and add some additional error checks. The handshake tests pass, but they don't trigger the original issue; more interesting is that this issue does not happen when running ZGC, which utilizes handshakes with the static semaphore. Thanks, Robbin From thomas.schatzl at oracle.com Thu Jun 14 10:36:18 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 14 Jun 2018 12:36:18 +0200 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: References: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> Message-ID: Hi Zhengyu, On Tue, 2018-06-12 at 16:31 -0400, Zhengyu Gu wrote: > Hi Thomas, > > Thanks for the reviewing. > > On 06/12/2018 05:28 AM, Thomas Schatzl wrote: > > > > This would very likely decrease the amount of changes in the > > important change, the refactoring, significantly. > > > > Now everything is shown as "new" in diff tools, and we reviewers > > need to go through everything. It seems a bit of a stretch to call > > this "M" with 1800 lines of changed lines, both on the raw number > > of changes and the review complexity. > > I reshuffled some moved files, seems following updated webrev to > have better diff. > > http://cr.openjdk.java.net/~zgu/8203641/webrev.01/index.html Thanks. > > > > - I am not sure why g1StringDedup.hpp still contains a general > > description of the mechanism at the top; that should probably move > > to the shared files. > > Also it duplicates the "Candidate selection" paragraphs apparently. > > Please avoid comment duplication. > > Fixed. Thanks. Could you rename "Candidate selection" to "G1 string deduplication candidate selection" in g1StringDedup.hpp?
(Just rename the title of that paragraph) > > I am not sure that keeping the interface related to string > > deduplication all static and then use instance variables behind the > > scene makes it easily readable. > > > > Making everything static has to me been an implementation choice > > because there has only been one user (G1) before. > > I kept this way to minimize changes in G1, especially, outside of > string deduplication code. I see. > > > > I will need to bring this up with others in the (Oracle-)team what > > they think about this. Probably it's okay to keep this, and this > > could be done at another time. > > Please let me know what is your decision, or file a RFE for future > cleanup. In either case, it is best to do this as an extra CR. [...] > > - in StringDedupThread::do_deduplication the template parameter > > changes from "S" (in the definition) to "STAT". Not sure why; also > > we do not tend to use all-caps type names. > > Fixed. > > Also, fixed the bug that caused crashes you mentioned. > > Ran runtime_gc tests with -XX:+UseStringDeduplication on Linux x64 > (fastdebug | release). > Retesting looks good from our side too. The change seems good too. I do not need to see a new webrev for above mentioned comment change. Thanks, Thomas From erik.osterlund at oracle.com Thu Jun 14 10:52:19 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 14 Jun 2018 12:52:19 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: References: Message-ID: <5B2248E3.7020206@oracle.com> Hi Robbin, Looks good. Thanks, /Erik On 2018-06-14 12:11, Robbin Ehn wrote: > Hi all, please review. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 > Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ > > The root cause of this failure is a bug in the posix semaphores: > https://sourceware.org/bugzilla/show_bug.cgi?id=12674 > > Thread a: > sem_post(my_sem); > > Thread b: > sem_wait(my_sem); > sem_destroy(my_sem); > > Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). > If Thread b start executing directly after the increment in post but > before > Thread a leaves the call to post and manage to destroy the semaphore. > Thread a > _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). > > Note that mutexes have had same issue on some platforms: > https://sourceware.org/bugzilla/show_bug.cgi?id=13690 > Fixed in 2.23. > > Since we only have one handshake operation running at anytime > (safepoints and handshakes are also mutual exclusive, both run on VM > Thread) we can actually always use the same semaphore. This patch > changes the _done semaphore to be static instead, thus avoiding the > post<->destroy race. > > Patch also contains some small changes which remove of dead code, > remove unneeded state, handling of cases which we can't easily say > will never happen and some additional error checks. > > Handshakes test passes, but they don't trigger the original issue, so > more interesting is that this issue do not happen when running ZGC > which utilize handshakes with the static semaphore. > > Thanks, Robbin From robbin.ehn at oracle.com Thu Jun 14 10:53:28 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Thu, 14 Jun 2018 12:53:28 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. 
In-Reply-To: <5B2248E3.7020206@oracle.com> References: <5B2248E3.7020206@oracle.com> Message-ID: <43b79528-fcd6-6d0d-e565-0e0dd4a7fd6f@oracle.com> Hi Erik, I should have given you credit in RFR, thanks for all help during this bug hunt! Also thanks to Stefan K! On 2018-06-14 12:52, Erik ?sterlund wrote: > Hi Robbin, > > Looks good. Thanks! /Robbin > > Thanks, > /Erik > > On 2018-06-14 12:11, Robbin Ehn wrote: >> Hi all, please review. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >> >> The root cause of this failure is a bug in the posix semaphores: >> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >> >> Thread a: >> sem_post(my_sem); >> >> Thread b: >> sem_wait(my_sem); >> sem_destroy(my_sem); >> >> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >> If Thread b start executing directly after the increment in post but before >> Thread a leaves the call to post and manage to destroy the semaphore. Thread a >> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >> >> Note that mutexes have had same issue on some platforms: >> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >> Fixed in 2.23. >> >> Since we only have one handshake operation running at anytime (safepoints and >> handshakes are also mutual exclusive, both run on VM Thread) we can actually >> always use the same semaphore. This patch changes the _done semaphore to be >> static instead, thus avoiding the post<->destroy race. >> >> Patch also contains some small changes which remove of dead code, remove >> unneeded state, handling of cases which we can't easily say will never happen >> and some additional error checks. >> >> Handshakes test passes, but they don't trigger the original issue, so more >> interesting is that this issue do not happen when running ZGC which utilize >> handshakes with the static semaphore. >> >> Thanks, Robbin > From swatibits14 at gmail.com Thu Jun 14 12:01:36 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Thu, 14 Jun 2018 17:31:36 +0530 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: +Roshan Hi Derek, Thanks for your testing and finding additional bug with UseNUMA ,I appreciate your effort. The answer to your questions: 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that it would act as if membind is set. But I'm not an expert. - What do containers do under the hood? Would they ever bind cpus and NOT memory? If membind is not given then JVM should use the memory on all nodes available. You are right, wastage of memory is happening, We have analyzed the code and got the root cause of this issue and the fix for this issue will take some time, Note: My colleague Roshan has found the root cause in existing code and working on the fix for this issue, soon he will come up with the patch. 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. 
- numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. Yes ,In case of "numactl --localalloc" , thread should use local memory always. Lgrp should be created based on cpunode given.In the current example it should create only single lgrp. Gustavo, Shall we go ahead with the current patch as issue pointed out by Derek is not with current patch but exists in existing code and can fix the issue in another patch ? Derek , Can you file the separate bug for above issues with no --membind in numactl ? My current patch fixes the issue when user mentions --membind with numactl , the same mentioned also in subject line( UseNUMA membind Issue in openJDK) Thanks, Swati Sharma Software Engineer - 2 @AMD On Thu, Jun 14, 2018 at 3:23 AM, White, Derek wrote: > See inline: > > > -----Original Message----- > > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] > ... > > Hi Derek, > > > > On 06/12/2018 06:56 PM, White, Derek wrote: > > > Hi Swati, Gustavo, > > > > > > I?m not the best qualified to review the change ? I just reported the > issue > > as a JDK bug! > > > > > > I?d be happy to test a fix but I?m having trouble following the patch. > Did > > Gustavo post a patch to your patch, or is that a full independent patch? > > > > Yes, the idea was that you could help on testing it against JDK-8189922. > > Swati's initial report on this thread was accompanied with a simple way > to > > test the issue he reported. You said it was related to bug JDK-8189922 > but I > > can't see a simple way to test it as you reported. Besides that I > assumed that > > you tested it on arm64, so I can't test it myself (I don't have such a > > hardware). Btw, if you could provide some numactl -H information I would > > be glad. > > > OK, here's a test case: > $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA > -version > > Before patch, failed output shows 1/2 of Eden being wasted for threads > from node that will never allocate: > ... > [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used > [0x0000000580100000,0x0000000581580260,0x00000005a0180000) > [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used > [0x0000000580100000,0x0000000581580260,0x0000000590140000) > [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used > [0x0000000590140000,0x0000000590140000,0x00000005a0180000) > ... > > After patch, passed output: > ... > [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used > [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) > ... (no lgrps) > > Open questions - still a bug? > 1) What should JVM do if cpu node is bound, but not memory is bound? Even > with patch, JVM wastes memory because it sets aside part of Eden for > threads that can never run on other node. > - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA > -version > - My expectation was that it would act as if membind is set. But I'm not > an expert. > - What do containers do under the hood? Would they ever bind cpus and > NOT memory? > 2) What should JVM do if cpu node is bound, and numactl --localalloc > specified? Even with patch, JVM wastes memory. > - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC > -XX:+UseNUMA -version > - My expectation was that "--localalloc" would be identical to setting > membind for all of the cpu bound nodes, but I guess it's not. 
> > > > FYI - numactl -H: > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 > 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 > node 0 size: 128924 MB > node 0 free: 8499 MB > node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 > 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 > 93 94 95 > node 1 size: 129011 MB > node 1 free: 7964 MB > node distances: > node 0 1 > 0: 10 20 > 1: 20 10 > > > I consider the patch I pointed out as the fourth version of Swati's > original > > proposal, it evolved from the reviews so far: > > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > > > > > > Also, if you or Gustavo have permissions to post a webrev to > > http://cr.openjdk.java.net/ that would make reviewing a little easier. > I?d be > > happy to post a webrev for you if not. > > > > I was planing to host the webrev after your comments, but feel free to > host > > it. > > No, you have it covered well, I'll stay out of it. > > - Derek > > From zgu at redhat.com Thu Jun 14 12:03:07 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 14 Jun 2018 08:03:07 -0400 Subject: RFR(M) 8203641: Refactor String Deduplication into shared In-Reply-To: References: <2952ecd7a31f0837a8852a9b53a3700418fb6bc2.camel@oracle.com> Message-ID: Thanks, Thomas. -Zhengyu On 06/14/2018 06:36 AM, Thomas Schatzl wrote: > Hi Zhengyu, > > On Tue, 2018-06-12 at 16:31 -0400, Zhengyu Gu wrote: >> Hi Thomas, >> >> Thanks for the reviewing. >> >> On 06/12/2018 05:28 AM, Thomas Schatzl wrote: >>> >>> This would very likely decrease the amount of changes in the >>> important change, the refactoring, significantly. >>> >>> Now everything is shown as "new" in diff tools, and we reviewers >>> need to go through everything. It seems a bit of a stretch to call >>> this "M" with 1800 lines of changed lines, both on the raw number >>> of changes and the review complexity. >> >> I reshuffled some moved files, seems following updated webrev to >> have better diff. >> >> http://cr.openjdk.java.net/~zgu/8203641/webrev.01/index.html > > Thanks. > >>> >>> - I am not sure why g1StringDedup.hpp still contains a general >>> description of the mechanism at the top; that should probably move >>> to the shared files. >>> Also it duplicates the "Candidate selection" paragraphs apparently. >>> Please avoid comment duplication. >> >> Fixed. > > Thanks. Could you rename "Candidate selection" to "G1 string > deduplication candidate selection" in g1StringDedup.hpp? (Just rename > the title of that paragraph) > >>> I am not sure that keeping the interface related to string >>> deduplication all static and then use instance variables behind the >>> scene makes it easily readable. >>> >>> Making everything static has to me been an implementation choice >>> because there has only been one user (G1) before. >> >> I kept this way to minimize changes in G1, especially, outside of >> string deduplication code. > > I see. > >>> >>> I will need to bring this up with others in the (Oracle-)team what >>> they think about this. Probably it's okay to keep this, and this >>> could be done at another time. >> >> Please let me know what is your decision, or file a RFE for future >> cleanup. > > In either case, it is best to do this as an extra CR. > > [...] >>> - in StringDedupThread::do_deduplication the template parameter >>> changes from "S" (in the definition) to "STAT". 
Not sure why; also >>> we do not tend to use all-caps type names. >> >> Fixed. >> >> Also, fixed the bug that caused crashes you mentioned. >> >> Ran runtime_gc tests with -XX:+UseStringDeduplication on Linux x64 >> (fastdebug | release). >> > > Retesting looks good from our side too. > > The change seems good too. I do not need to see a new webrev for above > mentioned comment change. > > Thanks, > Thomas > From goetz.lindenmaier at sap.com Thu Jun 14 12:23:38 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 14 Jun 2018 12:23:38 +0000 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> Message-ID: <9387143d895843b8b245605137e3443e@sap.com> Hi Lois, thanks for doing this change! I appreciate a clear guidance on the naming of classloaders. Adding a new field to ClassLoader is the best way to assure this is used widely. Some comments on the format: * I would skip the space between the name and the @. * If two loaders have the same name, it might be helpful to see the classname. But I guess this will mostly happen if the loaders are of the same class, too. (like loading the same library twice. In this case the id helps). In detail: classLoaderData.cpp: Please use external_name(): + _class_loader_name_id = _class_loader_klass->name(); I would rename loader_name() to loader_nameAndId(). classLoaderData.hpp: Line 400: eventually adapt the comment to use ' ' instead of <>. classLoaderHierarchyDCmd.cpp: I would remove the double quotes from this output. But you can leave this to a follow up I guess. classLoaderStats.cpp: Maybe you want to adapt classLoaderStats.cpp:114 to say 'bootstrap' instead of . javaClasses.cpp: +// Returns the name of this class loader or null if this class loader is not named. "the name" is ambiguous now. Maybe say "Returns the name _field_ of ..." +java_lang_ClassLoader::nameAndId(oop loader) I would add to the comment: // Use ClassLoaderData::loader_name() to obtain this String as a char* for internal use. jfrTypeSetUtils.hpp: Don't you want to put this field into classLoaderData.hpp? Then you can use it in loader_name(), too. ClassLoaderHierarchyTest.java: I think you need to edit the other names, too: "Kevin" --> "'Kevin'" etc. Or, if you followed my above comment, remove the double quotes altogether. Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Lois Foltan > Sent: Donnerstag, 14. Juni 2018 00:59 > To: hotspot-dev developers > Subject: RFR (M) JDK-8202605: Standardize on > ClassLoaderData::loader_name() throughout the VM to obtain a class > loader's name > > Please review this change to standardize on how to obtain a class > loader's name within the VM.? SystemDictionary::loader_name() methods > have been removed in favor of ClassLoaderData::loader_name(). > > Since the loader name is largely used in the VM for display purposes > (error messages, logging, jcmd, JFR) this change also adopts a new > format to append to a class loader's name its identityHashCode and if > the loader has not been explicitly named it's qualified class name is > used instead. 
> > 391 /** > 392 * If the defining loader has a name explicitly set then > 393 * '' @ > 394 * If the defining loader has no name then > 395 * @ > 396 * If it's built-in loader then omit `@` as there is only one > instance. > 397 */ > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. > > open webrev at > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > ?????????????? hs-tier(3-5), jdk-tier(3) in progress > > Thanks, > Lois From boris.ulasevich at bell-sw.com Thu Jun 14 12:39:02 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Thu, 14 Jun 2018 15:39:02 +0300 Subject: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms Message-ID: Hi all, Please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8204961 http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 Recently opensourced JVMTI tests gives build warnings for ARM32 build. GCC complains about conversion between 4-byte pointer to 8-byte jlong type which is Ok in this case. I propose to hide warning using conversion to intptr_t. thanks, Boris From thomas.stuefe at gmail.com Thu Jun 14 12:51:35 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 14 Jun 2018 14:51:35 +0200 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> Message-ID: Hi Lois, it is a good thing to introduce a common naming scheme plus guidelines of how to use them. I like that you introduced this to both ClassLoaderData and ClassLoader.java - that way we can have a common scheme regardless whether we have a CLD pointer or a ClassLoader oop. --- But of course I have complaints too :) I dislike the compound format: - it hides the class name if the loader name is set. - it adds quotes, which may not be desired in all places. - names of the special loaders bootstrap I would prefer to keep without quotes but with angular brackets: 'bootstrap' -> , 'app' -> , 'platform' -> Since your patch changes how some of my jcmd subcommands print things, they now look not so good. 
For example VM.classloaders: Before we had: 938: +-- | +-- jdk.internal.reflect.DelegatingClassLoader {0x000000071523ae00} | | Classes: jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: java/lang/management/ManagementPermission:: (Ljava/lang/String;)V) | (1 class) | +-- "Kevin", ClassLoaderHierarchyTest$TestClassLoader {0x000000071529f620} | | Classes: TestClass2 | (1 class) | | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 | (1 anonymous class) With your patch: VM.classloaders 13580: +-- "'bootstrap'", | | +-- "jdk.internal.reflect.DelegatingClassLoader @41cbc171", jdk.internal.reflect.DelegatingClassLoader {0x000000071523a638} | | Classes: jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: java/lang/management/ManagementPermission:: (Ljava/lang/String;)V) | (1 class) | +-- "'Kevin' @9904154", ClassLoaderHierarchyTest$TestClassLoader {0x000000071529f1e8} | | Classes: TestClass2 | (1 class) | | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 | (1 anonymous class) Of course this could be improved a bit - if I were to use your new function and tweak it a bit, it would be: VM.classloaders 13580: +-- 'bootstrap', | | +-- jdk.internal.reflect.DelegatingClassLoader @41cbc171 {0x000000071523a638} | | Classes: jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: java/lang/management/ManagementPermission:: (Ljava/lang/String;)V) | (1 class) | +-- 'Kevin' @9904154 {0x000000071529f1e8} | | Classes: TestClass2 | (1 class) | | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 | (1 anonymous class) but again, I still loose the class name for loaders which have names, or print the class name twice for loaders without name. ---- So, could we not keep the old ClassLoaderData::class_loader_name() unchanged, and add the new compound name with a (clearly named) new function, e.g. "ClassLoaderData::class_loader_name_and_id()" or "ClassLoaderData::class_loader_compund_name()" or similar? I appreciate your wish to unify all naming, but I find this overly restrictive, see above examples. Also, I really like methods doing what they are named to do - I dislike too-smart methods which force me to second-guess their function: ClassLoaderData::class_loader_name() suggests it does just that, returning "ClassLoader.name()". Now it returns the new compound format. This is not appearant from the naming. Thanks and Kind Regards, Thomas On Thu, Jun 14, 2018 at 12:58 AM, Lois Foltan wrote: > Please review this change to standardize on how to obtain a class loader's > name within the VM. SystemDictionary::loader_name() methods have been > removed in favor of ClassLoaderData::loader_name(). > > Since the loader name is largely used in the VM for display purposes (error > messages, logging, jcmd, JFR) this change also adopts a new format to append > to a class loader's name its identityHashCode and if the loader has not been > explicitly named it's qualified class name is used instead. > > 391 /** > 392 * If the defining loader has a name explicitly set then > 393 * '' @ > 394 * If the defining loader has no name then > 395 * @ > 396 * If it's built-in loader then omit `@` as there is only one > instance. > 397 */ > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > hs-tier(3-5), jdk-tier(3) in progress > > Thanks, > Lois > From david.holmes at oracle.com Thu Jun 14 12:55:15 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 14 Jun 2018 22:55:15 +1000 Subject: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms In-Reply-To: References: Message-ID: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com> Hi Boris, I added serviceability-dev as JVM TI and its tests are technically serviceability concerns. On 14/06/2018 10:39 PM, Boris Ulasevich wrote: > Hi all, > > Please review the following patch: > ? https://bugs.openjdk.java.net/browse/JDK-8204961 > ? http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 > > Recently opensourced JVMTI tests gives build warnings for ARM32 build. I'm guessing the compiler version must have changed since we last ran these tests on 32-bit ARM. :) > GCC complains about conversion between 4-byte pointer to 8-byte jlong > type which is Ok in this case. I propose to hide warning using > conversion to intptr_t. I was concerned about what the warnings might imply but now I see that a JVM TI "tag" is simply a jlong used to funnel real pointers around to use for the tagging. So on 32-bit the upper 32-bits of the tag will always be zero and there is no data loss in any of the conversions. So assuming none of the other compilers complain about this, this seems fine to me. Thanks, David > thanks, > Boris From volker.simonis at gmail.com Thu Jun 14 14:26:00 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 14 Jun 2018 16:26:00 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default Message-ID: Hi, can I please have a review for the following fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ https://bugs.openjdk.java.net/browse/JDK-8204965 CDS does currently not work on AIX because of the way how we reserve/commit memory on AIX. The problem is that we're using a combination of shmat/mmap depending on the page size and the size of the memory chunk to reserve. This makes it impossible to reliably reserve the memory for the CDS archive and later on map the various parts of the archive into these regions. In order to fix this we would have to completely rework the memory reserve/commit/uncommit logic on AIX which is currently out of our scope because of resource limitations. Unfortunately, I could not simply disable CDS in the configure step because some of the shared code apparently relies on parts of the CDS code which gets excluded from the build when CDS is disabled. So I also fixed the offending parts in hotspot and cleaned up the configure logic for CDS. Thank you and best regards, Volker PS: I did run the job through the submit forest (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results weren't really useful because they mention build failures on linux-x64 which I can't reproduce locally. 
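On the hotspot side, the "offending parts" Volker mentions are shared-code call sites that assume the CDS code is always compiled in. The usual shape of such a fix is to guard those call sites with the INCLUDE_CDS feature macro so they still build when CDS is excluded. The sketch below is a self-contained model of that pattern, not the code from this webrev; only the INCLUDE_CDS name is taken from HotSpot, the functions are invented for illustration.

    #include <cstdio>

    // Stand-in for the feature switch that configure drives via
    // --enable-cds / --disable-cds; build with -DINCLUDE_CDS=0 to model
    // a CDS-less build.
    #ifndef INCLUDE_CDS
    #define INCLUDE_CDS 1
    #endif

    #if INCLUDE_CDS
    // Placeholder for functionality that only exists when CDS is built in.
    static bool is_shared_archive_mapped() { return false; }
    #endif

    // Shared (always built) code must not reference CDS-only functions
    // outside of the guard, otherwise a build without CDS fails to compile.
    static void shared_runtime_init() {
    #if INCLUDE_CDS
      if (is_shared_archive_mapped()) {
        std::puts("using classes from the shared archive");
      } else {
        std::puts("CDS built in, but no archive mapped");
      }
    #else
      std::puts("CDS excluded from this build");
    #endif
    }

    int main() {
      shared_runtime_init();
      return 0;
    }

Compiling the sketch with -DINCLUDE_CDS=0 mimics a --disable-cds build: the CDS-only helper disappears and the shared code still compiles because every reference to it sits inside the guard.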
From ChrisPhi at LGonQn.Org Thu Jun 14 15:01:24 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Thu, 14 Jun 2018 11:01:24 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> Message-ID: <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> Hi Any further comments or changes? On 06/06/18 05:56 PM, Chris Phillips wrote: > Hi Per, > > On 06/06/18 05:48 PM, Per Liden wrote: >> Hi Chris, >> >> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>> Hi Per, >>> >>> On 06/06/18 04:47 PM, Per Liden wrote: >>>> Hi Chris, >>>> >>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>> Hi, >>>>> >>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>> Please review this set of changes to shared code >>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>> >>>>>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>> >>>>>>> Can you explain this a little more?? What is the type of size_t on >>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>> >>>>>> I would like to understand this too. >>>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>> >>>>> Quoting from the original bug? review request: >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>> >>>>> "This >>>>> is a problem when one parameter is of size_t type and the second of >>>>> uintx type and the platform has size_t defined as eg. unsigned long as >>>>> on s390 (32-bit)." >>>> >>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>>> on s390? >>> See Dan's explanation. >>>> >>>> I fail to see how any of this matters to _entries here? What am I >>>> missing? >>>> >>> >>> By changing the type, to its actual usage, we avoid the >>> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>> around line 617, since its consistent usage and local I patched at the >>> definition. >>> >>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>> _entry_cache->size(), _entries_added, _entries_removed); >>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>> _table->_size), _entry_cache->size(), _entries_added, _entries_removed); >>> >>> percent_of will complain about types otherwise. >> >> Ok, so why don't you just cast it in the call to percent_of? Your >> current patch has ripple effects that you fail to take into account. For >> example, _entries is still printed using UINTX_FORMAT and compared >> against other uintx variables. You're now mixing types in an unsound way. > > Hmm missed that, so will do the cast instead as you suggest. > (Fixing at the defn is what was suggested the last time around so I > tried to do that where it was consistent, obviously this is not. > Thanks. > >> cheers, >> Per >> >>> >>> >>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>> @@ -120,11 +120,11 @@ >>>> ??? // Cache for reuse and fast alloc/free of table entries. >>>> ??? static G1StringDedupEntryCache* _entry_cache; >>>> >>>> ??? G1StringDedupEntry**??????????? _buckets; >>>> ??? size_t????????????????????????? _size; >>>> -? 
uintx?????????????????????????? _entries; >>>> +? size_t????????????????????????? _entries; >>>> ??? uintx?????????????????????????? _shrink_threshold; >>>> ??? uintx?????????????????????????? _grow_threshold; >>>> ??? bool??????????????????????????? _rehash_needed; >>>> >>>> cheers, >>>> Per >>>> >>>>> >>>>> Hope that helps, >>>>> Chris >>>>> >>>>> (I'll answer further if needed but the info is in the bugs and >>>>> review thread mostly) >>>>> See: >>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>> and: >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>> >>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>> For more info. >>>>> >>>> >>>> >> >> > Cheers! > Chris > > > Finally through testing and submit run again after Per's requested change, here's the knew webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 attached is the passing run fron the submit queue. Please review... Chris From Derek.White at cavium.com Thu Jun 14 15:02:26 2018 From: Derek.White at cavium.com (White, Derek) Date: Thu, 14 Jun 2018 15:02:26 +0000 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: Hi Swati, Thanks for taking on these issues! I?ve renamed the original bug JDK-8189922 ?UseNUMA memory interleaving vs membind?, and added a new bug JDK-8205051 ?UseNUMA memory interleaving vs cpunodebind & localalloc?, linked the two together, and added test cases to each. If you, or anyone on your team is able to assign themselves to the bug(s), that will simplify bug triage (probably). They can also change the ?fix version? to 11. I think project mgmt. has done a sweep and set all unassigned bugs to JDK 12 or later. * Derek From: Swati Sharma [mailto:swatibits14 at gmail.com] Sent: Thursday, June 14, 2018 8:02 AM To: White, Derek Cc: Gustavo Romero ; hotspot-dev at openjdk.java.net; zgu at redhat.com; David Holmes ; Prakash.Raghavendra at amd.com; Prasad.Vishwanath at amd.com; roshanmangal at gmail.com Subject: Re: UseNUMA membind Issue in openJDK External Email +Roshan Hi Derek, Thanks for your testing and finding additional bug with UseNUMA ,I appreciate your effort. The answer to your questions: 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that it would act as if membind is set. But I'm not an expert. - What do containers do under the hood? Would they ever bind cpus and NOT memory? If membind is not given then JVM should use the memory on all nodes available. You are right, wastage of memory is happening, We have analyzed the code and got the root cause of this issue and the fix for this issue will take some time, Note: My colleague Roshan has found the root cause in existing code and working on the fix for this issue, soon he will come up with the patch. 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. 
- numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. Yes ,In case of "numactl --localalloc" , thread should use local memory always. Lgrp should be created based on cpunode given.In the current example it should create only single lgrp. Gustavo, Shall we go ahead with the current patch as issue pointed out by Derek is not with current patch but exists in existing code and can fix the issue in another patch ? Derek , Can you file the separate bug for above issues with no --membind in numactl ? My current patch fixes the issue when user mentions --membind with numactl , the same mentioned also in subject line( UseNUMA membind Issue in openJDK) Thanks, Swati Sharma Software Engineer - 2 @AMD On Thu, Jun 14, 2018 at 3:23 AM, White, Derek > wrote: See inline: > -----Original Message----- > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] ... > Hi Derek, > > On 06/12/2018 06:56 PM, White, Derek wrote: > > Hi Swati, Gustavo, > > > > I?m not the best qualified to review the change ? I just reported the issue > as a JDK bug! > > > > I?d be happy to test a fix but I?m having trouble following the patch. Did > Gustavo post a patch to your patch, or is that a full independent patch? > > Yes, the idea was that you could help on testing it against JDK-8189922. > Swati's initial report on this thread was accompanied with a simple way to > test the issue he reported. You said it was related to bug JDK-8189922 but I > can't see a simple way to test it as you reported. Besides that I assumed that > you tested it on arm64, so I can't test it myself (I don't have such a > hardware). Btw, if you could provide some numactl -H information I would > be glad. OK, here's a test case: $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version Before patch, failed output shows 1/2 of Eden being wasted for threads from node that will never allocate: ... [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used [0x0000000580100000,0x0000000581580260,0x00000005a0180000) [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used [0x0000000580100000,0x0000000581580260,0x0000000590140000) [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used [0x0000000590140000,0x0000000590140000,0x00000005a0180000) ... After patch, passed output: ... [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) ... (no lgrps) Open questions - still a bug? 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that it would act as if membind is set. But I'm not an expert. - What do containers do under the hood? Would they ever bind cpus and NOT memory? 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. 
FYI - numactl -H: available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 node 0 size: 128924 MB node 0 free: 8499 MB node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 node 1 size: 129011 MB node 1 free: 7964 MB node distances: node 0 1 0: 10 20 1: 20 10 > I consider the patch I pointed out as the fourth version of Swati's original > proposal, it evolved from the reviews so far: > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > > > Also, if you or Gustavo have permissions to post a webrev to > http://cr.openjdk.java.net/ that would make reviewing a little easier. I?d be > happy to post a webrev for you if not. > > I was planing to host the webrev after your comments, but feel free to host > it. No, you have it covered well, I'll stay out of it. - Derek From bob.vandette at oracle.com Thu Jun 14 15:40:58 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 14 Jun 2018 08:40:58 -0700 Subject: UseNUMA membind Issue in openJDK In-Reply-To: References: <9a0310b7-2880-db69-cfbc-7abba844ecbf@oracle.com> <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> Message-ID: > On Jun 14, 2018, at 8:02 AM, White, Derek wrote: > > Hi Swati, > > Thanks for taking on these issues! > I?ve renamed the original bug JDK-8189922 ?UseNUMA memory interleaving vs membind?, and added a new bug JDK-8205051 ?UseNUMA memory interleaving vs cpunodebind & localalloc?, linked the two together, and added test cases to each. > If you, or anyone on your team is able to assign themselves to the bug(s), that will simplify bug triage (probably). They can also change the ?fix version? to 11. I think project mgmt. has done a sweep and set all unassigned bugs to JDK 12 or later. > > * Derek > > > From: Swati Sharma [mailto:swatibits14 at gmail.com] > Sent: Thursday, June 14, 2018 8:02 AM > To: White, Derek > Cc: Gustavo Romero ; hotspot-dev at openjdk.java.net; zgu at redhat.com; David Holmes ; Prakash.Raghavendra at amd.com; Prasad.Vishwanath at amd.com; roshanmangal at gmail.com > Subject: Re: UseNUMA membind Issue in openJDK > > > External Email > +Roshan > > Hi Derek, > > Thanks for your testing and finding additional bug with UseNUMA ,I appreciate your effort. > > The answer to your questions: > > 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. > - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that it would act as if membind is set. But I'm not an expert. > - What do containers do under the hood? Would they ever bind cpus and NOT memory? I don?t know what is done by default, but docker and cgroups allow cpusets to be bound to a container independent of memory nodes. This may not be a problem since in my experience the libnuma libraries have not been packaged with any base image I?ve worked with causing the VM to avoid using the UseNUMA path. 
I filed an RFE to investigate using the cgroup info to configure the VM?s numa usage (https://bugs.openjdk.java.net/browse/JDK-8198715) There is a function in OSContainer that can at least tell you how many nodes are available to the container. Feel free to grab that RFE and run with it ;) Bob > If membind is not given then JVM should use the memory on all nodes available. You are right, wastage of memory is happening, > We have analyzed the code and got the root cause of this issue and the fix for this issue will take some time, > > Note: My colleague Roshan has found the root cause in existing code and working on the fix for this issue, soon he will come up with the patch. > > 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. > - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. > Yes ,In case of "numactl --localalloc" , thread should use local memory always. Lgrp should be created based on cpunode given.In the current example it should create only single lgrp. > > Gustavo, Shall we go ahead with the current patch as issue pointed out by Derek is not with current patch but exists in existing code and can fix the issue in another patch ? > Derek , Can you file the separate bug for above issues with no --membind in numactl ? > My current patch fixes the issue when user mentions --membind with numactl , the same mentioned also in subject line( UseNUMA membind Issue in openJDK) > > Thanks, > Swati Sharma > Software Engineer - 2 @AMD > > > On Thu, Jun 14, 2018 at 3:23 AM, White, Derek > wrote: > See inline: > >> -----Original Message----- >> From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com] > ... >> Hi Derek, >> >>> On 06/12/2018 06:56 PM, White, Derek wrote: >>> Hi Swati, Gustavo, >>> >>> I?m not the best qualified to review the change ? I just reported the issue >> as a JDK bug! >>> >>> I?d be happy to test a fix but I?m having trouble following the patch. Did >> Gustavo post a patch to your patch, or is that a full independent patch? >> >> Yes, the idea was that you could help on testing it against JDK-8189922. >> Swati's initial report on this thread was accompanied with a simple way to >> test the issue he reported. You said it was related to bug JDK-8189922 but I >> can't see a simple way to test it as you reported. Besides that I assumed that >> you tested it on arm64, so I can't test it myself (I don't have such a >> hardware). Btw, if you could provide some numactl -H information I would >> be glad. > > > OK, here's a test case: > $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > > Before patch, failed output shows 1/2 of Eden being wasted for threads from node that will never allocate: > ... > [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used [0x0000000580100000,0x0000000581580260,0x00000005a0180000) > [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used [0x0000000580100000,0x0000000581580260,0x0000000590140000) > [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used [0x0000000590140000,0x0000000590140000,0x00000005a0180000) > ... > > After patch, passed output: > ... > [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) > ... (no lgrps) > > Open questions - still a bug? 
> 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. > - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that it would act as if membind is set. But I'm not an expert. > - What do containers do under the hood? Would they ever bind cpus and NOT memory? > 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. > - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. > > > > FYI - numactl -H: > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 > node 0 size: 128924 MB > node 0 free: 8499 MB > node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 > node 1 size: 129011 MB > node 1 free: 7964 MB > node distances: > node 0 1 > 0: 10 20 > 1: 20 10 > >> I consider the patch I pointed out as the fourth version of Swati's original >> proposal, it evolved from the reviews so far: >> http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch >> >> >>> Also, if you or Gustavo have permissions to post a webrev to >> http://cr.openjdk.java.net/ that would make reviewing a little easier. I?d be >> happy to post a webrev for you if not. >> >> I was planing to host the webrev after your comments, but feel free to host >> it. > > No, you have it covered well, I'll stay out of it. > > - Derek > From erik.joelsson at oracle.com Thu Jun 14 16:04:19 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 14 Jun 2018 09:04:19 -0700 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: Build changes look ok. /Erik On 2018-06-14 07:26, Volker Simonis wrote: > Hi, > > can I please have a review for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ > https://bugs.openjdk.java.net/browse/JDK-8204965 > > CDS does currently not work on AIX because of the way how we > reserve/commit memory on AIX. The problem is that we're using a > combination of shmat/mmap depending on the page size and the size of > the memory chunk to reserve. This makes it impossible to reliably > reserve the memory for the CDS archive and later on map the various > parts of the archive into these regions. > > In order to fix this we would have to completely rework the memory > reserve/commit/uncommit logic on AIX which is currently out of our > scope because of resource limitations. > > Unfortunately, I could not simply disable CDS in the configure step > because some of the shared code apparently relies on parts of the CDS > code which gets excluded from the build when CDS is disabled. So I > also fixed the offending parts in hotspot and cleaned up the configure > logic for CDS. > > Thank you and best regards, > Volker > > PS: I did run the job through the submit forest > (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results > weren't really useful because they mention build failures on linux-x64 > which I can't reproduce locally. 
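[Aside on the CDS mapping problem described in the quoted RFR above: a simplified sketch of the reserve-then-map-at-fixed-address pattern that mapping an archive depends on. "archive.bin", the sizes and offsets are invented for the example, and error handling is trimmed; the point is only that a region reserved via mmap can later have file segments placed into it with MAP_FIXED, which is what the shmat-based reservation on AIX cannot reliably provide.]

  // cdsmap.cpp - simplified illustration of the reserve-then-map pattern
  // (file name, sizes and offsets are invented; error handling trimmed).
  #include <sys/mman.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main() {
    const size_t reserved_size = 64 * 1024 * 1024;

    // 1) Reserve a contiguous range of address space, no backing store yet.
    char* base = (char*) mmap(nullptr, reserved_size, PROT_NONE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) return 1;

    int fd = open("archive.bin", O_RDONLY);  // stand-in for an archive file
    if (fd < 0) return 1;

    // 2) Map one part of the file at a fixed address inside the reservation;
    //    a real archive maps several such regions at predetermined offsets.
    void* region = mmap(base, 4096, PROT_READ, MAP_PRIVATE | MAP_FIXED, fd, 0);
    if (region == MAP_FAILED) return 1;

    close(fd);
    munmap(base, reserved_size);
    return 0;
  }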
From gromero at linux.vnet.ibm.com Thu Jun 14 16:28:56 2018
From: gromero at linux.vnet.ibm.com (Gustavo Romero)
Date: Thu, 14 Jun 2018 13:28:56 -0300
Subject: UseNUMA membind Issue in openJDK
In-Reply-To:
References: <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com>
 <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com>
 <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com>
Message-ID: <8cdb0636-d958-a307-4163-67b2f71205dd@linux.vnet.ibm.com>

Hi,

On 06/14/2018 09:01 AM, Swati Sharma wrote:
> +Roshan
>
> Hi Derek,
>
> Thanks for your testing and finding additional bug with UseNUMA ,I appreciate your effort.
>
> The answer to your questions:
>
> 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node.
> - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version
> - My expectation was that it would act as if membind is set. But I'm not an expert.
> - What do containers do under the hood? Would they ever bind cpus and NOT memory?
> If membind is not given then JVM should use the memory on all nodes available. You are right, wastage of memory is happening,
> We have analyzed the code and got the root cause of this issue and the fix for this issue will take some time,
>
> Note: My colleague Roshan has found the root cause in existing code and working on the fix for this issue, soon he will come up with the patch.

Thanks for the helpful comments, Derek and Swati. I agree: it's an issue, and a separate one. The problem (even with Swati's patch applied) is that the JVM will just look at the node mask information and won't consider cpu bindings to adapt. I guess that originally UseNUMA was only interested in the numa topology in regard to finding the best memory allocation for the given unpinned cpus on the machine.

I can't tell about the container question, but I understand that if we cover all the bound/not bound combinations of cpu/memory the JVM should be fine in the worst case (it might be the case that bindings are transparent to the JVM, I don't know...).

> 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory.
> - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version
> - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not.
> Yes ,In case of "numactl --localalloc" , thread should use local memory always. Lgrp should be created based on cpunode given.In the current example it should create only single lgrp.

I agree.

> Gustavo, Shall we go ahead with the current patch as issue pointed out by Derek is not with current patch but exists in existing code and can fix the issue in another patch ?
> Derek , Can you file the separate bug for above issues with no --membind in numactl ?
> My current patch fixes the issue when user mentions --membind with numactl , the same mentioned also in subject line( UseNUMA membind Issue in openJDK)

Yes, I'm fine with that. Derek kindly already filed a new bug. Also the other issue (not addressed by Swati's patch) is well stated. Thanks.

Best regards,
Gustavo

> Thanks,
> Swati Sharma
> Software Engineer - 2 @AMD
>
>
> On Thu, Jun 14, 2018 at 3:23 AM, White, Derek > wrote:
>
> See inline:
>
> > -----Original Message-----
> > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com ]
> ...
> > Hi Derek, > > > > On 06/12/2018 06:56 PM, White, Derek wrote: > > > Hi Swati, Gustavo, > > > > > > I?m not the best qualified to review the change ? I just reported the issue > > as a JDK bug! > > > > > > I?d be happy to test a fix but I?m having trouble following the patch. Did > > Gustavo post a patch to your patch, or is that a full independent patch? > > > > Yes, the idea was that you could help on testing it against JDK-8189922. > > Swati's initial report on this thread was accompanied with a simple way to > > test the issue he reported. You said it was related to bug JDK-8189922 but I > > can't see a simple way to test it as you reported. Besides that I assumed that > > you tested it on arm64, so I can't test it myself (I don't have such a > > hardware). Btw, if you could provide some numactl -H information I would > > be glad. > > > OK, here's a test case: > $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > > Before patch, failed output shows 1/2 of Eden being wasted for threads from node that will never allocate: > ... > [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used [0x0000000580100000,0x0000000581580260,0x00000005a0180000) > [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used [0x0000000580100000,0x0000000581580260,0x0000000590140000) > [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used [0x0000000590140000,0x0000000590140000,0x00000005a0180000) > ... > > After patch, passed output: > ... > [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) > ... (no lgrps) > > Open questions - still a bug? > 1) What should JVM do if cpu node is bound, but not memory is bound? Even with patch, JVM wastes memory because it sets aside part of Eden for threads that can never run on other node. > - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that it would act as if membind is set. But I'm not an expert. > - What do containers do under the hood? Would they ever bind cpus and NOT memory? > 2) What should JVM do if cpu node is bound, and numactl --localalloc specified? Even with patch, JVM wastes memory. > - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version > - My expectation was that "--localalloc" would be identical to setting membind for all of the cpu bound nodes, but I guess it's not. > > > > FYI - numactl -H: > available: 2 nodes (0-1) > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 > node 0 size: 128924 MB > node 0 free: 8499 MB > node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 > node 1 size: 129011 MB > node 1 free: 7964 MB > node distances: > node 0 1 > 0: 10 20 > 1: 20 10 > > > I consider the patch I pointed out as the fourth version of Swati's original > > proposal, it evolved from the reviews so far: > > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch > > > > > > > Also, if you or Gustavo have permissions to post a webrev to > > http://cr.openjdk.java.net/ that would make reviewing a little easier. I?d be > > happy to post a webrev for you if not. > > > > I was planing to host the webrev after your comments, but feel free to host > > it. > > No, you have it covered well, I'll stay out of it. 
> > - Derek > > From jiangli.zhou at Oracle.COM Thu Jun 14 16:42:00 2018 From: jiangli.zhou at Oracle.COM (Jiangli Zhou) Date: Thu, 14 Jun 2018 09:42:00 -0700 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: <14702170-8CA5-4033-B3EE-CE12C906BDE7@oracle.com> Hi Volker, The changes look good to me overall. I?ll refer to the JVMTI experts for jvmtiEnv.cpp change. I have a question for the change in vmStructs.cpp. Any reason why only _current_info needs CDS_ONLY? /********************************************/ \ /* FileMapInfo fields (CDS archive related) */ \ /********************************************/ \ \ nonstatic_field(FileMapInfo, _header, FileMapInfo::FileMapHeader*) \ - static_field(FileMapInfo, _current_info, FileMapInfo*) \ + CDS_ONLY(static_field(FileMapInfo, _current_info, FileMapInfo*)) \ nonstatic_field(FileMapInfo::FileMapHeader, _space[0], FileMapInfo::FileMapHeader::space_info)\ nonstatic_field(FileMapInfo::FileMapHeader::space_info, _addr._base, char*) \ nonstatic_field(FileMapInfo::FileMapHeader::space_info, _used, size_t) \ \ Thanks, Jiangli > On Jun 14, 2018, at 7:26 AM, Volker Simonis wrote: > > Hi, > > can I please have a review for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ > https://bugs.openjdk.java.net/browse/JDK-8204965 > > CDS does currently not work on AIX because of the way how we > reserve/commit memory on AIX. The problem is that we're using a > combination of shmat/mmap depending on the page size and the size of > the memory chunk to reserve. This makes it impossible to reliably > reserve the memory for the CDS archive and later on map the various > parts of the archive into these regions. > > In order to fix this we would have to completely rework the memory > reserve/commit/uncommit logic on AIX which is currently out of our > scope because of resource limitations. > > Unfortunately, I could not simply disable CDS in the configure step > because some of the shared code apparently relies on parts of the CDS > code which gets excluded from the build when CDS is disabled. So I > also fixed the offending parts in hotspot and cleaned up the configure > logic for CDS. > > Thank you and best regards, > Volker > > PS: I did run the job through the submit forest > (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results > weren't really useful because they mention build failures on linux-x64 > which I can't reproduce locally. From thomas.stuefe at gmail.com Thu Jun 14 19:04:31 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 14 Jun 2018 21:04:31 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: Hi Volker, http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/make/autoconf/hotspot.m4.udiff.html Seems like a roundabout way to have a platform specific default value. Why not determine a default value beforehand: if test "x$OPENJDK_TARGET_OS" = "xaix"; then ENABLE_CDS_DEFAULT="false" else ENABLE_CDS_DEFAULT=true" fi AC_ARG_ENABLE([cds], [AS_HELP_STRING([--enable-cds@<:@=yes/no/auto@:>@], [enable class data sharing feature in non-minimal VM. Default is ${ENABLE_CDS_DEFAULT}.])]) and so on? See also what we did for "8202325: [aix] disable warnings-as-errors by default". 
-- http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/src/hotspot/share/classfile/javaClasses.cpp.udiff.html Here, do we really need to exclude this from compiling, DumpSharedSpaces = false is not enough? Best Regards, Thomas On Thu, Jun 14, 2018 at 4:26 PM, Volker Simonis wrote: > Hi, > > can I please have a review for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ > https://bugs.openjdk.java.net/browse/JDK-8204965 > > CDS does currently not work on AIX because of the way how we > reserve/commit memory on AIX. The problem is that we're using a > combination of shmat/mmap depending on the page size and the size of > the memory chunk to reserve. This makes it impossible to reliably > reserve the memory for the CDS archive and later on map the various > parts of the archive into these regions. > > In order to fix this we would have to completely rework the memory > reserve/commit/uncommit logic on AIX which is currently out of our > scope because of resource limitations. > > Unfortunately, I could not simply disable CDS in the configure step > because some of the shared code apparently relies on parts of the CDS > code which gets excluded from the build when CDS is disabled. So I > also fixed the offending parts in hotspot and cleaned up the configure > logic for CDS. > > Thank you and best regards, > Volker > > PS: I did run the job through the submit forest > (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results > weren't really useful because they mention build failures on linux-x64 > which I can't reproduce locally. From serguei.spitsyn at oracle.com Thu Jun 14 19:09:00 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Thu, 14 Jun 2018 12:09:00 -0700 Subject: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms In-Reply-To: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com> References: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com> Message-ID: Hi Boris, It looks good to me. Thank you for taking care about these warnings! Thanks, Serguei On 6/14/18 05:55, David Holmes wrote: > Hi Boris, > > I added serviceability-dev as JVM TI and its tests are technically > serviceability concerns. > > On 14/06/2018 10:39 PM, Boris Ulasevich wrote: >> Hi all, >> >> Please review the following patch: >> ?? https://bugs.openjdk.java.net/browse/JDK-8204961 >> ?? http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 >> >> Recently opensourced JVMTI tests gives build warnings for ARM32 build. > > I'm guessing the compiler version must have changed since we last ran > these tests on 32-bit ARM. :) > >> GCC complains about conversion between 4-byte pointer to 8-byte jlong >> type which is Ok in this case. I propose to hide warning using >> conversion to intptr_t. > > I was concerned about what the warnings might imply but now I see that > a JVM TI "tag" is simply a jlong used to funnel real pointers around > to use for the tagging. So on 32-bit the upper 32-bits of the tag will > always be zero and there is no data loss in any of the conversions. > > So assuming none of the other compilers complain about this, this > seems fine to me. 
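[A tiny sketch of the conversion pattern under discussion - this is only the assumed shape of the fix, not the actual patch, and "tagconv.cpp" plus the local jlong typedef are stand-ins: casting through intptr_t makes the pointer-to-jlong conversion explicit, so the 32-bit compiler stops warning, and the round trip back to a pointer loses nothing on either 32-bit or 64-bit.]

  // tagconv.cpp - illustration only; jlong stands in for the JNI typedef.
  #include <stdint.h>

  typedef int64_t jlong;

  static jlong pointer_to_tag(void* p) {
    return (jlong)(intptr_t)p;    // explicit widening, no conversion warning on 32-bit
  }

  static void* tag_to_pointer(jlong tag) {
    return (void*)(intptr_t)tag;  // narrows back to the native pointer width
  }

  int main() {
    int obj = 42;
    jlong tag = pointer_to_tag(&obj);
    return (tag_to_pointer(tag) == &obj) ? 0 : 1;  // round trip is lossless
  }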
> > Thanks, > David > >> thanks, >> Boris From lois.foltan at oracle.com Thu Jun 14 19:30:53 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 14 Jun 2018 15:30:53 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <9387143d895843b8b245605137e3443e@sap.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <9387143d895843b8b245605137e3443e@sap.com> Message-ID: <4efa14d4-afdf-4c32-2843-8696cbb98e24@oracle.com> On 6/14/2018 8:23 AM, Lindenmaier, Goetz wrote: > Hi Lois, > > thanks for doing this change! Hi Goetz, Thanks for your review and patience while we worked on a proposal for this! > I appreciate a clear guidance on the naming of classloaders. > Adding a new field to ClassLoader is the best way to > assure this is used widely. I was remiss to not recognize Mandy Chung's contribution of this change to ClassLoader.java. > > Some comments on the format: > > * I would skip the space between the name and the @. Yes, Mandy & I did discuss this and have concerns either way (with space, without space).? For now I am leaving as is and we can renegotiate when reviewing the work for improving error message details. > > * If two loaders have the same name, it might be helpful > to see the classname. But I guess this will mostly > happen if the loaders are of the same class, too. > (like loading the same library twice. In this case > the id helps). > > In detail: > > classLoaderData.cpp: > Please use external_name(): > + _class_loader_name_id = _class_loader_klass->name(); Done. > > I would rename loader_name() to loader_nameAndId(). Yes, I will be sending out a new webrev shortly.? I have added back in the original loader_name() and now have a new method loader_name_and_id().? This should make the intention much clearer. > > > classLoaderData.hpp: > Line 400: eventually adapt the comment to use ' ' instead of <>. Done. > > classLoaderHierarchyDCmd.cpp: > I would remove the double quotes from this output. > But you can leave this to a follow up I guess. Done. > > classLoaderStats.cpp: > Maybe you want to adapt classLoaderStats.cpp:114 to say > 'bootstrap' instead of . Done, thanks for that catch! > > javaClasses.cpp: > +// Returns the name of this class loader or null if this class loader is not named. > "the name" is ambiguous now. Maybe say "Returns the name _field_ of ..." > > +java_lang_ClassLoader::nameAndId(oop loader) > I would add to the comment: > // Use ClassLoaderData::loader_name() to obtain this String as a char* for internal use. Done. > > > jfrTypeSetUtils.hpp: > Don't you want to put this field into classLoaderData.hpp? > Then you can use it in loader_name(), too. Done. > > ClassLoaderHierarchyTest.java: > I think you need to edit the other names, too: "Kevin" --> "'Kevin'" etc. > Or, if you followed my above comment, remove the double quotes altogether. Done, good catch. Again, thanks for your review! Lois > > Best regards, > Goetz. > > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Lois Foltan >> Sent: Donnerstag, 14. Juni 2018 00:59 >> To: hotspot-dev developers >> Subject: RFR (M) JDK-8202605: Standardize on >> ClassLoaderData::loader_name() throughout the VM to obtain a class >> loader's name >> >> Please review this change to standardize on how to obtain a class >> loader's name within the VM.? 
SystemDictionary::loader_name() methods >> have been removed in favor of ClassLoaderData::loader_name(). >> >> Since the loader name is largely used in the VM for display purposes >> (error messages, logging, jcmd, JFR) this change also adopts a new >> format to append to a class loader's name its identityHashCode and if >> the loader has not been explicitly named it's qualified class name is >> used instead. >> >> 391 /** >> 392 * If the defining loader has a name explicitly set then >> 393 * '' @ >> 394 * If the defining loader has no name then >> 395 * @ >> 396 * If it's built-in loader then omit `@` as there is only one >> instance. >> 397 */ >> >> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> ?????????????? hs-tier(3-5), jdk-tier(3) in progress >> >> Thanks, >> Lois From lois.foltan at oracle.com Thu Jun 14 19:54:27 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 14 Jun 2018 15:54:27 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> Message-ID: <3637e3d0-ab41-08c4-497a-60808b917e08@oracle.com> On 6/14/2018 8:51 AM, Thomas St?fe wrote: > Hi Lois, > > it is a good thing to introduce a common naming scheme plus guidelines > of how to use them. I like that you introduced this to both > ClassLoaderData and ClassLoader.java - that way we can have a common > scheme regardless whether we have a CLD pointer or a ClassLoader oop. Hi Thomas, Thanks for your review! As I pointed out to Goetz, Mandy Chung contributed this change to ClassLoader.java. > > --- > > But of course I have complaints too :) > > I dislike the compound format: > > - it hides the class name if the loader name is set. > - it adds quotes, which may not be desired in all places. > - names of the special loaders bootstrap I would prefer to keep > without quotes but with angular brackets: > 'bootstrap' -> , 'app' -> , 'platform' -> The names for the builtin loaders ('bootstrap', 'app', 'platform') was decided in the JDK 9 time frame.? Any suggestion to change it would have to involve the core library, hotspot groups, etc.? It is important that the JVM use the names given to these loaders to provide more helpful and accurate output. > > Since your patch changes how some of my jcmd subcommands print things, > they now look not so good. 
For example VM.classloaders: > > Before we had: > > 938: > +-- > | > +-- jdk.internal.reflect.DelegatingClassLoader {0x000000071523ae00} > | > | Classes: > jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: > java/lang/management/ManagementPermission:: > (Ljava/lang/String;)V) > | (1 class) > | > +-- "Kevin", ClassLoaderHierarchyTest$TestClassLoader {0x000000071529f620} > | > | Classes: TestClass2 > | (1 class) > | > | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 > | (1 anonymous class) > > > With your patch: > > VM.classloaders > 13580: > +-- "'bootstrap'", > | > | > +-- "jdk.internal.reflect.DelegatingClassLoader @41cbc171", > jdk.internal.reflect.DelegatingClassLoader {0x000000071523a638} > | > | Classes: > jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: > java/lang/management/ManagementPermission:: > (Ljava/lang/String;)V) > | (1 class) > | > +-- "'Kevin' @9904154", ClassLoaderHierarchyTest$TestClassLoader > {0x000000071529f1e8} > | > | Classes: TestClass2 > | (1 class) > | > | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 > | (1 anonymous class) > > > Of course this could be improved a bit - if I were to use your new > function and tweak it a bit, it would be: > > VM.classloaders > 13580: > +-- 'bootstrap', > | > | > +-- jdk.internal.reflect.DelegatingClassLoader @41cbc171 > {0x000000071523a638} > | > | Classes: > jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: > java/lang/management/ManagementPermission:: > (Ljava/lang/String;)V) > | (1 class) > | > +-- 'Kevin' @9904154 {0x000000071529f1e8} > | > | Classes: TestClass2 > | (1 class) > | > | Anonymous Classes: TestClass2$$Lambda$46/0x000000080011cc40 > | (1 anonymous class) > > but again, I still loose the class name for loaders which have names, > or print the class name twice for loaders without name. > > ---- > > So, could we not keep the old ClassLoaderData::class_loader_name() > unchanged, and add the new compound name with a (clearly named) new > function, e.g. "ClassLoaderData::class_loader_name_and_id()" or > "ClassLoaderData::class_loader_compund_name()" or similar? > > I appreciate your wish to unify all naming, but I find this overly > restrictive, see above examples. > > Also, I really like methods doing what they are named to do - I > dislike too-smart methods which force me to second-guess their > function: ClassLoaderData::class_loader_name() suggests it does just > that, returning "ClassLoader.name()". Now it returns the new compound > format. This is not appearant from the naming. Sometimes one size doesn't fit all!? I will be sending out a new webrev shortly that introduces the fields ClassLoaderData::_name and ClassLoaderData::_name_and_id with corresponding methods to obtain each.? I have backed out my change specifically to memory/metaspace/printCLDMetaspaceInfoClosure.cpp & test Metaspace/PrintMetaspaceDcmd.java. Thanks, Lois > > Thanks and Kind Regards, > > Thomas > > > > > > On Thu, Jun 14, 2018 at 12:58 AM, Lois Foltan wrote: >> Please review this change to standardize on how to obtain a class loader's >> name within the VM. SystemDictionary::loader_name() methods have been >> removed in favor of ClassLoaderData::loader_name(). >> >> Since the loader name is largely used in the VM for display purposes (error >> messages, logging, jcmd, JFR) this change also adopts a new format to append >> to a class loader's name its identityHashCode and if the loader has not been >> explicitly named it's qualified class name is used instead. 
>> >> 391 /** >> 392 * If the defining loader has a name explicitly set then >> 393 * '' @ >> 394 * If the defining loader has no name then >> 395 * @ >> 396 * If it's built-in loader then omit `@` as there is only one >> instance. >> 397 */ >> >> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> hs-tier(3-5), jdk-tier(3) in progress >> >> Thanks, >> Lois >> From lois.foltan at oracle.com Thu Jun 14 19:56:12 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 14 Jun 2018 15:56:12 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> Message-ID: <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Please review this updated webrev that address review comments received. http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ Thanks, Lois On 6/13/2018 6:58 PM, Lois Foltan wrote: > Please review this change to standardize on how to obtain a class > loader's name within the VM.? SystemDictionary::loader_name() methods > have been removed in favor of ClassLoaderData::loader_name(). > > Since the loader name is largely used in the VM for display purposes > (error messages, logging, jcmd, JFR) this change also adopts a new > format to append to a class loader's name its identityHashCode and if > the loader has not been explicitly named it's qualified class name is > used instead. > > 391 /** > 392 * If the defining loader has a name explicitly set then > 393 * '' @ > 394 * If the defining loader has no name then > 395 * @ > 396 * If it's built-in loader then omit `@` as there is only one > instance. > 397 */ > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > ?????????????? hs-tier(3-5), jdk-tier(3) in progress > > Thanks, > Lois > From harold.seigel at oracle.com Thu Jun 14 20:02:46 2018 From: harold.seigel at oracle.com (Harold David Seigel) Date: Thu, 14 Jun 2018 16:02:46 -0400 Subject: RFR: 8204955: Extend ClassCastException message In-Reply-To: References: Message-ID: <6b81b9db-9701-b0b0-fb3a-ced535aa0869@oracle.com> Hi Rene, I'm not sure that adding text such as "Loaded by OtherLoader, but needed class loader MyLoader" is all that helpful if the first part of the message already contains the class loader names. Also, the text implies that the problem is with "OtherLoader" but the user may actually need to change the type or something else. Thanks, Harold On 6/14/2018 3:41 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204955 > Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204955 > > This change adds additional details to the ClassCastException message > when the class cast failed due to non-matching class loaders. > > Example: > > "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyClass. Loaded by > OtherLoader, but needed class loader MyLoader." 
> > It is now also checked whether the target class is an extended > interface or super class of the caster class and casting failed due to > non-matching class loaders. > > Example: > > "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyInterface. Found > matching interface OtherLoader/m/MyInterface loaded by OtherLoader but > needed class loader MyLoader." > > I have added the test > "jdk/test/hotspot/jtreg/runtime/exceptionMsgs/ClassCastException/ClassCastExceptionTest.java" > for the new exception message. > > > Thank you, > Rene From thomas.stuefe at gmail.com Thu Jun 14 20:15:04 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 14 Jun 2018 22:15:04 +0200 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <3637e3d0-ab41-08c4-497a-60808b917e08@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <3637e3d0-ab41-08c4-497a-60808b917e08@oracle.com> Message-ID: Hi Lois, On Thu, Jun 14, 2018 at 9:54 PM, Lois Foltan wrote: > On 6/14/2018 8:51 AM, Thomas St?fe wrote: > >> Hi Lois, >> >> it is a good thing to introduce a common naming scheme plus guidelines >> of how to use them. I like that you introduced this to both >> ClassLoaderData and ClassLoader.java - that way we can have a common >> scheme regardless whether we have a CLD pointer or a ClassLoader oop. > > > Hi Thomas, > Thanks for your review! As I pointed out to Goetz, Mandy Chung contributed > this change to ClassLoader.java. > >> >> --- >> >> But of course I have complaints too :) >> >> I dislike the compound format: >> >> - it hides the class name if the loader name is set. >> - it adds quotes, which may not be desired in all places. >> - names of the special loaders bootstrap I would prefer to keep >> without quotes but with angular brackets: >> 'bootstrap' -> , 'app' -> , 'platform' -> > > > The names for the builtin loaders ('bootstrap', 'app', 'platform') was > decided in the JDK 9 time frame. Any suggestion to change it would have to > involve the core library, hotspot groups, etc. It is important that the JVM > use the names given to these loaders to provide more helpful and accurate > output. > Okay, I understand. > >> >> Since your patch changes how some of my jcmd subcommands print things, >> they now look not so good. 
For example VM.classloaders: >> >> Before we had: >> >> 938: >> +-- >> | >> +-- jdk.internal.reflect.DelegatingClassLoader {0x000000071523ae00} >> | >> | Classes: >> jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: >> java/lang/management/ManagementPermission:: >> (Ljava/lang/String;)V) >> | (1 class) >> | >> +-- "Kevin", ClassLoaderHierarchyTest$TestClassLoader >> {0x000000071529f620} >> | >> | Classes: TestClass2 >> | (1 class) >> | >> | Anonymous Classes: >> TestClass2$$Lambda$46/0x000000080011cc40 >> | (1 anonymous class) >> >> >> With your patch: >> >> VM.classloaders >> 13580: >> +-- "'bootstrap'", >> | >> | >> +-- "jdk.internal.reflect.DelegatingClassLoader @41cbc171", >> jdk.internal.reflect.DelegatingClassLoader {0x000000071523a638} >> | >> | Classes: >> jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: >> java/lang/management/ManagementPermission:: >> (Ljava/lang/String;)V) >> | (1 class) >> | >> +-- "'Kevin' @9904154", ClassLoaderHierarchyTest$TestClassLoader >> {0x000000071529f1e8} >> | >> | Classes: TestClass2 >> | (1 class) >> | >> | Anonymous Classes: >> TestClass2$$Lambda$46/0x000000080011cc40 >> | (1 anonymous class) >> >> >> Of course this could be improved a bit - if I were to use your new >> function and tweak it a bit, it would be: >> >> VM.classloaders >> 13580: >> +-- 'bootstrap', >> | >> | >> +-- jdk.internal.reflect.DelegatingClassLoader @41cbc171 >> {0x000000071523a638} >> | >> | Classes: >> jdk.internal.reflect.GeneratedConstructorAccessor1 (invokes: >> java/lang/management/ManagementPermission:: >> (Ljava/lang/String;)V) >> | (1 class) >> | >> +-- 'Kevin' @9904154 {0x000000071529f1e8} >> | >> | Classes: TestClass2 >> | (1 class) >> | >> | Anonymous Classes: >> TestClass2$$Lambda$46/0x000000080011cc40 >> | (1 anonymous class) >> >> but again, I still loose the class name for loaders which have names, >> or print the class name twice for loaders without name. >> >> ---- >> >> So, could we not keep the old ClassLoaderData::class_loader_name() >> unchanged, and add the new compound name with a (clearly named) new >> function, e.g. "ClassLoaderData::class_loader_name_and_id()" or >> "ClassLoaderData::class_loader_compund_name()" or similar? >> >> I appreciate your wish to unify all naming, but I find this overly >> restrictive, see above examples. >> >> Also, I really like methods doing what they are named to do - I >> dislike too-smart methods which force me to second-guess their >> function: ClassLoaderData::class_loader_name() suggests it does just >> that, returning "ClassLoader.name()". Now it returns the new compound >> format. This is not appearant from the naming. > > > Sometimes one size doesn't fit all! Sure. It is difficult to unify naming without bothering anyone :) > I will be sending out a new webrev > shortly that introduces the fields ClassLoaderData::_name and > ClassLoaderData::_name_and_id with corresponding methods to obtain each. I > have backed out my change specifically to > memory/metaspace/printCLDMetaspaceInfoClosure.cpp & test > Metaspace/PrintMetaspaceDcmd.java. > Had a quick peek at your webrev, thank you for keeping both _name and _name_and_id. Also, very nice commenting in classloaderData.cpp. Will look again tomorrow with a fresh head. Thanks, Thomas > Thanks, > Lois > > >> >> Thanks and Kind Regards, >> >> Thomas >> >> >> >> >> >> On Thu, Jun 14, 2018 at 12:58 AM, Lois Foltan >> wrote: >>> >>> Please review this change to standardize on how to obtain a class >>> loader's >>> name within the VM. 
SystemDictionary::loader_name() methods have been >>> removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error >>> messages, logging, jcmd, JFR) this change also adopts a new format to >>> append >>> to a class loader's name its identityHashCode and if the loader has not >>> been >>> explicitly named it's qualified class name is used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >>> >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> > From lois.foltan at oracle.com Thu Jun 14 20:45:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 14 Jun 2018 16:45:30 -0400 Subject: RFR: 8204955: Extend ClassCastException message In-Reply-To: References: Message-ID: On 6/14/2018 3:41 AM, Ren? Sch?nemann wrote: > Hi, > > can I please get a review for the following change: > > Bug: https://bugs.openjdk.java.net/browse/JDK-8204955 > Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204955 > > This change adds additional details to the ClassCastException message > when the class cast failed due to non-matching class loaders. > > Example: > > "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyClass. Loaded by > OtherLoader, but needed class loader MyLoader." > > It is now also checked whether the target class is an extended > interface or super class of the caster class and casting failed due to > non-matching class loaders. > > Example: > > "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyInterface. Found > matching interface OtherLoader/m/MyInterface loaded by OtherLoader but > needed class loader MyLoader." > > I have added the test > "jdk/test/hotspot/jtreg/runtime/exceptionMsgs/ClassCastException/ClassCastExceptionTest.java" > for the new exception message. > > > Thank you, > Rene Hi Rene, Thank you for your work to improve ClassCastException.? I do have some concerns about this change: - In runtime/sharedRuntime.cpp - please do not add another method to obtain the class loader's name.? Work is underway to consolidate how within the VM a class loader's name is obtained.? See the work for JDK-8202605, currently out for review and can be followed at http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-June/033063.html - Work is also underway to standardize the output of certain error messages like IllegalAccessError, ClassCastException, etc.? Please see the current proposal at http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-June/028425.html. The error changes you have made do not follow this proposed format. - I'm not sure the added verbiage to ClassCastException is helpful. I actually find "Loaded by OtherLoader, but needed class loader MyLoader." quite confusing.? How does this help diagnose the issue that caused the ClassCastException? - It is not clear what tests you have run for this change.? This information should be included in any RFR request. 
Thank you, Lois From mandy.chung at oracle.com Thu Jun 14 21:00:12 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 14 Jun 2018 14:00:12 -0700 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Message-ID: <0942e1e2-4447-6b22-4b06-0ed7022eba5f@oracle.com> On 6/14/18 12:56 PM, Lois Foltan wrote: > Please review this updated webrev that address review comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ This looks good to me. ClassLoaderData::loader_name_and_id is better. I like it. As to jcmd output, I agree that loader_name_and_id may not be applicable here. ClassLoaderHierarchyTest.java test added @ which I suspect it's not intentional?? Mandy From lois.foltan at oracle.com Thu Jun 14 21:28:59 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 14 Jun 2018 17:28:59 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <0942e1e2-4447-6b22-4b06-0ed7022eba5f@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <0942e1e2-4447-6b22-4b06-0ed7022eba5f@oracle.com> Message-ID: On 6/14/2018 5:00 PM, mandy chung wrote: > > > On 6/14/18 12:56 PM, Lois Foltan wrote: >> Please review this updated webrev that address review comments received. >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > This looks good to me.? ClassLoaderData::loader_name_and_id is better. > I like it. Hi Mandy, Thanks for the review! > > As to jcmd output, I agree that loader_name_and_id may not be > applicable here.? ClassLoaderHierarchyTest.java test added @ which > I suspect it's not intentional?? It was intentional, I did change classfile/classLoaderHierarchyDCmd.cpp to output the class loader's name_and_id, thus causing differing results for the test.? Like jcmd do you think name_and_id is not applicable here as well? Lois > > Mandy From mandy.chung at oracle.com Thu Jun 14 22:23:28 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 14 Jun 2018 15:23:28 -0700 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <0942e1e2-4447-6b22-4b06-0ed7022eba5f@oracle.com> Message-ID: On 6/14/18 2:28 PM, Lois Foltan wrote: > It was intentional, I did change > classfile/classLoaderHierarchyDCmd.cpp to output the class loader's > name_and_id, thus causing differing results for the test. Like jcmd > do you think name_and_id is not applicable here as well? I wasn't aware that classfile/classLoaderHierarchyDCmd.cpp is for jcmd to use (the file does say so but I was assuming that dcmd source files would be in other directory). The test comment does not indicate the oop pointer address is printed but I notice that the jcmd output Thomas sent out earlier. I confirm from the implementation: 162 // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" 163 st->print("+%.*s", BranchTracker::twig_len, "----------"); 164 st->print(" %s,", _cld->loader_name_and_id()); 165 if (!_cld->is_the_null_class_loader_data()) { 166 st->print(" %s", loader_klass != NULL ? 
loader_klass->external_name() : "??"); 167 st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); 168 } Since you are in that file, it'd help to update line 162 to include the address. I will leave the question to the serviceability team on showing loader oop address vs the identity hash depending how it's intended to be used for troubleshooting. Maybe keep it as is and file an issue to resolve for 11. Mandy From rene.schuenemann at gmail.com Fri Jun 15 05:31:42 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Fri, 15 Jun 2018 07:31:42 +0200 Subject: RFR: 8204955: Extend ClassCastException message In-Reply-To: References: Message-ID: Hi Lois, I agree the message is confusing. I saw your proposal for the new standardized output when I put this change for review. I think it is best to wait for the outcome of that and revise this change upon the new output format, or even discard this change if it is not needed anymore. Best Regards, Rene On Thu, Jun 14, 2018 at 10:45 PM, Lois Foltan > wrote: > >> On 6/14/2018 3:41 AM, Ren? Sch?nemann wrote: >> >> Hi, >>> >>> can I please get a review for the following change: >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204955 >>> Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204955 >>> >>> This change adds additional details to the ClassCastException message >>> when the class cast failed due to non-matching class loaders. >>> >>> Example: >>> >>> "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyClass. Loaded by >>> OtherLoader, but needed class loader MyLoader." >>> >>> It is now also checked whether the target class is an extended >>> interface or super class of the caster class and casting failed due to >>> non-matching class loaders. >>> >>> Example: >>> >>> "MyLoader/m/MyClass cannot be cast to OtherLoader/m/MyInterface. Found >>> matching interface OtherLoader/m/MyInterface loaded by OtherLoader but >>> needed class loader MyLoader." >>> >>> I have added the test >>> "jdk/test/hotspot/jtreg/runtime/exceptionMsgs/ClassCastExcep >>> tion/ClassCastExceptionTest.java" >>> for the new exception message. >>> >>> >>> Thank you, >>> Rene >>> >> Hi Rene, >> >> Thank you for your work to improve ClassCastException. I do have some >> concerns about this change: >> >> - In runtime/sharedRuntime.cpp - please do not add another method to >> obtain the class loader's name. Work is underway to consolidate how within >> the VM a class loader's name is obtained. See the work for JDK-8202605, >> currently out for review and can be followed at >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-June/033063.html >> >> - Work is also underway to standardize the output of certain error >> messages like IllegalAccessError, ClassCastException, etc. Please see the >> current proposal at http://mail.openjdk.java.net/p >> ipermail/hotspot-runtime-dev/2018-June/028425.html. The error changes >> you have made do not follow this proposed format. >> >> - I'm not sure the added verbiage to ClassCastException is helpful. I >> actually find "Loaded by OtherLoader, but needed class loader MyLoader." >> quite confusing. How does this help diagnose the issue that caused the >> ClassCastException? >> >> - It is not clear what tests you have run for this change. This >> information should be included in any RFR request. 
>> >> Thank you, >> Lois >> >> >> > From thomas.stuefe at gmail.com Fri Jun 15 07:06:29 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 15 Jun 2018 09:06:29 +0200 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Message-ID: Hi Lois, ---- We have now: Symbol* ClassLoaderData::name() which returns ClassLoader.name and const char* ClassLoaderData::loader_name() which returns either ClassLoader.name or, if that is null, the class name. 1) if we keep it that way, we should at least rename loader_name() to something like loader_name_or_class_name() to lessen the surprise. 2) But maybe these two functions should have the same behaviour? Return name or null if not set, not the class name? I see that nobody yet uses loader_name(), so you are free to define it as you see fit. 3) but if (2), maybe alternativly just get rid of loader_name() altogether, as just calling as_C_string() on a symbol is not worth a utility function? --- For VM.systemdictionary, the texts seem to be a bit off: 29167: Dictionary for loader data: 0x00007f7550cb8660 for instance a 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} "for instance a" ? Dictionary for loader data: 0x00007f75503b3a50 for instance a 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} Dictionary for loader data: 0x00007f75503a4e30 for instance a 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} should that not be "app" or "platform", respectively? ... but I just see it was the same way before and not touched by your change. Maybe here, your new compound name would make sense? ---- http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html Good comments. suggested change to comment: 129 // Obtain the class loader's name and identity hash. If the class loader's 130 // name was not explicitly set during construction, the class loader's name and id 131 // will be set to the qualified class name of the class loader along with its 132 // identity hash. rather: 129 // Obtain the class loader's name and identity hash. If the class loader's 130 // name was not explicitly set during construction, the class loader's ** _name_and_id field ** 131 // will be set to the qualified class name of the class loader along with its 132 // identity hash. ---- 133 // If for some reason the ClassLoader's constructor has not been run, instead of I am curious, how can this happen? Bad bytecode instrumentation? Should we also attempt to work in the identity hashcode in that case to be consistent with the java side? Or maybe name it something like "classname "? Or is this too exotic a case to care? ---- In various places I see you using: 937 if (_class_loader_klass == NULL) { // bootstrap case just to make sure, this is the same as CLD::is_the_null_class_loader_data(), yes? So, one could use one and assert the other? ---- http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). 
Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - this is something I would rather see in the printing code. + // Obtain the class loader's _name, works during unloading. + const char* loader_name() const; + Symbol* name() const { return _name; } See above my comments to loader_name(). At the very least comment should be updated describing that this function returns name or class name or "bootstrap". ---- http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html Hm, unfortunately, this does not look so good. I would prefer to keep the old version, see here my proposal, updated to use your new CLD::name() function and to remove the offending "<>" around "bootstrap". @@ -157,13 +157,18 @@ // Retrieve information. const Klass* const loader_klass = _cld->class_loader_klass(); + const Symbol* const loader_name = _cld->name(); branchtracker.print(st); // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" st->print("+%.*s", BranchTracker::twig_len, "----------"); - st->print(" %s,", _cld->loader_name_and_id()); - if (!_cld->is_the_null_class_loader_data()) { + if (_cld->is_the_null_class_loader_data()) { + st->print(" bootstrap"); + } else { + if (loader_name != NULL) { + st->print(" \"%s\",", loader_name->as_C_string()); + } st->print(" %s", loader_klass != NULL ? loader_klass->external_name() : "??"); st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); } This also depends on what you decide happens with CLD::loader_name(). If that one were to return "loader name or null if not set, as ra-allocated const char*", it could be used here. ---- http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html In VM.classloader_stats we see the effect of the new naming: x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 6144 4184 'MyInMemoryClassLoader' @1477089c 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 6144 4184 'MyInMemoryClassLoader' @a87f8ec 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 6144 4184 'MyInMemoryClassLoader' @48c76607 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 6144 4184 'MyInMemoryClassLoader' @1224144a 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 6144 4184 'MyInMemoryClassLoader' @75437611 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 6144 4184 'MyInMemoryClassLoader' @25084a1e 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 6144 4184 'MyInMemoryClassLoader' @42a48628 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 7004160 6979376 'app' 96 311296 202600 + unsafe anonymous classes 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 8380416 8301048 'bootstrap' 92 263168 169808 + unsafe anonymous classes 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 Since we hide now the class name of the loader, if everyone 
names their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - we loose information. I'm afraid this will be an issue if people will start naming their class loaders more and more. It is not unimaginable that completely different frameworks name their loaders the same. This "name or if not then class name" scheme will also complicate parsing a lot for people who parse the output of these commands. I would strongly prefer to see both - name and class type. ---- Hmm. At this point I noticed that I still had general reservations about the new compound naming scheme - see my remarks above. So I guess I stop here to wait for your response before continuing the code review. Thanks & Kind Regards, Thomas On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan wrote: > Please review this updated webrev that address review comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > Thanks, > Lois > > > On 6/13/2018 6:58 PM, Lois Foltan wrote: >> >> Please review this change to standardize on how to obtain a class loader's >> name within the VM. SystemDictionary::loader_name() methods have been >> removed in favor of ClassLoaderData::loader_name(). >> >> Since the loader name is largely used in the VM for display purposes >> (error messages, logging, jcmd, JFR) this change also adopts a new format to >> append to a class loader's name its identityHashCode and if the loader has >> not been explicitly named it's qualified class name is used instead. >> >> 391 /** >> 392 * If the defining loader has a name explicitly set then >> 393 * '' @ >> 394 * If the defining loader has no name then >> 395 * @ >> 396 * If it's built-in loader then omit `@` as there is only one >> instance. >> 397 */ >> >> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> hs-tier(3-5), jdk-tier(3) in progress >> >> Thanks, >> Lois >> > From volker.simonis at gmail.com Fri Jun 15 07:43:14 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 15 Jun 2018 09:43:14 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: <14702170-8CA5-4033-B3EE-CE12C906BDE7@oracle.com> References: <14702170-8CA5-4033-B3EE-CE12C906BDE7@oracle.com> Message-ID: Hi Jiangli, thanks for looking at the change. 'CDS_only' is only required for static fields because the VMStructEntry for them contains a reference to the actual static field which isn't present if we disable CDS, because the corresponding compilations units (i.e. filemap.cpp) won't be part of libjvm.so. For non-static fields, the VMStructEntry structure only contains the offset of the corresponding field with regards to an object of that type which is harmless. Regards, Volker On Thu, Jun 14, 2018 at 6:42 PM, Jiangli Zhou wrote: > Hi Volker, > > The changes look good to me overall. I?ll refer to the JVMTI experts for > jvmtiEnv.cpp change. I have a question for the change in vmStructs.cpp. Any > reason why only _current_info needs CDS_ONLY? 
> > /********************************************/ > \ > /* FileMapInfo fields (CDS archive related) */ > \ > /********************************************/ > \ > > \ > nonstatic_field(FileMapInfo, _header, > FileMapInfo::FileMapHeader*) \ > - static_field(FileMapInfo, _current_info, > FileMapInfo*) \ > + CDS_ONLY(static_field(FileMapInfo, _current_info, > FileMapInfo*)) \ > nonstatic_field(FileMapInfo::FileMapHeader, _space[0], > FileMapInfo::FileMapHeader::space_info)\ > nonstatic_field(FileMapInfo::FileMapHeader::space_info, _addr._base, > char*) \ > nonstatic_field(FileMapInfo::FileMapHeader::space_info, _used, > size_t) \ > > \ > > Thanks, > Jiangli > > On Jun 14, 2018, at 7:26 AM, Volker Simonis > wrote: > > Hi, > > can I please have a review for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ > https://bugs.openjdk.java.net/browse/JDK-8204965 > > CDS does currently not work on AIX because of the way how we > reserve/commit memory on AIX. The problem is that we're using a > combination of shmat/mmap depending on the page size and the size of > the memory chunk to reserve. This makes it impossible to reliably > reserve the memory for the CDS archive and later on map the various > parts of the archive into these regions. > > In order to fix this we would have to completely rework the memory > reserve/commit/uncommit logic on AIX which is currently out of our > scope because of resource limitations. > > Unfortunately, I could not simply disable CDS in the configure step > because some of the shared code apparently relies on parts of the CDS > code which gets excluded from the build when CDS is disabled. So I > also fixed the offending parts in hotspot and cleaned up the configure > logic for CDS. > > Thank you and best regards, > Volker > > PS: I did run the job through the submit forest > (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results > weren't really useful because they mention build failures on linux-x64 > which I can't reproduce locally. > > From matthias.baesken at sap.com Fri Jun 15 07:47:45 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 15 Jun 2018 07:47:45 +0000 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared Message-ID: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Please review this small change that fixes the AIX build after "8203641: Refactor String Deduplication into shared" . We are getting this compilation error : /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", line 107.38: 1540-0063 (S) The text "1" is unexpected. Looks like the name of the second template parameter (STAT) template static void initialize_impl(); is clashing with defines from the AIX system headers (where I find #define STAT 1 ) . Renaming STAT to something else fixes the build on AIX . Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ Bug : https://bugs.openjdk.java.net/browse/JDK-8205091 Thanks, Matthias From volker.simonis at gmail.com Fri Jun 15 08:05:01 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 15 Jun 2018 10:05:01 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: On Thu, Jun 14, 2018 at 9:04 PM, Thomas St?fe wrote: > Hi Volker, > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/make/autoconf/hotspot.m4.udiff.html > > Seems like a roundabout way to have a platform specific default value. 
> > Why not determine a default value beforehand: > > if test "x$OPENJDK_TARGET_OS" = "xaix"; then > ENABLE_CDS_DEFAULT="false" > else > ENABLE_CDS_DEFAULT=true" > fi > > AC_ARG_ENABLE([cds], [AS_HELP_STRING([--enable-cds@<:@=yes/no/auto@:>@], > [enable class data sharing feature in non-minimal VM. Default is > ${ENABLE_CDS_DEFAULT}.])]) > > and so on? > I've just followed the pattern used for '--enable-aot' right above the code I changed. Moreover, I don't think that we would save any code because we would still have to check for AIX in the '--enable-cds=yes' case. Also, the new reporting added later in the file (see "AC_MSG_CHECKING([if cds should be enabled])" seems easier to me without the extra default value. So if you don't mind I'd prefer to leave it as is. > See also what we did for "8202325: [aix] disable warnings-as-errors by default". > > -- > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/src/hotspot/share/classfile/javaClasses.cpp.udiff.html > > Here, do we really need to exclude this from compiling, > DumpSharedSpaces = false is not enough? > Yes, we need it because the excluded code references methods (e.g. 'StringTable::create_archived_string()') which are not compiled into libjvm.so if we disable CDS. > > Best Regards, Thomas > > On Thu, Jun 14, 2018 at 4:26 PM, Volker Simonis > wrote: >> Hi, >> >> can I please have a review for the following fix: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >> https://bugs.openjdk.java.net/browse/JDK-8204965 >> >> CDS does currently not work on AIX because of the way how we >> reserve/commit memory on AIX. The problem is that we're using a >> combination of shmat/mmap depending on the page size and the size of >> the memory chunk to reserve. This makes it impossible to reliably >> reserve the memory for the CDS archive and later on map the various >> parts of the archive into these regions. >> >> In order to fix this we would have to completely rework the memory >> reserve/commit/uncommit logic on AIX which is currently out of our >> scope because of resource limitations. >> >> Unfortunately, I could not simply disable CDS in the configure step >> because some of the shared code apparently relies on parts of the CDS >> code which gets excluded from the build when CDS is disabled. So I >> also fixed the offending parts in hotspot and cleaned up the configure >> logic for CDS. >> >> Thank you and best regards, >> Volker >> >> PS: I did run the job through the submit forest >> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >> weren't really useful because they mention build failures on linux-x64 >> which I can't reproduce locally. From volker.simonis at gmail.com Fri Jun 15 09:01:08 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 15 Jun 2018 11:01:08 +0200 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: Hi Mattias, the change looks good. Could you please also update the comment in the line above which still reads "STAT". Also, maybe "STATI" would be a better name choice (longer is better :) because the probability of a clash is lower (and it would nicely align with QUEUE in the comments :) But I leave that up to you... 
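To make the failure mode a bit more tangible, here is a minimal sketch of the
clash (the exact parameter list from stringDedup.hpp is not reproduced here;
the surrounding names are made up for illustration, only the name of the
second template parameter matters):

// Some AIX system headers effectively do:
#define STAT 1

// After preprocessing, a template parameter named STAT is no longer an
// identifier: the declaration turns into "template <typename Q, typename 1>"
// and xlC reports "1540-0063 (S) The text "1" is unexpected."
template <typename Q, typename STAT>
static void initialize_impl();

// Renaming the parameter (e.g. to STAT_IMPL) side-steps the macro expansion:
template <typename Q, typename STAT_IMPL>
static void initialize_impl();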
Regards, Volker On Fri, Jun 15, 2018 at 9:47 AM, Baesken, Matthias wrote: > Please review this small change that fixes the AIX build after "8203641: Refactor String Deduplication into shared" . > > We are getting this compilation error : > /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", line 107.38: 1540-0063 (S) The text "1" is unexpected. > > > Looks like the name of the second template parameter (STAT) > > template > static void initialize_impl(); > > is clashing with defines from the AIX system headers (where I find #define STAT 1 ) . > Renaming STAT to something else fixes the build on AIX . > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8205091 > > > Thanks, Matthias From sgehwolf at redhat.com Fri Jun 15 09:01:38 2018 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 15 Jun 2018 11:01:38 +0200 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> Message-ID: Hi David, On Tue, 2018-06-05 at 19:46 +1000, David Holmes wrote: > Looks good. > > I'll push this with the nestmate changes later in the week. Any update on this? Thanks, Severin > Thanks, > David > > On 5/06/2018 7:40 PM, Severin Gehwolf wrote: > > Hi David, > > > > Thanks for the review! > > > > On Tue, 2018-06-05 at 14:44 +1000, David Holmes wrote: > > > Hi Severin, > > > > > > On 5/06/2018 1:26 AM, Severin Gehwolf wrote: > > > > Hi, > > > > > > > > Could I please get a review of this change adding support for JEP-181 - > > > > a.k.a Nestmates - to Zero. This patch depends on David Holmes' > > > > Nestmates implementation via JDK-8010319. Thanks to David Holmes and > > > > Chris Phillips for their initial reviews prior to this RFR. > > > > > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 > > > > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ > > > > > > src/hotspot/cpu/zero/methodHandles_zero.cpp > > > > > > The change here seems to be an existing bug unrelated to nestmate > > > changes. > > > > Agreed. > > > > > IT also begs the question as to what happens in the same > > > circumstance with a removed static or "special" method? (I thought I had > > > a test for that in the nestmates changes ... will need to double-check > > > and add it if missing!). > > > > It might bomb in the same way (NULL dereference). I'm currently looking > > at some other potential issues in this area... > > > > > src/hotspot/share/interpreter/bytecodeInterpreter.cpp > > > > > > Interpreter changes seem fine - mirroring what is done elsewhere. You > > > can delete these incorrect comments: > > > > > > 2576 // This code isn't produced by javac, but could be produced by > > > 2577 // another compliant java compiler. > > > > > > That code path is taken in more circumstances than the author of that > > > comment realized. :) > > > > Done. 
> > > > > > Testing: > > > > > > > > Zero on Linux-x86_64 with the following test set: > > > > > > > > test/jdk/java/lang/invoke/AccessControlTest.java > > > > test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java > > > > test/jdk/java/lang/invoke/PrivateInterfaceCall.java > > > > test/jdk/java/lang/invoke/SpecialInterfaceCall.java > > > > test/jdk/java/lang/reflect/Nestmates > > > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java > > > > test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java > > > > test/hotspot/jtreg/runtime/Nestmates > > > > > > > > I cannot run this through the submit repo since the main Nestmates > > > > patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- > > > > images build on x86_64. Thoughts? > > > > FWIW, bootcycle-images build passed on linux x86_64 Zero. > > > > > I can bundle this in with the nestmate changes when I push them later > > > this week. Just send me a pointer to the finalized changeset once its > > > finalized. I'll run it all through a final step of testing equivalent > > > (actually more than) the submit repo. > > > > OK, thanks! > > > > Latest webrev: > > http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.03/ > > > > Thanks, > > Severin > > > > > Thanks, > > > David > > > > > > > Thanks, > > > > Severin > > > > From rene.schuenemann at gmail.com Fri Jun 15 09:08:06 2018 From: rene.schuenemann at gmail.com (=?UTF-8?B?UmVuw6kgU2Now7xuZW1hbm4=?=) Date: Fri, 15 Jun 2018 11:08:06 +0200 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: Hi Matthias, the name SI seems also quite "common" and may result in other naming clashes in the future. Maybe something more readable like STAT_IMPL? Please also change the name in the comment: // STAT: String Dedup Stat implementation Regards, Rene On Fri, Jun 15, 2018 at 9:47 AM, Baesken, Matthias wrote: > Please review this small change that fixes the AIX build after "8203641: Refactor String Deduplication into shared" . > > We are getting this compilation error : > /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", line 107.38: 1540-0063 (S) The text "1" is unexpected. > > > Looks like the name of the second template parameter (STAT) > > template > static void initialize_impl(); > > is clashing with defines from the AIX system headers (where I find #define STAT 1 ) . > Renaming STAT to something else fixes the build on AIX . > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8205091 > > > Thanks, Matthias From glaubitz at physik.fu-berlin.de Fri Jun 15 09:08:31 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 15 Jun 2018 11:08:31 +0200 Subject: Normal server build broken - runtime/threadHeapSampler.hpp: No such file or directory Message-ID: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> Hi! As of today, I am running into the build failure below which affects both the server builds as well as zero. I haven't done any bisecting yet. Anyone seen this? 
Adrian === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractCompiler.o:\n" * For target hotspot_variant-server_libjvm_objs_abstractCompiler.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.cpp:25: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o:\n" * For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.cpp:31: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o:\n" * For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciMethod.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/cpu/x86/abstractInterpreter_x86.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_access.o:\n" * For target hotspot_variant-server_libjvm_objs_access.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBackend.o:\n" * For target hotspot_variant-server_libjvm_objs_accessBackend.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o:\n" * For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/javaClasses.inline.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/accessBarrierSupport.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessFlags.o:\n" * For target hotspot_variant-server_libjvm_objs_accessFlags.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/accessFlags.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_format.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_format.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o:\n" * For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, from ad_x86.hpp:33, if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o:\n" * For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/cms/adaptiveFreeList.cpp:28: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o:\n" * For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_addnode.o:\n" * For target hotspot_variant-server_libjvm_objs_addnode.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.cpp:27: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o:\n" * For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/workgroup.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/space.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/spaceDecorator.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/asPSYoungGen.hpp:34, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.cpp:26: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTable.o:\n" * For target hotspot_variant-server_libjvm_objs_ageTable.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.inline.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.cpp:27: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTableTracer.o:\n" * For target hotspot_variant-server_libjvm_objs_ageTableTracer.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTableTracer.cpp:28: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocTracer.o:\n" * For target hotspot_variant-server_libjvm_objs_allocTracer.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/allocTracer.cpp:27: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocation.o:\n" * For target hotspot_variant-server_libjvm_objs_allocation.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/allocation.cpp:30: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_altHashing.o:\n" * For target hotspot_variant-server_libjvm_objs_altHashing.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/altHashing.cpp:30: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_annotations.o:\n" * For target hotspot_variant-server_libjvm_objs_annotations.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/typeArrayOop.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/annotations.cpp:34: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. 
if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o:\n" * For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.inline.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCodeHeap.cpp:28: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o:\n" * For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCompiledMethod.cpp:34: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotLoader.o:\n" * For target hotspot_variant-server_libjvm_objs_aotLoader.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jvmci/jvmciRuntime.hpp:27, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotLoader.cpp:29: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_arena.o:\n" * For target hotspot_variant-server_libjvm_objs_arena.o: (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log || true) | /usr/bin/head -n 12 In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/arena.cpp:29: /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory #include "runtime/threadHeapSampler.hpp" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs.\n" * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From matthias.baesken at sap.com Fri Jun 15 09:09:43 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 15 Jun 2018 09:09:43 +0000 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: <53a0c1b692a64c22aa65db4db7576f51@sap.com> Hi , thanks for looking into it. I think I will use STAT_IMPL . Best regards, Matthias > -----Original Message----- > From: Ren? Sch?nemann [mailto:rene.schuenemann at gmail.com] > Sent: Freitag, 15. 
Juni 2018 11:08 > To: Baesken, Matthias > Cc: build-dev at openjdk.java.net; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor > String Deduplication into shared > > Hi Matthias, > > the name SI seems also quite "common" and may result in other naming > clashes in the future. > Maybe something more readable like STAT_IMPL? > > Please also change the name in the comment: > > // STAT: String Dedup Stat implementation > > Regards, > Rene > > On Fri, Jun 15, 2018 at 9:47 AM, Baesken, Matthias > wrote: > > Please review this small change that fixes the AIX build after "8203641: > Refactor String Deduplication into shared" . > > > > We are getting this compilation error : > > > /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stri > ngDedup.hpp", line 107.38: 1540-0063 (S) The text "1" is unexpected. > > > > > > Looks like the name of the second template parameter (STAT) > > > > template > > static void initialize_impl(); > > > > is clashing with defines from the AIX system headers (where I find #define > STAT 1 ) . > > Renaming STAT to something else fixes the build on AIX . > > > > Webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ > > > > Bug : > > > > https://bugs.openjdk.java.net/browse/JDK-8205091 > > > > > > Thanks, Matthias From volker.simonis at gmail.com Fri Jun 15 09:18:56 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 15 Jun 2018 11:18:56 +0200 Subject: Normal server build broken - runtime/threadHeapSampler.hpp: No such file or directory In-Reply-To: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> References: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> Message-ID: Yes, I see this as well. It is caused by "8203394: Implementation of JEP 331: Low-Overhead Heap Profiling" [1] JC, Serguei can you please have a look? Regards, Volker [1] http://hg.openjdk.java.net/jdk/jdk/rev/e2a7f431f65c On Fri, Jun 15, 2018 at 11:08 AM, John Paul Adrian Glaubitz wrote: > Hi! > > As of today, I am running into the build failure below which affects both the > server builds as well as zero. I haven't done any bisecting yet. > > Anyone seen this? 
> > Adrian > > === Output from failing command(s) repeated here === > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractCompiler.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractCompiler.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.cpp:25: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.cpp:31: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciMethod.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/cpu/x86/abstractInterpreter_x86.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_access.o:\n" > * For target hotspot_variant-server_libjvm_objs_access.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBackend.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessBackend.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/javaClasses.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/accessBarrierSupport.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessFlags.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessFlags.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/accessFlags.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_format.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_format.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o:\n" > * For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/cms/adaptiveFreeList.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o:\n" > * For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_addnode.o:\n" > * For target hotspot_variant-server_libjvm_objs_addnode.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o:\n" > * For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/workgroup.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/space.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/spaceDecorator.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/asPSYoungGen.hpp:34, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTable.o:\n" > * For target hotspot_variant-server_libjvm_objs_ageTable.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTableTracer.o:\n" > * For target hotspot_variant-server_libjvm_objs_ageTableTracer.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, > from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTableTracer.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocTracer.o:\n" > * For target hotspot_variant-server_libjvm_objs_allocTracer.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, > from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/allocTracer.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocation.o:\n" > * For target hotspot_variant-server_libjvm_objs_allocation.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/allocation.cpp:30: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_altHashing.o:\n" > * For target hotspot_variant-server_libjvm_objs_altHashing.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/altHashing.cpp:30: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_annotations.o:\n" > * For target hotspot_variant-server_libjvm_objs_annotations.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/typeArrayOop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/annotations.cpp:34: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCodeHeap.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCompiledMethod.cpp:34: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotLoader.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotLoader.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jvmci/jvmciRuntime.hpp:27, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotLoader.cpp:29: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_arena.o:\n" > * For target hotspot_variant-server_libjvm_objs_arena.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/arena.cpp:29: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs.\n" > > * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. > /usr/bin/printf "=== End of repeated output ===\n" > === End of repeated output === > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From serguei.spitsyn at oracle.com Fri Jun 15 09:20:57 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 Jun 2018 02:20:57 -0700 Subject: Normal server build broken - runtime/threadHeapSampler.hpp: No such file or directory In-Reply-To: References: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> Message-ID: <9d4733c5-f976-50a0-13fa-c03fd89c26f1@oracle.com> I've posted an RFR to fix this. Sorry, for the trouble. Thanks, Serguei On 6/15/18 02:18, Volker Simonis wrote: > Yes, I see this as well. 
It is caused by "8203394: Implementation of > JEP 331: Low-Overhead Heap Profiling" [1] > > JC, Serguei can you please have a look? > > Regards, > Volker > > > [1] http://hg.openjdk.java.net/jdk/jdk/rev/e2a7f431f65c > > On Fri, Jun 15, 2018 at 11:08 AM, John Paul Adrian Glaubitz > wrote: >> Hi! >> >> As of today, I am running into the build failure below which affects both the >> server builds as well as zero. I haven't done any bisecting yet. >> >> Anyone seen this? >> >> Adrian >> >> === Output from failing command(s) repeated here === >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractCompiler.o:\n" >> * For target hotspot_variant-server_libjvm_objs_abstractCompiler.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.cpp:25: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o:\n" >> * For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.cpp:31: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o:\n" >> * For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciMethod.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/cpu/x86/abstractInterpreter_x86.cpp:26: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_access.o:\n" >> * For target hotspot_variant-server_libjvm_objs_access.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.cpp:26: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBackend.o:\n" >> * For target hotspot_variant-server_libjvm_objs_accessBackend.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.inline.hpp:30, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.cpp:26: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o:\n" >> * For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/javaClasses.inline.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/accessBarrierSupport.cpp:26: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessFlags.o:\n" >> * For target hotspot_variant-server_libjvm_objs_accessFlags.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/accessFlags.cpp:26: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86.o:\n" >> * For target hotspot_variant-server_libjvm_objs_ad_x86.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, >> from ad_x86.hpp:33, >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> ... 
(rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotLoader.o:\n" >> * For target hotspot_variant-server_libjvm_objs_aotLoader.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jvmci/jvmciRuntime.hpp:27, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotLoader.cpp:29: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_arena.o:\n" >> * For target hotspot_variant-server_libjvm_objs_arena.o: >> (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log || true) | /usr/bin/head -n 12 >> In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, >> from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/arena.cpp:29: >> /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory >> #include "runtime/threadHeapSampler.hpp" >> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> compilation terminated. >> if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi >> /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs.\n" >> >> * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. >> /usr/bin/printf "=== End of repeated output ===\n" >> === End of repeated output === >> >> -- >> .''`. John Paul Adrian Glaubitz >> : :' : Debian Developer - glaubitz at debian.org >> `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de >> `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.helin at oracle.com Fri Jun 15 09:21:11 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 15 Jun 2018 11:21:11 +0200 Subject: Normal server build broken - runtime/threadHeapSampler.hpp: No such file or directory In-Reply-To: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> References: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> Message-ID: <86137c16-7d44-ce23-da04-ceb6161224cd@oracle.com> On 06/15/2018 11:08 AM, John Paul Adrian Glaubitz wrote: > Hi! 
> > As of today, I am running into the build failure below which affects both the > server builds as well as zero. I haven't done any bisecting yet. > > Anyone seen this? Yes, we are aware, a few files are missing from http://hg.openjdk.java.net/jdk/jdk/rev/e2a7f431f65c. The patch to fix this is being prepared right now. Thanks, Erik > Adrian > > === Output from failing command(s) repeated here === > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractCompiler.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractCompiler.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/compiler/abstractCompiler.cpp:25: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractCompiler.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractInterpreter.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.cpp:31: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o:\n" > * For target hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciMethod.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/cpu/x86/abstractInterpreter_x86.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_abstractInterpreter_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_access.o:\n" > * For target hotspot_variant-server_libjvm_objs_access.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_access.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBackend.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessBackend.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/accessBackend.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBackend.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessBarrierSupport.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/javaClasses.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/accessBarrierSupport.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessBarrierSupport.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_accessFlags.o:\n" > * For target hotspot_variant-server_libjvm_objs_accessFlags.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/accessFlags.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_accessFlags.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_clone.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_clone.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_expand.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_expand.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_format.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_format.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_format.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_gen.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_gen.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_misc.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_misc.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_peephole.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_peephole.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o:\n" > * For target hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/connode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/callnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/machnode.hpp:28, > from ad_x86.hpp:33, > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ad_x86_pipeline.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o:\n" > * For target hotspot_variant-server_libjvm_objs_adaptiveFreeList.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/cms/adaptiveFreeList.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveFreeList.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o:\n" > * For target hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/adaptiveSizePolicy.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adaptiveSizePolicy.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_addnode.o:\n" > * For target hotspot_variant-server_libjvm_objs_addnode.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/compilerInterface.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/compile.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/node.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/opto/addnode.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_addnode.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o:\n" > * For target hotspot_variant-server_libjvm_objs_adjoiningGenerations.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/workgroup.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/space.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/spaceDecorator.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/asPSYoungGen.hpp:34, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/parallel/adjoiningGenerations.cpp:26: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_adjoiningGenerations.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTable.o:\n" > * For target hotspot_variant-server_libjvm_objs_ageTable.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTable.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTable.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_ageTableTracer.o:\n" > * For target hotspot_variant-server_libjvm_objs_ageTableTracer.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, > from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/ageTableTracer.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_ageTableTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... 
(rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocTracer.o:\n" > * For target hotspot_variant-server_libjvm_objs_allocTracer.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.inline.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceId.inline.hpp:37, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrEventWriterHost.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/writers/jfrNativeEventWriter.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/recorder/service/jfrEvent.hpp:32, > from /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/gensrc/jfrfiles/jfrEventClasses.hpp:12, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jfr/jfrEvents.hpp:32, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/allocTracer.cpp:27: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocTracer.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > ... (rest of output omitted) > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_allocation.o:\n" > * For target hotspot_variant-server_libjvm_objs_allocation.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/allocation.cpp:30: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_allocation.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_altHashing.o:\n" > * For target hotspot_variant-server_libjvm_objs_altHashing.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/oop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/classfile/altHashing.cpp:30: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_altHashing.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_annotations.o:\n" > * For target hotspot_variant-server_libjvm_objs_annotations.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/compressedOops.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/access.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/typeArrayOop.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/oops/annotations.cpp:34: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_annotations.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotCodeHeap.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciConstantPoolCache.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciInstanceKlass.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/code/debugInfoRec.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciEnv.hpp:31, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/ci/ciUtilities.inline.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCodeHeap.cpp:28: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCodeHeap.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotCompiledMethod.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/utilities/events.hpp:30:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotCompiledMethod.cpp:34: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotCompiledMethod.o.log` -gt 12; then /usr/bin/echo " ... 
(rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_aotLoader.o:\n" > * For target hotspot_variant-server_libjvm_objs_aotLoader.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/abstractInterpreter.hpp:32:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/cppInterpreter.hpp:28, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/interpreter/interpreter.hpp:29, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/jvmci/jvmciRuntime.hpp:27, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/aot/aotLoader.cpp:29: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_aotLoader.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "* For target hotspot_variant-server_libjvm_objs_arena.o:\n" > * For target hotspot_variant-server_libjvm_objs_arena.o: > (/usr/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log || true) | /usr/bin/head -n 12 > In file included from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/resourceArea.hpp:29:0, > from /srv/glaubitz/openjdk/jdk/src/hotspot/share/memory/arena.cpp:29: > /srv/glaubitz/openjdk/jdk/src/hotspot/share/runtime/thread.hpp:45:10: fatal error: runtime/threadHeapSampler.hpp: No such file or directory > #include "runtime/threadHeapSampler.hpp" > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > compilation terminated. > if test `/usr/bin/wc -l < /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_arena.o.log` -gt 12; then /usr/bin/echo " ... (rest of output omitted)" ; fi > /usr/bin/printf "\n* All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs.\n" > > * All command lines available in /srv/glaubitz/openjdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. > /usr/bin/printf "=== End of repeated output ===\n" > === End of repeated output === > From david.holmes at oracle.com Fri Jun 15 09:23:02 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 15 Jun 2018 19:23:02 +1000 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: References: Message-ID: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> Ship it! Thanks, David On 15/06/2018 7:16 PM, serguei.spitsyn at oracle.com wrote: > Please, review a fix for: > https://bugs.openjdk.java.net/browse/JDK-8205096 > > Webrev: > http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ > > > Summary: > ? I forgot to "hg add" all new files when committed fixes for the > JDK-8203394 > > Thanks a lot! 
> Serguei From serguei.spitsyn at oracle.com Fri Jun 15 09:24:00 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 Jun 2018 02:24:00 -0700 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> References: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> Message-ID: <75baca12-0573-d9a7-56e6-3378a522c359@oracle.com> Thanks, David! Serguei On 6/15/18 02:23, David Holmes wrote: > Ship it! > > Thanks, > David > > On 15/06/2018 7:16 PM, serguei.spitsyn at oracle.com wrote: >> Please, review a fix for: >> https://bugs.openjdk.java.net/browse/JDK-8205096 >> >> Webrev: >> http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ >> >> >> >> Summary: >> ?? I forgot to "hg add" all new files when committed fixes for the >> JDK-8203394 >> >> Thanks a lot! >> Serguei From glaubitz at physik.fu-berlin.de Fri Jun 15 09:25:03 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 15 Jun 2018 11:25:03 +0200 Subject: Normal server build broken - runtime/threadHeapSampler.hpp: No such file or directory In-Reply-To: <86137c16-7d44-ce23-da04-ceb6161224cd@oracle.com> References: <8e2b7aa8-ca4d-6be7-9176-af0757577b1d@physik.fu-berlin.de> <86137c16-7d44-ce23-da04-ceb6161224cd@oracle.com> Message-ID: <237b4f46-aea4-1cb8-ea1a-ce686262135a@physik.fu-berlin.de> On 06/15/2018 11:21 AM, Erik Helin wrote: >> As of today, I am running into the build failure below which affects both the >> server builds as well as zero. I haven't done any bisecting yet. >> >> Anyone seen this? > > Yes, we are aware, a few files are missing from http://hg.openjdk.java.net/jdk/jdk/rev/e2a7f431f65c. The patch to fix this is being prepared right now. Ok, great! Thank you for taking care of it so quickly! Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.schatzl at oracle.com Fri Jun 15 09:26:47 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 15 Jun 2018 11:26:47 +0200 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> References: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> Message-ID: <9bcd116bbb10db217e5d28085ec5ff7b31c3bac2.camel@oracle.com> Hi, On Fri, 2018-06-15 at 19:23 +1000, David Holmes wrote: > Ship it! +1 Thomas From serguei.spitsyn at oracle.com Fri Jun 15 09:27:56 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 Jun 2018 02:27:56 -0700 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: <9bcd116bbb10db217e5d28085ec5ff7b31c3bac2.camel@oracle.com> References: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> <9bcd116bbb10db217e5d28085ec5ff7b31c3bac2.camel@oracle.com> Message-ID: <62ed54e3-69a5-6f0b-f288-a8e4b2c7285e@oracle.com> Thanks a lot, Thomas! Serguei On 6/15/18 02:26, Thomas Schatzl wrote: > Hi, > > On Fri, 2018-06-15 at 19:23 +1000, David Holmes wrote: >> Ship it! 
> +1 > > Thomas > From robbin.ehn at oracle.com Fri Jun 15 09:27:50 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 15 Jun 2018 11:27:50 +0200 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> References: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> Message-ID: On 06/15/2018 11:23 AM, David Holmes wrote: > Ship it! +1 /Robbin > > Thanks, > David > > On 15/06/2018 7:16 PM, serguei.spitsyn at oracle.com wrote: >> Please, review a fix for: >> https://bugs.openjdk.java.net/browse/JDK-8205096 >> >> Webrev: >> http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ >> >> >> >> Summary: >> ?? I forgot to "hg add" all new files when committed fixes for the JDK-8203394 >> >> Thanks a lot! >> Serguei From serguei.spitsyn at oracle.com Fri Jun 15 09:28:41 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 Jun 2018 02:28:41 -0700 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: References: <79e8f0e2-4bb6-7792-9c9d-93c985bbca23@oracle.com> Message-ID: <2535b2aa-21c2-697f-a9a5-7914f43d7c68@oracle.com> Thank you, Robbin! Serguei On 6/15/18 02:27, Robbin Ehn wrote: > On 06/15/2018 11:23 AM, David Holmes wrote: >> Ship it! > > +1 > > /Robbin > >> >> Thanks, >> David >> >> On 15/06/2018 7:16 PM, serguei.spitsyn at oracle.com wrote: >>> Please, review a fix for: >>> https://bugs.openjdk.java.net/browse/JDK-8205096 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ >>> >>> >>> >>> Summary: >>> ?? I forgot to "hg add" all new files when committed fixes for the >>> JDK-8203394 >>> >>> Thanks a lot! >>> Serguei From volker.simonis at gmail.com Fri Jun 15 09:28:42 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 15 Jun 2018 11:28:42 +0200 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: References: Message-ID: Can't comment on the content, but at least it fixes the build so thumbs up from me! Regards, Volker On Fri, Jun 15, 2018 at 11:16 AM, serguei.spitsyn at oracle.com wrote: > Please, review a fix for: > https://bugs.openjdk.java.net/browse/JDK-8205096 > > Webrev: > > http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ > > > Summary: > I forgot to "hg add" all new files when committed fixes for the > JDK-8203394 > > Thanks a lot! > Serguei From erik.helin at oracle.com Fri Jun 15 09:30:22 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 15 Jun 2018 11:30:22 +0200 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: References: Message-ID: <96266260-40c8-9777-a9d7-33c3a4c35b35@oracle.com> On 06/15/2018 11:16 AM, serguei.spitsyn at oracle.com wrote: > Please, review a fix for: > https://bugs.openjdk.java.net/browse/JDK-8205096 > > Webrev: > http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ Looks good, Reviewed. I can confirm that jdk/jdk builds with patch applied, so please push this as soon as possible :) Thanks, Erik > Summary: > ? I forgot to "hg add" all new files when committed fixes for the > JDK-8203394 > > Thanks a lot! 
> Serguei From serguei.spitsyn at oracle.com Fri Jun 15 09:34:56 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 Jun 2018 02:34:56 -0700 Subject: [URGENT] RFR: 8205096: Add missing files for 8203394 In-Reply-To: <96266260-40c8-9777-a9d7-33c3a4c35b35@oracle.com> References: <96266260-40c8-9777-a9d7-33c3a4c35b35@oracle.com> Message-ID: Thank you, Erik H., Erik D. and Volker for the review! I've pushed the patch. Thanks, Serguei On 6/15/18 02:30, Erik Helin wrote: > On 06/15/2018 11:16 AM, serguei.spitsyn at oracle.com wrote: >> Please, review a fix for: >> https://bugs.openjdk.java.net/browse/JDK-8205096 >> >> Webrev: >> http://cr.openjdk.java.net/~sspitsyn/webrevs/2018/8205096-missed-files-for-8203394/ >> > > Looks good, Reviewed. I can confirm that jdk/jdk builds with patch > applied, so please push this as soon as possible :) > > Thanks, > Erik > >> Summary: >> ?? I forgot to "hg add" all new files when committed fixes for the >> JDK-8203394 >> >> Thanks a lot! >> Serguei From david.holmes at oracle.com Fri Jun 15 09:36:29 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 15 Jun 2018 19:36:29 +1000 Subject: RFR: 8203188: Add JEP-181 support to the Zero interpreter In-Reply-To: References: <9aa2709edbf7e1b417ce47ed93a2f53d591984cd.camel@redhat.com> <4a295598fd72fb1eff1536545111def62c5ef20f.camel@redhat.com> Message-ID: <7d7c83f7-ae77-2277-a088-34306bddd03b@oracle.com> Hi Severin, On 15/06/2018 7:01 PM, Severin Gehwolf wrote: > Hi David, > > On Tue, 2018-06-05 at 19:46 +1000, David Holmes wrote: >> Looks good. >> >> I'll push this with the nestmate changes later in the week. > > Any update on this? A last minute hurdle to overcome. Hopefully early next week now. David > Thanks, > Severin > >> Thanks, >> David >> >> On 5/06/2018 7:40 PM, Severin Gehwolf wrote: >>> Hi David, >>> >>> Thanks for the review! >>> >>> On Tue, 2018-06-05 at 14:44 +1000, David Holmes wrote: >>>> Hi Severin, >>>> >>>> On 5/06/2018 1:26 AM, Severin Gehwolf wrote: >>>>> Hi, >>>>> >>>>> Could I please get a review of this change adding support for JEP-181 - >>>>> a.k.a Nestmates - to Zero. This patch depends on David Holmes' >>>>> Nestmates implementation via JDK-8010319. Thanks to David Holmes and >>>>> Chris Phillips for their initial reviews prior to this RFR. >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203188 >>>>> webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.02/ >>>> >>>> src/hotspot/cpu/zero/methodHandles_zero.cpp >>>> >>>> The change here seems to be an existing bug unrelated to nestmate >>>> changes. >>> >>> Agreed. >>> >>>> IT also begs the question as to what happens in the same >>>> circumstance with a removed static or "special" method? (I thought I had >>>> a test for that in the nestmates changes ... will need to double-check >>>> and add it if missing!). >>> >>> It might bomb in the same way (NULL dereference). I'm currently looking >>> at some other potential issues in this area... >>> >>>> src/hotspot/share/interpreter/bytecodeInterpreter.cpp >>>> >>>> Interpreter changes seem fine - mirroring what is done elsewhere. You >>>> can delete these incorrect comments: >>>> >>>> 2576 // This code isn't produced by javac, but could be produced by >>>> 2577 // another compliant java compiler. >>>> >>>> That code path is taken in more circumstances than the author of that >>>> comment realized. :) >>> >>> Done. 
>>> >>>>> Testing: >>>>> >>>>> Zero on Linux-x86_64 with the following test set: >>>>> >>>>> test/jdk/java/lang/invoke/AccessControlTest.java >>>>> test/jdk/java/lang/invoke/FinalVirtualCallFromInterface.java >>>>> test/jdk/java/lang/invoke/PrivateInterfaceCall.java >>>>> test/jdk/java/lang/invoke/SpecialInterfaceCall.java >>>>> test/jdk/java/lang/reflect/Nestmates >>>>> test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceICCE.java >>>>> test/hotspot/jtreg/runtime/SelectionResolution/InvokeInterfaceSuccessTest.java >>>>> test/hotspot/jtreg/runtime/Nestmates >>>>> >>>>> I cannot run this through the submit repo since the main Nestmates >>>>> patch hasn't yet landed in JDK 11. Currently testing a Zero bootcycle- >>>>> images build on x86_64. Thoughts? >>> >>> FWIW, bootcycle-images build passed on linux x86_64 Zero. >>> >>>> I can bundle this in with the nestmate changes when I push them later >>>> this week. Just send me a pointer to the finalized changeset once its >>>> finalized. I'll run it all through a final step of testing equivalent >>>> (actually more than) the submit repo. >>> >>> OK, thanks! >>> >>> Latest webrev: >>> http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8203188/webrev.03/ >>> >>> Thanks, >>> Severin >>> >>>> Thanks, >>>> David >>>> >>>>> Thanks, >>>>> Severin >>>>> From goetz.lindenmaier at sap.com Fri Jun 15 09:48:34 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 15 Jun 2018 09:48:34 +0000 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Message-ID: <846b53fb74e341759fde21f2f3eb57e2@sap.com> Hi, thanks for this update and for incorporating all my comments! Looks good, just two comments: Is it correct to include the ' ' in BOOTSTRAP_LOADER_NAME? _name does not include ' ' either. if you do print("'%s'", loader_name()) you will get 'app' but ''bootstrap''. In loader_name_and_id you can do return "'" BOOTSTRAP_LOADER_NAME "'"; similar in the jfr file. But I'm also fine with removing loader_name(), then you only have cases that need the ' ' around bootstrap :) I didn't see a use of loader_name() any more, and one can always call java_lang_ClassLoader::name() (except for during unloading.) I don't mind the @id printouts in the class loader tree. But is the comment correct? Doesn't it print the class name twice? -// +-- jdk.internal.reflect.DelegatingClassLoader +// +-- jdk.internal.reflect.DelegatingClassLoader @ jdk.internal.reflect.DelegatingClassLoader Maybe you need ClassLoaderData::loader_name_and_id_prints_classname() { return (strchr(_name_and_id, '\'') == NULL); } to guard against printing this twice. Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Lois Foltan > Sent: Donnerstag, 14. Juni 2018 21:56 > To: hotspot-dev developers > Subject: Re: RFR (M) JDK-8202605: Standardize on > ClassLoaderData::loader_name() throughout the VM to obtain a class > loader's name > > Please review this updated webrev that address review comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > Thanks, > Lois > > On 6/13/2018 6:58 PM, Lois Foltan wrote: > > Please review this change to standardize on how to obtain a class > > loader's name within the VM.? 
SystemDictionary::loader_name() methods > > have been removed in favor of ClassLoaderData::loader_name(). > > > > Since the loader name is largely used in the VM for display purposes > > (error messages, logging, jcmd, JFR) this change also adopts a new > > format to append to a class loader's name its identityHashCode and if > > the loader has not been explicitly named it's qualified class name is > > used instead. > > > > 391 /** > > 392 * If the defining loader has a name explicitly set then > > 393 * '<loader-name>' @<id> > > 394 * If the defining loader has no name then > > 395 * <qualified-class-name> @<id> > > 396 * If it's built-in loader then omit `@<id>` as there is only one > > instance. > > 397 */ > > > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. > > > > open webrev at > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > > bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 > > > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > > hs-tier(3-5), jdk-tier(3) in progress > > > > Thanks, > > Lois > > From thomas.stuefe at gmail.com Fri Jun 15 10:36:48 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 15 Jun 2018 12:36:48 +0200 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: Hi Matthias, Good catch. Patch for me is good if you guys agree on a good uncommon name. Gruß Thomas On Fri, Jun 15, 2018, 09:48 Baesken, Matthias wrote: > Please review this small change that fixes the AIX build after > "8203641: Refactor String Deduplication into shared" . > > We are getting this compilation error : > /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", > line 107.38: 1540-0063 (S) The text "1" is unexpected. > > > Looks like the name of the second template parameter (STAT) > > template > static void initialize_impl(); > > is clashing with defines from the AIX system headers (where I find > #define STAT 1 ) . > Renaming STAT to something else fixes the build on AIX . > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8205091 > > > Thanks, Matthias > From boris.ulasevich at bell-sw.com Fri Jun 15 10:44:24 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Fri, 15 Jun 2018 13:44:24 +0300 Subject: RFR (S) 8203479: JFR enabled ARM32 build assertion failure Message-ID: Hi, Please review the following patch: http://cr.openjdk.java.net/~bulasevich/8203479/webrev.01 https://bugs.openjdk.java.net/browse/JDK-8203479 Assertion fires in JFR codes on first VM thread setup because VM globals are not yet initialized (and supports_cx8 property is not predefined for ARM32 platform). I propose to exploit early_initialize() method to set up supports_cx8 property on early stage of VM initialization. Thanks Boris From zgu at redhat.com Fri Jun 15 11:55:02 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 15 Jun 2018 07:55:02 -0400 Subject: RFR 8199868: Support JNI critical functions in object pinning API In-Reply-To: <16c60f53-d835-29f1-981f-4f70537ceb62@oracle.com> References: <931060af-b44d-f348-92ba-e98d623d4c84@redhat.com> <19c791e7-1bf8-69ef-7090-b7da800f2021@redhat.com> <16c60f53-d835-29f1-981f-4f70537ceb62@oracle.com> Message-ID: <43e5c158-1c5c-892b-9bb4-775bd29ad96d@redhat.com> Ping!!!! anyone? please!
- Zhengyu On 05/02/2018 05:12 PM, Per Liden wrote: > Hi, > > On 05/02/2018 09:41 PM, Zhengyu Gu wrote: >> Hi, >> >> Can I have reviews for this RFR? >> >> This patch completes object pinning for JNI critical section, provides >> critical native support. >> >> The approach is quite straightforward: >> >> During generating native wrapper for critical native method, it >> generates runtime call to pin every array argument, before unpacks them. >> >> For pinned objects, it also needs to save them for unpinning after JNI >> function call completes. >> >> If argument is passed on stack, it saves pinned object at the original >> slot (as pin_object() may move the object). For register based >> arguments, it reuses oop handle area (where GCLocker based >> implementation saves register based arguments for safepoints). >> >> Currently, only Shenandoah uses object pinning for JNI critical >> section, this patch has been baked quite some time there. However, I >> am new to Runtime Stub code, I would appreciate your comments and >> suggestions. >> >> I rebased patch to jdk/jdk repo. >> >> Webrev: http://cr.openjdk.java.net/~zgu/8199868/webrev.02/ > > Just want to say that I would really like to see this patch go in. As > mentioned, it completes the object pinning story and it's useful for > other GCs too (at least ZGC and possibly G1). However, I also agree with > Aleksey that some one who really knows this code needs to review this. > Unfortunately that's not me. Anyone? > > cheers, > Per > >> >> Thanks, >> >> -Zhengyu >> >> >> On 04/06/2018 10:35 PM, Zhengyu Gu wrote: >>> Offline discussion with Aleksey, he suggested that >>> pin/unpin_critical_native_array methods can be made more generic as >>> pin/unpin_object. >>> >>> Updated webrev: http://cr.openjdk.java.net/~zgu/8199868/webrev.01/ >>> >>> Test: >>> ?? Reran all tests, submit-hs tests still clean. >>> >>> Thanks, >>> >>> -Zhengyu >>> >>> On 04/06/2018 08:55 AM, Aleksey Shipilev wrote: >>>> On 04/04/2018 07:47 PM, Zhengyu Gu wrote: >>>>> Please review this patch that adds JNI critical native support to >>>>> object pinning. >>>>> >>>>> Shenandoah does not block GC while JNI critical session is in >>>>> progress. This patch allows it to pin >>>>> all incoming array objects before critical native call, and unpin >>>>> them after call completes. >>>>> >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8199868 >>>>> Webrev: http://cr.openjdk.java.net/~zgu/8199868/webrev.00/ >>>> >>>> Looks good to me, but somebody more savvy with runtime stub >>>> generation should take a closer look. >>>> >>>> *) Should probably be "Why we are here?" >>>> >>>> 2867?? assert(Universe::heap()->supports_object_pinning(), "Why we >>>> here?"); >>>> >>>> 2876?? assert(Universe::heap()->supports_object_pinning(), "Why we >>>> here?"); >>>> >>>> >>>> Thanks, >>>> -Aleksey >>>> From doko at ubuntu.com Fri Jun 15 12:04:30 2018 From: doko at ubuntu.com (Matthias Klose) Date: Fri, 15 Jun 2018 14:04:30 +0200 Subject: client VM build doesn't build in parallel anymore Message-ID: <0a60faf6-6241-50b9-1c3c-d8028dd5aeff@ubuntu.com> Since b14 or b15, the client VM on x86 doesn't build anymore when building with --with-jvm-variants=client,server --with-num-cores=4 building with one to three cores seems to work however. The server and zero VMs build without issues and parallel builds. Any idea which dependencies got dropped? 
javac: file not found: /home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch.tmp make[4]: *** [/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch] Error 3 make[4]: *** Deleting file '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' gensrc/GensrcJfr.gmk:40: recipe for target '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' failed make[4]: Leaving directory '/home/packages/openjdk/11/openjdk-11-11~18/make/hotspot' make[3]: *** [hotspot-client-gensrc] Error 2 make[3]: *** Waiting for unfinished jobs.... make/Main.gmk:249: recipe for target 'hotspot-client-gensrc' failed From zgu at redhat.com Fri Jun 15 12:30:40 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 15 Jun 2018 08:30:40 -0400 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: <67e9c38c-48e4-26ad-116b-81568d2deee7@redhat.com> Hi, Whatever the name you come up, could you please also update stringDedup.inline.hpp to use the same names? Thomas Schatzl pointed out the inconsistent, I made change to use S and Q, but apparently, I messed up in final patch. Otherwise, looks good to me too. -Zhengyu On 06/15/2018 06:36 AM, Thomas St?fe wrote: > Hi Matthias, > > Good catch. Patch for me is good if you guys agree on a good uncommon name. > > Gru? Thomas > > On Fri, Jun 15, 2018, 09:48 Baesken, Matthias > wrote: > >> Please review this small change that fixes the AIX build after >> "8203641: Refactor String Deduplication into shared" . >> >> We are getting this compilation error : >> /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", >> line 107.38: 1540-0063 (S) The text "1" is unexpected. >> >> >> Looks like the name of the second template parameter (STAT) >> >> template >> static void initialize_impl(); >> >> is clashing with defines from the AIX system headers (where I find >> #define STAT 1 ) . >> Renaming STAT to something else fixes the build on AIX . >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8205091 >> >> >> Thanks, Matthias >> From magnus.ihse.bursie at oracle.com Fri Jun 15 12:44:05 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 15 Jun 2018 14:44:05 +0200 Subject: client VM build doesn't build in parallel anymore In-Reply-To: <0a60faf6-6241-50b9-1c3c-d8028dd5aeff@ubuntu.com> References: <0a60faf6-6241-50b9-1c3c-d8028dd5aeff@ubuntu.com> Message-ID: <9dd2f785-4ebc-750c-d183-cbf011f033db@oracle.com> There was a race with the JFR build tools when building multiple JVMs. :-( Erik produced a fix for this as part of? JDK-8202384), unfortunately this has not yet been pushed. You can find the JFR fix part here: http://cr.openjdk.java.net/~erikj/8202384/webrev.05/make/hotspot/gensrc/GensrcJfr.gmk.udiff.html If you apply it locally, it should resolve your issue. If JDK-8202384 takes much longer to push, hopefully Erik can separate out this trivial part (which is already reviewed by me as part of JDK-8202384) and push it separately. /Magnus On 2018-06-15 14:04, Matthias Klose wrote: > Since b14 or b15, the client VM on x86 doesn't build anymore when > building with > > ? 
--with-jvm-variants=client,server --with-num-cores=4 > > building with one to three cores seems to work however. The server and > zero VMs build without issues and parallel builds.? Any idea which > dependencies got dropped? > > javac: file not found: > /home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch.tmp > make[4]: *** > [/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch] > Error 3 > make[4]: *** Deleting file > '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' > gensrc/GensrcJfr.gmk:40: recipe for target > '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' > failed > make[4]: Leaving directory > '/home/packages/openjdk/11/openjdk-11-11~18/make/hotspot' > make[3]: *** [hotspot-client-gensrc] Error 2 > make[3]: *** Waiting for unfinished jobs.... > make/Main.gmk:249: recipe for target 'hotspot-client-gensrc' failed > From sgehwolf at redhat.com Fri Jun 15 12:59:56 2018 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 15 Jun 2018 14:59:56 +0200 Subject: [8u] RFR: 8205104: EXTRA_LDFLAGS not consistently being used Message-ID: Hi, This is a JDK 8u specific problem. It's not applicable to 10/11 since the build system has changed. Make files in JDK 8 live in the hotspot tree, hence, I'm also including hotspot-dev. The issue at hand is that linker flags are not consistently passed down to individual library builds. Specifically libjvm.so, libjsig.so and libsaproc.so. This prevents downstream users from producing hardened builds. We have been using this patch in downstream Fedora for a while now without issues. Please review! Bug: https://bugs.openjdk.java.net/browse/JDK-8205104 webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8205104/webrev.01/ Testing: Before: $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ echo $i; readelf -d $i | grep NOW done build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so After: $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ echo $i; readelf -d $i | grep NOW done build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so 0x0000000000000018 (BIND_NOW) 0x000000006ffffffb (FLAGS_1) Flags: NOW build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so 0x0000000000000018 (BIND_NOW) 0x000000006ffffffb (FLAGS_1) Flags: NOW build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so 0x0000000000000018 (BIND_NOW) 0x000000006ffffffb (FLAGS_1) Flags: NOW Thanks, Severin From magnus.ihse.bursie at oracle.com Fri Jun 15 13:24:07 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 15 Jun 2018 15:24:07 +0200 Subject: [8u] RFR: 8205104: EXTRA_LDFLAGS not consistently being used 
In-Reply-To: References: Message-ID: <09b9a36c-7a3a-1c2a-a72c-88a4b4d26be3@oracle.com> On 2018-06-15 14:59, Severin Gehwolf wrote: > Hi, > > This is a JDK 8u specific problem. It's not applicable to 10/11 since > the build system has changed. Make files in JDK 8 live in the hotspot > tree, hence, I'm also including hotspot-dev. The issue at hand is that > linker flags are not consistently passed down to individual library > builds. Specifically libjvm.so, libjsig.so and libsaproc.so. This > prevents downstream users from producing hardened builds. We have been > using this patch in downstream Fedora for a while now without issues. Looks good to me. Seeing this makes me realize I'm *soo* happy to not have to live with the old hotspot build system anymore. :-) /Magnus > > Please review! > > Bug: https://bugs.openjdk.java.net/browse/JDK-8205104 > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8205104/webrev.01/ > > Testing: > > Before: > $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ > echo $i; readelf -d $i | grep NOW > done > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so > > After: > $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ > echo $i; readelf -d $i | grep NOW > done > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > > Thanks, > Severin From ChrisPhi at LGonQn.Org Fri Jun 15 15:36:10 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Fri, 15 Jun 2018 11:36:10 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> Message-ID: <2770673c-3765-0680-feef-d2bed0f59426@LGonQn.Org> On 14/06/18 11:01 AM, Chris Phillips wrote: > Hi > Any further comments or changes? 
> On 06/06/18 05:56 PM, Chris Phillips wrote: >> Hi Per, >> >> On 06/06/18 05:48 PM, Per Liden wrote: >>> Hi Chris, >>> >>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>> Hi Per, >>>> >>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>> Hi Chris, >>>>> >>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>> Hi, >>>>>> >>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>> Please review this set of changes to shared code >>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>> >>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>> >>>>>>>> Can you explain this a little more? What is the type of size_t on >>>>>>>> s390x? What is the type of uintptr_t? What are the errors? >>>>>>> >>>>>>> I would like to understand this too. >>>>>>> >>>>>>> cheers, >>>>>>> Per >>>>>>> >>>>>>> >>>>>> Quoting from the original bug review request: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>> >>>>>> "This >>>>>> is a problem when one parameter is of size_t type and the second of >>>>>> uintx type and the platform has size_t defined as eg. unsigned long as >>>>>> on s390 (32-bit)." >>>>> >>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>>>> on s390? >>>> See Dan's explanation. >>>>> >>>>> I fail to see how any of this matters to _entries here? What am I >>>>> missing? >>>>> >>>> >>>> By changing the type, to its actual usage, we avoid the >>>> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>> around line 617, since its consistent usage and local I patched at the >>>> definition. >>>> >>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>> _entry_cache->size(), _entries_added, _entries_removed); >>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>> _table->_size), _entry_cache->size(), _entries_added, _entries_removed); >>>> >>>> percent_of will complain about types otherwise. >>> >>> Ok, so why don't you just cast it in the call to percent_of? Your >>> current patch has ripple effects that you fail to take into account. For >>> example, _entries is still printed using UINTX_FORMAT and compared >>> against other uintx variables. You're now mixing types in an unsound way. >> >> Hmm missed that, so will do the cast instead as you suggest. >> (Fixing at the defn is what was suggested the last time around so I >> tried to do that where it was consistent, obviously this is not. >> Thanks. >> >>> cheers, >>> Per >>> >>>> >>>> >>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>> @@ -120,11 +120,11 @@ >>>>> // Cache for reuse and fast alloc/free of table entries. 
>>>>> static G1StringDedupEntryCache* _entry_cache; >>>>> >>>>> G1StringDedupEntry** _buckets; >>>>> size_t _size; >>>>> - uintx _entries; >>>>> + size_t _entries; >>>>> uintx _shrink_threshold; >>>>> uintx _grow_threshold; >>>>> bool _rehash_needed; >>>>> >>>>> cheers, >>>>> Per >>>>> >>>>>> >>>>>> Hope that helps, >>>>>> Chris >>>>>> >>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>> review thread mostly) >>>>>> See: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>> and: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>> For more info. >>>>>> >>>>> >>>>> >>> >>> >> Cheers! >> Chris >> >> >> > > Finally through testing and submit run again after Per's requested > change, here's the knew webrev: > http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 > attached is the passing run fron the submit queue. > > Please review... > > Chris > Hi Please may I have another review and someone to push ? Thanks! Chris Hmm attachments stripped... Here it is inline: Build Details: 2018-06-14-1347454.chrisphi.source 0 Failed Tests Mach5 Tasks Results Summary PASSED: 75 KILLED: 0 FAILED: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 NA: 0 From erik.joelsson at oracle.com Fri Jun 15 15:50:05 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 15 Jun 2018 08:50:05 -0700 Subject: client VM build doesn't build in parallel anymore In-Reply-To: <0a60faf6-6241-50b9-1c3c-d8028dd5aeff@ubuntu.com> References: <0a60faf6-6241-50b9-1c3c-d8028dd5aeff@ubuntu.com> Message-ID: Hello Matthias, I believe I know the problem. I fixed it in my patch currently under review here (see GensrcJfr.gmk): http://cr.openjdk.java.net/~erikj/8202384/webrev.05/index.html That patch will take a while before it gets in though because it needs to go through the JEP process first. Please take just the GensrcJfr.gmk patch and try it. That part could be separated out into its own fix. /Erik On 2018-06-15 05:04, Matthias Klose wrote: > Since b14 or b15, the client VM on x86 doesn't build anymore when > building with > > ? --with-jvm-variants=client,server --with-num-cores=4 > > building with one to three cores seems to work however. The server and > zero VMs build without issues and parallel builds.? Any idea which > dependencies got dropped? > > javac: file not found: > /home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch.tmp > make[4]: *** > [/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch] > Error 3 > make[4]: *** Deleting file > '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' > gensrc/GensrcJfr.gmk:40: recipe for target > '/home/packages/openjdk/11/openjdk-11-11~18/build/buildtools/tools_classes/_the.BUILD_JFR_TOOLS_batch' > failed > make[4]: Leaving directory > '/home/packages/openjdk/11/openjdk-11-11~18/make/hotspot' > make[3]: *** [hotspot-client-gensrc] Error 2 > make[3]: *** Waiting for unfinished jobs.... 
> make/Main.gmk:249: recipe for target 'hotspot-client-gensrc' failed > From erik.joelsson at oracle.com Fri Jun 15 15:53:48 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 15 Jun 2018 08:53:48 -0700 Subject: [8u] RFR: 8205104: EXTRA_LDFLAGS not consistently being used In-Reply-To: References: Message-ID: <1a1feba3-4ff4-7701-7f94-54a5cdd9b7c5@oracle.com> Looks good. /Erik On 2018-06-15 05:59, Severin Gehwolf wrote: > Hi, > > This is a JDK 8u specific problem. It's not applicable to 10/11 since > the build system has changed. Make files in JDK 8 live in the hotspot > tree, hence, I'm also including hotspot-dev. The issue at hand is that > linker flags are not consistently passed down to individual library > builds. Specifically libjvm.so, libjsig.so and libsaproc.so. This > prevents downstream users from producing hardened builds. We have been > using this patch in downstream Fedora for a while now without issues. > > Please review! > > Bug: https://bugs.openjdk.java.net/browse/JDK-8205104 > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8205104/webrev.01/ > > Testing: > > Before: > $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ > echo $i; readelf -d $i | grep NOW > done > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so > > After: > $ for i in build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so \ > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so; do \ > echo $i; readelf -d $i | grep NOW > done > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/server/libjvm.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libsaproc.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > build/linux-x86_64-normal-server-release/images/j2sdk-image/jre/lib/amd64/libjsig.so > 0x0000000000000018 (BIND_NOW) > 0x000000006ffffffb (FLAGS_1) Flags: NOW > > Thanks, > Severin From swatibits14 at gmail.com Fri Jun 15 15:58:10 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Fri, 15 Jun 2018 21:28:10 +0530 Subject: UseNUMA membind Issue in openJDK In-Reply-To: <8cdb0636-d958-a307-4163-67b2f71205dd@linux.vnet.ibm.com> References: <187f278f-5069-3afa-c61f-f49a1fc0d790@linux.vnet.ibm.com> <36dc8538-f984-d2df-e401-8362a429c6f3@linux.vnet.ibm.com> <71ac36b7-263b-9f48-47d7-aa4cdbe04983@linux.vnet.ibm.com> <8cdb0636-d958-a307-4163-67b2f71205dd@linux.vnet.ibm.com> Message-ID: Hi Gustavo, Removed the two unused numa methods set_membind and numa_bitmask_nbytes as both were not getting used,May be if required in future then can be added again. Prepared the webrev associating with -c option to bug JDK-8189922. 
Here is the link to updated webrev zipped folder :- https://drive.google.com/open?id=1tE-RX269Q2vkMyF0neziRMKljNKi9dq6 Thanks, Swati Sharma Software Engineer-2@ AMD On Thu, Jun 14, 2018 at 9:58 PM, Gustavo Romero wrote: > Hi, > > On 06/14/2018 09:01 AM, Swati Sharma wrote: > >> +Roshan >> >> Hi Derek, >> >> Thanks for your testing and finding additional bug with UseNUMA ,I >> appreciate your effort. >> >> The answer to your questions: >> >> 1) What should JVM do if cpu node is bound, but not memory is bound? Even >> with patch, JVM wastes memory because it sets aside part of Eden for >> threads that can never run on other node. >> - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA >> -version >> - My expectation was that it would act as if membind is set. But I'm >> not an expert. >> - What do containers do under the hood? Would they ever bind cpus and >> NOT memory? >> If membind is not given then JVM should use the memory on all nodes >> available. You are right, wastage of memory is happening, >> We have analyzed the code and got the root cause of this issue and the >> fix for this issue will take some time, >> >> Note: My colleague Roshan has found the root cause in existing code and >> working on the fix for this issue, soon he will come up with the patch. >> > > Thanks for the helpful comments, Derek and Swati. I agree: it's an issue > and a > separated one. The problem (even with Swati's patch applied) is that JVM > will > just look at the node mask information and won't consider cpu bindings to > adapt. > I guess that originally UseNUMA was only interested on numa topology in > regard > to find out the best memory allocation for the given unpinned cpus on the > machine. > > I call not tell about the container question, but I understand the if we > cover > all the bound/not bound combinations of cpu/memory the JVM should be fine > in the > worst case (It might be the case that bindings are transparent to the JVM, > I > don't know...). > > > 2) What should JVM do if cpu node is bound, and numactl --localalloc >> specified? Even with patch, JVM wastes memory. >> - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC >> -XX:+UseNUMA -version >> - My expectation was that "--localalloc" would be identical to setting >> membind for all of the cpu bound nodes, but I guess it's not. >> Yes ,In case of "numactl --localalloc" , thread should use local memory >> always. Lgrp should be created based on cpunode given.In the current >> example it should create only single lgrp. >> > > I agree. > > > Gustavo, Shall we go ahead with the current patch as issue pointed out by >> Derek is not with current patch but exists in existing code and can fix the >> issue in another patch ? >> Derek , Can you file the separate bug for above issues with no --membind >> in numactl ? >> My current patch fixes the issue when user mentions --membind with >> numactl , the same mentioned also in subject line( UseNUMA membind Issue in >> openJDK) >> > > Yes, I'm fine with that. Derek kindly already filed a new bug. Also the > other > issue (not addressed by Swati's patch) is well stated. > > Thanks. > > > Best regards, > Gustavo > > Thanks, >> Swati Sharma >> Software Engineer - 2 @AMD >> >> >> On Thu, Jun 14, 2018 at 3:23 AM, White, Derek > > wrote: >> >> See inline: >> >> > -----Original Message----- >> > From: Gustavo Romero [mailto:gromero at linux.vnet.ibm.com > gromero at linux.vnet.ibm.com>] >> ... 
>> > Hi Derek, >> > > On 06/12/2018 06:56 PM, White, Derek wrote: >> > > Hi Swati, Gustavo, >> > > >> > > I?m not the best qualified to review the change ? I just reported >> the issue >> > as a JDK bug! >> > > >> > > I?d be happy to test a fix but I?m having trouble following the >> patch. Did >> > Gustavo post a patch to your patch, or is that a full independent >> patch? >> > > Yes, the idea was that you could help on testing it against >> JDK-8189922. >> > Swati's initial report on this thread was accompanied with a simple >> way to >> > test the issue he reported. You said it was related to bug >> JDK-8189922 but I >> > can't see a simple way to test it as you reported. Besides that I >> assumed that >> > you tested it on arm64, so I can't test it myself (I don't have >> such a >> > hardware). Btw, if you could provide some numactl -H information I >> would >> > be glad. >> >> >> OK, here's a test case: >> $ numactl -N 0 -m 0 java -Xlog:gc*=info -XX:+UseParallelGC >> -XX:+UseNUMA -version >> >> Before patch, failed output shows 1/2 of Eden being wasted for >> threads from node that will never allocate: >> ... >> [0.230s][info][gc,heap,exit ] eden space 524800K, 4% used >> [0x0000000580100000,0x0000000581580260,0x00000005a0180000) >> [0.230s][info][gc,heap,exit ] lgrp 0 space 262400K, 8% used >> [0x0000000580100000,0x0000000581580260,0x0000000590140000) >> [0.230s][info][gc,heap,exit ] lgrp 1 space 262400K, 0% used >> [0x0000000590140000,0x0000000590140000,0x00000005a0180000) >> ... >> >> After patch, passed output: >> ... >> [0.231s][info][gc,heap,exit ] eden space 524800K, 8% used >> [0x0000000580100000,0x0000000582a00260,0x00000005a0180000) >> ... (no lgrps) >> >> Open questions - still a bug? >> 1) What should JVM do if cpu node is bound, but not memory is bound? >> Even with patch, JVM wastes memory because it sets aside part of Eden for >> threads that can never run on other node. >> - numactl -N 0 java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA >> -version >> - My expectation was that it would act as if membind is set. But >> I'm not an expert. >> - What do containers do under the hood? Would they ever bind cpus >> and NOT memory? >> 2) What should JVM do if cpu node is bound, and numactl --localalloc >> specified? Even with patch, JVM wastes memory. >> - numactl -N 0 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC >> -XX:+UseNUMA -version >> - My expectation was that "--localalloc" would be identical to >> setting membind for all of the cpu bound nodes, but I guess it's not. >> >> >> >> FYI - numactl -H: >> available: 2 nodes (0-1) >> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 >> 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 >> 47 >> node 0 size: 128924 MB >> node 0 free: 8499 MB >> node 1 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 >> 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 >> 92 93 94 95 >> node 1 size: 129011 MB >> node 1 free: 7964 MB >> node distances: >> node 0 1 >> 0: 10 20 >> 1: 20 10 >> >> > I consider the patch I pointed out as the fourth version of Swati's >> original >> > proposal, it evolved from the reviews so far: >> > http://cr.openjdk.java.net/~gromero/8189922/draft/usenuma_v4.patch >> >> > > > > Also, if you or Gustavo have permissions to post a >> webrev to >> > http://cr.openjdk.java.net/ that would make reviewing a little >> easier. I?d be >> > happy to post a webrev for you if not. 
>> > > I was planing to host the webrev after your comments, but >> feel free to host >> > it. >> >> No, you have it covered well, I'll stay out of it. >> >> - Derek >> >> >> > From jiangli.zhou at oracle.com Fri Jun 15 16:47:29 2018 From: jiangli.zhou at oracle.com (Jiangli Zhou) Date: Fri, 15 Jun 2018 09:47:29 -0700 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: <14702170-8CA5-4033-B3EE-CE12C906BDE7@oracle.com> Message-ID: <97125D19-9C38-4953-A3DB-1D18892F3925@oracle.com> Hi Volker, > On Jun 15, 2018, at 12:43 AM, Volker Simonis wrote: > > Hi Jiangli, > > thanks for looking at the change. > > 'CDS_only' is only required for static fields because the > VMStructEntry for them contains a reference to the actual static field > which isn't present if we disable CDS, because the corresponding > compilations units (i.e. filemap.cpp) won't be part of libjvm.so. For > non-static fields, the VMStructEntry structure only contains the > offset of the corresponding field with regards to an object of that > type which is harmless. Thanks for the explanation. For consistency, would it be worth to add CDS_ONLY for the non-static fields in FileMapInfo also? Thanks, Jiangli > > Regards, > Volker > > > On Thu, Jun 14, 2018 at 6:42 PM, Jiangli Zhou wrote: >> Hi Volker, >> >> The changes look good to me overall. I?ll refer to the JVMTI experts for >> jvmtiEnv.cpp change. I have a question for the change in vmStructs.cpp. Any >> reason why only _current_info needs CDS_ONLY? >> >> /********************************************/ >> \ >> /* FileMapInfo fields (CDS archive related) */ >> \ >> /********************************************/ >> \ >> >> \ >> nonstatic_field(FileMapInfo, _header, >> FileMapInfo::FileMapHeader*) \ >> - static_field(FileMapInfo, _current_info, >> FileMapInfo*) \ >> + CDS_ONLY(static_field(FileMapInfo, _current_info, >> FileMapInfo*)) \ >> nonstatic_field(FileMapInfo::FileMapHeader, _space[0], >> FileMapInfo::FileMapHeader::space_info)\ >> nonstatic_field(FileMapInfo::FileMapHeader::space_info, _addr._base, >> char*) \ >> nonstatic_field(FileMapInfo::FileMapHeader::space_info, _used, >> size_t) \ >> >> \ >> >> Thanks, >> Jiangli >> >> On Jun 14, 2018, at 7:26 AM, Volker Simonis >> wrote: >> >> Hi, >> >> can I please have a review for the following fix: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >> https://bugs.openjdk.java.net/browse/JDK-8204965 >> >> CDS does currently not work on AIX because of the way how we >> reserve/commit memory on AIX. The problem is that we're using a >> combination of shmat/mmap depending on the page size and the size of >> the memory chunk to reserve. This makes it impossible to reliably >> reserve the memory for the CDS archive and later on map the various >> parts of the archive into these regions. >> >> In order to fix this we would have to completely rework the memory >> reserve/commit/uncommit logic on AIX which is currently out of our >> scope because of resource limitations. >> >> Unfortunately, I could not simply disable CDS in the configure step >> because some of the shared code apparently relies on parts of the CDS >> code which gets excluded from the build when CDS is disabled. So I >> also fixed the offending parts in hotspot and cleaned up the configure >> logic for CDS. 
>> >> Thank you and best regards, >> Volker >> >> PS: I did run the job through the submit forest >> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >> weren't really useful because they mention build failures on linux-x64 >> which I can't reproduce locally. >> >> From vladimir.kozlov at oracle.com Fri Jun 15 16:49:29 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 15 Jun 2018 09:49:29 -0700 Subject: RFR: JDK-8204941: Refactor TemplateTable::_new to use MacroAssembler helpers for tlab and eden In-Reply-To: <445338d5-43a9-363c-cc61-e9c66195b5ff@redhat.com> References: <445338d5-43a9-363c-cc61-e9c66195b5ff@redhat.com> Message-ID: <781aa5d5-c320-0d58-3e45-b34613ef88d8@oracle.com> Looks good to me. Thanks, Vladimir On 6/13/18 4:53 AM, Roman Kennke wrote: > TemplateTable::_new (in x86) currently has its own implementation of > tlab and eden allocation paths, which are basically identical to the > ones in MacroAssembler::tlab_allocate() and > MacroAssembler::eden_allocate(). TemplateTable should use the > MacroAssembler helpers to avoid duplication. > > The MacroAssembler version of eden_allocate() features an additional > bounds check to prevent wraparound of obj-end. I am not sure if/how that > can ever happen and if/how this could be exploited, but it might be > relevant. In any case, I think it's a good thing to include it in the > interpreter too. > > The refactoring can be taken further: fold incr_allocated_bytes() into > eden_allocate() (they always come in pairs), probably fold > tlab_allocate() and eden_allocate() into a single helper (they also seem > to come in pairs mostly), also fold initialize_object/initialize_header > sections too, but 1. I wanted to keep this manageable and 2. I also want > to factor the tlab_allocate/eden_allocate paths into BarrierSetAssembler > as next step (which should also include at least some of the mentioned > unifications). > > http://cr.openjdk.java.net/~rkennke/JDK-8204941/webrev.00/ > > Passes tier1_hotspot > > Can I please get a review? > > Roman > From thomas.stuefe at gmail.com Fri Jun 15 16:52:39 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 15 Jun 2018 18:52:39 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: Hi Volker, On Fri, Jun 15, 2018 at 10:05 AM, Volker Simonis wrote: > On Thu, Jun 14, 2018 at 9:04 PM, Thomas St?fe wrote: >> Hi Volker, >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/make/autoconf/hotspot.m4.udiff.html >> >> Seems like a roundabout way to have a platform specific default value. >> >> Why not determine a default value beforehand: >> >> if test "x$OPENJDK_TARGET_OS" = "xaix"; then >> ENABLE_CDS_DEFAULT="false" >> else >> ENABLE_CDS_DEFAULT=true" >> fi >> >> AC_ARG_ENABLE([cds], [AS_HELP_STRING([--enable-cds@<:@=yes/no/auto@:>@], >> [enable class data sharing feature in non-minimal VM. Default is >> ${ENABLE_CDS_DEFAULT}.])]) >> >> and so on? >> > > I've just followed the pattern used for '--enable-aot' right above the > code I changed. > > Moreover, I don't think that we would save any code because we would > still have to check for AIX in the '--enable-cds=yes' case. Also, the > new reporting added later in the file (see "AC_MSG_CHECKING([if cds > should be enabled])" seems easier to me without the extra default > value. So if you don't mind I'd prefer to leave it as is. > I just think that having three options (on/off/auto) is confusing. 
Okay, I still think a platform dependent default value would be cleaner, but I can live with auto="yes, if possible". >> See also what we did for "8202325: [aix] disable warnings-as-errors by default". >> >> -- >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/src/hotspot/share/classfile/javaClasses.cpp.udiff.html >> >> Here, do we really need to exclude this from compiling, >> DumpSharedSpaces = false is not enough? >> > > Yes, we need it because the excluded code references methods (e.g. > 'StringTable::create_archived_string()') which are not compiled into > libjvm.so if we disable CDS. > Are you really sure? Both MetaspaceShared::is_archive_object() and StringTable::create_archived_string() are available outside CDS, the latter explicitly returns NULL if CDS is not enabled at build time: NOT_CDS_JAVA_HEAP_RETURN_(NULL); I also just built a Linux vm without CDS, and it compiles without problems without the #ifdef. But maybe AIX is different. --- But all this is idle nitpicking, so I leave it up to you if you change anything. The change is good in its current form to me and I do not need another webrev. Best Regards, Thomas >> >> Best Regards, Thomas >> >> On Thu, Jun 14, 2018 at 4:26 PM, Volker Simonis >> wrote: >>> Hi, >>> >>> can I please have a review for the following fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>> >>> CDS does currently not work on AIX because of the way how we >>> reserve/commit memory on AIX. The problem is that we're using a >>> combination of shmat/mmap depending on the page size and the size of >>> the memory chunk to reserve. This makes it impossible to reliably >>> reserve the memory for the CDS archive and later on map the various >>> parts of the archive into these regions. >>> >>> In order to fix this we would have to completely rework the memory >>> reserve/commit/uncommit logic on AIX which is currently out of our >>> scope because of resource limitations. >>> >>> Unfortunately, I could not simply disable CDS in the configure step >>> because some of the shared code apparently relies on parts of the CDS >>> code which gets excluded from the build when CDS is disabled. So I >>> also fixed the offending parts in hotspot and cleaned up the configure >>> logic for CDS. >>> >>> Thank you and best regards, >>> Volker >>> >>> PS: I did run the job through the submit forest >>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>> weren't really useful because they mention build failures on linux-x64 >>> which I can't reproduce locally. From lois.foltan at oracle.com Fri Jun 15 17:11:18 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 15 Jun 2018 13:11:18 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <846b53fb74e341759fde21f2f3eb57e2@sap.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <846b53fb74e341759fde21f2f3eb57e2@sap.com> Message-ID: <1e508284-71f6-a93a-b1a9-5181bc1b436c@oracle.com> On 6/15/2018 5:48 AM, Lindenmaier, Goetz wrote: > Hi, > > thanks for this update and for incorporating all my comments! Hi Goetz, Thanks for another round of review! > > Looks good, just two comments: > > Is it correct to include the ' ' in BOOTSTRAP_LOADER_NAME? > _name does not include ' ' either. > if you do print("'%s'", loader_name()) you will get 'app' but ''bootstrap''. 
I have removed the ' ' from BOOTSTRAP_LOADER_NAME, good catch. > > In loader_name_and_id you can do > return "'" BOOTSTRAP_LOADER_NAME "'"; > similar in the jfr file. Done. > > But I'm also fine with removing loader_name(), then you only have > cases that need the ' ' around bootstrap :) > I didn't see a use of loader_name() any more, and one can always > call java_lang_ClassLoader::name() (except for during unloading.) I am going to leave ClassLoaderData::loader_name() as is. It has the same method name and behavior that currently exists today. I also want to discourage future changes that directly call java_lang_ClassLoader::name() since as you point out that is not safe during unloading. Also removing ClassLoaderData::loader_name() may tempt future changes to introduce a new loader_name() method in some data structure other than ClassLoaderData to obtain the java_lang_ClassLoader::name(). Hopefully by leaving loader_name() in, this will prevent ending up back where we are today with multiple ways one can obtain the loader's name. > > I don't mind the @id printouts in the class loader tree. But is the comment > correct? Doesn't it print the class name twice? > > -// +-- jdk.internal.reflect.DelegatingClassLoader > +// +-- jdk.internal.reflect.DelegatingClassLoader @ jdk.internal.reflect.DelegatingClassLoader > > Maybe you need > ClassLoaderData::loader_name_and_id_prints_classname() { return (strchr(_name_and_id, '\'') == NULL); } > to guard against printing this twice. I believe you are referring to the comment in test serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? The actual output looks like this:

Running DCMD 'VM.classloaders' through 'JMXExecutor'
---------------- stdout ----------------
+-- 'bootstrap',
      |
      +-- 'platform', jdk.internal.loader.ClassLoaders$PlatformClassLoader {0x...}
      |     |
      |     +-- 'app', jdk.internal.loader.ClassLoaders$AppClassLoader {0x...}
      |
      +-- jdk.internal.reflect.DelegatingClassLoader @20f4f1e0, jdk.internal.reflect.DelegatingClassLoader {0x...}
      |
      +-- 'Kevin' @3330b2f5, ClassLoaderHierarchyTest$TestClassLoader {0x...}
      |
      +-- ClassLoaderHierarchyTest$TestClassLoader @4d81f205, ClassLoaderHierarchyTest$TestClassLoader {0x...}
            |
            +-- 'Bill' @4ea761aa, ClassLoaderHierarchyTest$TestClassLoader {0x...}
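For illustration only, not code from the webrev: a simplified sketch of how a compound label like the ones above could be assembled; make_loader_label and its parameters are hypothetical names. Named loaders keep their quoted name, unnamed ones fall back to the loader's class name, both get the identity hash appended, and the bootstrap loader - which has no java.lang.ClassLoader instance - is simply 'bootstrap'.

  #include <stdio.h>

  // buf must be large enough; HotSpot would build this into C-heap/resource memory.
  static const char* make_loader_label(char* buf, size_t buflen,
                                       const char* explicit_name,  // may be NULL
                                       const char* loader_class,   // NULL for bootstrap
                                       unsigned int identity_hash) {
    if (loader_class == NULL) {
      snprintf(buf, buflen, "'bootstrap'");
    } else if (explicit_name != NULL) {
      snprintf(buf, buflen, "'%s' @%x", explicit_name, identity_hash);  // e.g. 'Kevin' @3330b2f5
    } else {
      snprintf(buf, buflen, "%s @%x", loader_class, identity_hash);     // e.g. jdk.internal.reflect.DelegatingClassLoader @20f4f1e0
    }
    return buf;
  }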
As Mandy pointed out in her review yesterday it really isn't necessary to print out the address of the class loader oop anymore. I will be opening up a RFE for the serviceability team to address this. And I will update the comment in the test itself. Is this acceptable to you? Thanks, Lois > > Best regards, > Goetz. > > > > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Lois Foltan >> Sent: Donnerstag, 14. Juni 2018 21:56 >> To: hotspot-dev developers >> Subject: Re: RFR (M) JDK-8202605: Standardize on >> ClassLoaderData::loader_name() throughout the VM to obtain a class >> loader's name >> >> Please review this updated webrev that address review comments received. >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >> >> Thanks, >> Lois >> >> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>> Please review this change to standardize on how to obtain a class >>> loader's name within the VM. SystemDictionary::loader_name() methods >>> have been removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error messages, logging, jcmd, JFR) this change also adopts a new >>> format to append to a class loader's name its identityHashCode and if >>> the loader has not been explicitly named its qualified class name is >>> used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >>> >>> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> From lois.foltan at oracle.com Fri Jun 15 17:14:43 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 15 Jun 2018 13:14:43 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <0942e1e2-4447-6b22-4b06-0ed7022eba5f@oracle.com> Message-ID: On 6/14/2018 6:23 PM, mandy chung wrote: > On 6/14/18 2:28 PM, Lois Foltan wrote: >> It was intentional, I did change >> classfile/classLoaderHierarchyDCmd.cpp to output the class loader's >> name_and_id, thus causing differing results for the test. Like jcmd >> do you think name_and_id is not applicable here as well? > I wasn't aware that classfile/classLoaderHierarchyDCmd.cpp is for jcmd > to use (the file does say so but I was assuming that dcmd source files > would be in other directory). > > The test comment does not indicate the oop pointer address is printed Hi Mandy, I will update the test comment. > but I notice that the jcmd output Thomas sent out earlier. I confirm > from the implementation: > > 162     // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" > 163     st->print("+%.*s", BranchTracker::twig_len, "----------"); > 164     st->print(" %s,", _cld->loader_name_and_id()); > 165     if (!_cld->is_the_null_class_loader_data()) { > 166       st->print(" %s", loader_klass != NULL ?
> loader_klass->external_name() : "??"); > 167       st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); > 168     } > > Since you are in that file, it'd help to update line 162 to include > the address. I will leave the question to the serviceability team on > showing loader oop address vs the identity hash depending on how it's > intended to be used for troubleshooting. Maybe keep it as is and file > an issue to resolve for 11. I have updated line 162 to reflect the current output and will open an RFE for the serviceability team to decide if showing the loader oop address is still needed given the identity hash. Thanks for the review! Lois > > Mandy From stuart.monteith at linaro.org Fri Jun 15 18:02:16 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Fri, 15 Jun 2018 19:02:16 +0100 Subject: RFR: 8204680: Disassembly does not display code strings in stubs In-Reply-To: <262821a6-5de6-0bed-332c-bab06e64b43f@oracle.com> References: <262821a6-5de6-0bed-332c-bab06e64b43f@oracle.com> Message-ID: There appears to be a bug introduced here. I've opened https://bugs.openjdk.java.net/browse/JDK-8205118 and I am investigating. On 11 June 2018 at 21:35, Vladimir Kozlov wrote: > Looks fine to me. > > Thanks, > Vladimir > > > On 6/11/18 7:37 AM, Andrew Haley wrote: >> >> So last Friday I was looking at the code we generate for the runtime >> stubs and I noticed that there were no comments in the disassembly. >> Which is odd, because I'm sure it used to work. I found a bug which >> prevented it from working, fixed it, but there was still no output. >> What??! This led me down a rabbit hole from which I was to emerge >> several hours later. >> >> It turns out there are two separate bugs. >> >> When we disassemble, the code strings are found in the CodeBlob that >> contains the code. Unfortunately, when we use -XX:+PrintStubCode the >> disassembly is done from a CodeBuffer before the code strings have >> actually been copied to the code blob, so the disassembler finds no >> code strings. >> >> Also, the code strings are only copied into the CodeBlob if >> PrintStubCode is true, so "call disnm()" in the debugger doesn't print >> any code strings because they were lost when the CodeBlob was created. >> >> With both of these fixed, we have fully-commented disassembly in the >> stubs again. >> >> http://cr.openjdk.java.net/~aph/8204680/ >> >> OK? >> > From lois.foltan at oracle.com Fri Jun 15 18:26:37 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 15 Jun 2018 14:26:37 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Message-ID: <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> On 6/15/2018 3:06 AM, Thomas Stüfe wrote: > Hi Lois, Hi Thomas, Thank you for looking at this change and giving it another round of review! > > ---- > > We have now: > > Symbol* ClassLoaderData::name() > > which returns ClassLoader.name > > and > > const char* ClassLoaderData::loader_name() > > which returns either ClassLoader.name or, if that is null, the class name. I would like to point out that ClassLoaderData::loader_name() is pretty much unchanged as it exists today, so this behavior is not new or changed. > > 1) if we keep it that way, we should at least rename loader_name() to > something like loader_name_or_class_name() to lessen the surprise.
> > 2) But maybe these two functions should have the same behaviour? > Return name or null if not set, not the class name? I see that nobody > yet uses loader_name(), so you are free to define it as you see fit. > > 3) but if (2), maybe alternativly just get rid of loader_name() > altogether, as just calling as_C_string() on a symbol is not worth a > utility function? I would like to leave ClassLoaderData::loader_name() in for a couple of reasons.? Leaving it in discourages new methods like it to be introduced in the future in data structures other than ClassLoaderData, calling java_lang_ClassLoader::name() directly is not safe during unloading and getting rid of it may force a call to as_C_string() as option #3 suggests but that doesn't handle the bootstrap class loader.? Given this I think the best course of action would be to update ClassLoaderData.hpp with the same comments I put in place within ClassLoaderData.cpp for this method as you suggest below. > > --- > > For VM.systemdictionary, the texts seem to be a bit off: > > 29167: > Dictionary for loader data: 0x00007f7550cb8660 for instance a > 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} > > "for instance a" ? > > Dictionary for loader data: 0x00007f75503b3a50 for instance a > 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} > Dictionary for loader data: 0x00007f75503a4e30 for instance a > 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} > > should that not be "app" or "platform", respectively? > > ... but I just see it was the same way before and not touched by your > change. Maybe here, your new compound name would make sense? > > ---- If I understand correctly this output shows up when one specifies -Xlog:class+load=debug?? I see that the "for instance " is printed by void ClassLoaderData::print_value_on(outputStream* out) const { ? if (!is_unloading() && class_loader() != NULL) { ??? out->print("loader data: " INTPTR_FORMAT " for instance ", p2i(this)); ??? class_loader()->print_value_on(out);? // includes loader_name_and_id() and address of class loader instance and class_loader()->print_value_on(out); eventually calls InstanceKlass::oop_print_value_on to print the "a". void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { ? st->print("a "); ? name()->print_value_on(st); ? obj->print_address_on(st); ? if (this == SystemDictionary::String_klass() This is a good follow up RFE since one will have to look at all the calls to InstanceKlass::oop_print_value_on() to determine if the "a " is still applicable. > > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html > > Good comments. > > suggested change to comment: > > 129 // Obtain the class loader's name and identity hash. If the > class loader's > 130 // name was not explicitly set during construction, the class > loader's name and id > 131 // will be set to the qualified class name of the class loader > along with its > 132 // identity hash. > > rather: > > 129 // Obtain the class loader's name and identity hash. If the > class loader's > 130 // name was not explicitly set during construction, the class > loader's ** _name_and_id field ** > 131 // will be set to the qualified class name of the class loader > along with its > 132 // identity hash. Done. > > ---- > > 133 // If for some reason the ClassLoader's constructor has not > been run, instead of > > I am curious, how can this happen? Bad bytecode instrumentation? 
> Should we also attempt to work in the identity hashcode in that case > to be consistent with the java side? Or maybe name it something like > "classname "? Or is this too exotic a case to care? Bad bytecode instrumentation, Unsafe.allocateInstance(), see test open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java for example.? I too was actually thinking of "classname @" so I do like that approach but it is a rare case. > > ---- > > In various places I see you using: > > 937 if (_class_loader_klass == NULL) { // bootstrap case > > just to make sure, this is the same as > CLD::is_the_null_class_loader_data(), yes? So, one could use one and > assert the other? Yes.? Actually Coleen & I were discussing that maybe we could remove ClassLoaderData::_class_loader_klass since its original purpose was to allow for ultimately a way to obtain the class loader's klass external_name.? Will look into creating a follow on RFE if _class_loader_klass is no longer needed. > > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html > > Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - > could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). > > Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - > this is something I would rather see in the printing code. I agree.? I removed the single quotes but I would like to leave in BOOTSTAP_LOADER_NAME_LEN. > > + // Obtain the class loader's _name, works during unloading. > + const char* loader_name() const; > + Symbol* name() const { return _name; } > > See above my comments to loader_name(). At the very least comment > should be updated describing that this function returns name or class > name or "bootstrap". Comment in ClassLoaderData.hpp will be updated as you suggest. > > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html > > Hm, unfortunately, this does not look so good. I would prefer to keep > the old version, see here my proposal, updated to use your new > CLD::name() function and to remove the offending "<>" around > "bootstrap". > > @@ -157,13 +157,18 @@ > > // Retrieve information. > const Klass* const loader_klass = _cld->class_loader_klass(); > + const Symbol* const loader_name = _cld->name(); > > branchtracker.print(st); > > // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" > st->print("+%.*s", BranchTracker::twig_len, "----------"); > - st->print(" %s,", _cld->loader_name_and_id()); > - if (!_cld->is_the_null_class_loader_data()) { > + if (_cld->is_the_null_class_loader_data()) { > + st->print(" bootstrap"); > + } else { > + if (loader_name != NULL) { > + st->print(" \"%s\",", loader_name->as_C_string()); > + } > st->print(" %s", loader_klass != NULL ? > loader_klass->external_name() : "??"); > st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); > } > > This also depends on what you decide happens with CLD::loader_name(). > If that one were to return "loader name or null if not set, as > ra-allocated const char*", it could be used here. I like this change and I like how the output looks.? Can you take another look at the next webrev's updated comments in test serviceability/dcmd/vm/ClassLoaderHierarchyTest.java?? I plan to open an RFE to have the serviceability team consider removing the address of the loader oop now that the included identity hash provides unique identification. 
> > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html > > In VM.classloader_stats we see the effect of the new naming: > > x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 > 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae > 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 > 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 > 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 > 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 > 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 > 6144 4184 'MyInMemoryClassLoader' @1477089c > 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 > 6144 4184 'MyInMemoryClassLoader' @a87f8ec > 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 > 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed > 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 > 6144 4184 'MyInMemoryClassLoader' @48c76607 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 > 6144 4184 'MyInMemoryClassLoader' @1224144a > 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 > 6144 4184 'MyInMemoryClassLoader' @75437611 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 > 6144 4184 'MyInMemoryClassLoader' @25084a1e > 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 > 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 > 6144 4184 'MyInMemoryClassLoader' @42a48628 > 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 > 7004160 6979376 'app' > 96 > 311296 202600 + unsafe anonymous classes > 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 > 8380416 8301048 'bootstrap' > 92 > 263168 169808 + unsafe anonymous classes > 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 > 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 > > > Since we hide now the class name of the loader, if everyone names > their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - > we loose information. We loose the name of class loader's class' fully qualified name only in the situation where the class loader's name has been explicitly specified by the user during construction.? I would think in that case one would want to see the explicitly given name of the class loader.? We also gain in either situation (unnamed or named class loader), the class loader's identity hash which allows for uniquely identifying a class loader in question. > I'm afraid this will be an issue if people will > start naming their class loaders more and more. It is not unimaginable > that completely different frameworks name their loaders the same. Point taken, however, doesn't including the identity hash allow for unique identification of the class loader? > > This "name or if not then class name" scheme will also complicate > parsing a lot for people who parse the output of these commands. I > would strongly prefer to see both - name and class type. Much like classfile/classLoaderHierarchyDCmd.cpp now generates, correct? Thanks, Lois > > ---- > > Hmm. At this point I noticed that I still had general reservations > about the new compound naming scheme - see my remarks above. So I > guess I stop here to wait for your response before continuing the code > review. > > Thanks & Kind Regards, > > Thomas > > > > > > > > > > > > On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan wrote: >> Please review this updated webrev that address review comments received. 
>> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >> >> Thanks, >> Lois >> >> >> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>> Please review this change to standardize on how to obtain a class loader's >>> name within the VM. SystemDictionary::loader_name() methods have been >>> removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error messages, logging, jcmd, JFR) this change also adopts a new format to >>> append to a class loader's name its identityHashCode and if the loader has >>> not been explicitly named it's qualified class name is used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >>> >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> From lois.foltan at oracle.com Fri Jun 15 19:26:09 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 15 Jun 2018 15:26:09 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> Message-ID: <225c0247-08aa-0395-70de-21e8fda2fd07@oracle.com> Please review this updated webrev based on additional comments received. http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.2/webrev/ Thanks, Lois On 6/14/2018 3:56 PM, Lois Foltan wrote: > Please review this updated webrev that address review comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > Thanks, > Lois > > On 6/13/2018 6:58 PM, Lois Foltan wrote: >> Please review this change to standardize on how to obtain a class >> loader's name within the VM. SystemDictionary::loader_name() methods >> have been removed in favor of ClassLoaderData::loader_name(). >> >> Since the loader name is largely used in the VM for display purposes >> (error messages, logging, jcmd, JFR) this change also adopts a new >> format to append to a class loader's name its identityHashCode and if >> the loader has not been explicitly named it's qualified class name is >> used instead. >> >> 391 /** >> 392 * If the defining loader has a name explicitly set then >> 393 * '' @ >> 394 * If the defining loader has no name then >> 395 * @ >> 396 * If it's built-in loader then omit `@` as there is only one >> instance. >> 397 */ >> >> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> ?????????????? 
hs-tier(3-5), jdk-tier(3) in progress >> >> Thanks, >> Lois >> > From goetz.lindenmaier at sap.com Fri Jun 15 19:26:42 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 15 Jun 2018 19:26:42 +0000 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <1e508284-71f6-a93a-b1a9-5181bc1b436c@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <846b53fb74e341759fde21f2f3eb57e2@sap.com> <1e508284-71f6-a93a-b1a9-5181bc1b436c@oracle.com> Message-ID: <4f02b89023e6459695a314486db0231e@sap.com> Hi Lois,

---------------- stdout ----------------
+-- 'bootstrap',
      |
      +-- 'platform', jdk.internal.loader.ClassLoaders$PlatformClassLoader {0x...}
      |     |
      |     +-- 'app', jdk.internal.loader.ClassLoaders$AppClassLoader {0x...}
      |
      +-- jdk.internal.reflect.DelegatingClassLoader @20f4f1e0, jdk.internal.reflect.DelegatingClassLoader {0x...}
What I mean is that "jdk.internal.reflect.DelegatingClassLoader" appears twice. The printout of the second string could be guarded by ClassLoaderData::loader_name_and_id_prints_classname() { return (strchr(_name_and_id, '\'') == NULL); } Adding this function would fit well into your change. Adapting the output of the jcmd could be left to a follow up, I think Thomas has stronger feelings here than I do. Fixing the comment would be great. The rest is fine. Consider it Reviewed from my side. Best regards, Goetz. From: Lois Foltan Sent: Friday, June 15, 2018 7:11 PM To: Lindenmaier, Goetz ; hotspot-dev developers Subject: Re: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name On 6/15/2018 5:48 AM, Lindenmaier, Goetz wrote: Hi, thanks for this update and for incorporating all my comments! Hi Goetz, Thanks for another round of review! Looks good, just two comments: Is it correct to include the ' ' in BOOTSTRAP_LOADER_NAME? _name does not include ' ' either. if you do print("'%s'", loader_name()) you will get 'app' but ''bootstrap''. I have removed the ' ' from BOOTSTRAP_LOADER_NAME, good catch. In loader_name_and_id you can do return "'" BOOTSTRAP_LOADER_NAME "'"; similar in the jfr file. Done. But I'm also fine with removing loader_name(), then you only have cases that need the ' ' around bootstrap :) I didn't see a use of loader_name() any more, and one can always call java_lang_ClassLoader::name() (except for during unloading.) I am going to leave ClassLoaderData::loader_name() as is. It has the same method name and behavior that currently exists today. I also want to discourage future changes that directly call java_lang_ClassLoader::name() since as you point out that is not safe during unloading. Also removing ClassLoaderData::loader_name() may tempt future changes to introduce a new loader_name() method in some data structure other than ClassLoaderData to obtain the java_lang_ClassLoader::name(). Hopefully by leaving loader_name() in, this will prevent ending up back where we are today with multiple ways one can obtain the loader's name. I don't mind the @id printouts in the class loader tree. But is the comment correct? Doesn't it print the class name twice? -// +-- jdk.internal.reflect.DelegatingClassLoader +// +-- jdk.internal.reflect.DelegatingClassLoader @ jdk.internal.reflect.DelegatingClassLoader Maybe you need ClassLoaderData::loader_name_and_id_prints_classname() { return (strchr(_name_and_id, '\'') == NULL); } to guard against printing this twice. I believe you are referring to the comment in test serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? The actual output looks like this: Running DCMD 'VM.classloaders' through 'JMXExecutor'

---------------- stdout ----------------
+-- 'bootstrap',
      |
      +-- 'platform', jdk.internal.loader.ClassLoaders$PlatformClassLoader {0x...}
      |     |
      |     +-- 'app', jdk.internal.loader.ClassLoaders$AppClassLoader {0x...}
      |
      +-- jdk.internal.reflect.DelegatingClassLoader @20f4f1e0, jdk.internal.reflect.DelegatingClassLoader {0x...}
      |
      +-- 'Kevin' @3330b2f5, ClassLoaderHierarchyTest$TestClassLoader {0x...}
      |
      +-- ClassLoaderHierarchyTest$TestClassLoader @4d81f205, ClassLoaderHierarchyTest$TestClassLoader {0x...}
            |
            +-- 'Bill' @4ea761aa, ClassLoaderHierarchyTest$TestClassLoader {0x
} As Mandy pointed out in her review yesterday it really isn't necessary to print out the address of the class loader oop anymore. I will be opening up a RFE for the serviceability team to address this. And I will update the comment in the test itself. Is this acceptable to you? Thanks, Lois Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Lois Foltan Sent: Donnerstag, 14. Juni 2018 21:56 To: hotspot-dev developers Subject: Re: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name Please review this updated webrev that address review comments received. http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ Thanks, Lois On 6/13/2018 6:58 PM, Lois Foltan wrote: Please review this change to standardize on how to obtain a class loader's name within the VM. SystemDictionary::loader_name() methods have been removed in favor of ClassLoaderData::loader_name(). Since the loader name is largely used in the VM for display purposes (error messages, logging, jcmd, JFR) this change also adopts a new format to append to a class loader's name its identityHashCode and if the loader has not been explicitly named it's qualified class name is used instead. 391 /** 392 * If the defining loader has a name explicitly set then 393 * '' @ 394 * If the defining loader has no name then 395 * @ 396 * If it's built-in loader then omit `@` as there is only one instance. 397 */ The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 Testing: hs-tier(1-2), jdk-tier(1-2) complete hs-tier(3-5), jdk-tier(3) in progress Thanks, Lois From thomas.stuefe at gmail.com Fri Jun 15 19:43:41 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 15 Jun 2018 21:43:41 +0200 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> Message-ID: Hi Lois, On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan wrote: > On 6/15/2018 3:06 AM, Thomas St?fe wrote: > > Hi Lois, > > Hi Thomas, > Thank you for looking at this change and giving it another round of review! > > ---- > > We have now: > > Symbol* ClassLoaderData::name() > > which returns ClassLoader.name > > and > > const char* ClassLoaderData::loader_name() > > which returns either ClassLoader.name or, if that is null, the class > name. > > I would like to point out that ClassLoaderData::loader_name() is pretty much > unchanged as it exists today, so this behavior is not new or changed. > Okay. > > 1) if we keep it that way, we should at least rename loader_name() to > something like loader_name_or_class_name() to lessen the surprise. > > 2) But maybe these two functions should have the same behaviour? > Return name or null if not set, not the class name? I see that nobody > yet uses loader_name(), so you are free to define it as you see fit. > > 3) but if (2), maybe alternativly just get rid of loader_name() > altogether, as just calling as_C_string() on a symbol is not worth a > utility function? > > I would like to leave ClassLoaderData::loader_name() in for a couple of > reasons. 
Leaving it in discourages new methods like it to be introduced in > the future in data structures other than ClassLoaderData, calling > java_lang_ClassLoader::name() directly is not safe during unloading and > getting rid of it may force a call to as_C_string() as option #3 suggests > but that doesn't handle the bootstrap class loader. Given this I think the > best course of action would be to update ClassLoaderData.hpp with the same > comments I put in place within ClassLoaderData.cpp for this method as you > suggest below. Okay. > > > --- > > For VM.systemdictionary, the texts seem to be a bit off: > > 29167: > Dictionary for loader data: 0x00007f7550cb8660 for instance a > 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} > > "for instance a" ? > > Dictionary for loader data: 0x00007f75503b3a50 for instance a > 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} > Dictionary for loader data: 0x00007f75503a4e30 for instance a > 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} > > should that not be "app" or "platform", respectively? > > ... but I just see it was the same way before and not touched by your > change. Maybe here, your new compound name would make sense? > > ---- > > If I understand correctly this output shows up when one specifies > -Xlog:class+load=debug? I saw it as result of jcmd VM.systemdictionary (Coleen's command, I think?) but it may show up too in other places, I did not check. > I see that the "for instance " is printed by > > void ClassLoaderData::print_value_on(outputStream* out) const { > if (!is_unloading() && class_loader() != NULL) { > out->print("loader data: " INTPTR_FORMAT " for instance ", p2i(this)); > class_loader()->print_value_on(out); // includes loader_name_and_id() > and address of class loader instance > > and class_loader()->print_value_on(out); eventually calls > InstanceKlass::oop_print_value_on to print the "a". > > void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { > st->print("a "); > name()->print_value_on(st); > obj->print_address_on(st); > if (this == SystemDictionary::String_klass() > > This is a good follow up RFE since one will have to look at all the calls to > InstanceKlass::oop_print_value_on() to determine if the "a " is still > applicable. > Yes, there may be a number of follow up cleanups after this patch is in. > > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html > > Good comments. > > suggested change to comment: > > 129 // Obtain the class loader's name and identity hash. If the > class loader's > 130 // name was not explicitly set during construction, the class > loader's name and id > 131 // will be set to the qualified class name of the class loader > along with its > 132 // identity hash. > > rather: > > 129 // Obtain the class loader's name and identity hash. If the > class loader's > 130 // name was not explicitly set during construction, the class > loader's ** _name_and_id field ** > 131 // will be set to the qualified class name of the class loader > along with its > 132 // identity hash. > > Done. > > > ---- > > 133 // If for some reason the ClassLoader's constructor has not > been run, instead of > > I am curious, how can this happen? Bad bytecode instrumentation? > Should we also attempt to work in the identity hashcode in that case > to be consistent with the java side? Or maybe name it something like > "classname "? Or is this too exotic a case to care? 
> > Bad bytecode instrumentation, Unsafe.allocateInstance(), see test > open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java > for example. JDK-8202758... Wow. Yikes. > I too was actually thinking of "classname @" so I > do like that approach but it is a rare case. > Thanks for taking that suggestion. > > ---- > > In various places I see you using: > > 937 if (_class_loader_klass == NULL) { // bootstrap case > > just to make sure, this is the same as > CLD::is_the_null_class_loader_data(), yes? So, one could use one and > assert the other? > > Yes. Actually Coleen & I were discussing that maybe we could remove > ClassLoaderData::_class_loader_klass since its original purpose was to allow > for ultimately a way to obtain the class loader's klass external_name. Will > look into creating a follow on RFE if _class_loader_klass is no longer > needed. > I use it in VM.classloaders and VM.metaspace, to print out the loader class name and in VM.classloaders verbose mode I print out the Klass* pointer too. We found it useful in some debugging scenarios. Btw, for the same reason I print out the "{loader oop}" in VM.classloaders - debugging help. This was also a wish of Kirk Pepperdine when we introduced VM.classloaders, see discussion: http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html . There are discussions and experiments currently done to execute multiple jcmd subcommands at one safe point. In this context, printing oops is more interesting in diagnostic commands, since you can chain multiple commands together and get consistent oop values. See discussions here: http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html (Currently, Frederic Parain from Oracle took this over and provided a prototype patch). But all in all, if it makes matters easier, I think yes we should remove _class_loader_klass from CLD. > > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html > > Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - > could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). > > Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - > this is something I would rather see in the printing code. > > I agree. I removed the single quotes but I would like to leave in > BOOTSTAP_LOADER_NAME_LEN. > Okay. We should make sure they stay consistent, but that is no terrible burden. > > + // Obtain the class loader's _name, works during unloading. > + const char* loader_name() const; > + Symbol* name() const { return _name; } > > See above my comments to loader_name(). At the very least comment > should be updated describing that this function returns name or class > name or "bootstrap". > > Comment in ClassLoaderData.hpp will be updated as you suggest. > Thank you. > > > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html > > Hm, unfortunately, this does not look so good. I would prefer to keep > the old version, see here my proposal, updated to use your new > CLD::name() function and to remove the offending "<>" around > "bootstrap". > > @@ -157,13 +157,18 @@ > > // Retrieve information. > const Klass* const loader_klass = _cld->class_loader_klass(); > + const Symbol* const loader_name = _cld->name(); > > branchtracker.print(st); > > // e.g. 
"+--- jdk.internal.reflect.DelegatingClassLoader" > st->print("+%.*s", BranchTracker::twig_len, "----------"); > - st->print(" %s,", _cld->loader_name_and_id()); > - if (!_cld->is_the_null_class_loader_data()) { > + if (_cld->is_the_null_class_loader_data()) { > + st->print(" bootstrap"); > + } else { > + if (loader_name != NULL) { > + st->print(" \"%s\",", loader_name->as_C_string()); > + } > st->print(" %s", loader_klass != NULL ? > loader_klass->external_name() : "??"); > st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); > } > > This also depends on what you decide happens with CLD::loader_name(). > If that one were to return "loader name or null if not set, as > ra-allocated const char*", it could be used here. > > I like this change and I like how the output looks. Can you take another > look at the next webrev's updated comments in test > serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? Sure. It is not yet posted, yes? May take till monday though, I am gone over the weekend. > I plan to open an RFE > to have the serviceability team consider removing the address of the loader > oop now that the included identity hash provides unique identification. > See my remarks above - that command including oop was added by me, and if possible I'd like to keep the oop for debugging purposes. However, I could move the output to the "verbose" section (if you run VM.classloaders verbose, there are additional things printed below the class loader name). Note however, that printing "{}" was consistent with pre-existing commands from Oracle, in this case VM.systemdictionary. > > > ---- > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html > > In VM.classloader_stats we see the effect of the new naming: > > x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 > 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae > 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 > 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 > 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 > 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 > 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 > 6144 4184 'MyInMemoryClassLoader' @1477089c > 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 > 6144 4184 'MyInMemoryClassLoader' @a87f8ec > 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 > 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed > 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 > 6144 4184 'MyInMemoryClassLoader' @48c76607 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 > 6144 4184 'MyInMemoryClassLoader' @1224144a > 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 > 6144 4184 'MyInMemoryClassLoader' @75437611 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 > 6144 4184 'MyInMemoryClassLoader' @25084a1e > 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 > 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 > 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 > 6144 4184 'MyInMemoryClassLoader' @42a48628 > 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 > 7004160 6979376 'app' > 96 > 311296 202600 + unsafe anonymous classes > 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 > 8380416 8301048 'bootstrap' > 92 > 263168 169808 + unsafe anonymous classes > 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 > 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 > > >> Since we hide now the 
class name of the loader, if everyone names >> their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - >> we loose information. > > We loose the name of class loader's class' fully qualified name only in the > situation where the class loader's name has been explicitly specified by the > user during construction. I would think in that case one would want to see > the explicitly given name of the class loader. We also gain in either > situation (unnamed or named class loader), the class loader's identity hash > which allows for uniquely identifying a class loader in question. For the record, I would prefer a naming scheme which printed unconditionally both name and class name, if both are set: '"name", instance of , @id' or 'instance of , @id' or maybe some more condensed, technical form, as a clear triple: '[name, , @id]' or '{name, , @id}' The reason why I keep harping on this is that it is useful to have consistent output, meaning, output that does not change its format on a line-by-line base. Just a tiny example why this is useful, lets say I run a Spring MVC app and want to know the number of Spring loaders, I do a: ./images/jdk/bin/jcmd hello VM.classloaders | grep org.springframework | wc -l Won't work consistently anymore if class names disappear for loader names which have names. Of course, there are myriad other ways to get the same information, so this is just an illustration. -- But I guess I won't convince you that this is better, and it seems you spent a lot of thoughts and discussions on this point already. I think this is a case of one-size-fits-not-all. And also a matter of taste. If emphasis is on brevity, your naming scheme is better. If ease-of-parsing and ease-of-reading are important, I think my scheme wins. But as long as we have alternatives - e.g. CLD::name() and CLD::class_loader_class() - and as long as VM.classloaders and VM.metaspace commands stay useful, I am content and can live with your scheme. > >> I'm afraid this will be an issue if people will >> start naming their class loaders more and more. It is not unimaginable >> that completely different frameworks name their loaders the same. > > Point taken, however, doesn't including the identity hash allow for unique > identification of the class loader? I think the point of diagnostic commands is to get information quick. An identity hash may help me after I managed to finally resolve it, but it is not a quick process (that I know of). Whereas, for example, just reading "com.wily.introscope.Loader" tells me immediately that the VM I am looking at has Wily byte code modifications enabled. > > >> This "name or if not then class name" scheme will also complicate >> parsing a lot for people who parse the output of these commands. I >> would strongly prefer to see both - name and class type. > > Much like classfile/classLoaderHierarchyDCmd.cpp now generates, correct? > Yes! :) > Thanks, > Lois > > I just saw your webrev popping in, but again it is late. I'll take a look tomorrow morning or monday. Thank you for your work. ..Thomas > ---- > > Hmm. At this point I noticed that I still had general reservations > about the new compound naming scheme - see my remarks above. So I > guess I stop here to wait for your response before continuing the code > review. > > Thanks & Kind Regards, > > Thomas > > > > > > > > > > > > On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan wrote: > > Please review this updated webrev that address review comments received. 
> > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > Thanks, > Lois > > > On 6/13/2018 6:58 PM, Lois Foltan wrote: > > Please review this change to standardize on how to obtain a class loader's > name within the VM. SystemDictionary::loader_name() methods have been > removed in favor of ClassLoaderData::loader_name(). > > Since the loader name is largely used in the VM for display purposes > (error messages, logging, jcmd, JFR) this change also adopts a new format to > append to a class loader's name its identityHashCode and if the loader has > not been explicitly named its qualified class name is used instead. > > 391 /** > 392 * If the defining loader has a name explicitly set then > 393 * '' @ > 394 * If the defining loader has no name then > 395 * @ > 396 * If it's built-in loader then omit `@` as there is only one > instance. > 397 */ > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > hs-tier(3-5), jdk-tier(3) in progress > > Thanks, > Lois > > From lois.foltan at oracle.com Fri Jun 15 23:52:50 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 15 Jun 2018 19:52:50 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> Message-ID: <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> Hi Thomas, I have read through all your comments below, thank you. I think the best compromise that hopefully will enable this change to go forward is to back out my changes to classfile/classLoaderHierarchyDCmd.cpp and classfile/classLoaderStats.cpp. This will allow the serviceability team to review the new format for the class loader's name_and_id and go forward if applicable to jcmd in a follow on RFE. Updated webrev at: http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ Thanks, Lois On 6/15/2018 3:43 PM, Thomas Stüfe wrote: > Hi Lois, > > On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan wrote: >> On 6/15/2018 3:06 AM, Thomas Stüfe wrote: >> >> Hi Lois, >> >> Hi Thomas, >> Thank you for looking at this change and giving it another round of review! >> >> ---- >> >> We have now: >> >> Symbol* ClassLoaderData::name() >> >> which returns ClassLoader.name >> >> and >> >> const char* ClassLoaderData::loader_name() >> >> which returns either ClassLoader.name or, if that is null, the class >> name. >> >> I would like to point out that ClassLoaderData::loader_name() is pretty much >> unchanged as it exists today, so this behavior is not new or changed. >>
Leaving it in discourages new methods like it to be introduced in >> the future in data structures other than ClassLoaderData, calling >> java_lang_ClassLoader::name() directly is not safe during unloading and >> getting rid of it may force a call to as_C_string() as option #3 suggests >> but that doesn't handle the bootstrap class loader. Given this I think the >> best course of action would be to update ClassLoaderData.hpp with the same >> comments I put in place within ClassLoaderData.cpp for this method as you >> suggest below. > Okay. > >> >> --- >> >> For VM.systemdictionary, the texts seem to be a bit off: >> >> 29167: >> Dictionary for loader data: 0x00007f7550cb8660 for instance a >> 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} >> >> "for instance a" ? >> >> Dictionary for loader data: 0x00007f75503b3a50 for instance a >> 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} >> Dictionary for loader data: 0x00007f75503a4e30 for instance a >> 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} >> >> should that not be "app" or "platform", respectively? >> >> ... but I just see it was the same way before and not touched by your >> change. Maybe here, your new compound name would make sense? >> >> ---- >> >> If I understand correctly this output shows up when one specifies >> -Xlog:class+load=debug? > I saw it as result of jcmd VM.systemdictionary (Coleen's command, I > think?) but it may show up too in other places, I did not check. > >> I see that the "for instance " is printed by >> >> void ClassLoaderData::print_value_on(outputStream* out) const { >> if (!is_unloading() && class_loader() != NULL) { >> out->print("loader data: " INTPTR_FORMAT " for instance ", p2i(this)); >> class_loader()->print_value_on(out); // includes loader_name_and_id() >> and address of class loader instance >> >> and class_loader()->print_value_on(out); eventually calls >> InstanceKlass::oop_print_value_on to print the "a". >> >> void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { >> st->print("a "); >> name()->print_value_on(st); >> obj->print_address_on(st); >> if (this == SystemDictionary::String_klass() >> >> This is a good follow up RFE since one will have to look at all the calls to >> InstanceKlass::oop_print_value_on() to determine if the "a " is still >> applicable. >> > Yes, there may be a number of follow up cleanups after this patch is in. > >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html >> >> Good comments. >> >> suggested change to comment: >> >> 129 // Obtain the class loader's name and identity hash. If the >> class loader's >> 130 // name was not explicitly set during construction, the class >> loader's name and id >> 131 // will be set to the qualified class name of the class loader >> along with its >> 132 // identity hash. >> >> rather: >> >> 129 // Obtain the class loader's name and identity hash. If the >> class loader's >> 130 // name was not explicitly set during construction, the class >> loader's ** _name_and_id field ** >> 131 // will be set to the qualified class name of the class loader >> along with its >> 132 // identity hash. >> >> Done. >> >> >> ---- >> >> 133 // If for some reason the ClassLoader's constructor has not >> been run, instead of >> >> I am curious, how can this happen? Bad bytecode instrumentation? >> Should we also attempt to work in the identity hashcode in that case >> to be consistent with the java side? 
Or maybe name it something like >> "classname "? Or is this too exotic a case to care? >> >> Bad bytecode instrumentation, Unsafe.allocateInstance(), see test >> open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java >> for example. > JDK-8202758... Wow. Yikes. > >> I too was actually thinking of "classname @" so I >> do like that approach but it is a rare case. >> > Thanks for taking that suggestion. > >> ---- >> >> In various places I see you using: >> >> 937 if (_class_loader_klass == NULL) { // bootstrap case >> >> just to make sure, this is the same as >> CLD::is_the_null_class_loader_data(), yes? So, one could use one and >> assert the other? >> >> Yes. Actually Coleen & I were discussing that maybe we could remove >> ClassLoaderData::_class_loader_klass since its original purpose was to allow >> for ultimately a way to obtain the class loader's klass external_name. Will >> look into creating a follow on RFE if _class_loader_klass is no longer >> needed. >> > I use it in VM.classloaders and VM.metaspace, to print out the loader > class name and in VM.classloaders verbose mode I print out the Klass* > pointer too. We found it useful in some debugging scenarios. > > Btw, for the same reason I print out the "{loader oop}" in > VM.classloaders - debugging help. This was also a wish of Kirk > Pepperdine when we introduced VM.classloaders, see discussion: > http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html > . > > There are discussions and experiments currently done to execute > multiple jcmd subcommands at one safe point. In this context, printing > oops is more interesting in diagnostic commands, since you can chain > multiple commands together and get consistent oop values. See > discussions here: > http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html > (Currently, Frederic Parain from Oracle took this over and provided a > prototype patch). > > But all in all, if it makes matters easier, I think yes we should > remove _class_loader_klass from CLD. > >> ---- >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html >> >> Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - >> could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). >> >> Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - >> this is something I would rather see in the printing code. >> >> I agree. I removed the single quotes but I would like to leave in >> BOOTSTAP_LOADER_NAME_LEN. >> > Okay. We should make sure they stay consistent, but that is no terrible burden. > >> + // Obtain the class loader's _name, works during unloading. >> + const char* loader_name() const; >> + Symbol* name() const { return _name; } >> >> See above my comments to loader_name(). At the very least comment >> should be updated describing that this function returns name or class >> name or "bootstrap". >> >> Comment in ClassLoaderData.hpp will be updated as you suggest. >> > Thank you. > >> >> ---- >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html >> >> Hm, unfortunately, this does not look so good. I would prefer to keep >> the old version, see here my proposal, updated to use your new >> CLD::name() function and to remove the offending "<>" around >> "bootstrap". >> >> @@ -157,13 +157,18 @@ >> >> // Retrieve information. 
>> const Klass* const loader_klass = _cld->class_loader_klass(); >> + const Symbol* const loader_name = _cld->name(); >> >> branchtracker.print(st); >> >> // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" >> st->print("+%.*s", BranchTracker::twig_len, "----------"); >> - st->print(" %s,", _cld->loader_name_and_id()); >> - if (!_cld->is_the_null_class_loader_data()) { >> + if (_cld->is_the_null_class_loader_data()) { >> + st->print(" bootstrap"); >> + } else { >> + if (loader_name != NULL) { >> + st->print(" \"%s\",", loader_name->as_C_string()); >> + } >> st->print(" %s", loader_klass != NULL ? >> loader_klass->external_name() : "??"); >> st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); >> } >> >> This also depends on what you decide happens with CLD::loader_name(). >> If that one were to return "loader name or null if not set, as >> ra-allocated const char*", it could be used here. >> >> I like this change and I like how the output looks. Can you take another >> look at the next webrev's updated comments in test >> serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? > Sure. It is not yet posted, yes? > > May take till monday though, I am gone over the weekend. > >> I plan to open an RFE >> to have the serviceability team consider removing the address of the loader >> oop now that the included identity hash provides unique identification. >> > See my remarks above - that command including oop was added by me, and > if possible I'd like to keep the oop for debugging purposes. However, > I could move the output to the "verbose" section (if you run > VM.classloaders verbose, there are additional things printed below the > class loader name). > > Note however, that printing "{}" was consistent with pre-existing > commands from Oracle, in this case VM.systemdictionary. 
> >> >> ---- >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html >> >> In VM.classloader_stats we see the effect of the new naming: >> >> x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 >> 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae >> 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 >> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 >> 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 >> 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 >> 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 >> 6144 4184 'MyInMemoryClassLoader' @1477089c >> 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 >> 6144 4184 'MyInMemoryClassLoader' @a87f8ec >> 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 >> 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed >> 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 >> 6144 4184 'MyInMemoryClassLoader' @48c76607 >> 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 >> 6144 4184 'MyInMemoryClassLoader' @1224144a >> 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 >> 6144 4184 'MyInMemoryClassLoader' @75437611 >> 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 >> 6144 4184 'MyInMemoryClassLoader' @25084a1e >> 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 >> 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 >> 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 >> 6144 4184 'MyInMemoryClassLoader' @42a48628 >> 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 >> 7004160 6979376 'app' >> 96 >> 311296 202600 + unsafe anonymous classes >> 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 >> 8380416 8301048 'bootstrap' >> 92 >> 263168 169808 + unsafe anonymous classes >> 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 >> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 >> >> >>> Since we hide now the class name of the loader, if everyone names >>> their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - >>> we loose information. >> We loose the name of class loader's class' fully qualified name only in the >> situation where the class loader's name has been explicitly specified by the >> user during construction. I would think in that case one would want to see >> the explicitly given name of the class loader. We also gain in either >> situation (unnamed or named class loader), the class loader's identity hash >> which allows for uniquely identifying a class loader in question. > For the record, I would prefer a naming scheme which printed > unconditionally both name and class name, if both are set: > > '"name", instance of , @id' > > or > > 'instance of , @id' > > or maybe some more condensed, technical form, as a clear triple: > > '[name, , @id]' or '{name, , @id}' > > The reason why I keep harping on this is that it is useful to have > consistent output, meaning, output that does not change its format on > a line-by-line base. > > Just a tiny example why this is useful, lets say I run a Spring MVC > app and want to know the number of Spring loaders, I do a: > > ./images/jdk/bin/jcmd hello VM.classloaders | grep org.springframework | wc -l > > Won't work consistently anymore if class names disappear for loader > names which have names. > > Of course, there are myriad other ways to get the same information, so > this is just an illustration. 
> > -- > > But I guess I won't convince you that this is better, and it seems you > spent a lot of thoughts and discussions on this point already. I think > this is a case of one-size-fits-not-all. And also a matter of taste. > > If emphasis is on brevity, your naming scheme is better. If > ease-of-parsing and ease-of-reading are important, I think my scheme > wins. > > But as long as we have alternatives - e.g. CLD::name() and > CLD::class_loader_class() - and as long as VM.classloaders and > VM.metaspace commands stay useful, I am content and can live with your > scheme. > >>> I'm afraid this will be an issue if people will >>> start naming their class loaders more and more. It is not unimaginable >>> that completely different frameworks name their loaders the same. >> Point taken, however, doesn't including the identity hash allow for unique >> identification of the class loader? > I think the point of diagnostic commands is to get information quick. > An identity hash may help me after I managed to finally resolve it, > but it is not a quick process (that I know of). Whereas, for example, > just reading "com.wily.introscope.Loader" tells me immediately that > the VM I am looking at has Wily byte code modifications enabled. > >> >>> This "name or if not then class name" scheme will also complicate >>> parsing a lot for people who parse the output of these commands. I >>> would strongly prefer to see both - name and class type. >> Much like classfile/classLoaderHierarchyDCmd.cpp now generates, correct? >> > Yes! :) > >> Thanks, >> Lois >> >> > I just saw your webrev popping in, but again it is late. I'll take a > look tomorrow morning or monday. Thank you for your work. > > ..Thomas > >> ---- >> >> Hmm. At this point I noticed that I still had general reservations >> about the new compound naming scheme - see my remarks above. So I >> guess I stop here to wait for your response before continuing the code >> review. >> >> Thanks & Kind Regards, >> >> Thomas >> >> >> >> >> >> >> >> >> >> >> >> On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan wrote: >> >> Please review this updated webrev that address review comments received. >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >> >> Thanks, >> Lois >> >> >> On 6/13/2018 6:58 PM, Lois Foltan wrote: >> >> Please review this change to standardize on how to obtain a class loader's >> name within the VM. SystemDictionary::loader_name() methods have been >> removed in favor of ClassLoaderData::loader_name(). >> >> Since the loader name is largely used in the VM for display purposes >> (error messages, logging, jcmd, JFR) this change also adopts a new format to >> append to a class loader's name its identityHashCode and if the loader has >> not been explicitly named it's qualified class name is used instead. >> >> 391 /** >> 392 * If the defining loader has a name explicitly set then >> 393 * '' @ >> 394 * If the defining loader has no name then >> 395 * @ >> 396 * If it's built-in loader then omit `@` as there is only one >> instance. >> 397 */ >> >> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. 
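Expressed as a standalone sketch -- this is not the code in the webrev, and the helper name and parameters below are invented purely for illustration -- the naming scheme described above amounts to:

#include <cstdio>
#include <string>

// Hypothetical helper, for illustration only: compose the display name the
// way the scheme above describes it.
//   name    - name explicitly given at construction, or "" if none
//   klass   - qualified class name of the loader
//   id_hash - identityHashCode of the loader instance
//   builtin - true for the built-in loaders ('bootstrap', 'app', 'platform')
static std::string name_and_id(const std::string& name, const std::string& klass,
                               unsigned int id_hash, bool builtin) {
  char buf[512];
  if (builtin) {
    snprintf(buf, sizeof(buf), "'%s'", name.c_str());            // only one instance, no @<id>
  } else if (!name.empty()) {
    snprintf(buf, sizeof(buf), "'%s' @%x", name.c_str(), id_hash);
  } else {
    snprintf(buf, sizeof(buf), "%s @%x", klass.c_str(), id_hash);
  }
  return buf;
}

int main() {
  printf("%s\n", name_and_id("MyInMemoryClassLoader", "MyInMemoryClassLoader", 0x17cdf2d0, false).c_str());
  printf("%s\n", name_and_id("", "jdk.internal.reflect.DelegatingClassLoader", 0x7b5a12ae, false).c_str());
  printf("%s\n", name_and_id("app", "jdk.internal.loader.ClassLoaders$AppClassLoader", 0, true).c_str());
  return 0;
}

With the sample data from the VM.classloader_stats output shown earlier this prints 'MyInMemoryClassLoader' @17cdf2d0, jdk.internal.reflect.DelegatingClassLoader @7b5a12ae and 'app', which is exactly the compound form being discussed.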
>> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> hs-tier(3-5), jdk-tier(3) in progress >> >> Thanks, >> Lois >> >> From thomas.stuefe at gmail.com Sat Jun 16 05:01:31 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Sat, 16 Jun 2018 07:01:31 +0200 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> Message-ID: Hi Lois, thanks, I had a look and this looks good to me now. Thank you for your patiente. Best Regards, Thomas On Sat, Jun 16, 2018 at 1:52 AM, Lois Foltan wrote: > Hi Thomas, > > I have read through all your comments below, thank you. I think the best > compromise that hopefully will enable this change to go forward is to back > out my changes to classfile/classLoaderHierarchyDCmd.cpp and > classfile/classLoaderStats.cpp. This will allow the serviceability team to > review the new format for the class loader's name_and_id and go forward if > applicable to jcmd in a follow on RFE. Updated webrev at: > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ > > Thanks, > Lois > > > On 6/15/2018 3:43 PM, Thomas St?fe wrote: > >> Hi Lois, >> >> On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan >> wrote: >>> >>> On 6/15/2018 3:06 AM, Thomas St?fe wrote: >>> >>> Hi Lois, >>> >>> Hi Thomas, >>> Thank you for looking at this change and giving it another round of >>> review! >>> >>> ---- >>> >>> We have now: >>> >>> Symbol* ClassLoaderData::name() >>> >>> which returns ClassLoader.name >>> >>> and >>> >>> const char* ClassLoaderData::loader_name() >>> >>> which returns either ClassLoader.name or, if that is null, the class >>> name. >>> >>> I would like to point out that ClassLoaderData::loader_name() is pretty >>> much >>> unchanged as it exists today, so this behavior is not new or changed. >>> >> Okay. >> >>> 1) if we keep it that way, we should at least rename loader_name() to >>> something like loader_name_or_class_name() to lessen the surprise. >>> >>> 2) But maybe these two functions should have the same behaviour? >>> Return name or null if not set, not the class name? I see that nobody >>> yet uses loader_name(), so you are free to define it as you see fit. >>> >>> 3) but if (2), maybe alternativly just get rid of loader_name() >>> altogether, as just calling as_C_string() on a symbol is not worth a >>> utility function? >>> >>> I would like to leave ClassLoaderData::loader_name() in for a couple of >>> reasons. Leaving it in discourages new methods like it to be introduced >>> in >>> the future in data structures other than ClassLoaderData, calling >>> java_lang_ClassLoader::name() directly is not safe during unloading and >>> getting rid of it may force a call to as_C_string() as option #3 suggests >>> but that doesn't handle the bootstrap class loader. Given this I think >>> the >>> best course of action would be to update ClassLoaderData.hpp with the >>> same >>> comments I put in place within ClassLoaderData.cpp for this method as you >>> suggest below. >> >> Okay. 
>> >>> >>> --- >>> >>> For VM.systemdictionary, the texts seem to be a bit off: >>> >>> 29167: >>> Dictionary for loader data: 0x00007f7550cb8660 for instance a >>> 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} >>> >>> "for instance a" ? >>> >>> Dictionary for loader data: 0x00007f75503b3a50 for instance a >>> 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} >>> Dictionary for loader data: 0x00007f75503a4e30 for instance a >>> >>> 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} >>> >>> should that not be "app" or "platform", respectively? >>> >>> ... but I just see it was the same way before and not touched by your >>> change. Maybe here, your new compound name would make sense? >>> >>> ---- >>> >>> If I understand correctly this output shows up when one specifies >>> -Xlog:class+load=debug? >> >> I saw it as result of jcmd VM.systemdictionary (Coleen's command, I >> think?) but it may show up too in other places, I did not check. >> >>> I see that the "for instance " is printed by >>> >>> void ClassLoaderData::print_value_on(outputStream* out) const { >>> if (!is_unloading() && class_loader() != NULL) { >>> out->print("loader data: " INTPTR_FORMAT " for instance ", >>> p2i(this)); >>> class_loader()->print_value_on(out); // includes >>> loader_name_and_id() >>> and address of class loader instance >>> >>> and class_loader()->print_value_on(out); eventually calls >>> InstanceKlass::oop_print_value_on to print the "a". >>> >>> void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { >>> st->print("a "); >>> name()->print_value_on(st); >>> obj->print_address_on(st); >>> if (this == SystemDictionary::String_klass() >>> >>> This is a good follow up RFE since one will have to look at all the calls >>> to >>> InstanceKlass::oop_print_value_on() to determine if the "a " is still >>> applicable. >>> >> Yes, there may be a number of follow up cleanups after this patch is in. >> >>> >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html >>> >>> Good comments. >>> >>> suggested change to comment: >>> >>> 129 // Obtain the class loader's name and identity hash. If the >>> class loader's >>> 130 // name was not explicitly set during construction, the class >>> loader's name and id >>> 131 // will be set to the qualified class name of the class loader >>> along with its >>> 132 // identity hash. >>> >>> rather: >>> >>> 129 // Obtain the class loader's name and identity hash. If the >>> class loader's >>> 130 // name was not explicitly set during construction, the class >>> loader's ** _name_and_id field ** >>> 131 // will be set to the qualified class name of the class loader >>> along with its >>> 132 // identity hash. >>> >>> Done. >>> >>> >>> ---- >>> >>> 133 // If for some reason the ClassLoader's constructor has not >>> been run, instead of >>> >>> I am curious, how can this happen? Bad bytecode instrumentation? >>> Should we also attempt to work in the identity hashcode in that case >>> to be consistent with the java side? Or maybe name it something like >>> "classname "? Or is this too exotic a case to care? >>> >>> Bad bytecode instrumentation, Unsafe.allocateInstance(), see test >>> >>> open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java >>> for example. >> >> JDK-8202758... Wow. Yikes. >> >>> I too was actually thinking of "classname @" so I >>> do like that approach but it is a rare case. 
>>> >> Thanks for taking that suggestion. >> >>> ---- >>> >>> In various places I see you using: >>> >>> 937 if (_class_loader_klass == NULL) { // bootstrap case >>> >>> just to make sure, this is the same as >>> CLD::is_the_null_class_loader_data(), yes? So, one could use one and >>> assert the other? >>> >>> Yes. Actually Coleen & I were discussing that maybe we could remove >>> ClassLoaderData::_class_loader_klass since its original purpose was to >>> allow >>> for ultimately a way to obtain the class loader's klass external_name. >>> Will >>> look into creating a follow on RFE if _class_loader_klass is no longer >>> needed. >>> >> I use it in VM.classloaders and VM.metaspace, to print out the loader >> class name and in VM.classloaders verbose mode I print out the Klass* >> pointer too. We found it useful in some debugging scenarios. >> >> Btw, for the same reason I print out the "{loader oop}" in >> VM.classloaders - debugging help. This was also a wish of Kirk >> Pepperdine when we introduced VM.classloaders, see discussion: >> >> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html >> . >> >> There are discussions and experiments currently done to execute >> multiple jcmd subcommands at one safe point. In this context, printing >> oops is more interesting in diagnostic commands, since you can chain >> multiple commands together and get consistent oop values. See >> discussions here: >> >> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html >> (Currently, Frederic Parain from Oracle took this over and provided a >> prototype patch). >> >> But all in all, if it makes matters easier, I think yes we should >> remove _class_loader_klass from CLD. >> >>> ---- >>> >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html >>> >>> Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - >>> could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). >>> >>> Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - >>> this is something I would rather see in the printing code. >>> >>> I agree. I removed the single quotes but I would like to leave in >>> BOOTSTAP_LOADER_NAME_LEN. >>> >> Okay. We should make sure they stay consistent, but that is no terrible >> burden. >> >>> + // Obtain the class loader's _name, works during unloading. >>> + const char* loader_name() const; >>> + Symbol* name() const { return _name; } >>> >>> See above my comments to loader_name(). At the very least comment >>> should be updated describing that this function returns name or class >>> name or "bootstrap". >>> >>> Comment in ClassLoaderData.hpp will be updated as you suggest. >>> >> Thank you. >> >>> >>> ---- >>> >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html >>> >>> Hm, unfortunately, this does not look so good. I would prefer to keep >>> the old version, see here my proposal, updated to use your new >>> CLD::name() function and to remove the offending "<>" around >>> "bootstrap". >>> >>> @@ -157,13 +157,18 @@ >>> >>> // Retrieve information. >>> const Klass* const loader_klass = _cld->class_loader_klass(); >>> + const Symbol* const loader_name = _cld->name(); >>> >>> branchtracker.print(st); >>> >>> // e.g. 
"+--- jdk.internal.reflect.DelegatingClassLoader" >>> st->print("+%.*s", BranchTracker::twig_len, "----------"); >>> - st->print(" %s,", _cld->loader_name_and_id()); >>> - if (!_cld->is_the_null_class_loader_data()) { >>> + if (_cld->is_the_null_class_loader_data()) { >>> + st->print(" bootstrap"); >>> + } else { >>> + if (loader_name != NULL) { >>> + st->print(" \"%s\",", loader_name->as_C_string()); >>> + } >>> st->print(" %s", loader_klass != NULL ? >>> loader_klass->external_name() : "??"); >>> st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); >>> } >>> >>> This also depends on what you decide happens with CLD::loader_name(). >>> If that one were to return "loader name or null if not set, as >>> ra-allocated const char*", it could be used here. >>> >>> I like this change and I like how the output looks. Can you take another >>> look at the next webrev's updated comments in test >>> serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? >> >> Sure. It is not yet posted, yes? >> >> May take till monday though, I am gone over the weekend. >> >>> I plan to open an RFE >>> to have the serviceability team consider removing the address of the >>> loader >>> oop now that the included identity hash provides unique identification. >>> >> See my remarks above - that command including oop was added by me, and >> if possible I'd like to keep the oop for debugging purposes. However, >> I could move the output to the "verbose" section (if you run >> VM.classloaders verbose, there are additional things printed below the >> class loader name). >> >> Note however, that printing "{}" was consistent with pre-existing >> commands from Oracle, in this case VM.systemdictionary. >> >>> >>> ---- >>> >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html >>> >>> In VM.classloader_stats we see the effect of the new naming: >>> >>> x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 >>> 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae >>> 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 >>> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 >>> 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 >>> 6144 4184 'MyInMemoryClassLoader' @1477089c >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 >>> 6144 4184 'MyInMemoryClassLoader' @a87f8ec >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 >>> 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 >>> 6144 4184 'MyInMemoryClassLoader' @48c76607 >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 >>> 6144 4184 'MyInMemoryClassLoader' @1224144a >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 >>> 6144 4184 'MyInMemoryClassLoader' @75437611 >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 >>> 6144 4184 'MyInMemoryClassLoader' @25084a1e >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 >>> 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 >>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 >>> 6144 4184 'MyInMemoryClassLoader' @42a48628 >>> 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 >>> 7004160 6979376 'app' >>> 96 >>> 311296 202600 + unsafe anonymous classes >>> 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 >>> 8380416 8301048 'bootstrap' >>> 92 >>> 263168 169808 + unsafe 
anonymous classes >>> 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 >>> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 >>> >>> >>>> Since we hide now the class name of the loader, if everyone names >>>> their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - >>>> we loose information. >>> >>> We loose the name of class loader's class' fully qualified name only in >>> the >>> situation where the class loader's name has been explicitly specified by >>> the >>> user during construction. I would think in that case one would want to >>> see >>> the explicitly given name of the class loader. We also gain in either >>> situation (unnamed or named class loader), the class loader's identity >>> hash >>> which allows for uniquely identifying a class loader in question. >> >> For the record, I would prefer a naming scheme which printed >> unconditionally both name and class name, if both are set: >> >> '"name", instance of , @id' >> >> or >> >> 'instance of , @id' >> >> or maybe some more condensed, technical form, as a clear triple: >> >> '[name, , @id]' or '{name, , @id}' >> >> The reason why I keep harping on this is that it is useful to have >> consistent output, meaning, output that does not change its format on >> a line-by-line base. >> >> Just a tiny example why this is useful, lets say I run a Spring MVC >> app and want to know the number of Spring loaders, I do a: >> >> ./images/jdk/bin/jcmd hello VM.classloaders | grep org.springframework | >> wc -l >> >> Won't work consistently anymore if class names disappear for loader >> names which have names. >> >> Of course, there are myriad other ways to get the same information, so >> this is just an illustration. >> >> -- >> >> But I guess I won't convince you that this is better, and it seems you >> spent a lot of thoughts and discussions on this point already. I think >> this is a case of one-size-fits-not-all. And also a matter of taste. >> >> If emphasis is on brevity, your naming scheme is better. If >> ease-of-parsing and ease-of-reading are important, I think my scheme >> wins. >> >> But as long as we have alternatives - e.g. CLD::name() and >> CLD::class_loader_class() - and as long as VM.classloaders and >> VM.metaspace commands stay useful, I am content and can live with your >> scheme. >> >>>> I'm afraid this will be an issue if people will >>>> start naming their class loaders more and more. It is not unimaginable >>>> that completely different frameworks name their loaders the same. >>> >>> Point taken, however, doesn't including the identity hash allow for >>> unique >>> identification of the class loader? >> >> I think the point of diagnostic commands is to get information quick. >> An identity hash may help me after I managed to finally resolve it, >> but it is not a quick process (that I know of). Whereas, for example, >> just reading "com.wily.introscope.Loader" tells me immediately that >> the VM I am looking at has Wily byte code modifications enabled. >> >>> >>>> This "name or if not then class name" scheme will also complicate >>>> parsing a lot for people who parse the output of these commands. I >>>> would strongly prefer to see both - name and class type. >>> >>> Much like classfile/classLoaderHierarchyDCmd.cpp now generates, correct? >>> >> Yes! :) >> >>> Thanks, >>> Lois >>> >>> >> I just saw your webrev popping in, but again it is late. I'll take a >> look tomorrow morning or monday. Thank you for your work. >> >> ..Thomas >> >>> ---- >>> >>> Hmm. 
At this point I noticed that I still had general reservations >>> about the new compound naming scheme - see my remarks above. So I >>> guess I stop here to wait for your response before continuing the code >>> review. >>> >>> Thanks & Kind Regards, >>> >>> Thomas >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan >>> wrote: >>> >>> Please review this updated webrev that address review comments received. >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >>> >>> Thanks, >>> Lois >>> >>> >>> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>> >>> Please review this change to standardize on how to obtain a class >>> loader's >>> name within the VM. SystemDictionary::loader_name() methods have been >>> removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error messages, logging, jcmd, JFR) this change also adopts a new format >>> to >>> append to a class loader's name its identityHashCode and if the loader >>> has >>> not been explicitly named it's qualified class name is used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >>> >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> >>> > From swatibits14 at gmail.com Sat Jun 16 17:52:49 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Sat, 16 Jun 2018 23:22:49 +0530 Subject: [11] RFR(M): 8189922: UseNUMA memory interleaving vs membind Message-ID: Hi All, This is my first patch,I would appreciate if anyone can review the fix: Bug : https://bugs.openjdk.java.net/browse/JDK-8189922 Webrev : http://cr.openjdk.java.net/~gromero/8189922/v1 The bug is about JVM flag UseNUMA which bypasses the user specified numactl --membind option and divides the whole heap in lgrps according to available numa nodes. The proposed solution is to disable UseNUMA if bound to single numa node. In case more than one numa node binding, create the lgrps according to bound nodes.If there is no binding, then JVM will divide the whole heap based on the number of NUMA nodes available on the system. I appreciate Gustavo's help for fixing the thread allocation based on numa distance for membind which was a dangling issue associated with main patch. Tested the fix by running specjbb2015 composite workload on 8 NUMA node system. 
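For illustration only -- this sketch is not the patch in the webrev; it only shows, with plain libnuma (assuming numa_get_membind() and numa_bitmask_weight() from libnuma 2.x), how the three situations described above can be told apart:

#include <numa.h>
#include <cstdio>

// Standalone sketch, not the webrev: classify the current memory binding
// the way the proposed UseNUMA behaviour is described above.
int main() {
  if (numa_available() == -1) {
    printf("libnuma not available; UseNUMA would simply be disabled\n");
    return 0;
  }
  struct bitmask* bound = numa_get_membind();        // nodes allowed by numactl --membind
  unsigned int bound_nodes = numa_bitmask_weight(bound);
  int all_nodes = numa_num_configured_nodes();

  if (bound_nodes <= 1) {
    printf("bound to a single node: disable UseNUMA\n");
  } else if ((int)bound_nodes < all_nodes) {
    printf("bound to %u of %d nodes: create lgrps only for the bound nodes\n",
           bound_nodes, all_nodes);
  } else {
    printf("no effective binding: divide the heap across all %d nodes\n", all_nodes);
  }
  numa_bitmask_free(bound);
  return 0;
}

(Compile with -lnuma.) The measured results with the actual patch follow.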
Case 1 : Single NUMA node bind
numactl --cpunodebind=0 --membind=0 java -Xmx24g -Xms24g -Xmn22g -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis

Before Patch: gc.log
eden space 22511616K(22GB), 12% used
  lgrp 0 space 2813952K, 100% used
  lgrp 1 space 2813952K, 0% used
  lgrp 2 space 2813952K, 0% used
  lgrp 3 space 2813952K, 0% used
  lgrp 4 space 2813952K, 0% used
  lgrp 5 space 2813952K, 0% used
  lgrp 6 space 2813952K, 0% used
  lgrp 7 space 2813952K, 0% used

After Patch: gc.log
eden space 46718976K(45GB), 99% used (NUMA disabled)

Case 2 : Multiple NUMA node bind
numactl --cpunodebind=0,7 --membind=0,7 java -Xms50g -Xmx50g -Xmn45g -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis

Before Patch: gc.log
eden space 46718976K, 6% used
  lgrp 0 space 5838848K, 14% used
  lgrp 1 space 5838848K, 0% used
  lgrp 2 space 5838848K, 0% used
  lgrp 3 space 5838848K, 0% used
  lgrp 4 space 5838848K, 0% used
  lgrp 5 space 5838848K, 0% used
  lgrp 6 space 5838848K, 0% used
  lgrp 7 space 5847040K, 35% used

After Patch: gc.log
eden space 46718976K(45GB), 99% used
  lgrp 0 space 23359488K(23.5GB), 100% used
  lgrp 7 space 23359488K(23.5GB), 99% used

Note: The proposed solution is only for the numactl membind option. The fix does not cover --cpunodebind and localalloc, which is a separate bug (https://bugs.openjdk.java.net/browse/JDK-8205051) with a fix in progress.

Thanks,
Swati Sharma
Software Engineer -2 at AMD

From david.holmes at oracle.com Mon Jun 18 02:39:15 2018
From: david.holmes at oracle.com (David Holmes)
Date: Mon, 18 Jun 2018 12:39:15 +1000
Subject: RFR (S) 8203479: JFR enabled ARM32 build assertion failure
In-Reply-To: 
References: 
Message-ID: <7ffe11f8-53b7-f7e2-bb54-c24410934db4@oracle.com>

Hi Boris,

On 15/06/2018 8:44 PM, Boris Ulasevich wrote:
> Hi,
>
> Please review the following patch:
>   http://cr.openjdk.java.net/~bulasevich/8203479/webrev.01
>   https://bugs.openjdk.java.net/browse/JDK-8203479
>
> Assertion fires in JFR codes on first VM thread setup because VM globals
> are not yet initialized (and supports_cx8 property is not predefined for
> ARM32 platform). I propose to exploit early_initialize() method to set
> up supports_cx8 property on early stage of VM initialization.

Your fix looks good. Please update copyright years.

Just some additional commentary ... from the bug report:

First _supports_cx8 usage:
  Threads::create_vm -> JavaThread::JavaThread(bool) --> Thread::Thread() --> JfrThreadLocal::JfrThreadLocal() --> atomic_inc(unsigned long long volatile*)

I'm not sure when this was introduced but it's risky to perform atomic
operations on jlong early during VM initialization. Apart from the
problem you encountered with ARM32, on some platforms it may require
that the stub-generator has been initialized. (Though in that case we
may harmlessly fallback to using non-atomic operations - which is okay
when creating the initial JavaThread.)

Thanks,
David

> Thanks
> Boris

From david.holmes at oracle.com Mon Jun 18 04:54:05 2018
From: david.holmes at oracle.com (David Holmes)
Date: Mon, 18 Jun 2018 14:54:05 +1000
Subject: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms
In-Reply-To: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com>
References: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com>
Message-ID: <5283f9f6-87a1-f945-40c3-bb56d1b84971@oracle.com>

I ran this through our testing and it was fine.

I can sponsor this for you if you like Boris.
Thanks, David On 14/06/2018 10:55 PM, David Holmes wrote: > Hi Boris, > > I added serviceability-dev as JVM TI and its tests are technically > serviceability concerns. > > On 14/06/2018 10:39 PM, Boris Ulasevich wrote: >> Hi all, >> >> Please review the following patch: >> ?? https://bugs.openjdk.java.net/browse/JDK-8204961 >> ?? http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 >> >> Recently opensourced JVMTI tests gives build warnings for ARM32 build. > > I'm guessing the compiler version must have changed since we last ran > these tests on 32-bit ARM. :) > >> GCC complains about conversion between 4-byte pointer to 8-byte jlong >> type which is Ok in this case. I propose to hide warning using >> conversion to intptr_t. > > I was concerned about what the warnings might imply but now I see that a > JVM TI "tag" is simply a jlong used to funnel real pointers around to > use for the tagging. So on 32-bit the upper 32-bits of the tag will > always be zero and there is no data loss in any of the conversions. > > So assuming none of the other compilers complain about this, this seems > fine to me. > > Thanks, > David > >> thanks, >> Boris From david.holmes at oracle.com Mon Jun 18 06:17:04 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 18 Jun 2018 16:17:04 +1000 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: Hi Volker, src/hotspot/share/runtime/globals.hpp This change should not be needed! We do minimal VM builds without CDS and we don't have to touch the UseSharedSpaces defaults (unless recent change have broken this - in which case that needs to be addressed in its own right!) src/hotspot/share/classfile/javaClasses.cpp AFAICS you should be using INCLUDE_CDS in the ifdefs not INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this should be needed as we have not needed it before. As Thomas notes we have: ./hotspot/share/memory/metaspaceShared.hpp: static bool is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); ./hotspot/share/classfile/stringTable.hpp: static oop create_archived_string(oop s, Thread* THREAD) NOT_CDS_JAVA_HEAP_RETURN_(NULL); so these methods should be defined when CDS is not available. ?? Thanks, David ----- On 15/06/2018 12:26 AM, Volker Simonis wrote: > Hi, > > can I please have a review for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ > https://bugs.openjdk.java.net/browse/JDK-8204965 > > CDS does currently not work on AIX because of the way how we > reserve/commit memory on AIX. The problem is that we're using a > combination of shmat/mmap depending on the page size and the size of > the memory chunk to reserve. This makes it impossible to reliably > reserve the memory for the CDS archive and later on map the various > parts of the archive into these regions. > > In order to fix this we would have to completely rework the memory > reserve/commit/uncommit logic on AIX which is currently out of our > scope because of resource limitations. > > Unfortunately, I could not simply disable CDS in the configure step > because some of the shared code apparently relies on parts of the CDS > code which gets excluded from the build when CDS is disabled. So I > also fixed the offending parts in hotspot and cleaned up the configure > logic for CDS. 
> > Thank you and best regards, > Volker > > PS: I did run the job through the submit forest > (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results > weren't really useful because they mention build failures on linux-x64 > which I can't reproduce locally. > From david.holmes at oracle.com Mon Jun 18 07:45:07 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 18 Jun 2018 17:45:07 +1000 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> Message-ID: <95e8c7fa-320e-b48a-8807-a4938799a671@oracle.com> Hi Matthias, On 15/06/2018 5:47 PM, Baesken, Matthias wrote: > Please review this small change that fixes the AIX build after "8203641: Refactor String Deduplication into shared" . > > We are getting this compilation error : > /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", line 107.38: 1540-0063 (S) The text "1" is unexpected. > > > Looks like the name of the second template parameter (STAT) > > template > static void initialize_impl(); > > is clashing with defines from the AIX system headers (where I find #define STAT 1 ) . > Renaming STAT to something else fixes the build on AIX . // STAT: String Dedup Stat implementation ! template You need to change the comment as well. But I suggest changing to match the implementation: template Thanks, David > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8205091 > > > Thanks, Matthias > From boris.ulasevich at bell-sw.com Mon Jun 18 09:18:51 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Mon, 18 Jun 2018 12:18:51 +0300 Subject: RFR (S) 8203479: JFR enabled ARM32 build assertion failure In-Reply-To: <7ffe11f8-53b7-f7e2-bb54-c24410934db4@oracle.com> References: <7ffe11f8-53b7-f7e2-bb54-c24410934db4@oracle.com> Message-ID: <833e84ab-5b3c-b819-6aec-64d09d4ca7df@bell-sw.com> Hi David, Many thanks! Here is the werbev with updated copyrights: http://cr.openjdk.java.net/~bulasevich/8203479/webrev.02 thanks, Boris On 18.06.2018 05:39, David Holmes wrote: > Hi Boris, > > On 15/06/2018 8:44 PM, Boris Ulasevich wrote: >> Hi, >> >> Please review the following patch: >> ?? http://cr.openjdk.java.net/~bulasevich/8203479/webrev.01 >> ?? https://bugs.openjdk.java.net/browse/JDK-8203479 >> >> Assertion fires in JFR codes on first VM thread setup because VM >> globals are not yet initialized (and supports_cx8 property is not >> predefined for ARM32 platform). I propose to exploit >> early_initialize() method to set up supports_cx8 property on early >> stage of VM initialization. > > Your fix looks good. Please update copyright years. > Just some additional commentary ... from the bug report: > > First _supports_cx8 usage: > ? Threads::create_vm -> JavaThread::JavaThread(bool) --> > Thread::Thread() --> JfrThreadLocal::JfrThreadLocal() --> > atomic_inc(unsigned long long volatile*) > > I'm not sure when this was introduced but it's risky to perform atomic > operations on jlong early during VM initialization. For me it comes from revision when JFR was opensourced: changeset: 50113:caf115bb98ad user: egahlin date: Tue May 15 20:24:34 2018 +0200 summary: 8199712: Flight Recorder > Apart from the > problem you encountered with ARM32, on some platforms it may require > that the stub-generator has been initialized. 
(Though in that case we > may harmlessly fallback to using non-atomic operations - which is okay > when creating the initial JavaThread.) Good point! Yes, we discussed as an optional solution: (1) reworking jfr next_thread_id() function to increase counter without atomic_inc function call when counter value is zero, and (2) disable assertion when !is_init_completed(). We decided that for our case initialization sequence reorder works better. > Thanks, > David > >> Thanks >> Boris From boris.ulasevich at bell-sw.com Mon Jun 18 09:24:12 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Mon, 18 Jun 2018 12:24:12 +0300 Subject: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms In-Reply-To: <5283f9f6-87a1-f945-40c3-bb56d1b84971@oracle.com> References: <80d0c562-c684-111e-ba70-a1860247aa6a@oracle.com> <5283f9f6-87a1-f945-40c3-bb56d1b84971@oracle.com> Message-ID: Hi David, Thank you very much! Yes, sponsor this please! best regards, Boris On 18.06.2018 07:54, David Holmes wrote: > I ran this through our testing and it was fine. > > I can sponsor this for you if you like Boris. > > Thanks, > David > > On 14/06/2018 10:55 PM, David Holmes wrote: >> Hi Boris, >> >> I added serviceability-dev as JVM TI and its tests are technically >> serviceability concerns. >> >> On 14/06/2018 10:39 PM, Boris Ulasevich wrote: >>> Hi all, >>> >>> Please review the following patch: >>> ?? https://bugs.openjdk.java.net/browse/JDK-8204961 >>> ?? http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 >>> >>> Recently opensourced JVMTI tests gives build warnings for ARM32 build. >> >> I'm guessing the compiler version must have changed since we last ran >> these tests on 32-bit ARM. :) >> >>> GCC complains about conversion between 4-byte pointer to 8-byte jlong >>> type which is Ok in this case. I propose to hide warning using >>> conversion to intptr_t. >> >> I was concerned about what the warnings might imply but now I see that >> a JVM TI "tag" is simply a jlong used to funnel real pointers around >> to use for the tagging. So on 32-bit the upper 32-bits of the tag will >> always be zero and there is no data loss in any of the conversions. >> >> So assuming none of the other compilers complain about this, this >> seems fine to me. >> >> Thanks, >> David >> >>> thanks, >>> Boris From per.liden at oracle.com Mon Jun 18 09:55:52 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 18 Jun 2018 11:55:52 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> Message-ID: <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> On 06/14/2018 05:01 PM, Chris Phillips wrote: > Hi > Any further comments or changes? 
> On 06/06/18 05:56 PM, Chris Phillips wrote: >> Hi Per, >> >> On 06/06/18 05:48 PM, Per Liden wrote: >>> Hi Chris, >>> >>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>> Hi Per, >>>> >>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>> Hi Chris, >>>>> >>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>> Hi, >>>>>> >>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>> Please review this set of changes to shared code >>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>> >>>>>>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>> >>>>>>>> Can you explain this a little more?? What is the type of size_t on >>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>> >>>>>>> I would like to understand this too. >>>>>>> >>>>>>> cheers, >>>>>>> Per >>>>>>> >>>>>>> >>>>>> Quoting from the original bug? review request: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>> >>>>>> "This >>>>>> is a problem when one parameter is of size_t type and the second of >>>>>> uintx type and the platform has size_t defined as eg. unsigned long as >>>>>> on s390 (32-bit)." >>>>> >>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>>>> on s390? >>>> See Dan's explanation. >>>>> >>>>> I fail to see how any of this matters to _entries here? What am I >>>>> missing? >>>>> >>>> >>>> By changing the type, to its actual usage, we avoid the >>>> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>> around line 617, since its consistent usage and local I patched at the >>>> definition. >>>> >>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>> _entry_cache->size(), _entries_added, _entries_removed); >>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>> _table->_size), _entry_cache->size(), _entries_added, _entries_removed); >>>> >>>> percent_of will complain about types otherwise. >>> >>> Ok, so why don't you just cast it in the call to percent_of? Your >>> current patch has ripple effects that you fail to take into account. For >>> example, _entries is still printed using UINTX_FORMAT and compared >>> against other uintx variables. You're now mixing types in an unsound way. >> >> Hmm missed that, so will do the cast instead as you suggest. >> (Fixing at the defn is what was suggested the last time around so I >> tried to do that where it was consistent, obviously this is not. >> Thanks. >> >>> cheers, >>> Per >>> >>>> >>>> >>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>> @@ -120,11 +120,11 @@ >>>>> ??? // Cache for reuse and fast alloc/free of table entries. >>>>> ??? static G1StringDedupEntryCache* _entry_cache; >>>>> >>>>> ??? G1StringDedupEntry**??????????? _buckets; >>>>> ??? size_t????????????????????????? _size; >>>>> -? uintx?????????????????????????? _entries; >>>>> +? size_t????????????????????????? _entries; >>>>> ??? uintx?????????????????????????? _shrink_threshold; >>>>> ??? uintx?????????????????????????? _grow_threshold; >>>>> ??? bool??????????????????????????? 
_rehash_needed; >>>>> >>>>> cheers, >>>>> Per >>>>> >>>>>> >>>>>> Hope that helps, >>>>>> Chris >>>>>> >>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>> review thread mostly) >>>>>> See: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>> and: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>> For more info. >>>>>> >>>>> >>>>> >>> >>> >> Cheers! >> Chris >> >> >> > > Finally through testing and submit run again after Per's requested > change, here's the knew webrev: > http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 > attached is the passing run fron the submit queue. > > Please review... src/hotspot/share/gc/cms/cms_globals.hpp ---------------------------------------- Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd suggest you just change the single place where you need a cast, in ParScanThreadState::take_from_overflow_stack(). If you change the type of ParGCDesiredObjsFromOverflowList, but you otherwise have to clean up a number of places where it's already explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. src/hotspot/share/gc/parallel/parallel_globals.hpp -------------------------------------------------- Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. src/hotspot/share/gc/parallel/psParallelCompact.cpp --------------------------------------------------- No need to cast ParallelOldDeadWoodLimiterStdDev, you're already changed its type. And if you change ParallelOldDeadWoodLimiterMean to also being size_t you don't need to touch this file at all. src/hotspot/share/runtime/globals.hpp ------------------------------------- -define_pd_global(uintx, InitialCodeCacheSize, 160*K); -define_pd_global(uintx, ReservedCodeCacheSize, 32*M); +define_pd_global(size_t, InitialCodeCacheSize, 160*K); +define_pd_global(size_t, ReservedCodeCacheSize, 32*M); I would avoid changing these types, otherwise you need to go around and clean up a number of other places where it's says it's an uintx, like here: 1909 product_pd(uintx, InitialCodeCacheSize, \ 1910 "Initial code cache size (in bytes)") \ 1911 range(os::vm_page_size(), max_uintx) \ Also, it seems you've already added the cast you need for InitialCodeCacheSize in codeCache.cpp, so that type change looks unnecessary. Btw, patch no longer applies to the latest jdk/jdk. cheers, Per > > Chris > From aph at redhat.com Mon Jun 18 10:16:30 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 Jun 2018 11:16:30 +0100 Subject: RFR: 8204680: Disassembly does not display code strings in stubs In-Reply-To: References: <262821a6-5de6-0bed-332c-bab06e64b43f@oracle.com> Message-ID: On 06/15/2018 07:02 PM, Stuart Monteith wrote: > There appears to be a bug introduced here. I've opened > https://bugs.openjdk.java.net/browse/JDK-8205118 and I am > investigating. Umm, is there a reason why this bug is assigned to you? Surely I should have to fix it, for all the usual reasons. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From thomas.stuefe at gmail.com Mon Jun 18 12:20:53 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 18 Jun 2018 14:20:53 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <2770673c-3765-0680-feef-d2bed0f59426@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <2770673c-3765-0680-feef-d2bed0f59426@LGonQn.Org> Message-ID: Hi Chris, it may be just me, but I dislike a bit the usage of "size_t" for "number of things". size_t, to me, will always mean a memory range. Best Regards, Thomas On Fri, Jun 15, 2018 at 5:36 PM, Chris Phillips wrote: > On 14/06/18 11:01 AM, Chris Phillips wrote: >> Hi >> Any further comments or changes? >> On 06/06/18 05:56 PM, Chris Phillips wrote: >>> Hi Per, >>> >>> On 06/06/18 05:48 PM, Per Liden wrote: >>>> Hi Chris, >>>> >>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>> Hi Per, >>>>> >>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>> Hi Chris, >>>>>> >>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>> Hi, >>>>>>> >>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>>> >>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>> >>>>>>>>> Can you explain this a little more? What is the type of size_t on >>>>>>>>> s390x? What is the type of uintptr_t? What are the errors? >>>>>>>> >>>>>>>> I would like to understand this too. >>>>>>>> >>>>>>>> cheers, >>>>>>>> Per >>>>>>>> >>>>>>>> >>>>>>> Quoting from the original bug review request: >>>>>>> > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>> >>>>>>> "This >>>>>>> is a problem when one parameter is of size_t type and the second of >>>>>>> uintx type and the platform has size_t defined as eg. unsigned > long as >>>>>>> on s390 (32-bit)." >>>>>> >>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>>>>> on s390? >>>>> See Dan's explanation. >>>>>> >>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>> missing? >>>>>> >>>>> >>>>> By changing the type, to its actual usage, we avoid the >>>>> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>> around line 617, since its consistent usage and local I patched at the >>>>> definition. >>>>> >>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>> _table->_size), _entry_cache->size(), _entries_added, > _entries_removed); >>>>> >>>>> percent_of will complain about types otherwise. >>>> >>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>> current patch has ripple effects that you fail to take into account. For >>>> example, _entries is still printed using UINTX_FORMAT and compared >>>> against other uintx variables. You're now mixing types in an unsound > way. 
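To make the mismatch concrete -- a standalone sketch, not the HotSpot sources; percent_of here only mimics the single-type template helper being discussed:

#include <cstdio>
#include <cstdint>

// Simplified stand-in for the shared percent_of helper: one template
// parameter, so both arguments must deduce to the same type.
template <typename T>
double percent_of(T part, T total) {
  return total == 0 ? 0.0 : 100.0 * (double)part / (double)total;
}

int main() {
  uintptr_t entries = 750;   // HotSpot's uintx is uintptr_t
  size_t    size    = 1000;
  // percent_of(entries, size) fails to compile on targets where size_t and
  // uintptr_t are distinct (even same-width) types, because T cannot be
  // deduced; casting one argument at the call site resolves it:
  printf("%.1f%%\n", percent_of((size_t)entries, size));
  return 0;
}

On common LP64 Linux targets both types happen to be unsigned long, so the uncast call builds there; the cast is what keeps it building where the two types differ.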
>>> >>> Hmm missed that, so will do the cast instead as you suggest. >>> (Fixing at the defn is what was suggested the last time around so I >>> tried to do that where it was consistent, obviously this is not. >>> Thanks. >>> >>>> cheers, >>>> Per >>>> >>>>> >>>>> >>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>> @@ -120,11 +120,11 @@ >>>>>> // Cache for reuse and fast alloc/free of table entries. >>>>>> static G1StringDedupEntryCache* _entry_cache; >>>>>> >>>>>> G1StringDedupEntry** _buckets; >>>>>> size_t _size; >>>>>> - uintx _entries; >>>>>> + size_t _entries; >>>>>> uintx _shrink_threshold; >>>>>> uintx _grow_threshold; >>>>>> bool _rehash_needed; >>>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>>> >>>>>>> Hope that helps, >>>>>>> Chris >>>>>>> >>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>> review thread mostly) >>>>>>> See: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>> and: >>>>>>> > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>> For more info. >>>>>>> >>>>>> >>>>>> >>>> >>>> >>> Cheers! >>> Chris >>> >>> >>> >> >> Finally through testing and submit run again after Per's requested >> change, here's the knew webrev: >> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >> attached is the passing run fron the submit queue. >> >> Please review... >> >> Chris >> > Hi > Please may I have another review > and someone to push ? > > Thanks! > Chris > > Hmm attachments stripped... > > Here it is inline: > > Build Details: 2018-06-14-1347454.chrisphi.source > 0 Failed Tests > Mach5 Tasks Results Summary > > PASSED: 75 > KILLED: 0 > FAILED: 0 > UNABLE_TO_RUN: 0 > EXECUTED_WITH_FAILURE: 0 > NA: 0 > From stuart.monteith at linaro.org Mon Jun 18 13:05:25 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 18 Jun 2018 14:05:25 +0100 Subject: RFR: 8204680: Disassembly does not display code strings in stubs In-Reply-To: References: <262821a6-5de6-0bed-332c-bab06e64b43f@oracle.com> Message-ID: Hi, When I try and change the correct field, it does... I've popped that over to you. Backing out the change definitely resolves the issue for me, which is what I'll be working with for now. Ta, Stuart On 18 June 2018 at 11:16, Andrew Haley wrote: > On 06/15/2018 07:02 PM, Stuart Monteith wrote: >> There appears to be a bug introduced here. I've opened >> https://bugs.openjdk.java.net/browse/JDK-8205118 and I am >> investigating. > > Umm, is there a reason why this bug is assigned to you? Surely I should > have to fix it, for all the usual reasons. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From robbin.ehn at oracle.com Mon Jun 18 13:07:28 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 18 Jun 2018 15:07:28 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: References: Message-ID: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> Hi all, After some internal discussions I changed the patch to: http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ Which handles thread off javathreads list better. Passes handshake testing and ZGC testing seems okay. Thanks, Robbin On 06/14/2018 12:11 PM, Robbin Ehn wrote: > Hi all, please review. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 > Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ > > The root cause of this failure is a bug in the posix semaphores: > https://sourceware.org/bugzilla/show_bug.cgi?id=12674 > > Thread a: > sem_post(my_sem); > > Thread b: > sem_wait(my_sem); > sem_destroy(my_sem); > > Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). > If Thread b start executing directly after the increment in post but before > Thread a leaves the call to post and manage to destroy the semaphore. Thread a > _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). > > Note that mutexes have had same issue on some platforms: > https://sourceware.org/bugzilla/show_bug.cgi?id=13690 > Fixed in 2.23. > > Since we only have one handshake operation running at anytime (safepoints and > handshakes are also mutual exclusive, both run on VM Thread) we can actually > always use the same semaphore. This patch changes the _done semaphore to be > static instead, thus avoiding the post<->destroy race. > > Patch also contains some small changes which remove of dead code, remove > unneeded state, handling of cases which we can't easily say will never happen > and some additional error checks. > > Handshakes test passes, but they don't trigger the original issue, so more > interesting is that this issue do not happen when running ZGC which utilize > handshakes with the static semaphore. > > Thanks, Robbin From erik.osterlund at oracle.com Mon Jun 18 13:12:01 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 18 Jun 2018 15:12:01 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> Message-ID: <5B27AFA1.6020809@oracle.com> Hi Robbin, Looks good. Thanks, /Erik On 2018-06-18 15:07, Robbin Ehn wrote: > Hi all, > > After some internal discussions I changed the patch to: > http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ > > Which handles thread off javathreads list better. > > Passes handshake testing and ZGC testing seems okay. > > Thanks, Robbin > > On 06/14/2018 12:11 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >> >> The root cause of this failure is a bug in the posix semaphores: >> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >> >> Thread a: >> sem_post(my_sem); >> >> Thread b: >> sem_wait(my_sem); >> sem_destroy(my_sem); >> >> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >> If Thread b start executing directly after the increment in post but >> before >> Thread a leaves the call to post and manage to destroy the semaphore. >> Thread a >> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >> >> Note that mutexes have had same issue on some platforms: >> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >> Fixed in 2.23. >> >> Since we only have one handshake operation running at anytime >> (safepoints and handshakes are also mutual exclusive, both run on VM >> Thread) we can actually always use the same semaphore. This patch >> changes the _done semaphore to be static instead, thus avoiding the >> post<->destroy race. 
>> >> Patch also contains some small changes which remove of dead code, >> remove unneeded state, handling of cases which we can't easily say >> will never happen and some additional error checks. >> >> Handshakes test passes, but they don't trigger the original issue, so >> more interesting is that this issue do not happen when running ZGC >> which utilize handshakes with the static semaphore. >> >> Thanks, Robbin From robbin.ehn at oracle.com Mon Jun 18 13:58:41 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 18 Jun 2018 15:58:41 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: <5B27AFA1.6020809@oracle.com> References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> <5B27AFA1.6020809@oracle.com> Message-ID: Thanks, again! /Robbin On 06/18/2018 03:12 PM, Erik ?sterlund wrote: > Hi Robbin, > > Looks good. > > Thanks, > /Erik > > On 2018-06-18 15:07, Robbin Ehn wrote: >> Hi all, >> >> After some internal discussions I changed the patch to: >> http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ >> >> Which handles thread off javathreads list better. >> >> Passes handshake testing and ZGC testing seems okay. >> >> Thanks, Robbin >> >> On 06/14/2018 12:11 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >>> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >>> >>> The root cause of this failure is a bug in the posix semaphores: >>> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >>> >>> Thread a: >>> sem_post(my_sem); >>> >>> Thread b: >>> sem_wait(my_sem); >>> sem_destroy(my_sem); >>> >>> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >>> If Thread b start executing directly after the increment in post but before >>> Thread a leaves the call to post and manage to destroy the semaphore. Thread a >>> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >>> >>> Note that mutexes have had same issue on some platforms: >>> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >>> Fixed in 2.23. >>> >>> Since we only have one handshake operation running at anytime (safepoints and >>> handshakes are also mutual exclusive, both run on VM Thread) we can actually >>> always use the same semaphore. This patch changes the _done semaphore to be >>> static instead, thus avoiding the post<->destroy race. >>> >>> Patch also contains some small changes which remove of dead code, remove >>> unneeded state, handling of cases which we can't easily say will never happen >>> and some additional error checks. >>> >>> Handshakes test passes, but they don't trigger the original issue, so more >>> interesting is that this issue do not happen when running ZGC which utilize >>> handshakes with the static semaphore. >>> >>> Thanks, Robbin > From robbin.ehn at oracle.com Mon Jun 18 14:05:51 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 18 Jun 2018 16:05:51 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> Message-ID: On 06/18/2018 03:07 PM, Robbin Ehn wrote: > Hi all, > > After some internal discussions I changed the patch to: > http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ Correct external url: http://cr.openjdk.java.net/~rehn/8204166/v2/ /Robbin > > Which handles thread off javathreads list better. 
> > Passes handshake testing and ZGC testing seems okay. > > Thanks, Robbin > > On 06/14/2018 12:11 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >> >> The root cause of this failure is a bug in the posix semaphores: >> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >> >> Thread a: >> sem_post(my_sem); >> >> Thread b: >> sem_wait(my_sem); >> sem_destroy(my_sem); >> >> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >> If Thread b start executing directly after the increment in post but before >> Thread a leaves the call to post and manage to destroy the semaphore. Thread a >> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >> >> Note that mutexes have had same issue on some platforms: >> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >> Fixed in 2.23. >> >> Since we only have one handshake operation running at anytime (safepoints and >> handshakes are also mutual exclusive, both run on VM Thread) we can actually >> always use the same semaphore. This patch changes the _done semaphore to be >> static instead, thus avoiding the post<->destroy race. >> >> Patch also contains some small changes which remove of dead code, remove >> unneeded state, handling of cases which we can't easily say will never happen >> and some additional error checks. >> >> Handshakes test passes, but they don't trigger the original issue, so more >> interesting is that this issue do not happen when running ZGC which utilize >> handshakes with the static semaphore. >> >> Thanks, Robbin From martin.doerr at sap.com Mon Jun 18 14:10:46 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 18 Jun 2018 14:10:46 +0000 Subject: RFR(XS): 8205172: 32 bit build broken Message-ID: Hi, 32 bit build is currently broken due to: "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior Please review this small fix: http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ Best regards, Martin From rwestrel at redhat.com Mon Jun 18 15:22:44 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Mon, 18 Jun 2018 17:22:44 +0200 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: Message-ID: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ The methodData.hpp change looks good to me. Roland. From volker.simonis at gmail.com Mon Jun 18 16:04:58 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 18 Jun 2018 18:04:58 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: On Mon, Jun 18, 2018 at 8:17 AM, David Holmes wrote: > Hi Volker, > > src/hotspot/share/runtime/globals.hpp > > This change should not be needed! We do minimal VM builds without CDS and we > don't have to touch the UseSharedSpaces defaults (unless recent change have > broken this - in which case that needs to be addressed in its own right!) > Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, because UseSharedSpaces is reseted later on at the end of Arguments::parse(). I just thought it would be cleaner to disable it statically, if the VM doesn't support it. 
But anyway I don't really mind and I've reverted that change in globals.hpp. > src/hotspot/share/classfile/javaClasses.cpp > > AFAICS you should be using INCLUDE_CDS in the ifdefs not > INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this should > be needed as we have not needed it before. As Thomas notes we have: > > ./hotspot/share/memory/metaspaceShared.hpp: static bool > is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); > ./hotspot/share/classfile/stringTable.hpp: static oop > create_archived_string(oop s, Thread* THREAD) > NOT_CDS_JAVA_HEAP_RETURN_(NULL); > > so these methods should be defined when CDS is not available. > Thomas and you are right. Must have been a mis-configuration on AIX where I saw undefined symbols at link time. I've removed the ifdefs from javaClasses.cpp now. Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp into CDS_ONLY macros as suggested by Jiangli because the really only make sense for a CDS-enabled VM. Here's the new webrev: http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ Please let me know if you think there's still something missing. Regards, Volker > ?? > > Thanks, > David > ----- > > > > > > On 15/06/2018 12:26 AM, Volker Simonis wrote: >> >> Hi, >> >> can I please have a review for the following fix: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >> https://bugs.openjdk.java.net/browse/JDK-8204965 >> >> CDS does currently not work on AIX because of the way how we >> reserve/commit memory on AIX. The problem is that we're using a >> combination of shmat/mmap depending on the page size and the size of >> the memory chunk to reserve. This makes it impossible to reliably >> reserve the memory for the CDS archive and later on map the various >> parts of the archive into these regions. >> >> In order to fix this we would have to completely rework the memory >> reserve/commit/uncommit logic on AIX which is currently out of our >> scope because of resource limitations. >> >> Unfortunately, I could not simply disable CDS in the configure step >> because some of the shared code apparently relies on parts of the CDS >> code which gets excluded from the build when CDS is disabled. So I >> also fixed the offending parts in hotspot and cleaned up the configure >> logic for CDS. >> >> Thank you and best regards, >> Volker >> >> PS: I did run the job through the submit forest >> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >> weren't really useful because they mention build failures on linux-x64 >> which I can't reproduce locally. >> > From volker.simonis at gmail.com Mon Jun 18 16:09:52 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 18 Jun 2018 18:09:52 +0200 Subject: RFR: 8205091: AIX: build errors in hotspot after 8203641: Refactor String Deduplication into shared In-Reply-To: <95e8c7fa-320e-b48a-8807-a4938799a671@oracle.com> References: <1a62b7b4123c45f1a0e57774d72ee683@sap.com> <95e8c7fa-320e-b48a-8807-a4938799a671@oracle.com> Message-ID: Hi David, Zhengyu, thanks a lot for your comments. I've now renamed the template parameters according to your suggestions to match the implementation and pushed this fix on behalf of Matthias in order to fix the build. Regards, Volker On Mon, Jun 18, 2018 at 9:45 AM, David Holmes wrote: > > > Hi Matthias, > > On 15/06/2018 5:47 PM, Baesken, Matthias wrote: >> >> Please review this small change that fixes the AIX build after >> "8203641: Refactor String Deduplication into shared" . 
>> >> We are getting this compilation error : >> >> /build_ci_jdk_jdk_rs6000_64/src/hotspot/share/gc/shared/stringdedup/stringDedup.hpp", >> line 107.38: 1540-0063 (S) The text "1" is unexpected. >> >> >> Looks like the name of the second template parameter (STAT) >> >> template >> static void initialize_impl(); >> >> is clashing with defines from the AIX system headers (where I find >> #define STAT 1 ) . >> Renaming STAT to something else fixes the build on AIX . > > > // STAT: String Dedup Stat implementation > ! template > > You need to change the comment as well. But I suggest changing to match the > implementation: > > template > > Thanks, > David > > >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8205091/ >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8205091 >> >> >> Thanks, Matthias >> > From thomas.stuefe at gmail.com Mon Jun 18 16:14:24 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 18 Jun 2018 18:14:24 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: Looks good to me, Volker. Thank you for fixing the tests. ..Thomas On Mon, Jun 18, 2018 at 6:04 PM, Volker Simonis wrote: > On Mon, Jun 18, 2018 at 8:17 AM, David Holmes wrote: >> Hi Volker, >> >> src/hotspot/share/runtime/globals.hpp >> >> This change should not be needed! We do minimal VM builds without CDS and we >> don't have to touch the UseSharedSpaces defaults (unless recent change have >> broken this - in which case that needs to be addressed in its own right!) >> > > Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, > because UseSharedSpaces is reseted later on at the end of > Arguments::parse(). I just thought it would be cleaner to disable it > statically, if the VM doesn't support it. But anyway I don't really > mind and I've reverted that change in globals.hpp. > >> src/hotspot/share/classfile/javaClasses.cpp >> >> AFAICS you should be using INCLUDE_CDS in the ifdefs not >> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this should >> be needed as we have not needed it before. As Thomas notes we have: >> >> ./hotspot/share/memory/metaspaceShared.hpp: static bool >> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >> ./hotspot/share/classfile/stringTable.hpp: static oop >> create_archived_string(oop s, Thread* THREAD) >> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >> >> so these methods should be defined when CDS is not available. >> > > Thomas and you are right. Must have been a mis-configuration on AIX > where I saw undefined symbols at link time. I've removed the ifdefs > from javaClasses.cpp now. > > Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp > into CDS_ONLY macros as suggested by Jiangli because the really only > make sense for a CDS-enabled VM. > > Here's the new webrev: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ > > Please let me know if you think there's still something missing. > > Regards, > Volker > > >> ?? >> >> Thanks, >> David >> ----- >> >> >> >> >> >> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> can I please have a review for the following fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>> >>> CDS does currently not work on AIX because of the way how we >>> reserve/commit memory on AIX. 
The problem is that we're using a >>> combination of shmat/mmap depending on the page size and the size of >>> the memory chunk to reserve. This makes it impossible to reliably >>> reserve the memory for the CDS archive and later on map the various >>> parts of the archive into these regions. >>> >>> In order to fix this we would have to completely rework the memory >>> reserve/commit/uncommit logic on AIX which is currently out of our >>> scope because of resource limitations. >>> >>> Unfortunately, I could not simply disable CDS in the configure step >>> because some of the shared code apparently relies on parts of the CDS >>> code which gets excluded from the build when CDS is disabled. So I >>> also fixed the offending parts in hotspot and cleaned up the configure >>> logic for CDS. >>> >>> Thank you and best regards, >>> Volker >>> >>> PS: I did run the job through the submit forest >>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>> weren't really useful because they mention build failures on linux-x64 >>> which I can't reproduce locally. >>> >> From jiangli.zhou at oracle.com Mon Jun 18 16:41:59 2018 From: jiangli.zhou at oracle.com (Jiangli Zhou) Date: Mon, 18 Jun 2018 09:41:59 -0700 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: <5C9B2902-9174-4477-9434-D4BA6514549B@oracle.com> Hi Volker, Thanks for adding CDS_ONLY to all FileMapInfo fields. It looks cleaner also with Thomas and David?s suggestion to remove the macros in globals.hpp and javaClasses.cpp. Thanks! Jiangli > On Jun 18, 2018, at 9:04 AM, Volker Simonis wrote: > > On Mon, Jun 18, 2018 at 8:17 AM, David Holmes wrote: >> Hi Volker, >> >> src/hotspot/share/runtime/globals.hpp >> >> This change should not be needed! We do minimal VM builds without CDS and we >> don't have to touch the UseSharedSpaces defaults (unless recent change have >> broken this - in which case that needs to be addressed in its own right!) >> > > Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, > because UseSharedSpaces is reseted later on at the end of > Arguments::parse(). I just thought it would be cleaner to disable it > statically, if the VM doesn't support it. But anyway I don't really > mind and I've reverted that change in globals.hpp. > >> src/hotspot/share/classfile/javaClasses.cpp >> >> AFAICS you should be using INCLUDE_CDS in the ifdefs not >> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this should >> be needed as we have not needed it before. As Thomas notes we have: >> >> ./hotspot/share/memory/metaspaceShared.hpp: static bool >> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >> ./hotspot/share/classfile/stringTable.hpp: static oop >> create_archived_string(oop s, Thread* THREAD) >> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >> >> so these methods should be defined when CDS is not available. >> > > Thomas and you are right. Must have been a mis-configuration on AIX > where I saw undefined symbols at link time. I've removed the ifdefs > from javaClasses.cpp now. > > Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp > into CDS_ONLY macros as suggested by Jiangli because the really only > make sense for a CDS-enabled VM. > > Here's the new webrev: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ > > Please let me know if you think there's still something missing. > > Regards, > Volker > > >> ?? 
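As background for the NOT_CDS_JAVA_HEAP_RETURN_ declarations quoted earlier in this thread, the following sketch shows the general shape of HotSpot's feature macros; the definitions below are assumptions for illustration and are not copied from macros.hpp. When the feature is compiled out, the *_RETURN_(v) form turns a declaration into an empty inline body returning v, so shared callers keep compiling without per-call-site #ifdefs; when the feature is compiled in, it leaves an ordinary declaration behind.

#ifndef INCLUDE_CDS
#define INCLUDE_CDS 0                      // pretend CDS is disabled for this sketch
#endif

#if INCLUDE_CDS
#define CDS_ONLY(code)         code
#define NOT_CDS_RETURN_(value) ;
#else
#define CDS_ONLY(code)
#define NOT_CDS_RETURN_(value) { return value; }
#endif

class MetaspaceSharedLike {
 public:
  // With CDS disabled this expands to an inline "{ return false; }" body.
  static bool is_archive_object(const void* p) NOT_CDS_RETURN_(false)
};

int main() {
  return MetaspaceSharedLike::is_archive_object(nullptr) ? 1 : 0;
}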
>> >> Thanks, >> David >> ----- >> >> >> >> >> >> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> can I please have a review for the following fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>> >>> CDS does currently not work on AIX because of the way how we >>> reserve/commit memory on AIX. The problem is that we're using a >>> combination of shmat/mmap depending on the page size and the size of >>> the memory chunk to reserve. This makes it impossible to reliably >>> reserve the memory for the CDS archive and later on map the various >>> parts of the archive into these regions. >>> >>> In order to fix this we would have to completely rework the memory >>> reserve/commit/uncommit logic on AIX which is currently out of our >>> scope because of resource limitations. >>> >>> Unfortunately, I could not simply disable CDS in the configure step >>> because some of the shared code apparently relies on parts of the CDS >>> code which gets excluded from the build when CDS is disabled. So I >>> also fixed the offending parts in hotspot and cleaned up the configure >>> logic for CDS. >>> >>> Thank you and best regards, >>> Volker >>> >>> PS: I did run the job through the submit forest >>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>> weren't really useful because they mention build failures on linux-x64 >>> which I can't reproduce locally. >>> >> From aph at redhat.com Mon Jun 18 17:02:16 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 Jun 2018 18:02:16 +0100 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode Message-ID: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> My recent patch to re-enable the printing of code comments in PrintStubCode revealed a latent bug in CodeStrings::copy(). VerifyOops uses CodeStrings to hold its assertion strings, and these are distinguished from code comments by an offset of -1. (Presumably to make sure they're not interpreted as code comments by the disassembler.) Unfortunately, CodeStrings::copy() triggers an assertion failure when it sees any of the assertion strings. The best fix, IMO, is to correct CodeStrings::copy(): it shouldn't fail whatever the code strings are. http://cr.openjdk.java.net/~aph/8205118-1/ OK? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From shade at redhat.com Mon Jun 18 17:07:19 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 18 Jun 2018 19:07:19 +0200 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode In-Reply-To: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> References: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> Message-ID: <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> On 06/18/2018 07:02 PM, Andrew Haley wrote: > My recent patch to re-enable the printing of code comments in > PrintStubCode revealed a latent bug in CodeStrings::copy(). > VerifyOops uses CodeStrings to hold its assertion strings, and these > are distinguished from code comments by an offset of -1. (Presumably > to make sure they're not interpreted as code comments by the > disassembler.) Unfortunately, CodeStrings::copy() triggers an > assertion failure when it sees any of the assertion strings. > > The best fix, IMO, is to correct CodeStrings::copy(): it shouldn't > fail whatever the code strings are. 
> > http://cr.openjdk.java.net/~aph/8205118-1/ This fix looks like a typo :) Better make it explicit, e.g. intptr_t offset() const { assert(_offset >= 0, "offset for non comment?"); return offset_raw(); } intptr_t offset_raw() const { return _offset; } ... *ps = new CodeString(n->string(),n->offset_raw()); -Aleksey From stuart.monteith at linaro.org Mon Jun 18 17:36:55 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 18 Jun 2018 18:36:55 +0100 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode In-Reply-To: <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> References: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> Message-ID: Hi, Looks ok to me, although I'm in the Aleksey camp as far as using a getter rather than accessing the field directly. If I'm to pick a nit, I'd add the missing space (that admittedly was never there): From 1155 *ps = new CodeString(n->string(),n->_offset); to 1155 *ps = new CodeString(n->string(), n->_offset); Thanks, Stuart On 18 June 2018 at 18:07, Aleksey Shipilev wrote: > On 06/18/2018 07:02 PM, Andrew Haley wrote: >> My recent patch to re-enable the printing of code comments in >> PrintStubCode revealed a latent bug in CodeStrings::copy(). >> VerifyOops uses CodeStrings to hold its assertion strings, and these >> are distinguished from code comments by an offset of -1. (Presumably >> to make sure they're not interpreted as code comments by the >> disassembler.) Unfortunately, CodeStrings::copy() triggers an >> assertion failure when it sees any of the assertion strings. >> >> The best fix, IMO, is to correct CodeStrings::copy(): it shouldn't >> fail whatever the code strings are. >> >> http://cr.openjdk.java.net/~aph/8205118-1/ > > This fix looks like a typo :) > > Better make it explicit, e.g. > > intptr_t offset() const { assert(_offset >= 0, "offset for non comment?"); return offset_raw(); } > intptr_t offset_raw() const { return _offset; } > > ... > > *ps = new CodeString(n->string(),n->offset_raw()); > > -Aleksey > From aph at redhat.com Mon Jun 18 18:06:40 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 Jun 2018 19:06:40 +0100 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode In-Reply-To: References: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> Message-ID: <20e3cb89-1bb7-9ae5-b4bc-c5c5d0d88c84@redhat.com> On 06/18/2018 06:36 PM, Stuart Monteith wrote: > Looks ok to me, although I'm in the Aleksey camp as far as using a > getter rather than accessing the field directly. Well, yeah, but I suppose there must be some point to that assertion. I could add a raw_get() or something. Looks like this code has been rotting for some time. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
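To make the accessor discussion above concrete, here is a condensed sketch of the pattern; CodeStringLike, offset_raw() and clone() are illustrative names, not the actual codeBuffer declarations. The checked accessor is the right interface for disassembly comments, but a copy routine also has to handle entries carrying the sentinel offset of -1, so it should read the raw value instead of going through the asserting getter.

#include <cassert>
#include <cstdint>

class CodeStringLike {
  const char* _string;
  intptr_t    _offset;   // -1 marks a verify-oops style string, >= 0 a code comment
 public:
  CodeStringLike(const char* s, intptr_t offset) : _string(s), _offset(offset) {}

  intptr_t offset() const {                        // checked accessor for comment users
    assert(_offset >= 0 && "offset for non comment?");
    return _offset;
  }
  intptr_t offset_raw() const { return _offset; }  // safe to call on any entry
  const char* string() const { return _string; }
};

// A copy routine must not assume every entry is a comment:
static CodeStringLike clone(const CodeStringLike& n) {
  return CodeStringLike(n.string(), n.offset_raw());   // not n.offset()
}

int main() {
  CodeStringLike assertion_string("oop check", -1);
  CodeStringLike copied = clone(assertion_string);      // offset() here would assert
  return copied.offset_raw() == -1 ? 0 : 1;
}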
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From gromero at linux.vnet.ibm.com Mon Jun 18 18:30:50 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 18 Jun 2018 15:30:50 -0300 Subject: [11] RFR(M): 8189922: UseNUMA memory interleaving vs membind In-Reply-To: References: Message-ID: <37fb3c66-0400-538e-bcef-83ac9894df22@linux.vnet.ibm.com> Hi Swati, On 06/16/2018 02:52 PM, Swati Sharma wrote: > Hi All, > > This is my first patch,I would appreciate if anyone can review the fix: > > Bug : https://bugs.openjdk.java.net/browse/JDK-8189922 > Webrev :http://cr.openjdk.java.net/~gromero/8189922/v1 > > The bug is about JVM flag UseNUMA which bypasses the user specified numactl --membind option and divides the whole heap in lgrps according to available numa nodes. > > The proposed solution is to disable UseNUMA if bound to single numa node. In case more than one numa node binding, create the lgrps according to bound nodes.If there is no binding, then JVM will divide the whole heap based on the number of NUMA nodes available on the system. > > I appreciate Gustavo's help for fixing the thread allocation based on numa distance for membind which was a dangling issue associated with main patch. Thanks. I have no further comments on it. LGTM. Best regards, Gustavo PS: Please, provide numactl -H information when possible. It helps to grasp promptly the actual NUMA topology in question :) > Tested the fix by running specjbb2015 composite workload on 8 NUMA node system. > Case 1 : Single NUMA node bind > numactl --cpunodebind=0 --membind=0 java -Xmx24g -Xms24g -Xmn22g -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis > Before Patch: gc.log > eden space 22511616K(22GB), 12% used > lgrp 0 space 2813952K, 100% used > lgrp 1 space 2813952K, 0% used > lgrp 2 space 2813952K, 0% used > lgrp 3 space 2813952K, 0% used > lgrp 4 space 2813952K, 0% used > lgrp 5 space 2813952K, 0% used > lgrp 6 space 2813952K, 0% used > lgrp 7 space 2813952K, 0% used > After Patch : gc.log > eden space 46718976K(45GB), 99% used(NUMA disabled) > > Case 2 : Multiple NUMA node bind > numactl --cpunodebind=0,7 ?membind=0,7 java -Xms50g -Xmx50g -Xmn45g -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis > Before Patch :gc.log > eden space 46718976K, 6% used > lgrp 0 space 5838848K, 14% used > lgrp 1 space 5838848K, 0% used > lgrp 2 space 5838848K, 0% used > lgrp 3 space 5838848K, 0% used > lgrp 4 space 5838848K, 0% used > lgrp 5 space 5838848K, 0% used > lgrp 6 space 5838848K, 0% used > lgrp 7 space 5847040K, 35% used > After Patch : gc.log > eden space 46718976K(45GB), 99% used > lgrp 0 space 23359488K(23.5GB), 100% used > lgrp 7 space 23359488K(23.5GB), 99% used > > > Note: The proposed solution is only for numactl membind option.The fix is not for --cpunodebind and localalloc which is a separate bug bug https://bugs.openjdk.java.net/browse/JDK-8205051 and fix is in progress on this. > > Thanks, > Swati Sharma > Software Engineer -2 at AMD > From vladimir.kozlov at oracle.com Mon Jun 18 18:44:56 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 Jun 2018 11:44:56 -0700 Subject: [11] RFR(XS) 8205181: ProblemList applications/ctw/modules/java_desktop_2.java Message-ID: https://bugs.openjdk.java.net/browse/JDK-8205181 Put failing test on Problem List until JDK-8204842 is fixed. 
diff -r e5d741569070 test/hotspot/jtreg/ProblemList.txt --- a/test/hotspot/jtreg/ProblemList.txt +++ b/test/hotspot/jtreg/ProblemList.txt @@ -52,7 +52,7 @@ compiler/c2/Test8007294.java 8192992 generic-all applications/ctw/modules/java_desktop.java 8189604 windows-all -applications/ctw/modules/java_desktop_2.java 8189604 windows-all +applications/ctw/modules/java_desktop_2.java 8189604,8204842 generic-all applications/ctw/modules/jdk_jconsole.java 8189604 windows-all ############################################################################# -- Thanks, Vladimir From igor.ignatyev at oracle.com Mon Jun 18 19:11:18 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 18 Jun 2018 12:11:18 -0700 Subject: [11] RFR(XS) 8205181: ProblemList applications/ctw/modules/java_desktop_2.java In-Reply-To: References: Message-ID: <6CF2C98C-662E-46A4-93E1-37B2185C2D0D@oracle.com> Hi Vladimir, looks good to me. Thanks, -- Igor > On Jun 18, 2018, at 11:44 AM, Vladimir Kozlov wrote: > > https://bugs.openjdk.java.net/browse/JDK-8205181 > > Put failing test on Problem List until JDK-8204842 is fixed. > > diff -r e5d741569070 test/hotspot/jtreg/ProblemList.txt > --- a/test/hotspot/jtreg/ProblemList.txt > +++ b/test/hotspot/jtreg/ProblemList.txt > @@ -52,7 +52,7 @@ > compiler/c2/Test8007294.java 8192992 generic-all > > applications/ctw/modules/java_desktop.java 8189604 windows-all > -applications/ctw/modules/java_desktop_2.java 8189604 windows-all > +applications/ctw/modules/java_desktop_2.java 8189604,8204842 generic-all > applications/ctw/modules/jdk_jconsole.java 8189604 windows-all > > ############################################################################# > > > -- > Thanks, > Vladimir From vladimir.kozlov at oracle.com Mon Jun 18 19:19:05 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 Jun 2018 12:19:05 -0700 Subject: [11] RFR(XS) 8205181: ProblemList applications/ctw/modules/java_desktop_2.java In-Reply-To: <6CF2C98C-662E-46A4-93E1-37B2185C2D0D@oracle.com> References: <6CF2C98C-662E-46A4-93E1-37B2185C2D0D@oracle.com> Message-ID: <6db2e46e-769f-1ffa-ded8-d7231912ad76@oracle.com> Thank you, Igor Vladimir On 6/18/18 12:11 PM, Igor Ignatyev wrote: > Hi Vladimir, > > looks good to me. > > Thanks, > -- Igor > >> On Jun 18, 2018, at 11:44 AM, Vladimir Kozlov wrote: >> >> https://bugs.openjdk.java.net/browse/JDK-8205181 >> >> Put failing test on Problem List until JDK-8204842 is fixed. >> >> diff -r e5d741569070 test/hotspot/jtreg/ProblemList.txt >> --- a/test/hotspot/jtreg/ProblemList.txt >> +++ b/test/hotspot/jtreg/ProblemList.txt >> @@ -52,7 +52,7 @@ >> compiler/c2/Test8007294.java 8192992 generic-all >> >> applications/ctw/modules/java_desktop.java 8189604 windows-all >> -applications/ctw/modules/java_desktop_2.java 8189604 windows-all >> +applications/ctw/modules/java_desktop_2.java 8189604,8204842 generic-all >> applications/ctw/modules/jdk_jconsole.java 8189604 windows-all >> >> ############################################################################# >> >> >> -- >> Thanks, >> Vladimir > From vladimir.kozlov at oracle.com Mon Jun 18 19:21:09 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 Jun 2018 12:21:09 -0700 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: Message-ID: Looks fine to me. 
Thanks, Vladimir On 6/18/18 7:10 AM, Doerr, Martin wrote: > Hi, > > 32 bit build is currently broken due to: > "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior > "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior > > Please review this small fix: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ > > Best regards, > Martin > From coleen.phillimore at oracle.com Mon Jun 18 19:40:24 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 18 Jun 2018 15:40:24 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <225c0247-08aa-0395-70de-21e8fda2fd07@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <225c0247-08aa-0395-70de-21e8fda2fd07@oracle.com> Message-ID: <6016185e-f6ed-96dd-0759-3fa5206e6628@oracle.com> The code for this looks good to me.? Thanks for including the comments about unloading. Coleen On 6/15/18 3:26 PM, Lois Foltan wrote: > Please review this updated webrev based on additional comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.2/webrev/ > > Thanks, > Lois > > On 6/14/2018 3:56 PM, Lois Foltan wrote: >> Please review this updated webrev that address review comments received. >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >> >> Thanks, >> Lois >> >> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>> Please review this change to standardize on how to obtain a class >>> loader's name within the VM. SystemDictionary::loader_name() methods >>> have been removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error messages, logging, jcmd, JFR) this change also adopts a new >>> format to append to a class loader's name its identityHashCode and >>> if the loader has not been explicitly named it's qualified class >>> name is used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and >>> 'platform'. >>> >>> open webrev at >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> ?????????????? hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> >> > From coleen.phillimore at oracle.com Mon Jun 18 19:42:00 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 18 Jun 2018 15:42:00 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> Message-ID: <775e7d27-b835-4cb2-7a78-b1530f26a3e8@oracle.com> This version is good as well.? Thank you for working through all the concerns. 
Coleen On 6/15/18 7:52 PM, Lois Foltan wrote: > Hi Thomas, > > I have read through all your comments below, thank you.? I think the > best compromise that hopefully will enable this change to go forward > is to back out my changes to classfile/classLoaderHierarchyDCmd.cpp > and classfile/classLoaderStats.cpp.? This will allow the > serviceability team to review the new format for the class loader's > name_and_id and go forward if applicable to jcmd in a follow on RFE.? > Updated webrev at: > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ > > Thanks, > Lois > > On 6/15/2018 3:43 PM, Thomas St?fe wrote: > >> Hi Lois, >> >> On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan >> wrote: >>> On 6/15/2018 3:06 AM, Thomas St?fe wrote: >>> >>> Hi Lois, >>> >>> Hi Thomas, >>> Thank you for looking at this change and giving it another round of >>> review! >>> >>> ---- >>> >>> We have now: >>> >>> Symbol* ClassLoaderData::name() >>> >>> ???? which returns ClassLoader.name >>> >>> and >>> >>> const char* ClassLoaderData::loader_name() >>> >>> ???? which returns either ClassLoader.name or, if that is null, the >>> class >>> name. >>> >>> I would like to point out that ClassLoaderData::loader_name() is >>> pretty much >>> unchanged as it exists today, so this behavior is not new or changed. >>> >> Okay. >> >>> 1) if we keep it that way, we should at least rename loader_name() to >>> something like loader_name_or_class_name() to lessen the surprise. >>> >>> 2) But maybe these two functions should have the same behaviour? >>> Return name or null if not set, not the class name? I see that nobody >>> yet uses loader_name(), so you are free to define it as you see fit. >>> >>> 3) but if (2), maybe alternativly just get rid of loader_name() >>> altogether, as just calling as_C_string() on a symbol is not worth a >>> utility function? >>> >>> I would like to leave ClassLoaderData::loader_name() in for a couple of >>> reasons.? Leaving it in discourages new methods like it to be >>> introduced in >>> the future in data structures other than ClassLoaderData, calling >>> java_lang_ClassLoader::name() directly is not safe during unloading and >>> getting rid of it may force a call to as_C_string() as option #3 >>> suggests >>> but that doesn't handle the bootstrap class loader.? Given this I >>> think the >>> best course of action would be to update ClassLoaderData.hpp with >>> the same >>> comments I put in place within ClassLoaderData.cpp for this method >>> as you >>> suggest below. >> Okay. >> >>> >>> --- >>> >>> For VM.systemdictionary, the texts seem to be a bit off: >>> >>> 29167: >>> Dictionary for loader data: 0x00007f7550cb8660 for instance a >>> 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} >>> >>> "for instance a" ? >>> >>> Dictionary for loader data: 0x00007f75503b3a50 for instance a >>> 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} >>> Dictionary for loader data: 0x00007f75503a4e30 for instance a >>> 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} >>> >>> >>> should that not be "app" or "platform", respectively? >>> >>> ... but I just see it was the same way before and not touched by your >>> change. Maybe here, your new compound name would make sense? >>> >>> ---- >>> >>> If I understand correctly this output shows up when one specifies >>> -Xlog:class+load=debug? >> I saw it as result of jcmd VM.systemdictionary (Coleen's command, I >> think?) but it may show up too in other places, I did not check. >> >>> ? 
I see that the "for instance " is printed by >>> >>> void ClassLoaderData::print_value_on(outputStream* out) const { >>> ?? if (!is_unloading() && class_loader() != NULL) { >>> ???? out->print("loader data: " INTPTR_FORMAT " for instance ", >>> p2i(this)); >>> ???? class_loader()->print_value_on(out);? // includes >>> loader_name_and_id() >>> and address of class loader instance >>> >>> and class_loader()->print_value_on(out); eventually calls >>> InstanceKlass::oop_print_value_on to print the "a". >>> >>> void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { >>> ?? st->print("a "); >>> ?? name()->print_value_on(st); >>> ?? obj->print_address_on(st); >>> ?? if (this == SystemDictionary::String_klass() >>> >>> This is a good follow up RFE since one will have to look at all the >>> calls to >>> InstanceKlass::oop_print_value_on() to determine if the "a " is still >>> applicable. >>> >> Yes, there may be a number of follow up cleanups after this patch is in. >> >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html >>> >>> >>> Good comments. >>> >>> suggested change to comment: >>> >>> ? 129?? // Obtain the class loader's name and identity hash. If the >>> class loader's >>> ? 130?? // name was not explicitly set during construction, the class >>> loader's name and id >>> ? 131?? // will be set to the qualified class name of the class loader >>> along with its >>> ? 132?? // identity hash. >>> >>> rather: >>> >>> ? 129?? // Obtain the class loader's name and identity hash. If the >>> class loader's >>> ? 130?? // name was not explicitly set during construction, the class >>> loader's ** _name_and_id field ** >>> ? 131?? // will be set to the qualified class name of the class loader >>> along with its >>> ? 132?? // identity hash. >>> >>> Done. >>> >>> >>> ---- >>> >>> ? 133?? // If for some reason the ClassLoader's constructor has not >>> been run, instead of >>> >>> I am curious, how can this happen? Bad bytecode instrumentation? >>> Should we also attempt to work in the identity hashcode in that case >>> to be consistent with the java side? Or maybe name it something like >>> "classname "? Or is this too exotic a case to care? >>> >>> Bad bytecode instrumentation, Unsafe.allocateInstance(), see test >>> open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java >>> >>> for example. >> JDK-8202758... Wow. Yikes. >> >>> ? I too was actually thinking of "classname @" so I >>> do like that approach but it is a rare case. >>> >> Thanks for taking that suggestion. >> >>> ---- >>> >>> In various places I see you using: >>> >>> 937??? if (_class_loader_klass == NULL) { // bootstrap case >>> >>> just to make sure, this is the same as >>> CLD::is_the_null_class_loader_data(), yes? So, one could use one and >>> assert the other? >>> >>> Yes.? Actually Coleen & I were discussing that maybe we could remove >>> ClassLoaderData::_class_loader_klass since its original purpose was >>> to allow >>> for ultimately a way to obtain the class loader's klass >>> external_name.? Will >>> look into creating a follow on RFE if _class_loader_klass is no longer >>> needed. >>> >> I use it in VM.classloaders and VM.metaspace, to print out the loader >> class name and in VM.classloaders verbose mode I print out the Klass* >> pointer too. We found it useful in some debugging scenarios. >> >> Btw, for the same reason I print out the "{loader oop}" in >> VM.classloaders - debugging help. 
This was also a wish of Kirk >> Pepperdine when we introduced VM.classloaders, see discussion: >> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html >> >> . >> >> There are discussions and experiments currently done to execute >> multiple jcmd subcommands at one safe point. In this context, printing >> oops is more interesting in diagnostic commands, since you can chain >> multiple commands together and get consistent oop values. See >> discussions here: >> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html >> >> (Currently, Frederic Parain from Oracle took this over and provided a >> prototype patch). >> >> But all in all, if it makes matters easier, I think yes we should >> remove _class_loader_klass from CLD. >> >>> ---- >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html >>> >>> >>> Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - >>> could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). >>> >>> Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - >>> this is something I would rather see in the printing code. >>> >>> I agree.? I removed the single quotes but I would like to leave in >>> BOOTSTAP_LOADER_NAME_LEN. >>> >> Okay. We should make sure they stay consistent, but that is no >> terrible burden. >> >>> +? // Obtain the class loader's _name, works during unloading. >>> +? const char* loader_name() const; >>> +? Symbol* name() const { return _name; } >>> >>> See above my comments to loader_name(). At the very least comment >>> should be updated describing that this function returns name or class >>> name or "bootstrap". >>> >>> Comment in ClassLoaderData.hpp will be updated as you suggest. >>> >> Thank you. >> >>> >>> ---- >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html >>> >>> >>> Hm, unfortunately, this does not look so good. I would prefer to keep >>> the old version, see here my proposal, updated to use your new >>> CLD::name() function and to remove the offending "<>" around >>> "bootstrap". >>> >>> @@ -157,13 +157,18 @@ >>> >>> ????? // Retrieve information. >>> ????? const Klass* const loader_klass = _cld->class_loader_klass(); >>> +??? const Symbol* const loader_name = _cld->name(); >>> >>> ????? branchtracker.print(st); >>> >>> ????? // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" >>> ????? st->print("+%.*s", BranchTracker::twig_len, "----------"); >>> -??? st->print(" %s,", _cld->loader_name_and_id()); >>> -??? if (!_cld->is_the_null_class_loader_data()) { >>> +??? if (_cld->is_the_null_class_loader_data()) { >>> +????? st->print(" bootstrap"); >>> +??? } else { >>> +????? if (loader_name != NULL) { >>> +??????? st->print(" \"%s\",", loader_name->as_C_string()); >>> +????? } >>> ??????? st->print(" %s", loader_klass != NULL ? >>> loader_klass->external_name() : "??"); >>> ??????? st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); >>> ????? } >>> >>> This also depends on what you decide happens with CLD::loader_name(). >>> If that one were to return "loader name or null if not set, as >>> ra-allocated const char*", it could be used here. >>> >>> I like this change and I like how the output looks.? Can you take >>> another >>> look at the next webrev's updated comments in test >>> serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? >> Sure. It is not yet posted, yes? 
>> >> May take till monday though, I am gone over the weekend. >> >>> ? I plan to open an RFE >>> to have the serviceability team consider removing the address of the >>> loader >>> oop now that the included identity hash provides unique identification. >>> >> See my remarks above - that command including oop was added by me, and >> if possible I'd like to keep the oop for debugging purposes. However, >> I could move the output to the "verbose" section (if you run >> VM.classloaders verbose, there are additional things printed below the >> class loader name). >> >> Note however, that printing "{}" was consistent with pre-existing >> commands from Oracle, in this case VM.systemdictionary. >> >>> >>> ---- >>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html >>> >>> >>> In VM.classloader_stats we see the effect of the new naming: >>> >>> x000000080000a0b8? 0x00000008000623f0 0x00007f5facafe540?????? 1 >>> 6144????? 4064? jdk.internal.reflect.DelegatingClassLoader @7b5a12ae >>> 0x000000080000a0b8? 0x00000008000623f0 0x00007f5facbcdd50?????? 1 >>> ? 6144????? 3960? jdk.internal.reflect.DelegatingClassLoader @5b529706 >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facbcca00????? 10 >>> 90112???? 51760? 'MyInMemoryClassLoader' @17cdf2d0 >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facbca560?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @1477089c >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba7890?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @a87f8ec >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba5390?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @5a3bc7ed >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba3bf0?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @48c76607 >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb23f80?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @1224144a >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb228f0?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @75437611 >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb65c60?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @25084a1e >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb6a030?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @2d2ffcb7 >>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb4bfe0?????? 1 >>> ? 6144????? 4184? 'MyInMemoryClassLoader' @42a48628 >>> 0x0000000800010340? 0x00000008000107a8? 0x00007f5fac3bd670 1064 >>> 7004160?? 6979376? 'app' >>> ???????????????????????????????????????????????????????????????? 96 >>> 311296??? 202600?? + unsafe anonymous classes >>> 0x0000000000000000? 0x0000000000000000? 0x00007f5fac1da1e0 1091 >>> 8380416?? 8301048? 'bootstrap' >>> ???????????????????????????????????????????????????????????????? 92 >>> 263168??? 169808?? + unsafe anonymous classes >>> 0x000000080000a0b8? 0x000000080000a0b8 0x00007f5faca63460?????? 1 >>> ? 6144????? 3960? jdk.internal.reflect.DelegatingClassLoader @5bd03f44 >>> >>> >>>> Since we hide now the class name of the loader, if everyone names >>>> their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - >>>> we loose information. >>> We loose the name of class loader's class' fully qualified name only >>> in the >>> situation where the class loader's name has been explicitly >>> specified by the >>> user during construction.? I would think in that case one would want >>> to see >>> the explicitly given name of the class loader.? 
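To make the naming scheme under discussion concrete, here is a small stand-alone sketch of how such a display string could be composed; name_and_id() and its parameters are illustrative only, not the actual ClassLoaderData implementation. An explicitly named loader prints as 'name' @<identity hash>, an unnamed one falls back to its qualified class name plus the hash, and the built-in loaders omit the hash because each has only one instance.

#include <cstdio>
#include <string>

static std::string name_and_id(const char* explicit_name,
                               const char* class_name,
                               unsigned int identity_hash,
                               bool is_builtin) {
  std::string s = (explicit_name != nullptr)
      ? "'" + std::string(explicit_name) + "'"
      : std::string(class_name);
  if (!is_builtin) {
    char suffix[16];
    std::snprintf(suffix, sizeof(suffix), " @%x", identity_hash);
    s += suffix;
  }
  return s;
}

int main() {
  // 'MyInMemoryClassLoader' @17cdf2d0
  std::puts(name_and_id("MyInMemoryClassLoader", "MyInMemoryClassLoader", 0x17cdf2d0, false).c_str());
  // jdk.internal.reflect.DelegatingClassLoader @7b5a12ae
  std::puts(name_and_id(nullptr, "jdk.internal.reflect.DelegatingClassLoader", 0x7b5a12ae, false).c_str());
  // 'bootstrap'
  std::puts(name_and_id("bootstrap", "", 0, true).c_str());
  return 0;
}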
We also gain in either >>> situation (unnamed or named class loader), the class loader's >>> identity hash >>> which allows for uniquely identifying a class loader in question. >> For the record, I would prefer a naming scheme which printed >> unconditionally both name and class name, if both are set: >> >> '"name", instance of , @id' >> >> or >> >> 'instance of , @id' >> >> or maybe some more condensed, technical form, as a clear triple: >> >> '[name, , @id]' or '{name, , @id}' >> >> The reason why I keep harping on this is that it is useful to have >> consistent output, meaning, output that does not change its format on >> a line-by-line base. >> >> Just a tiny example why this is useful, lets say I run a Spring MVC >> app and want to know the number of Spring loaders, I do a: >> >> ./images/jdk/bin/jcmd hello VM.classloaders | grep >> org.springframework | wc -l >> >> Won't work consistently anymore if class names disappear for loader >> names which have names. >> >> Of course, there are myriad other ways to get the same information, so >> this is just an illustration. >> >> -- >> >> But I guess I won't convince you that this is better, and it seems you >> spent a lot of thoughts and discussions on this point already. I think >> this is a case of one-size-fits-not-all. And also a matter of taste. >> >> If emphasis is on brevity, your naming scheme is better. If >> ease-of-parsing and ease-of-reading are important, I think my scheme >> wins. >> >> But as long as we have alternatives - e.g. CLD::name() and >> CLD::class_loader_class() - and as long as VM.classloaders and >> VM.metaspace commands stay useful, I am content and can live with your >> scheme. >> >>>> I'm afraid this will be an issue if people will >>>> start naming their class loaders more and more. It is not unimaginable >>>> that completely different frameworks name their loaders the same. >>> Point taken, however, doesn't including the identity hash allow for >>> unique >>> identification of the class loader? >> I think the point of diagnostic commands is to get information quick. >> An identity hash may help me after I managed to finally resolve it, >> but it is not a quick process (that I know of). Whereas, for example, >> just reading "com.wily.introscope.Loader" tells me immediately that >> the VM I am looking at has Wily byte code modifications enabled. >> >>> >>>> This "name or if not then class name" scheme will also complicate >>>> parsing a lot for people who parse the output of these commands. I >>>> would strongly prefer to see both - name and class type. >>> Much like classfile/classLoaderHierarchyDCmd.cpp now generates, >>> correct? >>> >> Yes! :) >> >>> Thanks, >>> Lois >>> >>> >> I just saw your webrev popping in, but again it is late. I'll take a >> look tomorrow morning or monday. Thank you for your work. >> >> ..Thomas >> >>> ---- >>> >>> Hmm. At this point I noticed that I still had general reservations >>> about the new compound naming scheme - see my remarks above. So I >>> guess I stop here to wait for your response before continuing the code >>> review. >>> >>> Thanks & Kind Regards, >>> >>> Thomas >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan >>> wrote: >>> >>> Please review this updated webrev that address review comments >>> received. 
>>> >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >>> >>> Thanks, >>> Lois >>> >>> >>> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>> >>> Please review this change to standardize on how to obtain a class >>> loader's >>> name within the VM.? SystemDictionary::loader_name() methods have been >>> removed in favor of ClassLoaderData::loader_name(). >>> >>> Since the loader name is largely used in the VM for display purposes >>> (error messages, logging, jcmd, JFR) this change also adopts a new >>> format to >>> append to a class loader's name its identityHashCode and if the >>> loader has >>> not been explicitly named it's qualified class name is used instead. >>> >>> 391 /** >>> 392 * If the defining loader has a name explicitly set then >>> 393 * '' @ >>> 394 * If the defining loader has no name then >>> 395 * @ >>> 396 * If it's built-in loader then omit `@` as there is only one >>> instance. >>> 397 */ >>> >>> The names for the builtin loaders are 'bootstrap', 'app' and >>> 'platform'. >>> >>> open webrev at >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>> >>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>> ??????????????? hs-tier(3-5), jdk-tier(3) in progress >>> >>> Thanks, >>> Lois >>> >>> > From mandy.chung at oracle.com Mon Jun 18 19:58:38 2018 From: mandy.chung at oracle.com (mandy chung) Date: Mon, 18 Jun 2018 12:58:38 -0700 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> Message-ID: <87cfb0fc-0ab4-3f31-2f8b-847d5e9d7a3f@oracle.com> On 6/15/18 4:52 PM, Lois Foltan wrote: > Hi Thomas, > > I have read through all your comments below, thank you.? I think the > best compromise that hopefully will enable this change to go forward is > to back out my changes to classfile/classLoaderHierarchyDCmd.cpp and > classfile/classLoaderStats.cpp.? This will allow the serviceability team > to review the new format for the class loader's name_and_id and go > forward if applicable to jcmd in a follow on RFE.? Updated webrev at: > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ Looks good to me. Mandy From coleen.phillimore at oracle.com Mon Jun 18 20:09:23 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 18 Jun 2018 16:09:23 -0400 Subject: RFR: JDK-8204941: Refactor TemplateTable::_new to use MacroAssembler helpers for tlab and eden In-Reply-To: <781aa5d5-c320-0d58-3e45-b34613ef88d8@oracle.com> References: <445338d5-43a9-363c-cc61-e9c66195b5ff@redhat.com> <781aa5d5-c320-0d58-3e45-b34613ef88d8@oracle.com> Message-ID: <59ced498-9189-001f-03f0-03c29fdc2e3a@oracle.com> This looks good to me as well.? Except sparc and other platforms still have the duplicated tlab and eden allocation in TemplateTable::_new(), but aarch64 has code similar to this.? It seems like something we could not clean up unless we need to. thanks, Coleen On 6/15/18 12:49 PM, Vladimir Kozlov wrote: > Looks good to me. 
> > Thanks, > Vladimir > > On 6/13/18 4:53 AM, Roman Kennke wrote: >> TemplateTable::_new (in x86) currently has its own implementation of >> tlab and eden allocation paths, which are basically identical to the >> ones in MacroAssembler::tlab_allocate() and >> MacroAssembler::eden_allocate(). TemplateTable should use the >> MacroAssembler helpers to avoid duplication. >> >> The MacroAssembler version of eden_allocate() features an additional >> bounds check to prevent wraparound of obj-end. I am not sure if/how that >> can ever happen and if/how this could be exploited, but it might be >> relevant. In any case, I think it's a good thing to include it in the >> interpreter too. >> >> The refactoring can be taken further: fold incr_allocated_bytes() into >> eden_allocate() (they always come in pairs), probably fold >> tlab_allocate() and eden_allocate() into a single helper (they also seem >> to come in pairs mostly), also fold initialize_object/initialize_header >> sections too, but 1. I wanted to keep this manageable and 2. I also want >> to factor the tlab_allocate/eden_allocate paths into BarrierSetAssembler >> as next step (which should also include at least some of the mentioned >> unifications). >> >> http://cr.openjdk.java.net/~rkennke/JDK-8204941/webrev.00/ >> >> Passes tier1_hotspot >> >> Can I please get a review? >> >> Roman >> From igor.ignatyev at oracle.com Mon Jun 18 23:19:57 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 18 Jun 2018 16:19:57 -0700 Subject: RFR(XS) : 8202559 : Tests which start VM using JNI start failing after compile upgrade to VC 2017 Message-ID: <00A4B29E-267F-4EDD-BA70-02F3BF565727@oracle.com> http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html > 12 lines changed: 5 ins; 0 del; 7 mod Hi all, could you please review this small fix for ExecDriver class to add /bin directory to PATH in 'launcher' mode on windows? we started to observer that the tests which use ExecDriver's launcher mode fail due to missed a shared library. the reason is that ExecDriver's adds only /bin/, but has to add both /bin and /bin/ webrev: http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8202559 testing: affected tests (none in open) on windows and non-windows platform Thanks, -- Igor From erik.joelsson at oracle.com Mon Jun 18 23:46:59 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 18 Jun 2018 16:46:59 -0700 Subject: RFR(XS) : 8202559 : Tests which start VM using JNI start failing after compile upgrade to VC 2017 In-Reply-To: <00A4B29E-267F-4EDD-BA70-02F3BF565727@oracle.com> References: <00A4B29E-267F-4EDD-BA70-02F3BF565727@oracle.com> Message-ID: Looks good to me. /Erik On 2018-06-18 16:19, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html >> 12 lines changed: 5 ins; 0 del; 7 mod > Hi all, > > could you please review this small fix for ExecDriver class to add /bin directory to PATH in 'launcher' mode on windows? > > we started to observer that the tests which use ExecDriver's launcher mode fail due to missed a shared library. 
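To make the failure mode concrete for readers outside the thread: in launcher mode a native process loads the JVM through JNI, and on Windows jvm.dll and the libraries it depends on are typically resolved through the PATH search, so both JDK bin directories have to be visible. Below is a minimal native sketch of that idea only - it is not the ExecDriver code; the JDK location and the "server" variant directory are assumptions.

#include <windows.h>
#include <cstdlib>
#include <string>

// Hypothetical sketch: make both <jdk>/bin and <jdk>/bin/<vm variant> visible on
// PATH before loading jvm.dll, which is what a JNI-based launcher relies on.
static void prepend_jvm_dirs_to_path(const std::string& jdk_home) {
  std::string bin     = jdk_home + "\\bin";
  std::string variant = bin + "\\server";        // assumed VM variant directory
  std::string path    = bin + ";" + variant;
  if (const char* old_path = std::getenv("PATH")) {
    path += ";";
    path += old_path;
  }
  _putenv_s("PATH", path.c_str());               // update the process environment
}

int main() {
  prepend_jvm_dirs_to_path("C:\\jdk-11");        // hypothetical JDK location
  HMODULE jvm = LoadLibraryA("jvm.dll");         // resolved via the updated PATH
  return (jvm != nullptr) ? 0 : 1;
}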
the reason is that ExecDriver's adds only /bin/, but has to add both /bin and /bin/ > > webrev: http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8202559 > testing: affected tests (none in open) on windows and non-windows platform > > Thanks, > -- Igor From vladimir.kozlov at oracle.com Tue Jun 19 00:59:21 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 Jun 2018 17:59:21 -0700 Subject: RFR(XS) : 8202559 : Tests which start VM using JNI start failing after compile upgrade to VC 2017 In-Reply-To: <00A4B29E-267F-4EDD-BA70-02F3BF565727@oracle.com> References: <00A4B29E-267F-4EDD-BA70-02F3BF565727@oracle.com> Message-ID: Good. thanks, Vladimir On 6/18/18 4:19 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html >> 12 lines changed: 5 ins; 0 del; 7 mod > > Hi all, > > could you please review this small fix for ExecDriver class to add /bin directory to PATH in 'launcher' mode on windows? > > we started to observer that the tests which use ExecDriver's launcher mode fail due to missed a shared library. the reason is that ExecDriver's adds only /bin/, but has to add both /bin and /bin/ > > webrev: http://cr.openjdk.java.net/~iignatyev//8202559/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8202559 > testing: affected tests (none in open) on windows and non-windows platform > > Thanks, > -- Igor > From david.holmes at oracle.com Tue Jun 19 03:01:06 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 13:01:06 +1000 Subject: RFR (S) 8203479: JFR enabled ARM32 build assertion failure In-Reply-To: <833e84ab-5b3c-b819-6aec-64d09d4ca7df@bell-sw.com> References: <7ffe11f8-53b7-f7e2-bb54-c24410934db4@oracle.com> <833e84ab-5b3c-b819-6aec-64d09d4ca7df@bell-sw.com> Message-ID: Hi Boris, Looks good. I've pushed this with the single review as it is constrained to a non-Oracle port, and is very simple. Thanks, David On 18/06/2018 7:18 PM, Boris Ulasevich wrote: > Hi David, > > Many thanks! Here is the werbev with updated copyrights: > http://cr.openjdk.java.net/~bulasevich/8203479/webrev.02 > > thanks, > Boris > > On 18.06.2018 05:39, David Holmes wrote: >> Hi Boris, >> >> On 15/06/2018 8:44 PM, Boris Ulasevich wrote: >>> Hi, >>> >>> Please review the following patch: >>> ?? http://cr.openjdk.java.net/~bulasevich/8203479/webrev.01 >>> ?? https://bugs.openjdk.java.net/browse/JDK-8203479 >>> >>> Assertion fires in JFR codes on first VM thread setup because VM >>> globals are not yet initialized (and supports_cx8 property is not >>> predefined for ARM32 platform). I propose to exploit >>> early_initialize() method to set up supports_cx8 property on early >>> stage of VM initialization. >> >> Your fix looks good. Please update copyright years. >> Just some additional commentary ... from the bug report: >> >> First _supports_cx8 usage: >> ?? Threads::create_vm -> JavaThread::JavaThread(bool) --> >> Thread::Thread() --> JfrThreadLocal::JfrThreadLocal() --> >> atomic_inc(unsigned long long volatile*) >> >> I'm not sure when this was introduced but it's risky to perform atomic >> operations on jlong early during VM initialization. > > For me it comes from revision when JFR was opensourced: > > changeset:?? 50113:caf115bb98ad > user:??????? egahlin > date:??????? Tue May 15 20:24:34 2018 +0200 > summary:???? 
8199712: Flight Recorder > >> Apart from the problem you encountered with ARM32, on some platforms >> it may require that the stub-generator has been initialized. (Though >> in that case we may harmlessly fallback to using non-atomic operations >> - which is okay when creating the initial JavaThread.) > > Good point! Yes, we discussed as an optional solution: (1) reworking jfr > next_thread_id() function to increase counter without atomic_inc > function call when counter value is zero, and (2) disable assertion when > !is_init_completed(). We decided that for our case initialization > sequence reorder works better. > >> Thanks, >> David >> >>> Thanks >>> Boris From david.holmes at oracle.com Tue Jun 19 04:54:25 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 14:54:25 +1000 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: Message-ID: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> Hi Volker, v3 looks much cleaner - thanks. But AFAICS the change to jvmtiEnv.cpp is also not needed. ClassLoaderExt::append_boot_classpath exists regardless of INCLUDE_CDS but operates differently (just calling ClassLoader::add_to_boot_append_entries). Thanks, David On 19/06/2018 2:04 AM, Volker Simonis wrote: > On Mon, Jun 18, 2018 at 8:17 AM, David Holmes wrote: >> Hi Volker, >> >> src/hotspot/share/runtime/globals.hpp >> >> This change should not be needed! We do minimal VM builds without CDS and we >> don't have to touch the UseSharedSpaces defaults (unless recent change have >> broken this - in which case that needs to be addressed in its own right!) >> > > Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, > because UseSharedSpaces is reseted later on at the end of > Arguments::parse(). I just thought it would be cleaner to disable it > statically, if the VM doesn't support it. But anyway I don't really > mind and I've reverted that change in globals.hpp. > >> src/hotspot/share/classfile/javaClasses.cpp >> >> AFAICS you should be using INCLUDE_CDS in the ifdefs not >> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this should >> be needed as we have not needed it before. As Thomas notes we have: >> >> ./hotspot/share/memory/metaspaceShared.hpp: static bool >> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >> ./hotspot/share/classfile/stringTable.hpp: static oop >> create_archived_string(oop s, Thread* THREAD) >> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >> >> so these methods should be defined when CDS is not available. >> > > Thomas and you are right. Must have been a mis-configuration on AIX > where I saw undefined symbols at link time. I've removed the ifdefs > from javaClasses.cpp now. > > Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp > into CDS_ONLY macros as suggested by Jiangli because the really only > make sense for a CDS-enabled VM. > > Here's the new webrev: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ > > Please let me know if you think there's still something missing. > > Regards, > Volker > > >> ?? >> >> Thanks, >> David >> ----- >> >> >> >> >> >> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> can I please have a review for the following fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>> >>> CDS does currently not work on AIX because of the way how we >>> reserve/commit memory on AIX. 
The problem is that we're using a >>> combination of shmat/mmap depending on the page size and the size of >>> the memory chunk to reserve. This makes it impossible to reliably >>> reserve the memory for the CDS archive and later on map the various >>> parts of the archive into these regions. >>> >>> In order to fix this we would have to completely rework the memory >>> reserve/commit/uncommit logic on AIX which is currently out of our >>> scope because of resource limitations. >>> >>> Unfortunately, I could not simply disable CDS in the configure step >>> because some of the shared code apparently relies on parts of the CDS >>> code which gets excluded from the build when CDS is disabled. So I >>> also fixed the offending parts in hotspot and cleaned up the configure >>> logic for CDS. >>> >>> Thank you and best regards, >>> Volker >>> >>> PS: I did run the job through the submit forest >>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>> weren't really useful because they mention build failures on linux-x64 >>> which I can't reproduce locally. >>> >> From david.holmes at oracle.com Tue Jun 19 05:10:41 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 15:10:41 +1000 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> Message-ID: <5cfeb06c-30d3-4af4-e085-4e8c60aa5d98@oracle.com> Hi Robbin, Overall changes seem okay. I gave a lot of thought as to whether an "old" thread still returning from sem_wait could potentially interfere with the next use of the sempahore, but it seems okay. Interesting (read "scary") glibc bug! Minor comments: handshake.cpp: 311 if (thread->is_terminated()) { 312 // If thread is not on threads list but armed, cancel. 313 thread->cancel_handshake(); 314 return; 315 } did you actually encounter late handshakes in the thread lifecycle causing problems, or is this just being cautious? 377 if(vmthread_can_process_handshake(target)) { Space needed after "if" Thanks, David ----- On 19/06/2018 12:05 AM, Robbin Ehn wrote: > On 06/18/2018 03:07 PM, Robbin Ehn wrote: >> Hi all, >> >> After some internal discussions I changed the patch to: >> http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ > > Correct external url: > http://cr.openjdk.java.net/~rehn/8204166/v2/ > > /Robbin > >> >> Which handles thread off javathreads list better. >> >> Passes handshake testing and ZGC testing seems okay. >> >> Thanks, Robbin >> >> On 06/14/2018 12:11 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >>> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >>> >>> The root cause of this failure is a bug in the posix semaphores: >>> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >>> >>> Thread a: >>> sem_post(my_sem); >>> >>> Thread b: >>> sem_wait(my_sem); >>> sem_destroy(my_sem); >>> >>> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >>> If Thread b start executing directly after the increment in post but >>> before >>> Thread a leaves the call to post and manage to destroy the semaphore. >>> Thread a >>> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >>> >>> Note that mutexes have had same issue on some platforms: >>> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >>> Fixed in 2.23. 
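A self-contained C++ illustration of the race pattern just described may help readers who do not have the glibc report at hand. This is only a sketch of the problematic shape and of the safe ordering, not the HotSpot handshake code; thread setup and naming are illustrative.

#include <semaphore.h>
#include <pthread.h>
#include <cstdio>

// Sketch of the post/destroy race: on affected glibc versions the poster can
// still be inside sem_post() when the waiter wakes up, so destroying the
// semaphore immediately after sem_wait() lets sem_post() fail with EINVAL.
// Keeping the semaphore alive past the post (static storage plus a join here)
// sidesteps the race, which matches the patch's approach of using one
// long-lived semaphore.
static sem_t done;

static void* poster(void*) {
  if (sem_post(&done) != 0) {     // the racy variant can observe EINVAL here
    std::perror("sem_post");
  }
  return nullptr;
}

int main() {
  sem_init(&done, 0, 0);
  pthread_t t;
  pthread_create(&t, nullptr, poster, nullptr);
  sem_wait(&done);                // racy code would call sem_destroy() right here
  pthread_join(t, nullptr);       // safe: wait until sem_post() has fully returned
  sem_destroy(&done);
  return 0;
}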
>>> >>> Since we only have one handshake operation running at anytime >>> (safepoints and handshakes are also mutual exclusive, both run on VM >>> Thread) we can actually always use the same semaphore. This patch >>> changes the _done semaphore to be static instead, thus avoiding the >>> post<->destroy race. >>> >>> Patch also contains some small changes which remove of dead code, >>> remove unneeded state, handling of cases which we can't easily say >>> will never happen and some additional error checks. >>> >>> Handshakes test passes, but they don't trigger the original issue, so >>> more interesting is that this issue do not happen when running ZGC >>> which utilize handshakes with the static semaphore. >>> >>> Thanks, Robbin From david.holmes at oracle.com Tue Jun 19 05:53:18 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 15:53:18 +1000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: Message-ID: On 19/06/2018 12:10 AM, Doerr, Martin wrote: > Hi, > > 32 bit build is currently broken due to: > "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior > "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior const uint64_t PrngModPower = 48; ! const uint64_t PrngModMask = right_n_bits(PrngModPower); So right_n_bits(48) should expand to: (nth_bit(48) - 1) where nth_bit is: (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) so the compiler is complaining that (OneBit << 48) is undefined, but the conditional operator will not execute that path (as BitsPerWord == 32). That seems like a compiler bug to me. :( David ----- > Please review this small fix: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ > > Best regards, > Martin > From rkennke at redhat.com Tue Jun 19 06:39:51 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 19 Jun 2018 08:39:51 +0200 Subject: RFR: JDK-8204941: Refactor TemplateTable::_new to use MacroAssembler helpers for tlab and eden In-Reply-To: <59ced498-9189-001f-03f0-03c29fdc2e3a@oracle.com> References: <445338d5-43a9-363c-cc61-e9c66195b5ff@redhat.com> <781aa5d5-c320-0d58-3e45-b34613ef88d8@oracle.com> <59ced498-9189-001f-03f0-03c29fdc2e3a@oracle.com> Message-ID: <45e837c8-09d9-bd84-bd74-028d15445501@redhat.com> Thank you Coleen and Vladimir for reviewing! Yes, AArch64 does it better, some other platforms don't. I don't have access to those other platforms, otherwise I'd fix them too. I submitted the changeset for testing in Mach5. Thanks, Roman > > This looks good to me as well.? Except sparc and other platforms still > have the duplicated tlab and eden allocation in TemplateTable::_new(), > but aarch64 has code similar to this.? It seems like something we could > not clean up unless we need to. > > thanks, > Coleen > > On 6/15/18 12:49 PM, Vladimir Kozlov wrote: >> Looks good to me. >> >> Thanks, >> Vladimir >> >> On 6/13/18 4:53 AM, Roman Kennke wrote: >>> TemplateTable::_new (in x86) currently has its own implementation of >>> tlab and eden allocation paths, which are basically identical to the >>> ones in MacroAssembler::tlab_allocate() and >>> MacroAssembler::eden_allocate(). TemplateTable should use the >>> MacroAssembler helpers to avoid duplication. >>> >>> The MacroAssembler version of eden_allocate() features an additional >>> bounds check to prevent wraparound of obj-end. 
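For readers not fluent in the assembler, here is a rough C++ model of the fast path with that extra check. The names, the slow-path hook and the single-threaded bump are illustrative only; this is not the MacroAssembler code.

#include <cstddef>
#include <cstdint>

// Rough model of bump-pointer allocation with the wraparound check: if
// top + size overflows the address space, obj_end compares lower than obj and
// the fast path must be abandoned instead of "succeeding" with a bogus top.
struct Space {
  uintptr_t top;   // next free byte
  uintptr_t end;   // end of the space (eden or TLAB)
};

uintptr_t allocate(Space& s, size_t size_in_bytes, uintptr_t (*slow_path)(size_t)) {
  uintptr_t obj     = s.top;
  uintptr_t obj_end = obj + size_in_bytes;
  if (obj_end < obj || obj_end > s.end) {   // wraparound or space exhausted
    return slow_path(size_in_bytes);        // e.g. refill the TLAB or trigger a GC
  }
  s.top = obj_end;                          // real eden code uses an atomic CAS here
  return obj;
}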
I am not sure if/how that >>> can ever happen and if/how this could be exploited, but it might be >>> relevant. In any case, I think it's a good thing to include it in the >>> interpreter too. >>> >>> The refactoring can be taken further: fold incr_allocated_bytes() into >>> eden_allocate() (they always come in pairs), probably fold >>> tlab_allocate() and eden_allocate() into a single helper (they also seem >>> to come in pairs mostly), also fold initialize_object/initialize_header >>> sections too, but 1. I wanted to keep this manageable and 2. I also want >>> to factor the tlab_allocate/eden_allocate paths into BarrierSetAssembler >>> as next step (which should also include at least some of the mentioned >>> unifications). >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8204941/webrev.00/ >>> >>> Passes tier1_hotspot >>> >>> Can I please get a review? >>> >>> Roman >>> > From volker.simonis at gmail.com Tue Jun 19 06:50:37 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 19 Jun 2018 08:50:37 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> References: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> Message-ID: On Tue, Jun 19, 2018 at 6:54 AM, David Holmes wrote: > Hi Volker, > > v3 looks much cleaner - thanks. > > But AFAICS the change to jvmtiEnv.cpp is also not needed. > ClassLoaderExt::append_boot_classpath exists regardless of INCLUDE_CDS but > operates differently (just calling ClassLoader::add_to_boot_append_entries). > That's not entirely true because the whole compilation unit (i.e. classLoaderExt.cpp) which contains 'ClassLoaderExt::append_boot_classpath()' is excluded from the compilation if CDS is disabled (see make/hotspot/lib/JvmFeatures.gmk). So I can either move the whole implementation of 'ClassLoaderExt::append_boot_classpath()' into classLoaderExt.hpp in which case things would work as you explained and my changes to jvmtiEnv.cpp could be removed or leave the whole code and change as is. Please let me know what you think? Regards, Volker > Thanks, > David > > > On 19/06/2018 2:04 AM, Volker Simonis wrote: >> >> On Mon, Jun 18, 2018 at 8:17 AM, David Holmes >> wrote: >>> >>> Hi Volker, >>> >>> src/hotspot/share/runtime/globals.hpp >>> >>> This change should not be needed! We do minimal VM builds without CDS and >>> we >>> don't have to touch the UseSharedSpaces defaults (unless recent change >>> have >>> broken this - in which case that needs to be addressed in its own right!) >>> >> >> Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, >> because UseSharedSpaces is reseted later on at the end of >> Arguments::parse(). I just thought it would be cleaner to disable it >> statically, if the VM doesn't support it. But anyway I don't really >> mind and I've reverted that change in globals.hpp. >> >>> src/hotspot/share/classfile/javaClasses.cpp >>> >>> AFAICS you should be using INCLUDE_CDS in the ifdefs not >>> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this >>> should >>> be needed as we have not needed it before. As Thomas notes we have: >>> >>> ./hotspot/share/memory/metaspaceShared.hpp: static bool >>> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >>> ./hotspot/share/classfile/stringTable.hpp: static oop >>> create_archived_string(oop s, Thread* THREAD) >>> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >>> >>> so these methods should be defined when CDS is not available. >>> >> >> Thomas and you are right. 
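The NOT_CDS_JAVA_HEAP_RETURN_(...) declarations quoted above rely on a declaration-suffix macro pattern (the real macros live in utilities/macros.hpp). A simplified, self-contained sketch of the pattern, with stand-in names rather than the real header contents, looks like this:

#ifndef INCLUDE_CDS_JAVA_HEAP
#define INCLUDE_CDS_JAVA_HEAP 0     // pretend the feature is compiled out for this sketch
#endif

#if INCLUDE_CDS_JAVA_HEAP
#define NOT_CDS_JAVA_HEAP_RETURN_(value)                 // declaration only; body lives in a .cpp file
#else
#define NOT_CDS_JAVA_HEAP_RETURN_(value) { return value; }
#endif

struct Object;                                           // stand-in for the VM's oop type

class MetaspaceSharedSketch {
 public:
  // With the feature built in this is a plain declaration and the definition sits in a
  // .cpp file; with the feature compiled out the macro supplies an inline body, so
  // callers still compile and link even though that .cpp file is excluded from the build.
  static bool is_archive_object(Object* p) NOT_CDS_JAVA_HEAP_RETURN_(false);
};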
Must have been a mis-configuration on AIX >> where I saw undefined symbols at link time. I've removed the ifdefs >> from javaClasses.cpp now. >> >> Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp >> into CDS_ONLY macros as suggested by Jiangli because the really only >> make sense for a CDS-enabled VM. >> >> Here's the new webrev: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ >> >> Please let me know if you think there's still something missing. >> >> Regards, >> Volker >> >> >>> ?? >>> >>> Thanks, >>> David >>> ----- >>> >>> >>> >>> >>> >>> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> can I please have a review for the following fix: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>>> >>>> CDS does currently not work on AIX because of the way how we >>>> reserve/commit memory on AIX. The problem is that we're using a >>>> combination of shmat/mmap depending on the page size and the size of >>>> the memory chunk to reserve. This makes it impossible to reliably >>>> reserve the memory for the CDS archive and later on map the various >>>> parts of the archive into these regions. >>>> >>>> In order to fix this we would have to completely rework the memory >>>> reserve/commit/uncommit logic on AIX which is currently out of our >>>> scope because of resource limitations. >>>> >>>> Unfortunately, I could not simply disable CDS in the configure step >>>> because some of the shared code apparently relies on parts of the CDS >>>> code which gets excluded from the build when CDS is disabled. So I >>>> also fixed the offending parts in hotspot and cleaned up the configure >>>> logic for CDS. >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> PS: I did run the job through the submit forest >>>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>>> weren't really useful because they mention build failures on linux-x64 >>>> which I can't reproduce locally. >>>> >>> > From martin.doerr at sap.com Tue Jun 19 06:58:22 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 19 Jun 2018 06:58:22 +0000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: Message-ID: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> Hi David, it's not a compiler bug. The problem is that "const intptr_t OneBit = 1;" uses intptr_t which is 32 bit on 32 bit platforms. We can't shift it by 48. Seems like the other usages of right_n_bits only use less than 32. I just noticed that there's more to fix: --- a/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Mon Jun 18 16:00:23 2018 +0200 +++ b/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Tue Jun 19 08:53:26 2018 +0200 @@ -459,7 +459,7 @@ live_object = (ObjectTrace*) malloc(sizeof(*live_object)); live_object->frames = allocated_frames; live_object->frame_count = count; - live_object->size = size; + live_object->size = (size_t)size; live_object->thread = thread; live_object->object = (*jni)->NewWeakGlobalRef(jni, object); Best regards, Martin -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 19. 
Juni 2018 07:53 To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; Roland Westrelin (roland.westrelin at oracle.com) ; JC Beyler Subject: Re: RFR(XS): 8205172: 32 bit build broken On 19/06/2018 12:10 AM, Doerr, Martin wrote: > Hi, > > 32 bit build is currently broken due to: > "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior > "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior const uint64_t PrngModPower = 48; ! const uint64_t PrngModMask = right_n_bits(PrngModPower); So right_n_bits(48) should expand to: (nth_bit(48) - 1) where nth_bit is: (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) so the compiler is complaining that (OneBit << 48) is undefined, but the conditional operator will not execute that path (as BitsPerWord == 32). That seems like a compiler bug to me. :( David ----- > Please review this small fix: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ > > Best regards, > Martin > From david.holmes at oracle.com Tue Jun 19 07:18:29 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 17:18:29 +1000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> References: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> Message-ID: <58f95a0e-b1fe-4f56-5da0-4bd6893df104@oracle.com> Hi Martin, On 19/06/2018 4:58 PM, Doerr, Martin wrote: > Hi David, > > it's not a compiler bug. The problem is that "const intptr_t OneBit = 1;" uses intptr_t which is 32 bit on 32 bit platforms. > We can't shift it by 48. Seems like the other usages of right_n_bits only use less than 32. No we can't, but we're also not going to try as we won't take that path on a 32-bit system. So we're getting a compiler warning about code that the compiler thinks might be executed even though it actually won't. In this case though the fix is correct as we're always dealing with 64-bit and those macros depend on the pointer-size. > I just noticed that there's more to fix: > --- a/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Mon Jun 18 16:00:23 2018 +0200 > +++ b/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Tue Jun 19 08:53:26 2018 +0200 > @@ -459,7 +459,7 @@ > live_object = (ObjectTrace*) malloc(sizeof(*live_object)); > live_object->frames = allocated_frames; > live_object->frame_count = count; > - live_object->size = size; > + live_object->size = (size_t)size; It's not obvious that is the right fix as the incoming "size" is 64-bit. I think strictly speaking this: typedef struct _ObjectTrace{ jweak object; size_t size; Should be using a 64-bit type for size. David ----- > live_object->thread = thread; > live_object->object = (*jni)->NewWeakGlobalRef(jni, object); > > Best regards, > Martin > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 19. 
Juni 2018 07:53 > To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; Roland Westrelin (roland.westrelin at oracle.com) ; JC Beyler > Subject: Re: RFR(XS): 8205172: 32 bit build broken > > On 19/06/2018 12:10 AM, Doerr, Martin wrote: >> Hi, >> >> 32 bit build is currently broken due to: >> "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior >> "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior > > const uint64_t PrngModPower = 48; > ! const uint64_t PrngModMask = right_n_bits(PrngModPower); > > So right_n_bits(48) should expand to: > > (nth_bit(48) - 1) > > where nth_bit is: > > (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) > > so the compiler is complaining that (OneBit << 48) is undefined, but the > conditional operator will not execute that path (as BitsPerWord == 32). > That seems like a compiler bug to me. :( > > David > ----- > >> Please review this small fix: >> http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ >> >> Best regards, >> Martin >> From david.holmes at oracle.com Tue Jun 19 07:25:44 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 17:25:44 +1000 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> Message-ID: <8ada6cad-09f4-cde2-69c8-561a142bc3bb@oracle.com> Hi Volker, On 19/06/2018 4:50 PM, Volker Simonis wrote: > On Tue, Jun 19, 2018 at 6:54 AM, David Holmes wrote: >> Hi Volker, >> >> v3 looks much cleaner - thanks. >> >> But AFAICS the change to jvmtiEnv.cpp is also not needed. >> ClassLoaderExt::append_boot_classpath exists regardless of INCLUDE_CDS but >> operates differently (just calling ClassLoader::add_to_boot_append_entries). >> > > That's not entirely true because the whole compilation unit (i.e. > classLoaderExt.cpp) which contains > 'ClassLoaderExt::append_boot_classpath()' is excluded from the > compilation if CDS is disabled (see make/hotspot/lib/JvmFeatures.gmk). Hmmm. There's a CDS bug there. Either classLoaderExt.cpp should not be excluded from a non-CDS build, or it should not contain any INCLUDE_CDS guards! I suspect it should not be excluded. > So I can either move the whole implementation of > 'ClassLoaderExt::append_boot_classpath()' into classLoaderExt.hpp in > which case things would work as you explained and my changes to > jvmtiEnv.cpp could be removed or leave the whole code and change as > is. Please let me know what you think? In the interest of moving forward you can push what you have and I will file a bug against CDS to sort out classLoaderExt.cpp. Thanks, David > Regards, > Volker > >> Thanks, >> David >> >> >> On 19/06/2018 2:04 AM, Volker Simonis wrote: >>> >>> On Mon, Jun 18, 2018 at 8:17 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> src/hotspot/share/runtime/globals.hpp >>>> >>>> This change should not be needed! We do minimal VM builds without CDS and >>>> we >>>> don't have to touch the UseSharedSpaces defaults (unless recent change >>>> have >>>> broken this - in which case that needs to be addressed in its own right!) >>>> >>> >>> Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, >>> because UseSharedSpaces is reseted later on at the end of >>> Arguments::parse(). I just thought it would be cleaner to disable it >>> statically, if the VM doesn't support it. 
But anyway I don't really >>> mind and I've reverted that change in globals.hpp. >>> >>>> src/hotspot/share/classfile/javaClasses.cpp >>>> >>>> AFAICS you should be using INCLUDE_CDS in the ifdefs not >>>> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this >>>> should >>>> be needed as we have not needed it before. As Thomas notes we have: >>>> >>>> ./hotspot/share/memory/metaspaceShared.hpp: static bool >>>> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >>>> ./hotspot/share/classfile/stringTable.hpp: static oop >>>> create_archived_string(oop s, Thread* THREAD) >>>> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >>>> >>>> so these methods should be defined when CDS is not available. >>>> >>> >>> Thomas and you are right. Must have been a mis-configuration on AIX >>> where I saw undefined symbols at link time. I've removed the ifdefs >>> from javaClasses.cpp now. >>> >>> Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp >>> into CDS_ONLY macros as suggested by Jiangli because the really only >>> make sense for a CDS-enabled VM. >>> >>> Here's the new webrev: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ >>> >>> Please let me know if you think there's still something missing. >>> >>> Regards, >>> Volker >>> >>> >>>> ?? >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>> >>>> >>>> >>>> >>>> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> can I please have a review for the following fix: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>>>> >>>>> CDS does currently not work on AIX because of the way how we >>>>> reserve/commit memory on AIX. The problem is that we're using a >>>>> combination of shmat/mmap depending on the page size and the size of >>>>> the memory chunk to reserve. This makes it impossible to reliably >>>>> reserve the memory for the CDS archive and later on map the various >>>>> parts of the archive into these regions. >>>>> >>>>> In order to fix this we would have to completely rework the memory >>>>> reserve/commit/uncommit logic on AIX which is currently out of our >>>>> scope because of resource limitations. >>>>> >>>>> Unfortunately, I could not simply disable CDS in the configure step >>>>> because some of the shared code apparently relies on parts of the CDS >>>>> code which gets excluded from the build when CDS is disabled. So I >>>>> also fixed the offending parts in hotspot and cleaned up the configure >>>>> logic for CDS. >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> PS: I did run the job through the submit forest >>>>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>>>> weren't really useful because they mention build failures on linux-x64 >>>>> which I can't reproduce locally. >>>>> >>>> >> From martin.doerr at sap.com Tue Jun 19 08:28:48 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 19 Jun 2018 08:28:48 +0000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: <58f95a0e-b1fe-4f56-5da0-4bd6893df104@oracle.com> References: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> <58f95a0e-b1fe-4f56-5da0-4bd6893df104@oracle.com> Message-ID: Hi David, thanks for reviewing and for the quick response. I think it's good that the compiler warns about undefined subexpressions even when the result is not used. Anyway, the fix is needed to get a correct 64 bit mask as you have already mentioned. I'm ok with using jlong as type for the size field. 
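To spell out the two 32-bit-safe changes under discussion, here is a small illustration that mirrors the direction of the fix rather than quoting the webrev; the stand-in type and struct names are illustrative.

#include <cstdint>
#include <cstdio>

typedef long long jlong_t;            // stand-in for JNI's jlong

// Building the 48-bit mask in 64-bit arithmetic keeps the shift well defined
// even when intptr_t (and size_t) are only 32 bits wide.
static const uint64_t PrngModPower = 48;
static const uint64_t PrngModMask  = (uint64_t(1) << PrngModPower) - 1;  // 0x0000ffffffffffff

struct ObjectTraceSketch {            // stand-in for the test's ObjectTrace
  void*   object;                     // the real struct stores a jweak
  jlong_t size;                       // was size_t, which truncates above 4 GiB on 32-bit builds
};

int main() {
  std::printf("mask = 0x%016llx\n", (unsigned long long) PrngModMask);
  ObjectTraceSketch t = { nullptr, (jlong_t) 1 << 33 };   // a size above 4 GiB is preserved
  std::printf("size = %lld\n", (long long) t.size);
  return 0;
}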
This is safe (even though I think sizes which don't fit into 32 bit should never occur on 32 bit platforms, but I'm not really familiar with this code). Here's the new webrev: http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.01/ Thanks, Martin -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 19. Juni 2018 09:18 To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; JC Beyler Subject: Re: RFR(XS): 8205172: 32 bit build broken Hi Martin, On 19/06/2018 4:58 PM, Doerr, Martin wrote: > Hi David, > > it's not a compiler bug. The problem is that "const intptr_t OneBit = 1;" uses intptr_t which is 32 bit on 32 bit platforms. > We can't shift it by 48. Seems like the other usages of right_n_bits only use less than 32. No we can't, but we're also not going to try as we won't take that path on a 32-bit system. So we're getting a compiler warning about code that the compiler thinks might be executed even though it actually won't. In this case though the fix is correct as we're always dealing with 64-bit and those macros depend on the pointer-size. > I just noticed that there's more to fix: > --- a/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Mon Jun 18 16:00:23 2018 +0200 > +++ b/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Tue Jun 19 08:53:26 2018 +0200 > @@ -459,7 +459,7 @@ > live_object = (ObjectTrace*) malloc(sizeof(*live_object)); > live_object->frames = allocated_frames; > live_object->frame_count = count; > - live_object->size = size; > + live_object->size = (size_t)size; It's not obvious that is the right fix as the incoming "size" is 64-bit. I think strictly speaking this: typedef struct _ObjectTrace{ jweak object; size_t size; Should be using a 64-bit type for size. David ----- > live_object->thread = thread; > live_object->object = (*jni)->NewWeakGlobalRef(jni, object); > > Best regards, > Martin > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 19. Juni 2018 07:53 > To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; Roland Westrelin (roland.westrelin at oracle.com) ; JC Beyler > Subject: Re: RFR(XS): 8205172: 32 bit build broken > > On 19/06/2018 12:10 AM, Doerr, Martin wrote: >> Hi, >> >> 32 bit build is currently broken due to: >> "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior >> "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior > > const uint64_t PrngModPower = 48; > ! const uint64_t PrngModMask = right_n_bits(PrngModPower); > > So right_n_bits(48) should expand to: > > (nth_bit(48) - 1) > > where nth_bit is: > > (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) > > so the compiler is complaining that (OneBit << 48) is undefined, but the > conditional operator will not execute that path (as BitsPerWord == 32). > That seems like a compiler bug to me. :( > > David > ----- > >> Please review this small fix: >> http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ >> >> Best regards, >> Martin >> From robbin.ehn at oracle.com Tue Jun 19 08:46:14 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 19 Jun 2018 10:46:14 +0200 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. 
In-Reply-To: <5cfeb06c-30d3-4af4-e085-4e8c60aa5d98@oracle.com> References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> <5cfeb06c-30d3-4af4-e085-4e8c60aa5d98@oracle.com> Message-ID: Hi David, On 06/19/2018 07:10 AM, David Holmes wrote: > Hi Robbin, > > Overall changes seem okay. I gave a lot of thought as to whether an "old" thread > still returning from sem_wait could potentially interfere with the next use of > the sempahore, but it seems okay. Interesting (read "scary") glibc bug! Good, yes! > > Minor comments: > > handshake.cpp: > > ?311?? if (thread->is_terminated()) { > ?312???? // If thread is not on threads list but armed, cancel. > ?313???? thread->cancel_handshake(); > ?314???? return; > ?315?? } > > did you actually encounter late handshakes in the thread lifecycle causing > problems, or is this just being cautious? Just cautious. > > 377?? if(vmthread_can_process_handshake(target)) { > > Space needed after "if" Fixed! Thanks David, Robbin > > Thanks, > David > ----- > > On 19/06/2018 12:05 AM, Robbin Ehn wrote: >> On 06/18/2018 03:07 PM, Robbin Ehn wrote: >>> Hi all, >>> >>> After some internal discussions I changed the patch to: >>> http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ >> >> Correct external url: >> http://cr.openjdk.java.net/~rehn/8204166/v2/ >> >> /Robbin >> >>> >>> Which handles thread off javathreads list better. >>> >>> Passes handshake testing and ZGC testing seems okay. >>> >>> Thanks, Robbin >>> >>> On 06/14/2018 12:11 PM, Robbin Ehn wrote: >>>> Hi all, please review. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >>>> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >>>> >>>> The root cause of this failure is a bug in the posix semaphores: >>>> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >>>> >>>> Thread a: >>>> sem_post(my_sem); >>>> >>>> Thread b: >>>> sem_wait(my_sem); >>>> sem_destroy(my_sem); >>>> >>>> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >>>> If Thread b start executing directly after the increment in post but before >>>> Thread a leaves the call to post and manage to destroy the semaphore. Thread a >>>> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >>>> >>>> Note that mutexes have had same issue on some platforms: >>>> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >>>> Fixed in 2.23. >>>> >>>> Since we only have one handshake operation running at anytime (safepoints >>>> and handshakes are also mutual exclusive, both run on VM Thread) we can >>>> actually always use the same semaphore. This patch changes the _done >>>> semaphore to be static instead, thus avoiding the post<->destroy race. >>>> >>>> Patch also contains some small changes which remove of dead code, remove >>>> unneeded state, handling of cases which we can't easily say will never >>>> happen and some additional error checks. >>>> >>>> Handshakes test passes, but they don't trigger the original issue, so more >>>> interesting is that this issue do not happen when running ZGC which utilize >>>> handshakes with the static semaphore. >>>> >>>> Thanks, Robbin From boris.ulasevich at bell-sw.com Tue Jun 19 08:53:56 2018 From: boris.ulasevich at bell-sw.com (Boris Ulasevich) Date: Tue, 19 Jun 2018 11:53:56 +0300 Subject: RFR (S) 8203479: JFR enabled ARM32 build assertion failure In-Reply-To: References: <7ffe11f8-53b7-f7e2-bb54-c24410934db4@oracle.com> <833e84ab-5b3c-b819-6aec-64d09d4ca7df@bell-sw.com> Message-ID: Thank you! 
On 19.06.2018 06:01, David Holmes wrote: > Hi Boris, > > Looks good. > > I've pushed this with the single review as it is constrained to a > non-Oracle port, and is very simple. > > Thanks, > David > > On 18/06/2018 7:18 PM, Boris Ulasevich wrote: >> Hi David, >> >> Many thanks! Here is the werbev with updated copyrights: >> http://cr.openjdk.java.net/~bulasevich/8203479/webrev.02 >> >> thanks, >> Boris >> >> On 18.06.2018 05:39, David Holmes wrote: >>> Hi Boris, >>> >>> On 15/06/2018 8:44 PM, Boris Ulasevich wrote: >>>> Hi, >>>> >>>> Please review the following patch: >>>> ?? http://cr.openjdk.java.net/~bulasevich/8203479/webrev.01 >>>> ?? https://bugs.openjdk.java.net/browse/JDK-8203479 >>>> >>>> Assertion fires in JFR codes on first VM thread setup because VM >>>> globals are not yet initialized (and supports_cx8 property is not >>>> predefined for ARM32 platform). I propose to exploit >>>> early_initialize() method to set up supports_cx8 property on early >>>> stage of VM initialization. >>> >>> Your fix looks good. Please update copyright years. >>> Just some additional commentary ... from the bug report: >>> >>> First _supports_cx8 usage: >>> ?? Threads::create_vm -> JavaThread::JavaThread(bool) --> >>> Thread::Thread() --> JfrThreadLocal::JfrThreadLocal() --> >>> atomic_inc(unsigned long long volatile*) >>> >>> I'm not sure when this was introduced but it's risky to perform >>> atomic operations on jlong early during VM initialization. >> >> For me it comes from revision when JFR was opensourced: >> >> changeset:?? 50113:caf115bb98ad >> user:??????? egahlin >> date:??????? Tue May 15 20:24:34 2018 +0200 >> summary:???? 8199712: Flight Recorder >> >>> Apart from the problem you encountered with ARM32, on some platforms >>> it may require that the stub-generator has been initialized. (Though >>> in that case we may harmlessly fallback to using non-atomic >>> operations - which is okay when creating the initial JavaThread.) >> >> Good point! Yes, we discussed as an optional solution: (1) reworking >> jfr next_thread_id() function to increase counter without atomic_inc >> function call when counter value is zero, and (2) disable assertion >> when !is_init_completed(). We decided that for our case initialization >> sequence reorder works better. >> >>> Thanks, >>> David >>> >>>> Thanks >>>> Boris From aph at redhat.com Tue Jun 19 09:46:02 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 10:46:02 +0100 Subject: TestFrames.java deoptimization Message-ID: <6a241492-65ba-6927-60dd-e3fd36cde976@redhat.com> Someone posted a question about inlining and deoptimization, but I can't find the original email. I have an answer, if you want to know. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rwestrel at redhat.com Tue Jun 19 09:53:48 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Tue, 19 Jun 2018 11:53:48 +0200 Subject: TestFrames.java deoptimization In-Reply-To: <6a241492-65ba-6927-60dd-e3fd36cde976@redhat.com> References: <6a241492-65ba-6927-60dd-e3fd36cde976@redhat.com> Message-ID: > Someone posted a question about inlining and deoptimization, but I can't > find the original email. I have an answer, if you want to know. It's: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-June/033060.html Roland. 
From david.holmes at oracle.com Tue Jun 19 10:08:18 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 Jun 2018 20:08:18 +1000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> <58f95a0e-b1fe-4f56-5da0-4bd6893df104@oracle.com> Message-ID: On 19/06/2018 6:28 PM, Doerr, Martin wrote: > Hi David, > > thanks for reviewing and for the quick response. > > I think it's good that the compiler warns about undefined subexpressions even when the result is not used. Not if you have to change the code to work around a warning. :) Static analysis tools often are not smart enough to see the complete picture. > Anyway, the fix is needed to get a correct 64 bit mask as you have already mentioned. > > I'm ok with using jlong as type for the size field. This is safe (even though I think sizes which don't fit into 32 bit should never occur on 32 bit platforms, but I'm not really familiar with this code). The event is specified to provide a size that is a jlong so one has to assume that such sizes are anticipated to occur. For the test we can obviously control that so it's not such a concern, but still best that the correct sized types flow through the code. > Here's the new webrev: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.01/ Looks good. Thanks, David > Thanks, > Martin > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 19. Juni 2018 09:18 > To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; JC Beyler > Subject: Re: RFR(XS): 8205172: 32 bit build broken > > Hi Martin, > > On 19/06/2018 4:58 PM, Doerr, Martin wrote: >> Hi David, >> >> it's not a compiler bug. The problem is that "const intptr_t OneBit = 1;" uses intptr_t which is 32 bit on 32 bit platforms. >> We can't shift it by 48. Seems like the other usages of right_n_bits only use less than 32. > > No we can't, but we're also not going to try as we won't take that path > on a 32-bit system. So we're getting a compiler warning about code that > the compiler thinks might be executed even though it actually won't. > > In this case though the fix is correct as we're always dealing with > 64-bit and those macros depend on the pointer-size. > >> I just noticed that there's more to fix: >> --- a/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Mon Jun 18 16:00:23 2018 +0200 >> +++ b/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Tue Jun 19 08:53:26 2018 +0200 >> @@ -459,7 +459,7 @@ >> live_object = (ObjectTrace*) malloc(sizeof(*live_object)); >> live_object->frames = allocated_frames; >> live_object->frame_count = count; >> - live_object->size = size; >> + live_object->size = (size_t)size; > > It's not obvious that is the right fix as the incoming "size" is 64-bit. > I think strictly speaking this: > > typedef struct _ObjectTrace{ > jweak object; > size_t size; > > Should be using a 64-bit type for size. > > David > ----- > >> live_object->thread = thread; >> live_object->object = (*jni)->NewWeakGlobalRef(jni, object); >> >> Best regards, >> Martin >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 19. 
Juni 2018 07:53 >> To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; Roland Westrelin (roland.westrelin at oracle.com) ; JC Beyler >> Subject: Re: RFR(XS): 8205172: 32 bit build broken >> >> On 19/06/2018 12:10 AM, Doerr, Martin wrote: >>> Hi, >>> >>> 32 bit build is currently broken due to: >>> "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior >>> "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior >> >> const uint64_t PrngModPower = 48; >> ! const uint64_t PrngModMask = right_n_bits(PrngModPower); >> >> So right_n_bits(48) should expand to: >> >> (nth_bit(48) - 1) >> >> where nth_bit is: >> >> (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) >> >> so the compiler is complaining that (OneBit << 48) is undefined, but the >> conditional operator will not execute that path (as BitsPerWord == 32). >> That seems like a compiler bug to me. :( >> >> David >> ----- >> >>> Please review this small fix: >>> http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ >>> >>> Best regards, >>> Martin >>> From lois.foltan at oracle.com Tue Jun 19 11:28:48 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 19 Jun 2018 07:28:48 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <4f02b89023e6459695a314486db0231e@sap.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <846b53fb74e341759fde21f2f3eb57e2@sap.com> <1e508284-71f6-a93a-b1a9-5181bc1b436c@oracle.com> <4f02b89023e6459695a314486db0231e@sap.com> Message-ID: <6cd97c73-6240-67ff-c0c4-9144e1af0cf5@oracle.com> On 6/15/2018 3:26 PM, Lindenmaier, Goetz wrote: > Hi Lois, > > > ---------------- stdout ---------------- > +-- 'bootstrap', > ????? | > ????? +-- 'platform', > jdk.internal.loader.ClassLoaders$PlatformClassLoader {0x
}
>       |     |
>       |     +-- 'app', jdk.internal.loader.ClassLoaders$AppClassLoader
> {0x
}
>       |
>       +-- jdk.internal.reflect.DelegatingClassLoader @20f4f1e0,
> jdk.internal.reflect.DelegatingClassLoader {0x
} > > What I mean is that there is twice > ?jdk.internal.reflect.DelegatingClassLoader? > > The printout of the second string could be guarded by > ClassLoaderData::loader_name_and_id_prints_classname() { return > (strchr(_name_and_id, '\'') == NULL); } > > Adding this function would fit well into your change. > > Adapting the output of the jcmd could be left to a follow up, > > I think Thomas has stronger feelings here than I do. > > Fixing the comment would be great. > Hi Goetz, Just an update that in the following updated webrev http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/, I removed the changes in classfile/classLoaderHierarchyDCmd.cpp and classfile/classLoaderStats.cpp.? I will make sure your comments above make it into the RFE for the serviceability team to look at using name_and_id for jcmd. > The rest is fine. Consider it Reviewed from my side. > Good, thanks for the review! Lois > Best regards, > > ? Goetz. > > *From:*Lois Foltan > *Sent:* Friday, June 15, 2018 7:11 PM > *To:* Lindenmaier, Goetz ; hotspot-dev > developers > *Subject:* Re: RFR (M) JDK-8202605: Standardize on > ClassLoaderData::loader_name() throughout the VM to obtain a class > loader's name > > On 6/15/2018 5:48 AM, Lindenmaier, Goetz wrote: > > Hi, > > thanks for this update and for incorporating all my comments! > > Hi Goetz, > Thanks for another round of review! > > > Looks good, just two comments: > > Is it correct to include the ' ' in BOOTSTRAP_LOADER_NAME? > > _name does not include ' ' either. > > if you do? print("'%s'", loader_name()) you will get 'app' but ''bootstrap''. > > I have removed the ' ' from BOOTSTRAP_LOADER_NAME, good catch. > > > In loader_name_and_id you can do > > ???return "'" BOOTSTRAP_LOADER_NAME "'"; > > similar in the jfr file. > > Done. > > > But I'm also fine with removing loader_name(), then you only have > > cases that need the ' ' around bootstrap :) > > I didn't see a use of loader_name() any more, and one can always > > call java_lang_ClassLoader::name() (except for during unloading.) > > I am going to leave ClassLoaderData::loader_name() as is.? It is has > the same method name and behavior that currently exists today.? I also > want to discourage future changes that directly call > java_lang_ClassLoader::name() since as you point out that is not safe > during unloading.? Also removing ClassLoaderData::loader_name() may > tempt future changes to introduce a new loader_name() method in some > data structure other than ClassLoaderData to obtain the > java_lang_ClassLoader::name().? Hopefully by leaving loader_name() in, > this will prevent ending up back where we are today with multiple ways > one can obtain the loader's name. > > > I don't mind the @id printouts in the class loader tree. But is the comment > > correct? Doesn't it print the class name twice? > > -//????? +-- jdk.internal.reflect.DelegatingClassLoader > > +//????? +-- jdk.internal.reflect.DelegatingClassLoader @ jdk.internal.reflect.DelegatingClassLoader > > Maybe you need > > ClassLoaderData::loader_name_and_id_prints_classname() { return (strchr(_name_and_id, '\'') == NULL); } > > to guard against printing this twice. > > I believe you are referring to the comment in test > serviceability/dcmd/vm/ClassLoaderHierarchyTest.java?? The actual > output looks like this: > > **Running DCMD 'VM.classloaders' through 'JMXExecutor' > ---------------- stdout ---------------- > +-- 'bootstrap', > ????? | > ????? +-- 'platform', > jdk.internal.loader.ClassLoaders$PlatformClassLoader {0x
}
      |     |
      |     +-- 'app', jdk.internal.loader.ClassLoaders$AppClassLoader
{0x
}
      |
      +-- jdk.internal.reflect.DelegatingClassLoader @20f4f1e0,
jdk.internal.reflect.DelegatingClassLoader {0x
}
      |
      +-- 'Kevin' @3330b2f5, ClassLoaderHierarchyTest$TestClassLoader
{0x
}
      |
      +-- ClassLoaderHierarchyTest$TestClassLoader @4d81f205,
ClassLoaderHierarchyTest$TestClassLoader {0x
}
            |
            +-- 'Bill' @4ea761aa,
ClassLoaderHierarchyTest$TestClassLoader {0x
} > > As Mandy pointed out in her review yesterday it really isn't necessary > to print out the address of the class loader oop anymore.? I will be > opening up a RFE for the serviceability team to address this.? And I > will update the comment in the test itself.? Is this acceptable to you? > > Thanks, > Lois > > > Best regards, > > ? Goetz. > > -----Original Message----- > > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > > Behalf Of Lois Foltan > > Sent: Donnerstag, 14. Juni 2018 21:56 > > To: hotspot-dev developers > > > Subject: Re: RFR (M) JDK-8202605: Standardize on > > ClassLoaderData::loader_name() throughout the VM to obtain a class > > loader's name > > Please review this updated webrev that address review comments received. > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ > > > Thanks, > > Lois > > On 6/13/2018 6:58 PM, Lois Foltan wrote: > > Please review this change to standardize on how to obtain a class > > loader's name within the VM.? SystemDictionary::loader_name() methods > > have been removed in favor of ClassLoaderData::loader_name(). > > Since the loader name is largely used in the VM for display purposes > > (error messages, logging, jcmd, JFR) this change also adopts a new > > format to append to a class loader's name its identityHashCode and if > > the loader has not been explicitly named it's qualified class name is > > used instead. > > 391 /** > > 392 * If the defining loader has a name explicitly set then > > 393 * '' @ > > 394 * If the defining loader has no name then > > 395 * @ > > 396 * If it's built-in loader then omit `@` as there is only one > > instance. > > 397 */ > > The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. > > open webrev at > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ > > > bug link athttps://bugs.openjdk.java.net/browse/JDK-8202605 > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > > ?????????????? hs-tier(3-5), jdk-tier(3) in progress > > Thanks, > > Lois > From lois.foltan at oracle.com Tue Jun 19 11:29:19 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 19 Jun 2018 07:29:19 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> Message-ID: On 6/16/2018 1:01 AM, Thomas St?fe wrote: > Hi Lois, > > thanks, I had a look and this looks good to me now. Thank you for your patiente. Thank you Thomas! Lois > > Best Regards, Thomas > > On Sat, Jun 16, 2018 at 1:52 AM, Lois Foltan wrote: >> Hi Thomas, >> >> I have read through all your comments below, thank you. I think the best >> compromise that hopefully will enable this change to go forward is to back >> out my changes to classfile/classLoaderHierarchyDCmd.cpp and >> classfile/classLoaderStats.cpp. This will allow the serviceability team to >> review the new format for the class loader's name_and_id and go forward if >> applicable to jcmd in a follow on RFE. 
Updated webrev at: >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ >> >> Thanks, >> Lois >> >> >> On 6/15/2018 3:43 PM, Thomas St?fe wrote: >> >>> Hi Lois, >>> >>> On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan >>> wrote: >>>> On 6/15/2018 3:06 AM, Thomas St?fe wrote: >>>> >>>> Hi Lois, >>>> >>>> Hi Thomas, >>>> Thank you for looking at this change and giving it another round of >>>> review! >>>> >>>> ---- >>>> >>>> We have now: >>>> >>>> Symbol* ClassLoaderData::name() >>>> >>>> which returns ClassLoader.name >>>> >>>> and >>>> >>>> const char* ClassLoaderData::loader_name() >>>> >>>> which returns either ClassLoader.name or, if that is null, the class >>>> name. >>>> >>>> I would like to point out that ClassLoaderData::loader_name() is pretty >>>> much >>>> unchanged as it exists today, so this behavior is not new or changed. >>>> >>> Okay. >>> >>>> 1) if we keep it that way, we should at least rename loader_name() to >>>> something like loader_name_or_class_name() to lessen the surprise. >>>> >>>> 2) But maybe these two functions should have the same behaviour? >>>> Return name or null if not set, not the class name? I see that nobody >>>> yet uses loader_name(), so you are free to define it as you see fit. >>>> >>>> 3) but if (2), maybe alternativly just get rid of loader_name() >>>> altogether, as just calling as_C_string() on a symbol is not worth a >>>> utility function? >>>> >>>> I would like to leave ClassLoaderData::loader_name() in for a couple of >>>> reasons. Leaving it in discourages new methods like it to be introduced >>>> in >>>> the future in data structures other than ClassLoaderData, calling >>>> java_lang_ClassLoader::name() directly is not safe during unloading and >>>> getting rid of it may force a call to as_C_string() as option #3 suggests >>>> but that doesn't handle the bootstrap class loader. Given this I think >>>> the >>>> best course of action would be to update ClassLoaderData.hpp with the >>>> same >>>> comments I put in place within ClassLoaderData.cpp for this method as you >>>> suggest below. >>> Okay. >>> >>>> --- >>>> >>>> For VM.systemdictionary, the texts seem to be a bit off: >>>> >>>> 29167: >>>> Dictionary for loader data: 0x00007f7550cb8660 for instance a >>>> 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} >>>> >>>> "for instance a" ? >>>> >>>> Dictionary for loader data: 0x00007f75503b3a50 for instance a >>>> 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} >>>> Dictionary for loader data: 0x00007f75503a4e30 for instance a >>>> >>>> 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} >>>> >>>> should that not be "app" or "platform", respectively? >>>> >>>> ... but I just see it was the same way before and not touched by your >>>> change. Maybe here, your new compound name would make sense? >>>> >>>> ---- >>>> >>>> If I understand correctly this output shows up when one specifies >>>> -Xlog:class+load=debug? >>> I saw it as result of jcmd VM.systemdictionary (Coleen's command, I >>> think?) but it may show up too in other places, I did not check. 
>>> >>>> I see that the "for instance " is printed by >>>> >>>> void ClassLoaderData::print_value_on(outputStream* out) const { >>>> if (!is_unloading() && class_loader() != NULL) { >>>> out->print("loader data: " INTPTR_FORMAT " for instance ", >>>> p2i(this)); >>>> class_loader()->print_value_on(out); // includes >>>> loader_name_and_id() >>>> and address of class loader instance >>>> >>>> and class_loader()->print_value_on(out); eventually calls >>>> InstanceKlass::oop_print_value_on to print the "a". >>>> >>>> void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { >>>> st->print("a "); >>>> name()->print_value_on(st); >>>> obj->print_address_on(st); >>>> if (this == SystemDictionary::String_klass() >>>> >>>> This is a good follow up RFE since one will have to look at all the calls >>>> to >>>> InstanceKlass::oop_print_value_on() to determine if the "a " is still >>>> applicable. >>>> >>> Yes, there may be a number of follow up cleanups after this patch is in. >>> >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html >>>> >>>> Good comments. >>>> >>>> suggested change to comment: >>>> >>>> 129 // Obtain the class loader's name and identity hash. If the >>>> class loader's >>>> 130 // name was not explicitly set during construction, the class >>>> loader's name and id >>>> 131 // will be set to the qualified class name of the class loader >>>> along with its >>>> 132 // identity hash. >>>> >>>> rather: >>>> >>>> 129 // Obtain the class loader's name and identity hash. If the >>>> class loader's >>>> 130 // name was not explicitly set during construction, the class >>>> loader's ** _name_and_id field ** >>>> 131 // will be set to the qualified class name of the class loader >>>> along with its >>>> 132 // identity hash. >>>> >>>> Done. >>>> >>>> >>>> ---- >>>> >>>> 133 // If for some reason the ClassLoader's constructor has not >>>> been run, instead of >>>> >>>> I am curious, how can this happen? Bad bytecode instrumentation? >>>> Should we also attempt to work in the identity hashcode in that case >>>> to be consistent with the java side? Or maybe name it something like >>>> "classname "? Or is this too exotic a case to care? >>>> >>>> Bad bytecode instrumentation, Unsafe.allocateInstance(), see test >>>> >>>> open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java >>>> for example. >>> JDK-8202758... Wow. Yikes. >>> >>>> I too was actually thinking of "classname @" so I >>>> do like that approach but it is a rare case. >>>> >>> Thanks for taking that suggestion. >>> >>>> ---- >>>> >>>> In various places I see you using: >>>> >>>> 937 if (_class_loader_klass == NULL) { // bootstrap case >>>> >>>> just to make sure, this is the same as >>>> CLD::is_the_null_class_loader_data(), yes? So, one could use one and >>>> assert the other? >>>> >>>> Yes. Actually Coleen & I were discussing that maybe we could remove >>>> ClassLoaderData::_class_loader_klass since its original purpose was to >>>> allow >>>> for ultimately a way to obtain the class loader's klass external_name. >>>> Will >>>> look into creating a follow on RFE if _class_loader_klass is no longer >>>> needed. >>>> >>> I use it in VM.classloaders and VM.metaspace, to print out the loader >>> class name and in VM.classloaders verbose mode I print out the Klass* >>> pointer too. We found it useful in some debugging scenarios. >>> >>> Btw, for the same reason I print out the "{loader oop}" in >>> VM.classloaders - debugging help. 
This was also a wish of Kirk >>> Pepperdine when we introduced VM.classloaders, see discussion: >>> >>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html >>> . >>> >>> There are discussions and experiments currently done to execute >>> multiple jcmd subcommands at one safe point. In this context, printing >>> oops is more interesting in diagnostic commands, since you can chain >>> multiple commands together and get consistent oop values. See >>> discussions here: >>> >>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html >>> (Currently, Frederic Parain from Oracle took this over and provided a >>> prototype patch). >>> >>> But all in all, if it makes matters easier, I think yes we should >>> remove _class_loader_klass from CLD. >>> >>>> ---- >>>> >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html >>>> >>>> Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - >>>> could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). >>>> >>>> Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - >>>> this is something I would rather see in the printing code. >>>> >>>> I agree. I removed the single quotes but I would like to leave in >>>> BOOTSTAP_LOADER_NAME_LEN. >>>> >>> Okay. We should make sure they stay consistent, but that is no terrible >>> burden. >>> >>>> + // Obtain the class loader's _name, works during unloading. >>>> + const char* loader_name() const; >>>> + Symbol* name() const { return _name; } >>>> >>>> See above my comments to loader_name(). At the very least comment >>>> should be updated describing that this function returns name or class >>>> name or "bootstrap". >>>> >>>> Comment in ClassLoaderData.hpp will be updated as you suggest. >>>> >>> Thank you. >>> >>>> ---- >>>> >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html >>>> >>>> Hm, unfortunately, this does not look so good. I would prefer to keep >>>> the old version, see here my proposal, updated to use your new >>>> CLD::name() function and to remove the offending "<>" around >>>> "bootstrap". >>>> >>>> @@ -157,13 +157,18 @@ >>>> >>>> // Retrieve information. >>>> const Klass* const loader_klass = _cld->class_loader_klass(); >>>> + const Symbol* const loader_name = _cld->name(); >>>> >>>> branchtracker.print(st); >>>> >>>> // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" >>>> st->print("+%.*s", BranchTracker::twig_len, "----------"); >>>> - st->print(" %s,", _cld->loader_name_and_id()); >>>> - if (!_cld->is_the_null_class_loader_data()) { >>>> + if (_cld->is_the_null_class_loader_data()) { >>>> + st->print(" bootstrap"); >>>> + } else { >>>> + if (loader_name != NULL) { >>>> + st->print(" \"%s\",", loader_name->as_C_string()); >>>> + } >>>> st->print(" %s", loader_klass != NULL ? >>>> loader_klass->external_name() : "??"); >>>> st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); >>>> } >>>> >>>> This also depends on what you decide happens with CLD::loader_name(). >>>> If that one were to return "loader name or null if not set, as >>>> ra-allocated const char*", it could be used here. >>>> >>>> I like this change and I like how the output looks. Can you take another >>>> look at the next webrev's updated comments in test >>>> serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? >>> Sure. It is not yet posted, yes? 
>>> >>> May take till monday though, I am gone over the weekend. >>> >>>> I plan to open an RFE >>>> to have the serviceability team consider removing the address of the >>>> loader >>>> oop now that the included identity hash provides unique identification. >>>> >>> See my remarks above - that command including oop was added by me, and >>> if possible I'd like to keep the oop for debugging purposes. However, >>> I could move the output to the "verbose" section (if you run >>> VM.classloaders verbose, there are additional things printed below the >>> class loader name). >>> >>> Note however, that printing "{}" was consistent with pre-existing >>> commands from Oracle, in this case VM.systemdictionary. >>> >>>> ---- >>>> >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html >>>> >>>> In VM.classloader_stats we see the effect of the new naming: >>>> >>>> x000000080000a0b8 0x00000008000623f0 0x00007f5facafe540 1 >>>> 6144 4064 jdk.internal.reflect.DelegatingClassLoader @7b5a12ae >>>> 0x000000080000a0b8 0x00000008000623f0 0x00007f5facbcdd50 1 >>>> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5b529706 >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facbcca00 10 >>>> 90112 51760 'MyInMemoryClassLoader' @17cdf2d0 >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facbca560 1 >>>> 6144 4184 'MyInMemoryClassLoader' @1477089c >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba7890 1 >>>> 6144 4184 'MyInMemoryClassLoader' @a87f8ec >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba5390 1 >>>> 6144 4184 'MyInMemoryClassLoader' @5a3bc7ed >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facba3bf0 1 >>>> 6144 4184 'MyInMemoryClassLoader' @48c76607 >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb23f80 1 >>>> 6144 4184 'MyInMemoryClassLoader' @1224144a >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb228f0 1 >>>> 6144 4184 'MyInMemoryClassLoader' @75437611 >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb65c60 1 >>>> 6144 4184 'MyInMemoryClassLoader' @25084a1e >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb6a030 1 >>>> 6144 4184 'MyInMemoryClassLoader' @2d2ffcb7 >>>> 0x00000008000623f0 0x0000000000000000 0x00007f5facb4bfe0 1 >>>> 6144 4184 'MyInMemoryClassLoader' @42a48628 >>>> 0x0000000800010340 0x00000008000107a8 0x00007f5fac3bd670 1064 >>>> 7004160 6979376 'app' >>>> 96 >>>> 311296 202600 + unsafe anonymous classes >>>> 0x0000000000000000 0x0000000000000000 0x00007f5fac1da1e0 1091 >>>> 8380416 8301048 'bootstrap' >>>> 92 >>>> 263168 169808 + unsafe anonymous classes >>>> 0x000000080000a0b8 0x000000080000a0b8 0x00007f5faca63460 1 >>>> 6144 3960 jdk.internal.reflect.DelegatingClassLoader @5bd03f44 >>>> >>>> >>>>> Since we hide now the class name of the loader, if everyone names >>>>> their class loader the same - e.g. "Test" or "MyInMemoryClassLoader" - >>>>> we loose information. >>>> We loose the name of class loader's class' fully qualified name only in >>>> the >>>> situation where the class loader's name has been explicitly specified by >>>> the >>>> user during construction. I would think in that case one would want to >>>> see >>>> the explicitly given name of the class loader. We also gain in either >>>> situation (unnamed or named class loader), the class loader's identity >>>> hash >>>> which allows for uniquely identifying a class loader in question. 
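For illustration, a minimal C++ sketch of the naming scheme under discussion -- '<name>' @<id> for explicitly named loaders, <qualified-class-name> @<id> for unnamed ones, and the bare built-in names -- follows. This is not the actual ClassLoaderData code; the function and parameter names are hypothetical.

  #include <cstdio>
  #include <string>

  // Sketch: compose a display name for a class loader following the scheme above.
  std::string loader_display_name(const char* explicit_name,  // ClassLoader.name, may be null
                                  const char* class_name,     // loader's qualified class name
                                  unsigned int id,            // identityHashCode
                                  bool is_builtin) {          // 'bootstrap', 'platform', 'app'
    char buf[256];
    if (is_builtin) {
      // Built-in loaders have a single instance, so the @<id> suffix is omitted.
      std::snprintf(buf, sizeof(buf), "'%s'", explicit_name);
    } else if (explicit_name != nullptr) {
      std::snprintf(buf, sizeof(buf), "'%s' @%x", explicit_name, id);
    } else {
      std::snprintf(buf, sizeof(buf), "%s @%x", class_name, id);
    }
    return std::string(buf);
  }

  // e.g. loader_display_name("Kevin", "ClassLoaderHierarchyTest$TestClassLoader", 0x3330b2f5, false)
  //      would yield "'Kevin' @3330b2f5", matching the jcmd output shown earlier in the thread.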
>>> For the record, I would prefer a naming scheme which printed >>> unconditionally both name and class name, if both are set: >>> >>> '"name", instance of , @id' >>> >>> or >>> >>> 'instance of , @id' >>> >>> or maybe some more condensed, technical form, as a clear triple: >>> >>> '[name, , @id]' or '{name, , @id}' >>> >>> The reason why I keep harping on this is that it is useful to have >>> consistent output, meaning, output that does not change its format on >>> a line-by-line base. >>> >>> Just a tiny example why this is useful, lets say I run a Spring MVC >>> app and want to know the number of Spring loaders, I do a: >>> >>> ./images/jdk/bin/jcmd hello VM.classloaders | grep org.springframework | >>> wc -l >>> >>> Won't work consistently anymore if class names disappear for loader >>> names which have names. >>> >>> Of course, there are myriad other ways to get the same information, so >>> this is just an illustration. >>> >>> -- >>> >>> But I guess I won't convince you that this is better, and it seems you >>> spent a lot of thoughts and discussions on this point already. I think >>> this is a case of one-size-fits-not-all. And also a matter of taste. >>> >>> If emphasis is on brevity, your naming scheme is better. If >>> ease-of-parsing and ease-of-reading are important, I think my scheme >>> wins. >>> >>> But as long as we have alternatives - e.g. CLD::name() and >>> CLD::class_loader_class() - and as long as VM.classloaders and >>> VM.metaspace commands stay useful, I am content and can live with your >>> scheme. >>> >>>>> I'm afraid this will be an issue if people will >>>>> start naming their class loaders more and more. It is not unimaginable >>>>> that completely different frameworks name their loaders the same. >>>> Point taken, however, doesn't including the identity hash allow for >>>> unique >>>> identification of the class loader? >>> I think the point of diagnostic commands is to get information quick. >>> An identity hash may help me after I managed to finally resolve it, >>> but it is not a quick process (that I know of). Whereas, for example, >>> just reading "com.wily.introscope.Loader" tells me immediately that >>> the VM I am looking at has Wily byte code modifications enabled. >>> >>>>> This "name or if not then class name" scheme will also complicate >>>>> parsing a lot for people who parse the output of these commands. I >>>>> would strongly prefer to see both - name and class type. >>>> Much like classfile/classLoaderHierarchyDCmd.cpp now generates, correct? >>>> >>> Yes! :) >>> >>>> Thanks, >>>> Lois >>>> >>>> >>> I just saw your webrev popping in, but again it is late. I'll take a >>> look tomorrow morning or monday. Thank you for your work. >>> >>> ..Thomas >>> >>>> ---- >>>> >>>> Hmm. At this point I noticed that I still had general reservations >>>> about the new compound naming scheme - see my remarks above. So I >>>> guess I stop here to wait for your response before continuing the code >>>> review. >>>> >>>> Thanks & Kind Regards, >>>> >>>> Thomas >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan >>>> wrote: >>>> >>>> Please review this updated webrev that address review comments received. >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >>>> >>>> Thanks, >>>> Lois >>>> >>>> >>>> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>>> >>>> Please review this change to standardize on how to obtain a class >>>> loader's >>>> name within the VM. 
SystemDictionary::loader_name() methods have been >>>> removed in favor of ClassLoaderData::loader_name(). >>>> >>>> Since the loader name is largely used in the VM for display purposes >>>> (error messages, logging, jcmd, JFR) this change also adopts a new format >>>> to >>>> append to a class loader's name its identityHashCode and if the loader >>>> has >>>> not been explicitly named it's qualified class name is used instead. >>>> >>>> 391 /** >>>> 392 * If the defining loader has a name explicitly set then >>>> 393 * '' @ >>>> 394 * If the defining loader has no name then >>>> 395 * @ >>>> 396 * If it's built-in loader then omit `@` as there is only one >>>> instance. >>>> 397 */ >>>> >>>> The names for the builtin loaders are 'bootstrap', 'app' and 'platform'. >>>> >>>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>>> >>>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>>> hs-tier(3-5), jdk-tier(3) in progress >>>> >>>> Thanks, >>>> Lois >>>> >>>> From lois.foltan at oracle.com Tue Jun 19 11:29:49 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 19 Jun 2018 07:29:49 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <775e7d27-b835-4cb2-7a78-b1530f26a3e8@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> <775e7d27-b835-4cb2-7a78-b1530f26a3e8@oracle.com> Message-ID: <488ae45d-9b3e-da1b-6111-16dab16ca968@oracle.com> On 6/18/2018 3:42 PM, coleen.phillimore at oracle.com wrote: > > This version is good as well.? Thank you for working through all the > concerns. > Coleen Thank you Coleen! Lois > > On 6/15/18 7:52 PM, Lois Foltan wrote: >> Hi Thomas, >> >> I have read through all your comments below, thank you.? I think the >> best compromise that hopefully will enable this change to go forward >> is to back out my changes to classfile/classLoaderHierarchyDCmd.cpp >> and classfile/classLoaderStats.cpp.? This will allow the >> serviceability team to review the new format for the class loader's >> name_and_id and go forward if applicable to jcmd in a follow on RFE.? >> Updated webrev at: >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ >> >> Thanks, >> Lois >> >> On 6/15/2018 3:43 PM, Thomas St?fe wrote: >> >>> Hi Lois, >>> >>> On Fri, Jun 15, 2018 at 8:26 PM, Lois Foltan >>> wrote: >>>> On 6/15/2018 3:06 AM, Thomas St?fe wrote: >>>> >>>> Hi Lois, >>>> >>>> Hi Thomas, >>>> Thank you for looking at this change and giving it another round of >>>> review! >>>> >>>> ---- >>>> >>>> We have now: >>>> >>>> Symbol* ClassLoaderData::name() >>>> >>>> ???? which returns ClassLoader.name >>>> >>>> and >>>> >>>> const char* ClassLoaderData::loader_name() >>>> >>>> ???? which returns either ClassLoader.name or, if that is null, the >>>> class >>>> name. >>>> >>>> I would like to point out that ClassLoaderData::loader_name() is >>>> pretty much >>>> unchanged as it exists today, so this behavior is not new or changed. >>>> >>> Okay. >>> >>>> 1) if we keep it that way, we should at least rename loader_name() to >>>> something like loader_name_or_class_name() to lessen the surprise. >>>> >>>> 2) But maybe these two functions should have the same behaviour? >>>> Return name or null if not set, not the class name? 
I see that nobody >>>> yet uses loader_name(), so you are free to define it as you see fit. >>>> >>>> 3) but if (2), maybe alternativly just get rid of loader_name() >>>> altogether, as just calling as_C_string() on a symbol is not worth a >>>> utility function? >>>> >>>> I would like to leave ClassLoaderData::loader_name() in for a >>>> couple of >>>> reasons.? Leaving it in discourages new methods like it to be >>>> introduced in >>>> the future in data structures other than ClassLoaderData, calling >>>> java_lang_ClassLoader::name() directly is not safe during unloading >>>> and >>>> getting rid of it may force a call to as_C_string() as option #3 >>>> suggests >>>> but that doesn't handle the bootstrap class loader.? Given this I >>>> think the >>>> best course of action would be to update ClassLoaderData.hpp with >>>> the same >>>> comments I put in place within ClassLoaderData.cpp for this method >>>> as you >>>> suggest below. >>> Okay. >>> >>>> >>>> --- >>>> >>>> For VM.systemdictionary, the texts seem to be a bit off: >>>> >>>> 29167: >>>> Dictionary for loader data: 0x00007f7550cb8660 for instance a >>>> 'jdk/internal/reflect/DelegatingClassLoader'{0x0000000706c00000} >>>> >>>> "for instance a" ? >>>> >>>> Dictionary for loader data: 0x00007f75503b3a50 for instance a >>>> 'jdk/internal/loader/ClassLoaders$AppClassLoader'{0x000000070647b098} >>>> Dictionary for loader data: 0x00007f75503a4e30 for instance a >>>> 'jdk/internal/loader/ClassLoaders$PlatformClassLoader'{0x0000000706479088} >>>> >>>> >>>> should that not be "app" or "platform", respectively? >>>> >>>> ... but I just see it was the same way before and not touched by your >>>> change. Maybe here, your new compound name would make sense? >>>> >>>> ---- >>>> >>>> If I understand correctly this output shows up when one specifies >>>> -Xlog:class+load=debug? >>> I saw it as result of jcmd VM.systemdictionary (Coleen's command, I >>> think?) but it may show up too in other places, I did not check. >>> >>>> ? I see that the "for instance " is printed by >>>> >>>> void ClassLoaderData::print_value_on(outputStream* out) const { >>>> ?? if (!is_unloading() && class_loader() != NULL) { >>>> ???? out->print("loader data: " INTPTR_FORMAT " for instance ", >>>> p2i(this)); >>>> ???? class_loader()->print_value_on(out);? // includes >>>> loader_name_and_id() >>>> and address of class loader instance >>>> >>>> and class_loader()->print_value_on(out); eventually calls >>>> InstanceKlass::oop_print_value_on to print the "a". >>>> >>>> void InstanceKlass::oop_print_value_on(oop obj, outputStream* st) { >>>> ?? st->print("a "); >>>> ?? name()->print_value_on(st); >>>> ?? obj->print_address_on(st); >>>> ?? if (this == SystemDictionary::String_klass() >>>> >>>> This is a good follow up RFE since one will have to look at all the >>>> calls to >>>> InstanceKlass::oop_print_value_on() to determine if the "a " is still >>>> applicable. >>>> >>> Yes, there may be a number of follow up cleanups after this patch is >>> in. >>> >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.cpp.sdiff.html >>>> >>>> >>>> Good comments. >>>> >>>> suggested change to comment: >>>> >>>> ? 129?? // Obtain the class loader's name and identity hash. If the >>>> class loader's >>>> ? 130?? // name was not explicitly set during construction, the class >>>> loader's name and id >>>> ? 131?? // will be set to the qualified class name of the class loader >>>> along with its >>>> ? 132?? // identity hash. 
>>>> >>>> rather: >>>> >>>> ? 129?? // Obtain the class loader's name and identity hash. If the >>>> class loader's >>>> ? 130?? // name was not explicitly set during construction, the class >>>> loader's ** _name_and_id field ** >>>> ? 131?? // will be set to the qualified class name of the class loader >>>> along with its >>>> ? 132?? // identity hash. >>>> >>>> Done. >>>> >>>> >>>> ---- >>>> >>>> ? 133?? // If for some reason the ClassLoader's constructor has not >>>> been run, instead of >>>> >>>> I am curious, how can this happen? Bad bytecode instrumentation? >>>> Should we also attempt to work in the identity hashcode in that case >>>> to be consistent with the java side? Or maybe name it something like >>>> "classname "? Or is this too exotic a case to care? >>>> >>>> Bad bytecode instrumentation, Unsafe.allocateInstance(), see test >>>> open/test/hotspot/jtreg/runtime/modules/ClassLoaderNoUnnamedModuleTest.java >>>> >>>> for example. >>> JDK-8202758... Wow. Yikes. >>> >>>> ? I too was actually thinking of "classname @" so I >>>> do like that approach but it is a rare case. >>>> >>> Thanks for taking that suggestion. >>> >>>> ---- >>>> >>>> In various places I see you using: >>>> >>>> 937??? if (_class_loader_klass == NULL) { // bootstrap case >>>> >>>> just to make sure, this is the same as >>>> CLD::is_the_null_class_loader_data(), yes? So, one could use one and >>>> assert the other? >>>> >>>> Yes.? Actually Coleen & I were discussing that maybe we could remove >>>> ClassLoaderData::_class_loader_klass since its original purpose was >>>> to allow >>>> for ultimately a way to obtain the class loader's klass >>>> external_name.? Will >>>> look into creating a follow on RFE if _class_loader_klass is no longer >>>> needed. >>>> >>> I use it in VM.classloaders and VM.metaspace, to print out the loader >>> class name and in VM.classloaders verbose mode I print out the Klass* >>> pointer too. We found it useful in some debugging scenarios. >>> >>> Btw, for the same reason I print out the "{loader oop}" in >>> VM.classloaders - debugging help. This was also a wish of Kirk >>> Pepperdine when we introduced VM.classloaders, see discussion: >>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023770.html >>> >>> . >>> >>> There are discussions and experiments currently done to execute >>> multiple jcmd subcommands at one safe point. In this context, printing >>> oops is more interesting in diagnostic commands, since you can chain >>> multiple commands together and get consistent oop values. See >>> discussions here: >>> http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-May/023673.html >>> >>> (Currently, Frederic Parain from Oracle took this over and provided a >>> prototype patch). >>> >>> But all in all, if it makes matters easier, I think yes we should >>> remove _class_loader_klass from CLD. >>> >>>> ---- >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderData.hpp.sdiff.html >>>> >>>> >>>> Not sure about BOOTSTRAP_LOADER_NAME_LEN, since its sole user - jfr - >>>> could probably just do a ::strlen(BOOTSTRAP_LOADER_NAME). >>>> >>>> Not sure either about BOOTSTRAP_LOADER_NAME having quotes baked in - >>>> this is something I would rather see in the printing code. >>>> >>>> I agree.? I removed the single quotes but I would like to leave in >>>> BOOTSTAP_LOADER_NAME_LEN. >>>> >>> Okay. We should make sure they stay consistent, but that is no >>> terrible burden. >>> >>>> +? 
// Obtain the class loader's _name, works during unloading. >>>> +? const char* loader_name() const; >>>> +? Symbol* name() const { return _name; } >>>> >>>> See above my comments to loader_name(). At the very least comment >>>> should be updated describing that this function returns name or class >>>> name or "bootstrap". >>>> >>>> Comment in ClassLoaderData.hpp will be updated as you suggest. >>>> >>> Thank you. >>> >>>> >>>> ---- >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderHierarchyDCmd.cpp.udiff.html >>>> >>>> >>>> Hm, unfortunately, this does not look so good. I would prefer to keep >>>> the old version, see here my proposal, updated to use your new >>>> CLD::name() function and to remove the offending "<>" around >>>> "bootstrap". >>>> >>>> @@ -157,13 +157,18 @@ >>>> >>>> ????? // Retrieve information. >>>> ????? const Klass* const loader_klass = _cld->class_loader_klass(); >>>> +??? const Symbol* const loader_name = _cld->name(); >>>> >>>> ????? branchtracker.print(st); >>>> >>>> ????? // e.g. "+--- jdk.internal.reflect.DelegatingClassLoader" >>>> ????? st->print("+%.*s", BranchTracker::twig_len, "----------"); >>>> -??? st->print(" %s,", _cld->loader_name_and_id()); >>>> -??? if (!_cld->is_the_null_class_loader_data()) { >>>> +??? if (_cld->is_the_null_class_loader_data()) { >>>> +????? st->print(" bootstrap"); >>>> +??? } else { >>>> +????? if (loader_name != NULL) { >>>> +??????? st->print(" \"%s\",", loader_name->as_C_string()); >>>> +????? } >>>> ??????? st->print(" %s", loader_klass != NULL ? >>>> loader_klass->external_name() : "??"); >>>> ??????? st->print(" {" PTR_FORMAT "}", p2i(_loader_oop)); >>>> ????? } >>>> >>>> This also depends on what you decide happens with CLD::loader_name(). >>>> If that one were to return "loader name or null if not set, as >>>> ra-allocated const char*", it could be used here. >>>> >>>> I like this change and I like how the output looks.? Can you take >>>> another >>>> look at the next webrev's updated comments in test >>>> serviceability/dcmd/vm/ClassLoaderHierarchyTest.java? >>> Sure. It is not yet posted, yes? >>> >>> May take till monday though, I am gone over the weekend. >>> >>>> ? I plan to open an RFE >>>> to have the serviceability team consider removing the address of >>>> the loader >>>> oop now that the included identity hash provides unique >>>> identification. >>>> >>> See my remarks above - that command including oop was added by me, and >>> if possible I'd like to keep the oop for debugging purposes. However, >>> I could move the output to the "verbose" section (if you run >>> VM.classloaders verbose, there are additional things printed below the >>> class loader name). >>> >>> Note however, that printing "{}" was consistent with pre-existing >>> commands from Oracle, in this case VM.systemdictionary. >>> >>>> >>>> ---- >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/src/hotspot/share/classfile/classLoaderStats.cpp.udiff.html >>>> >>>> >>>> In VM.classloader_stats we see the effect of the new naming: >>>> >>>> x000000080000a0b8? 0x00000008000623f0 0x00007f5facafe540?????? 1 >>>> 6144????? 4064? jdk.internal.reflect.DelegatingClassLoader @7b5a12ae >>>> 0x000000080000a0b8? 0x00000008000623f0 0x00007f5facbcdd50?????? 1 >>>> ? 6144????? 3960? jdk.internal.reflect.DelegatingClassLoader @5b529706 >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facbcca00????? 10 >>>> 90112???? 51760? 'MyInMemoryClassLoader' @17cdf2d0 >>>> 0x00000008000623f0? 
0x0000000000000000 0x00007f5facbca560?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @1477089c >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba7890?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @a87f8ec >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba5390?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @5a3bc7ed >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facba3bf0?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @48c76607 >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb23f80?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @1224144a >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb228f0?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @75437611 >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb65c60?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @25084a1e >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb6a030?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @2d2ffcb7 >>>> 0x00000008000623f0? 0x0000000000000000 0x00007f5facb4bfe0?????? 1 >>>> ? 6144????? 4184? 'MyInMemoryClassLoader' @42a48628 >>>> 0x0000000800010340? 0x00000008000107a8? 0x00007f5fac3bd670 1064 >>>> 7004160?? 6979376? 'app' >>>> ???????????????????????????????????????????????????????????????? 96 >>>> 311296??? 202600?? + unsafe anonymous classes >>>> 0x0000000000000000? 0x0000000000000000? 0x00007f5fac1da1e0 1091 >>>> 8380416?? 8301048? 'bootstrap' >>>> ???????????????????????????????????????????????????????????????? 92 >>>> 263168??? 169808?? + unsafe anonymous classes >>>> 0x000000080000a0b8? 0x000000080000a0b8 0x00007f5faca63460?????? 1 >>>> ? 6144????? 3960? jdk.internal.reflect.DelegatingClassLoader @5bd03f44 >>>> >>>> >>>>> Since we hide now the class name of the loader, if everyone names >>>>> their class loader the same - e.g. "Test" or >>>>> "MyInMemoryClassLoader" - >>>>> we loose information. >>>> We loose the name of class loader's class' fully qualified name >>>> only in the >>>> situation where the class loader's name has been explicitly >>>> specified by the >>>> user during construction.? I would think in that case one would >>>> want to see >>>> the explicitly given name of the class loader.? We also gain in either >>>> situation (unnamed or named class loader), the class loader's >>>> identity hash >>>> which allows for uniquely identifying a class loader in question. >>> For the record, I would prefer a naming scheme which printed >>> unconditionally both name and class name, if both are set: >>> >>> '"name", instance of , @id' >>> >>> or >>> >>> 'instance of , @id' >>> >>> or maybe some more condensed, technical form, as a clear triple: >>> >>> '[name, , @id]' or '{name, , @id}' >>> >>> The reason why I keep harping on this is that it is useful to have >>> consistent output, meaning, output that does not change its format on >>> a line-by-line base. >>> >>> Just a tiny example why this is useful, lets say I run a Spring MVC >>> app and want to know the number of Spring loaders, I do a: >>> >>> ./images/jdk/bin/jcmd hello VM.classloaders | grep >>> org.springframework | wc -l >>> >>> Won't work consistently anymore if class names disappear for loader >>> names which have names. >>> >>> Of course, there are myriad other ways to get the same information, so >>> this is just an illustration. >>> >>> -- >>> >>> But I guess I won't convince you that this is better, and it seems you >>> spent a lot of thoughts and discussions on this point already. 
I think >>> this is a case of one-size-fits-not-all. And also a matter of taste. >>> >>> If emphasis is on brevity, your naming scheme is better. If >>> ease-of-parsing and ease-of-reading are important, I think my scheme >>> wins. >>> >>> But as long as we have alternatives - e.g. CLD::name() and >>> CLD::class_loader_class() - and as long as VM.classloaders and >>> VM.metaspace commands stay useful, I am content and can live with your >>> scheme. >>> >>>>> I'm afraid this will be an issue if people will >>>>> start naming their class loaders more and more. It is not >>>>> unimaginable >>>>> that completely different frameworks name their loaders the same. >>>> Point taken, however, doesn't including the identity hash allow for >>>> unique >>>> identification of the class loader? >>> I think the point of diagnostic commands is to get information quick. >>> An identity hash may help me after I managed to finally resolve it, >>> but it is not a quick process (that I know of). Whereas, for example, >>> just reading "com.wily.introscope.Loader" tells me immediately that >>> the VM I am looking at has Wily byte code modifications enabled. >>> >>>> >>>>> This "name or if not then class name" scheme will also complicate >>>>> parsing a lot for people who parse the output of these commands. I >>>>> would strongly prefer to see both - name and class type. >>>> Much like classfile/classLoaderHierarchyDCmd.cpp now generates, >>>> correct? >>>> >>> Yes! :) >>> >>>> Thanks, >>>> Lois >>>> >>>> >>> I just saw your webrev popping in, but again it is late. I'll take a >>> look tomorrow morning or monday. Thank you for your work. >>> >>> ..Thomas >>> >>>> ---- >>>> >>>> Hmm. At this point I noticed that I still had general reservations >>>> about the new compound naming scheme - see my remarks above. So I >>>> guess I stop here to wait for your response before continuing the code >>>> review. >>>> >>>> Thanks & Kind Regards, >>>> >>>> Thomas >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Thu, Jun 14, 2018 at 9:56 PM, Lois Foltan >>>> wrote: >>>> >>>> Please review this updated webrev that address review comments >>>> received. >>>> >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.1/webrev/ >>>> >>>> Thanks, >>>> Lois >>>> >>>> >>>> On 6/13/2018 6:58 PM, Lois Foltan wrote: >>>> >>>> Please review this change to standardize on how to obtain a class >>>> loader's >>>> name within the VM.? SystemDictionary::loader_name() methods have been >>>> removed in favor of ClassLoaderData::loader_name(). >>>> >>>> Since the loader name is largely used in the VM for display purposes >>>> (error messages, logging, jcmd, JFR) this change also adopts a new >>>> format to >>>> append to a class loader's name its identityHashCode and if the >>>> loader has >>>> not been explicitly named it's qualified class name is used instead. >>>> >>>> 391 /** >>>> 392 * If the defining loader has a name explicitly set then >>>> 393 * '' @ >>>> 394 * If the defining loader has no name then >>>> 395 * @ >>>> 396 * If it's built-in loader then omit `@` as there is only one >>>> instance. >>>> 397 */ >>>> >>>> The names for the builtin loaders are 'bootstrap', 'app' and >>>> 'platform'. >>>> >>>> open webrev at >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605/webrev/ >>>> bug link at https://bugs.openjdk.java.net/browse/JDK-8202605 >>>> >>>> Testing: hs-tier(1-2), jdk-tier(1-2) complete >>>> ??????????????? 
hs-tier(3-5), jdk-tier(3) in progress >>>> >>>> Thanks, >>>> Lois >>>> >>>> >> > From lois.foltan at oracle.com Tue Jun 19 11:30:10 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 19 Jun 2018 07:30:10 -0400 Subject: RFR (M) JDK-8202605: Standardize on ClassLoaderData::loader_name() throughout the VM to obtain a class loader's name In-Reply-To: <87cfb0fc-0ab4-3f31-2f8b-847d5e9d7a3f@oracle.com> References: <53db9688-a7ba-241b-9712-e79352bccca6@oracle.com> <7c72947d-56d8-0288-eb72-20d702c8f2a8@oracle.com> <5207064d-787a-680f-cc23-20e4ece999b3@oracle.com> <38efa995-8541-73c2-fc2f-f22671c12560@oracle.com> <87cfb0fc-0ab4-3f31-2f8b-847d5e9d7a3f@oracle.com> Message-ID: On 6/18/2018 3:58 PM, mandy chung wrote: > > > On 6/15/18 4:52 PM, Lois Foltan wrote: >> Hi Thomas, >> >> I have read through all your comments below, thank you.? I think the >> best compromise that hopefully will enable this change to go forward >> is to back out my changes to classfile/classLoaderHierarchyDCmd.cpp >> and classfile/classLoaderStats.cpp.? This will allow the >> serviceability team to review the new format for the class loader's >> name_and_id and go forward if applicable to jcmd in a follow on RFE.? >> Updated webrev at: >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8202605.3/webrev/ > > Looks good to me. Thanks Mandy! Lois > > Mandy From david.griffiths at gmail.com Tue Jun 19 11:49:48 2018 From: david.griffiths at gmail.com (David Griffiths) Date: Tue, 19 Jun 2018 12:49:48 +0100 Subject: TestFrames.java deoptimization Message-ID: On 19 June 2018 at 12:28, wrote: > Someone posted a question about inlining and deoptimization, but I can't > find the original email. I have an answer, if you want to know. Hi Andrew, that was me. Would be interested to know your thoughts. I was able to maintain the top speed by excluding add2 from compilation: -XX:+UnlockDiagnosticVMOptions '-XX:CompileCommand=exclude,TestFrames.add2' Cheers, David From volker.simonis at gmail.com Tue Jun 19 12:21:54 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 19 Jun 2018 14:21:54 +0200 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: <8ada6cad-09f4-cde2-69c8-561a142bc3bb@oracle.com> References: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> <8ada6cad-09f4-cde2-69c8-561a142bc3bb@oracle.com> Message-ID: On Tue, Jun 19, 2018 at 9:25 AM, David Holmes wrote: > Hi Volker, > > On 19/06/2018 4:50 PM, Volker Simonis wrote: >> >> On Tue, Jun 19, 2018 at 6:54 AM, David Holmes >> wrote: >>> >>> Hi Volker, >>> >>> v3 looks much cleaner - thanks. >>> >>> But AFAICS the change to jvmtiEnv.cpp is also not needed. >>> ClassLoaderExt::append_boot_classpath exists regardless of INCLUDE_CDS >>> but >>> operates differently (just calling >>> ClassLoader::add_to_boot_append_entries). >>> >> >> That's not entirely true because the whole compilation unit (i.e. >> classLoaderExt.cpp) which contains >> 'ClassLoaderExt::append_boot_classpath()' is excluded from the >> compilation if CDS is disabled (see make/hotspot/lib/JvmFeatures.gmk). > > > Hmmm. There's a CDS bug there. Either classLoaderExt.cpp should not be > excluded from a non-CDS build, or it should not contain any INCLUDE_CDS > guards! I suspect it should not be excluded. 
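For illustration, a generic sketch of the conditional-compilation shape being discussed -- a caller in an always-compiled file fenced off with INCLUDE_CDS, since classLoaderExt.cpp itself is dropped from non-CDS builds. This is not the actual webrev; the argument type is a stand-in and the real calls are shown only as comments.

  // INCLUDE_CDS is a build-time feature macro (0 or 1) provided by the hotspot build.
  void append_boot_path_sketch(void* entry /* stand-in for the real argument type */) {
  #if INCLUDE_CDS
    // CDS build: the CDS-aware extension is compiled in.
    // ClassLoaderExt::append_boot_classpath(entry);
  #else
    // Non-CDS build: classLoaderExt.cpp is not compiled, so nothing CDS-specific to do here;
    // the alternative discussed is an inline fallback in classLoaderExt.hpp that simply
    // calls ClassLoader::add_to_boot_append_entries().
  #endif
    (void)entry;
  }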
> >> So I can either move the whole implementation of >> 'ClassLoaderExt::append_boot_classpath()' into classLoaderExt.hpp in >> which case things would work as you explained and my changes to >> jvmtiEnv.cpp could be removed or leave the whole code and change as >> is. Please let me know what you think? > > > In the interest of moving forward you can push what you have and I will file > a bug against CDS to sort out classLoaderExt.cpp. > Thanks! As the current version also passed the submit-repo tests I've pushed it. Regarding classLoaderExt.cpp, I think it is OK to exclude it for non-CDS builds. If my IDE doesn't cheat on me (see [1]), ClassLoaderExt is mostly used from other CDS-only files (classListParser.cpp, systemDictionaryShared.cpp, filemap.cpp, metaspaceShared.cpp). The only references from non-CDS files are from classLoader.cpp an jvmtiEnv.cpp. The ones from classLoader.cpp are all guarded with 'INCLUDE_CDS' or they only use functions defined in classLoaderExt.hpp. The single remaining reference from jvmtiEnv.cpp has been guarded with 'INCLUDE_CDS' by my change. I think it is a matter of taste if we leave this as is or move the offending function from classLoaderExt.cpp to classLoaderExt.hpp and remove the new guard from jvmtiEnv.cpp. Regards, Volker [1] http://cr.openjdk.java.net/~simonis/webrevs/2018/ClassLoaderExt.html > Thanks, > David > > >> Regards, >> Volker >> >>> Thanks, >>> David >>> >>> >>> On 19/06/2018 2:04 AM, Volker Simonis wrote: >>>> >>>> >>>> On Mon, Jun 18, 2018 at 8:17 AM, David Holmes >>>> wrote: >>>>> >>>>> >>>>> Hi Volker, >>>>> >>>>> src/hotspot/share/runtime/globals.hpp >>>>> >>>>> This change should not be needed! We do minimal VM builds without CDS >>>>> and >>>>> we >>>>> don't have to touch the UseSharedSpaces defaults (unless recent change >>>>> have >>>>> broken this - in which case that needs to be addressed in its own >>>>> right!) >>>>> >>>> >>>> Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, >>>> because UseSharedSpaces is reseted later on at the end of >>>> Arguments::parse(). I just thought it would be cleaner to disable it >>>> statically, if the VM doesn't support it. But anyway I don't really >>>> mind and I've reverted that change in globals.hpp. >>>> >>>>> src/hotspot/share/classfile/javaClasses.cpp >>>>> >>>>> AFAICS you should be using INCLUDE_CDS in the ifdefs not >>>>> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this >>>>> should >>>>> be needed as we have not needed it before. As Thomas notes we have: >>>>> >>>>> ./hotspot/share/memory/metaspaceShared.hpp: static bool >>>>> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >>>>> ./hotspot/share/classfile/stringTable.hpp: static oop >>>>> create_archived_string(oop s, Thread* THREAD) >>>>> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >>>>> >>>>> so these methods should be defined when CDS is not available. >>>>> >>>> >>>> Thomas and you are right. Must have been a mis-configuration on AIX >>>> where I saw undefined symbols at link time. I've removed the ifdefs >>>> from javaClasses.cpp now. >>>> >>>> Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp >>>> into CDS_ONLY macros as suggested by Jiangli because the really only >>>> make sense for a CDS-enabled VM. >>>> >>>> Here's the new webrev: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ >>>> >>>> Please let me know if you think there's still something missing. >>>> >>>> Regards, >>>> Volker >>>> >>>> >>>>> ?? 
>>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> can I please have a review for the following fix: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>>>>> >>>>>> CDS does currently not work on AIX because of the way how we >>>>>> reserve/commit memory on AIX. The problem is that we're using a >>>>>> combination of shmat/mmap depending on the page size and the size of >>>>>> the memory chunk to reserve. This makes it impossible to reliably >>>>>> reserve the memory for the CDS archive and later on map the various >>>>>> parts of the archive into these regions. >>>>>> >>>>>> In order to fix this we would have to completely rework the memory >>>>>> reserve/commit/uncommit logic on AIX which is currently out of our >>>>>> scope because of resource limitations. >>>>>> >>>>>> Unfortunately, I could not simply disable CDS in the configure step >>>>>> because some of the shared code apparently relies on parts of the CDS >>>>>> code which gets excluded from the build when CDS is disabled. So I >>>>>> also fixed the offending parts in hotspot and cleaned up the configure >>>>>> logic for CDS. >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>> PS: I did run the job through the submit forest >>>>>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>>>>> weren't really useful because they mention build failures on linux-x64 >>>>>> which I can't reproduce locally. >>>>>> >>>>> >>> > From swatibits14 at gmail.com Tue Jun 19 12:28:55 2018 From: swatibits14 at gmail.com (Swati Sharma) Date: Tue, 19 Jun 2018 17:58:55 +0530 Subject: [11] RFR(M): 8189922: UseNUMA memory interleaving vs membind In-Reply-To: <37fb3c66-0400-538e-bcef-83ac9894df22@linux.vnet.ibm.com> References: <37fb3c66-0400-538e-bcef-83ac9894df22@linux.vnet.ibm.com> Message-ID: Hi All, Here is the numa information of the system : swati at java-diesel1:~$ numactl -H available: 8 nodes (0-7) node 0 cpus: 0 1 2 3 4 5 6 7 64 65 66 67 68 69 70 71 node 0 size: 64386 MB node 0 free: 64134 MB node 1 cpus: 8 9 10 11 12 13 14 15 72 73 74 75 76 77 78 79 node 1 size: 64509 MB node 1 free: 64232 MB node 2 cpus: 16 17 18 19 20 21 22 23 80 81 82 83 84 85 86 87 node 2 size: 64509 MB node 2 free: 64215 MB node 3 cpus: 24 25 26 27 28 29 30 31 88 89 90 91 92 93 94 95 node 3 size: 64509 MB node 3 free: 64157 MB node 4 cpus: 32 33 34 35 36 37 38 39 96 97 98 99 100 101 102 103 node 4 size: 64509 MB node 4 free: 64336 MB node 5 cpus: 40 41 42 43 44 45 46 47 104 105 106 107 108 109 110 111 node 5 size: 64509 MB node 5 free: 64352 MB node 6 cpus: 48 49 50 51 52 53 54 55 112 113 114 115 116 117 118 119 node 6 size: 64509 MB node 6 free: 64359 MB node 7 cpus: 56 57 58 59 60 61 62 63 120 121 122 123 124 125 126 127 node 7 size: 64508 MB node 7 free: 64350 MB node distances: node 0 1 2 3 4 5 6 7 0: 10 16 16 16 32 32 32 32 1: 16 10 16 16 32 32 32 32 2: 16 16 10 16 32 32 32 32 3: 16 16 16 10 32 32 32 32 4: 32 32 32 32 10 16 16 16 5: 32 32 32 32 16 10 16 16 6: 32 32 32 32 16 16 10 16 7: 32 32 32 32 16 16 16 10 Thanks, Swati On Tue, Jun 19, 2018 at 12:00 AM, Gustavo Romero wrote: > Hi Swati, > > On 06/16/2018 02:52 PM, Swati Sharma wrote: > >> Hi All, >> >> This is my first patch,I would appreciate if anyone can review the fix: >> >> Bug : https://bugs.openjdk.java.net/browse/JDK-8189922 < >> 
https://bugs.openjdk.java.net/browse/JDK-8189922> >> Webrev :http://cr.openjdk.java.net/~gromero/8189922/v1 >> >> The bug is about JVM flag UseNUMA which bypasses the user specified >> numactl --membind option and divides the whole heap in lgrps according to >> available numa nodes. >> >> The proposed solution is to disable UseNUMA if bound to single numa node. >> In case more than one numa node binding, create the lgrps according to >> bound nodes.If there is no binding, then JVM will divide the whole heap >> based on the number of NUMA nodes available on the system. >> >> I appreciate Gustavo's help for fixing the thread allocation based on >> numa distance for membind which was a dangling issue associated with main >> patch. >> > > Thanks. I have no further comments on it. LGTM. > > > Best regards, > Gustavo > > PS: Please, provide numactl -H information when possible. It helps to grasp > promptly the actual NUMA topology in question :) > > Tested the fix by running specjbb2015 composite workload on 8 NUMA node >> system. >> Case 1 : Single NUMA node bind >> numactl --cpunodebind=0 --membind=0 java -Xmx24g -Xms24g -Xmn22g >> -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis >> >> Before Patch: gc.log >> eden space 22511616K(22GB), 12% used >> lgrp 0 space 2813952K, 100% used >> lgrp 1 space 2813952K, 0% used >> lgrp 2 space 2813952K, 0% used >> lgrp 3 space 2813952K, 0% used >> lgrp 4 space 2813952K, 0% used >> lgrp 5 space 2813952K, 0% used >> lgrp 6 space 2813952K, 0% used >> lgrp 7 space 2813952K, 0% used >> After Patch : gc.log >> eden space 46718976K(45GB), 99% used(NUMA disabled) >> >> Case 2 : Multiple NUMA node bind >> numactl --cpunodebind=0,7 ?membind=0,7 java -Xms50g -Xmx50g -Xmn45g >> -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis >> >> Before Patch :gc.log >> eden space 46718976K, 6% used >> lgrp 0 space 5838848K, 14% used >> lgrp 1 space 5838848K, 0% used >> lgrp 2 space 5838848K, 0% used >> lgrp 3 space 5838848K, 0% used >> lgrp 4 space 5838848K, 0% used >> lgrp 5 space 5838848K, 0% used >> lgrp 6 space 5838848K, 0% used >> lgrp 7 space 5847040K, 35% used >> After Patch : gc.log >> eden space 46718976K(45GB), 99% used >> lgrp 0 space 23359488K(23.5GB), 100% used >> lgrp 7 space 23359488K(23.5GB), 99% used >> >> >> Note: The proposed solution is only for numactl membind option.The fix is >> not for --cpunodebind and localalloc which is a separate bug bug >> https://bugs.openjdk.java.net/browse/JDK-8205051 and fix is in progress >> on this. >> >> Thanks, >> Swati Sharma >> Software Engineer -2 at AMD >> >> > From aph at redhat.com Tue Jun 19 12:54:12 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 13:54:12 +0100 Subject: TestFrames.java deoptimization In-Reply-To: References: <6a241492-65ba-6927-60dd-e3fd36cde976@redhat.com> Message-ID: <0dc99298-e8c3-ae09-4ca8-586edd38400d@redhat.com> On 06/19/2018 10:53 AM, Roland Westrelin wrote: > It's: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-June/033060.html Thanks. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Tue Jun 19 13:17:51 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 14:17:51 +0100 Subject: Compiler deoptimization behaviour In-Reply-To: References: Message-ID: <3572a580-f9d0-6d1c-41ea-4cdca81726b5@redhat.com> On 06/12/2018 09:23 PM, David Griffiths wrote: > I wrote a simple little test to better understand the compiler frame layout > but it exhibits strange behaviour in that it starts off very fast and then > slows to a million times slower. From running with -XX:+PrintCompilation I > _think_ it is something to do with one of the bottom level methods getting > deoptimized - this message appears just as the performance falls off the > cliff: Here's what happens. If I run with PrintInlining I see: @ 21 TestFrames::add3 (31 bytes) inline (hot) @ 21 TestFrames::add4 (31 bytes) inline (hot) @ 21 TestFrames::add5 (32 bytes) inline (hot) @ 22 TestFrames::add6 (32 bytes) inline (hot) @ 22 TestFrames::add7 (32 bytes) inline (hot) @ 22 TestFrames::add8 (32 bytes) inline (hot) @ 22 TestFrames::add9 (32 bytes) inline (hot) @ 22 TestFrames::add10 (32 bytes) inline (hot) @ 22 TestFrames::add11 (32 bytes) inline (hot) @ 22 TestFrames::add12 (32 bytes) inline (hot) @ 22 TestFrames::add13 (12 bytes) inlining too deep then @ 21 TestFrames::add5 (32 bytes) inline (hot) @ 22 TestFrames::add6 (32 bytes) inline (hot) @ 22 TestFrames::add7 (32 bytes) inline (hot) @ 22 TestFrames::add8 (32 bytes) inline (hot) @ 22 TestFrames::add9 (32 bytes) inline (hot) @ 22 TestFrames::add10 (32 bytes) inline (hot) @ 22 TestFrames::add11 (32 bytes) inline (hot) @ 22 TestFrames::add12 (32 bytes) inline (hot) @ 22 TestFrames::add13 (12 bytes) inline (hot) then @ 21 TestFrames::add4 (31 bytes) inline (hot) @ 21 TestFrames::add5 (32 bytes) inline (hot) @ 22 TestFrames::add6 (32 bytes) inline (hot) @ 22 TestFrames::add7 (32 bytes) inline (hot) @ 22 TestFrames::add8 (32 bytes) inline (hot) @ 22 TestFrames::add9 (32 bytes) inline (hot) @ 22 TestFrames::add10 (32 bytes) inline (hot) @ 22 TestFrames::add11 (32 bytes) inline (hot) @ 22 TestFrames::add12 (32 bytes) inline (hot) @ 22 TestFrames::add13 (12 bytes) inline (hot) then @ 21 TestFrames::add2 (31 bytes) inline (hot) @ 21 TestFrames::add3 (31 bytes) inline (hot) @ 21 TestFrames::add4 (31 bytes) inline (hot) @ 21 TestFrames::add5 (32 bytes) inline (hot) @ 22 TestFrames::add6 (32 bytes) inline (hot) @ 22 TestFrames::add7 (32 bytes) inline (hot) @ 22 TestFrames::add8 (32 bytes) inline (hot) @ 22 TestFrames::add9 (32 bytes) inline (hot) @ 22 TestFrames::add10 (32 bytes) inline (hot) @ 22 TestFrames::add11 (32 bytes) inline (hot) @ 22 TestFrames::add12 (32 bytes) inlining too deep ... many, many times. These are just a few. The recompilation happens each time because counters in the outer loops are triggered. We end up with add[2-12] inlined in one method, calling out to add13. Very unfortunately, this mega-inlined method has a lot of live variables at the point of the call out to add12: every register is occupied. So, it dumps the entire register state onto the stack, calls add12, which does a tiny bit of work and returns, then the entire register state has to be restored, tra, la, la la. It would be much better to ignore the counter overflows in the outer loops and to inline the inner loops rather than the outer loops but we aren't smart enough to do that. Try -XX:MaxInlineLevel=15. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From daniel.daugherty at oracle.com Tue Jun 19 13:24:29 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 19 Jun 2018 09:24:29 -0400 Subject: RFR(s): 8204166: TLH: Semaphore may not be destroy until signal have returned. In-Reply-To: References: <3db415fe-f96d-1805-76a2-b50d052c9e1a@oracle.com> <5cfeb06c-30d3-4af4-e085-4e8c60aa5d98@oracle.com> Message-ID: On 6/19/18 4:46 AM, Robbin Ehn wrote: > Hi David, > > On 06/19/2018 07:10 AM, David Holmes wrote: >> Hi Robbin, >> >> Overall changes seem okay. I gave a lot of thought as to whether an >> "old" thread still returning from sem_wait could potentially >> interfere with the next use of the sempahore, but it seems okay. >> Interesting (read "scary") glibc bug! > > Good, yes! > >> >> Minor comments: >> >> handshake.cpp: >> >> ??311?? if (thread->is_terminated()) { >> ??312???? // If thread is not on threads list but armed, cancel. >> ??313???? thread->cancel_handshake(); >> ??314???? return; >> ??315?? } >> >> did you actually encounter late handshakes in the thread lifecycle >> causing problems, or is this just being cautious? > > Just cautious. I have a vague memory of Mikael G. running into this problem during the early days of TLH... but I could be mis-remembering... Dan > >> >> 377?? if(vmthread_can_process_handshake(target)) { >> >> Space needed after "if" > > Fixed! > > Thanks David, Robbin > >> >> Thanks, >> David >> ----- >> >> On 19/06/2018 12:05 AM, Robbin Ehn wrote: >>> On 06/18/2018 03:07 PM, Robbin Ehn wrote: >>>> Hi all, >>>> >>>> After some internal discussions I changed the patch to: >>>> http://rehn-ws.se.oracle.com/cr_mirror/8204166/v2/ >>> >>> Correct external url: >>> http://cr.openjdk.java.net/~rehn/8204166/v2/ >>> >>> /Robbin >>> >>>> >>>> Which handles thread off javathreads list better. >>>> >>>> Passes handshake testing and ZGC testing seems okay. >>>> >>>> Thanks, Robbin >>>> >>>> On 06/14/2018 12:11 PM, Robbin Ehn wrote: >>>>> Hi all, please review. >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8204166 >>>>> Webrev: http://cr.openjdk.java.net/~rehn/8204166/v1/webrev/ >>>>> >>>>> The root cause of this failure is a bug in the posix semaphores: >>>>> https://sourceware.org/bugzilla/show_bug.cgi?id=12674 >>>>> >>>>> Thread a: >>>>> sem_post(my_sem); >>>>> >>>>> Thread b: >>>>> sem_wait(my_sem); >>>>> sem_destroy(my_sem); >>>>> >>>>> Thread b is waiting on my_sem (count 0), Thread a posts (count 0->1). >>>>> If Thread b start executing directly after the increment in post >>>>> but before >>>>> Thread a leaves the call to post and manage to destroy the >>>>> semaphore. Thread a >>>>> _can_ get EINVAL from sem_post! This is fixed in newer glibc(2.21). >>>>> >>>>> Note that mutexes have had same issue on some platforms: >>>>> https://sourceware.org/bugzilla/show_bug.cgi?id=13690 >>>>> Fixed in 2.23. >>>>> >>>>> Since we only have one handshake operation running at anytime >>>>> (safepoints and handshakes are also mutual exclusive, both run on >>>>> VM Thread) we can actually always use the same semaphore. This >>>>> patch changes the _done semaphore to be static instead, thus >>>>> avoiding the post<->destroy race. >>>>> >>>>> Patch also contains some small changes which remove of dead code, >>>>> remove unneeded state, handling of cases which we can't easily say >>>>> will never happen and some additional error checks. 
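To make the race described above concrete, here is a minimal standalone reproducer of the pattern -- a sketch of the glibc issue, not the JDK handshake code. On glibc releases before 2.21 the poster can see EINVAL because the waiter destroys the semaphore while sem_post() is still touching it after the count increment; a single run will rarely hit the window, but the ordering is the one described.

  #include <semaphore.h>
  #include <pthread.h>
  #include <cstdio>

  static sem_t my_sem;

  static void* waiter(void*) {
    sem_wait(&my_sem);     // wakes as soon as the count goes 0 -> 1 ...
    sem_destroy(&my_sem);  // ... and may destroy before sem_post() has returned
    return nullptr;
  }

  int main() {
    sem_init(&my_sem, 0, 0);
    pthread_t t;
    pthread_create(&t, nullptr, waiter, nullptr);
    if (sem_post(&my_sem) != 0) {   // can spuriously fail with EINVAL on affected glibc
      perror("sem_post");
    }
    pthread_join(t, nullptr);
    return 0;
  }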
>>>>> >>>>> Handshakes test passes, but they don't trigger the original issue, >>>>> so more interesting is that this issue do not happen when running >>>>> ZGC which utilize handshakes with the static semaphore. >>>>> >>>>> Thanks, Robbin > From aph at redhat.com Tue Jun 19 13:31:34 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 14:31:34 +0100 Subject: Compiler deoptimization behaviour In-Reply-To: <3572a580-f9d0-6d1c-41ea-4cdca81726b5@redhat.com> References: <3572a580-f9d0-6d1c-41ea-4cdca81726b5@redhat.com> Message-ID: <06c5b8ff-be3b-e5f4-09fb-af51ffdedcdb@redhat.com> On 06/19/2018 02:17 PM, Andrew Haley wrote: > Try -XX:MaxInlineLevel=15. And incidentally, if we do that and inline everything we get some truly beautiful code: Compiled method (c2) 4989 652 4 TestFrames::add2 (31 bytes) ... 0x000003ffa8b2a4f4: ldr x10, [x1,#16] 0x000003ffa8b2a4f8: add x10, x10, w2, sxtw ;; 0x14D6FBCFA76 0x000003ffa8b2a4fc: mov x11, #0xfa76 // #64118 0x000003ffa8b2a500: movk x11, #0x6fbc, lsl #16 0x000003ffa8b2a504: movk x11, #0x14d, lsl #32 0x000003ffa8b2a508: add x10, x10, x11 0x000003ffa8b2a50c: str x10, [x1,#16] ;*putfield var {reexecute=0 rethrow=0 return_oop=0} ; - TestFrames::add13 at 8 (line 103) ; - TestFrames::add12 at 22 (line 99) ; - TestFrames::add11 at 22 (line 93) ; - TestFrames::add10 at 22 (line 87) ; - TestFrames::add9 at 22 (line 81) ; - TestFrames::add8 at 22 (line 75) ; - TestFrames::add7 at 22 (line 69) ; - TestFrames::add6 at 22 (line 63) ; - TestFrames::add5 at 22 (line 57) ; - TestFrames::add4 at 21 (line 51) ; - TestFrames::add3 at 21 (line 45) ; - TestFrames::add2 at 21 (line 39) 0x000003ffa8b2a510: ldp xfp, xlr, [sp,#16] 0x000003ffa8b2a514: add sp, sp, #0x20 0x000003ffa8b2a518: ldr xscratch1, [xthread,#288] 0x000003ffa8b2a51c: ldr wzr, [xscratch1] ; {poll_return} 0x000003ffa8b2a520: ret Note that this is all of add2 and it contains no loops at all. It adds n to var, adds 0x14D6FBCFA76 to that, and stores the result in var. It has optimized 10*12 operations to 1. Now that's what I call optimization! -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Tue Jun 19 13:32:41 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 14:32:41 +0100 Subject: Compiler deoptimization behaviour In-Reply-To: <06c5b8ff-be3b-e5f4-09fb-af51ffdedcdb@redhat.com> References: <3572a580-f9d0-6d1c-41ea-4cdca81726b5@redhat.com> <06c5b8ff-be3b-e5f4-09fb-af51ffdedcdb@redhat.com> Message-ID: On 06/19/2018 02:31 PM, Andrew Haley wrote: > It has optimized 10*12 operations to 1. Now that's what I call Sorry, 10**12. And sorry for the duplicated messages: I think it's a bug in the latest release of Thunderbird. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From ChrisPhi at LGonQn.Org Tue Jun 19 13:49:36 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Tue, 19 Jun 2018 09:49:36 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> Message-ID: Hi Per Thanks! 
New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 All suggested changes made (see below inline). On 18/06/18 05:55 AM, Per Liden wrote: > On 06/14/2018 05:01 PM, Chris Phillips wrote: >> Hi >> Any further comments or changes? >> On 06/06/18 05:56 PM, Chris Phillips wrote: >>> Hi Per, >>> >>> On 06/06/18 05:48 PM, Per Liden wrote: >>>> Hi Chris, >>>> >>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>> Hi Per, >>>>> >>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>> Hi Chris, >>>>>> >>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>> Hi, >>>>>>> >>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>>> >>>>>>>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>> >>>>>>>>> Can you explain this a little more?? What is the type of size_t on >>>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>>> >>>>>>>> I would like to understand this too. >>>>>>>> >>>>>>>> cheers, >>>>>>>> Per >>>>>>>> >>>>>>>> >>>>>>> Quoting from the original bug? review request: >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>> >>>>>>> >>>>>>> "This >>>>>>> is a problem when one parameter is of size_t type and the second of >>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>> long as >>>>>>> on s390 (32-bit)." >>>>>> >>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t >>>>>> are >>>>>> on s390? >>>>> See Dan's explanation. >>>>>> >>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>> missing? >>>>>> >>>>> >>>>> By changing the type, to its actual usage, we avoid the >>>>> necessity of patching in >>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>> around line 617, since its consistent usage and local I patched at the >>>>> definition. >>>>> >>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>> _entries_removed); >>>>> >>>>> percent_of will complain about types otherwise. >>>> >>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>> current patch has ripple effects that you fail to take into account. >>>> For >>>> example, _entries is still printed using UINTX_FORMAT and compared >>>> against other uintx variables. You're now mixing types in an unsound >>>> way. >>> >>> Hmm missed that, so will do the cast instead as you suggest. >>> (Fixing at the defn is what was suggested the last time around so I >>> tried to do that where it was consistent, obviously this is not. >>> Thanks. >>> >>>> cheers, >>>> Per >>>> >>>>> >>>>> >>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>> @@ -120,11 +120,11 @@ >>>>>> ???? // Cache for reuse and fast alloc/free of table entries. >>>>>> ???? static G1StringDedupEntryCache* _entry_cache; >>>>>> >>>>>> ???? G1StringDedupEntry**??????????? _buckets; >>>>>> ???? size_t????????????????????????? _size; >>>>>> -? uintx?????????????????????????? _entries; >>>>>> +? size_t????????????????????????? _entries; >>>>>> ???? uintx?????????????????????????? 
_shrink_threshold; >>>>>> ???? uintx?????????????????????????? _grow_threshold; >>>>>> ???? bool??????????????????????????? _rehash_needed; >>>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>>> >>>>>>> Hope that helps, >>>>>>> Chris >>>>>>> >>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>> review thread mostly) >>>>>>> See: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>> and: >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>> >>>>>>> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>> For more info. >>>>>>> >>>>>> >>>>>> >>>> >>>> >>> Cheers! >>> Chris >>> >>> >>> >> >> Finally through testing and submit run again after Per's requested >> change, here's the knew webrev: >> ??? http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >> attached is the passing run fron the submit queue. >> >> Please review... > > > src/hotspot/share/gc/cms/cms_globals.hpp > ---------------------------------------- > Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd > suggest you just change the single place where you need a cast, in > ParScanThreadState::take_from_overflow_stack(). > > If you change the type of ParGCDesiredObjsFromOverflowList, but you > otherwise have to clean up a number of places where it's already > explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. > Done > > src/hotspot/share/gc/parallel/parallel_globals.hpp > -------------------------------------------------- > Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. > > Done > src/hotspot/share/gc/parallel/psParallelCompact.cpp > --------------------------------------------------- > No need to cast ParallelOldDeadWoodLimiterStdDev, you're already changed > its type. And if you change ParallelOldDeadWoodLimiterMean to also being > size_t you don't need to touch this file at all. > > Done > src/hotspot/share/runtime/globals.hpp > ------------------------------------- > -define_pd_global(uintx,? InitialCodeCacheSize,?????? 160*K); > -define_pd_global(uintx,? ReservedCodeCacheSize,????? 32*M); > +define_pd_global(size_t,? InitialCodeCacheSize,?????? 160*K); > +define_pd_global(size_t,? ReservedCodeCacheSize,????? 32*M); > > I would avoid changing these types, otherwise you need to go around and > clean up a number of other places where it's says it's an uintx, like here: > > 1909?? product_pd(uintx, InitialCodeCacheSize, ??????? \ > 1910?????????? "Initial code cache size (in bytes)") ??????? \ > 1911?????????? range(os::vm_page_size(), max_uintx) ??????? \ > > Also, it seems you've already added the cast you need for > InitialCodeCacheSize in codeCache.cpp, so that type change looks > unnecessary. > Done > Btw, patch no longer applies to the latest jdk/jdk. Should work now, again. Tested on s390 31 bit and x86_64 64 bit, testing on s390x underway. > > > cheers, > Per > >> >> Chris >> > > > Cheers! Chris From david.griffiths at gmail.com Tue Jun 19 14:05:27 2018 From: david.griffiths at gmail.com (David Griffiths) Date: Tue, 19 Jun 2018 15:05:27 +0100 Subject: Compiler deoptimization behaviour In-Reply-To: <06c5b8ff-be3b-e5f4-09fb-af51ffdedcdb@redhat.com> References: <3572a580-f9d0-6d1c-41ea-4cdca81726b5@redhat.com> <06c5b8ff-be3b-e5f4-09fb-af51ffdedcdb@redhat.com> Message-ID: Ha, that's funny! 
Too good for me though, I was trying to create a simple program to test some stack walking code :-) Anyway thanks for the explanation - I saw all those stack saves in the unoptimized add2 and wondered what it was doing. Cheers, David On 19 June 2018 at 14:31, Andrew Haley wrote: > On 06/19/2018 02:17 PM, Andrew Haley wrote: >> Try -XX:MaxInlineLevel=15. > > And incidentally, if we do that and inline everything we get some truly > beautiful code: > > Compiled method (c2) 4989 652 4 TestFrames::add2 (31 bytes) > ... > 0x000003ffa8b2a4f4: ldr x10, [x1,#16] > 0x000003ffa8b2a4f8: add x10, x10, w2, sxtw > ;; 0x14D6FBCFA76 > 0x000003ffa8b2a4fc: mov x11, #0xfa76 // #64118 > 0x000003ffa8b2a500: movk x11, #0x6fbc, lsl #16 > 0x000003ffa8b2a504: movk x11, #0x14d, lsl #32 > 0x000003ffa8b2a508: add x10, x10, x11 > 0x000003ffa8b2a50c: str x10, [x1,#16] ;*putfield var {reexecute=0 rethrow=0 return_oop=0} > ; - TestFrames::add13 at 8 (line 103) > ; - TestFrames::add12 at 22 (line 99) > ; - TestFrames::add11 at 22 (line 93) > ; - TestFrames::add10 at 22 (line 87) > ; - TestFrames::add9 at 22 (line 81) > ; - TestFrames::add8 at 22 (line 75) > ; - TestFrames::add7 at 22 (line 69) > ; - TestFrames::add6 at 22 (line 63) > ; - TestFrames::add5 at 22 (line 57) > ; - TestFrames::add4 at 21 (line 51) > ; - TestFrames::add3 at 21 (line 45) > ; - TestFrames::add2 at 21 (line 39) > > 0x000003ffa8b2a510: ldp xfp, xlr, [sp,#16] > 0x000003ffa8b2a514: add sp, sp, #0x20 > 0x000003ffa8b2a518: ldr xscratch1, [xthread,#288] > 0x000003ffa8b2a51c: ldr wzr, [xscratch1] ; {poll_return} > 0x000003ffa8b2a520: ret > > Note that this is all of add2 and it contains no loops at all. It > adds n to var, adds 0x14D6FBCFA76 to that, and stores the result in var. > It has optimized 10*12 operations to 1. Now that's what I call > optimization! > > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From martin.doerr at sap.com Tue Jun 19 14:38:36 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 19 Jun 2018 14:38:36 +0000 Subject: RFR(XS): 8205172: 32 bit build broken In-Reply-To: References: <2d5144f9a7e1431fa42307bbff8017cb@sap.com> <58f95a0e-b1fe-4f56-5da0-4bd6893df104@oracle.com> Message-ID: <6ede5bbf43b542f0bd28b19028d89d36@sap.com> Thanks for reviewing. I've pushed it. Best regards, Martin -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Dienstag, 19. Juni 2018 12:08 To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; JC Beyler Subject: Re: RFR(XS): 8205172: 32 bit build broken On 19/06/2018 6:28 PM, Doerr, Martin wrote: > Hi David, > > thanks for reviewing and for the quick response. > > I think it's good that the compiler warns about undefined subexpressions even when the result is not used. Not if you have to change the code to work around a warning. :) Static analysis tools often are not smart enough to see the complete picture. > Anyway, the fix is needed to get a correct 64 bit mask as you have already mentioned. > > I'm ok with using jlong as type for the size field. This is safe (even though I think sizes which don't fit into 32 bit should never occur on 32 bit platforms, but I'm not really familiar with this code). The event is specified to provide a size that is a jlong so one has to assume that such sizes are anticipated to occur. For the test we can obviously control that so it's not such a concern, but still best that the correct sized types flow through the code. 
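To make the width concern concrete, here is a small stand-alone illustration (not the JVMTI test code itself; ObjectTraceNarrow/ObjectTraceWide and the 5 GiB value are invented for the example). On an ILP32 target size_t is 32 bits, so storing a jlong-sized sample into it silently truncates, while a 64-bit field keeps the value everywhere:

  #include <cstdint>
  #include <cstddef>
  #include <cstdio>

  typedef int64_t jlong_like;                  // jlong is always 64 bits

  struct ObjectTraceNarrow { size_t  size; };  // 32 bits on ILP32: can truncate
  struct ObjectTraceWide   { int64_t size; };  // always wide enough for a jlong

  int main() {
    jlong_like sample = INT64_C(5) * 1024 * 1024 * 1024;   // 5 GiB, larger than 2^32

    ObjectTraceNarrow n;
    n.size = (size_t) sample;     // becomes 1 GiB on a 32-bit build

    ObjectTraceWide w;
    w.size = sample;              // exact on every platform

    std::printf("narrow=%zu wide=%lld\n", n.size, (long long) w.size);
    return 0;
  }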
> Here's the new webrev: > http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.01/ Looks good. Thanks, David > Thanks, > Martin > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Dienstag, 19. Juni 2018 09:18 > To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; JC Beyler > Subject: Re: RFR(XS): 8205172: 32 bit build broken > > Hi Martin, > > On 19/06/2018 4:58 PM, Doerr, Martin wrote: >> Hi David, >> >> it's not a compiler bug. The problem is that "const intptr_t OneBit = 1;" uses intptr_t which is 32 bit on 32 bit platforms. >> We can't shift it by 48. Seems like the other usages of right_n_bits only use less than 32. > > No we can't, but we're also not going to try as we won't take that path > on a 32-bit system. So we're getting a compiler warning about code that > the compiler thinks might be executed even though it actually won't. > > In this case though the fix is correct as we're always dealing with > 64-bit and those macros depend on the pointer-size. > >> I just noticed that there's more to fix: >> --- a/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Mon Jun 18 16:00:23 2018 +0200 >> +++ b/test/hotspot/jtreg/serviceability/jvmti/HeapMonitor/libHeapMonitorTest.c Tue Jun 19 08:53:26 2018 +0200 >> @@ -459,7 +459,7 @@ >> live_object = (ObjectTrace*) malloc(sizeof(*live_object)); >> live_object->frames = allocated_frames; >> live_object->frame_count = count; >> - live_object->size = size; >> + live_object->size = (size_t)size; > > It's not obvious that is the right fix as the incoming "size" is 64-bit. > I think strictly speaking this: > > typedef struct _ObjectTrace{ > jweak object; > size_t size; > > Should be using a 64-bit type for size. > > David > ----- > >> live_object->thread = thread; >> live_object->object = (*jni)->NewWeakGlobalRef(jni, object); >> >> Best regards, >> Martin >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Dienstag, 19. Juni 2018 07:53 >> To: Doerr, Martin ; hotspot-dev developers (hotspot-dev at openjdk.java.net) ; Roland Westrelin (roland.westrelin at oracle.com) ; JC Beyler >> Subject: Re: RFR(XS): 8205172: 32 bit build broken >> >> On 19/06/2018 12:10 AM, Doerr, Martin wrote: >>> Hi, >>> >>> 32 bit build is currently broken due to: >>> "trap_mask": jdk/src/hotspot/share/oops/methodData.hpp(142) : warning C4293: '<<' : shift count negative or too big, undefined behavior >>> "PrngModMask": jdk/src/hotspot/share/runtime/threadHeapSampler.cpp(50) : warning C4293: '<<' : shift count negative or too big, undefined behavior >> >> const uint64_t PrngModPower = 48; >> ! const uint64_t PrngModMask = right_n_bits(PrngModPower); >> >> So right_n_bits(48) should expand to: >> >> (nth_bit(48) - 1) >> >> where nth_bit is: >> >> (((n) >= BitsPerWord) ? 0 : (OneBit << (n))) >> >> so the compiler is complaining that (OneBit << 48) is undefined, but the >> conditional operator will not execute that path (as BitsPerWord == 32). >> That seems like a compiler bug to me. 
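A reduced version of the pattern makes the warning easy to reproduce and shows one way to avoid it. The names below are local stand-ins (BitsPerWordIllustrative, nth_bit_like, right_n_bits_like), not the HotSpot macros, and the mask at the end is just one possible way to build a 48-bit mask in an always-64-bit type; the actual webrev may differ in detail:

  #include <cstdint>
  #include <cstdio>

  const intptr_t OneBit = 1;                 // only 32 bits wide on a 32-bit build
  const int BitsPerWordIllustrative = 32;

  #define nth_bit_like(n)      (((n) >= BitsPerWordIllustrative) ? 0 : (OneBit << (n)))
  #define right_n_bits_like(n) (nth_bit_like(n) - 1)

  // On a 32-bit MSVC build the next line triggers C4293: the 'OneBit << 48'
  // branch is never taken, but the oversized shift is still diagnosed.
  // const uint64_t PrngModMaskWarning = right_n_bits_like(48);

  // Building the mask in a type that is 64 bits on every platform avoids the
  // conditional with the oversized shift entirely.
  const uint64_t PrngModPower = 48;
  const uint64_t PrngModMask  = (UINT64_C(1) << PrngModPower) - 1;

  int main() {
    std::printf("PrngModMask = 0x%llx\n", (unsigned long long) PrngModMask);
    return 0;
  }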
:( >> >> David >> ----- >> >>> Please review this small fix: >>> http://cr.openjdk.java.net/~mdoerr/8205172_32bit_build/webrev.00/ >>> >>> Best regards, >>> Martin >>> From per.liden at oracle.com Tue Jun 19 14:41:39 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 19 Jun 2018 16:41:39 +0200 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> Message-ID: <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> Hi Chris, On 06/19/2018 03:49 PM, Chris Phillips wrote: > Hi Per > Thanks! > New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 Looks good to me! /Per > All suggested changes made (see below inline). > > On 18/06/18 05:55 AM, Per Liden wrote: >> On 06/14/2018 05:01 PM, Chris Phillips wrote: >>> Hi >>> Any further comments or changes? >>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>> Hi Per, >>>> >>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>> Hi Chris, >>>>> >>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>> Hi Per, >>>>>> >>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>> Hi Chris, >>>>>>> >>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>>>> >>>>>>>>>>> Bug:??? https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>> >>>>>>>>>> Can you explain this a little more?? What is the type of size_t on >>>>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>>>> >>>>>>>>> I would like to understand this too. >>>>>>>>> >>>>>>>>> cheers, >>>>>>>>> Per >>>>>>>>> >>>>>>>>> >>>>>>>> Quoting from the original bug? review request: >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>> >>>>>>>> >>>>>>>> "This >>>>>>>> is a problem when one parameter is of size_t type and the second of >>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>>> long as >>>>>>>> on s390 (32-bit)." >>>>>>> >>>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t >>>>>>> are >>>>>>> on s390? >>>>>> See Dan's explanation. >>>>>>> >>>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>>> missing? >>>>>>> >>>>>> >>>>>> By changing the type, to its actual usage, we avoid the >>>>>> necessity of patching in >>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>> around line 617, since its consistent usage and local I patched at the >>>>>> definition. >>>>>> >>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>>> _entries_removed); >>>>>> >>>>>> percent_of will complain about types otherwise. 
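The complaint is easy to see with a simplified stand-in for the helper (percent_of_like below is not the real HotSpot percent_of, just a one-type-parameter template like it). With a single template parameter both arguments must deduce to the same type, so on a target where uintx and size_t are distinct types the call fails to compile, and the cast from the webrev fixes it:

  #include <cstddef>
  #include <cstdio>

  // Simplified stand-in: one template parameter, so both arguments must
  // deduce to the same T.
  template <typename T>
  double percent_of_like(T part, T total) {
    return total == 0 ? 0.0 : 100.0 * (double) part / (double) total;
  }

  typedef unsigned int uintx_like;   // 32 bits on an ILP32 target

  int main() {
    uintx_like  entries = 42;
    std::size_t size    = 1000;      // 'unsigned long' on s390 31-bit: a distinct
                                     // type even though it has the same width there

    // double p = percent_of_like(entries, size);   // deduction conflict:
    //                                              // T = unsigned int vs unsigned long
    double p = percent_of_like((std::size_t) entries, size);  // explicit cast, as in the webrev
    std::printf("%.1f%%\n", p);
    return 0;
  }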
>>>>> >>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>> current patch has ripple effects that you fail to take into account. >>>>> For >>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>> against other uintx variables. You're now mixing types in an unsound >>>>> way. >>>> >>>> Hmm missed that, so will do the cast instead as you suggest. >>>> (Fixing at the defn is what was suggested the last time around so I >>>> tried to do that where it was consistent, obviously this is not. >>>> Thanks. >>>> >>>>> cheers, >>>>> Per >>>>> >>>>>> >>>>>> >>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>> @@ -120,11 +120,11 @@ >>>>>>> ???? // Cache for reuse and fast alloc/free of table entries. >>>>>>> ???? static G1StringDedupEntryCache* _entry_cache; >>>>>>> >>>>>>> ???? G1StringDedupEntry**??????????? _buckets; >>>>>>> ???? size_t????????????????????????? _size; >>>>>>> -? uintx?????????????????????????? _entries; >>>>>>> +? size_t????????????????????????? _entries; >>>>>>> ???? uintx?????????????????????????? _shrink_threshold; >>>>>>> ???? uintx?????????????????????????? _grow_threshold; >>>>>>> ???? bool??????????????????????????? _rehash_needed; >>>>>>> >>>>>>> cheers, >>>>>>> Per >>>>>>> >>>>>>>> >>>>>>>> Hope that helps, >>>>>>>> Chris >>>>>>>> >>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>> review thread mostly) >>>>>>>> See: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>> and: >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>> >>>>>>>> >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>> For more info. >>>>>>>> >>>>>>> >>>>>>> >>>>> >>>>> >>>> Cheers! >>>> Chris >>>> >>>> >>>> >>> >>> Finally through testing and submit run again after Per's requested >>> change, here's the knew webrev: >>> ??? http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>> attached is the passing run fron the submit queue. >>> >>> Please review... >> >> >> src/hotspot/share/gc/cms/cms_globals.hpp >> ---------------------------------------- >> Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd >> suggest you just change the single place where you need a cast, in >> ParScanThreadState::take_from_overflow_stack(). >> >> If you change the type of ParGCDesiredObjsFromOverflowList, but you >> otherwise have to clean up a number of places where it's already >> explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. >> > Done > >> >> src/hotspot/share/gc/parallel/parallel_globals.hpp >> -------------------------------------------------- >> Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. >> >> > Done > >> src/hotspot/share/gc/parallel/psParallelCompact.cpp >> --------------------------------------------------- >> No need to cast ParallelOldDeadWoodLimiterStdDev, you're already changed >> its type. And if you change ParallelOldDeadWoodLimiterMean to also being >> size_t you don't need to touch this file at all. >> >> > Done > >> src/hotspot/share/runtime/globals.hpp >> ------------------------------------- >> -define_pd_global(uintx,? InitialCodeCacheSize,?????? 160*K); >> -define_pd_global(uintx,? ReservedCodeCacheSize,????? 32*M); >> +define_pd_global(size_t,? InitialCodeCacheSize,?????? 160*K); >> +define_pd_global(size_t,? ReservedCodeCacheSize,????? 
32*M); >> >> I would avoid changing these types, otherwise you need to go around and >> clean up a number of other places where it's says it's an uintx, like here: >> >> 1909?? product_pd(uintx, InitialCodeCacheSize, ??????? \ >> 1910?????????? "Initial code cache size (in bytes)") ??????? \ >> 1911?????????? range(os::vm_page_size(), max_uintx) ??????? \ >> >> Also, it seems you've already added the cast you need for >> InitialCodeCacheSize in codeCache.cpp, so that type change looks >> unnecessary. >> > Done > > >> Btw, patch no longer applies to the latest jdk/jdk. > > Should work now, again. > Tested on s390 31 bit and x86_64 64 bit, testing on s390x underway. >> >> >> cheers, >> Per >> >>> >>> Chris >>> >> >> >> > > Cheers! > Chris > From ChrisPhi at LGonQn.Org Tue Jun 19 14:58:01 2018 From: ChrisPhi at LGonQn.Org ("Chris Phillips"@T O) Date: Tue, 19 Jun 2018 10:58:01 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> Message-ID: <76545408-082f-f0c1-1da2-7c0609c5dfd9@LGonQn.Org> Hi Per, Thanks! On 19/06/18 10:41 AM, Per Liden wrote: > Hi Chris, > > On 06/19/2018 03:49 PM, Chris Phillips wrote: >> Hi Per >> Thanks! >> New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 > > Looks good to me! > > /Per > >> All suggested changes made (see below inline). >> >> On 18/06/18 05:55 AM, Per Liden wrote: >>> On 06/14/2018 05:01 PM, Chris Phillips wrote: >>>> Hi >>>> Any further comments or changes? >>>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>>> Hi Per, >>>>> >>>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>>> Hi Chris, >>>>>> >>>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>>> Hi Per, >>>>>>> >>>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>>> Hi Chris, >>>>>>>> >>>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match >>>>>>>>>>>> failures. >>>>>>>>>>>> >>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>>> webrev: >>>>>>>>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>>> >>>>>>>>>>> Can you explain this a little more? What is the type of >>>>>>>>>>> size_t on >>>>>>>>>>> s390x? What is the type of uintptr_t? What are the errors? >>>>>>>>>> >>>>>>>>>> I would like to understand this too. >>>>>>>>>> >>>>>>>>>> cheers, >>>>>>>>>> Per >>>>>>>>>> >>>>>>>>>> >>>>>>>>> Quoting from the original bug review request: >>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> "This >>>>>>>>> is a problem when one parameter is of size_t type and the >>>>>>>>> second of >>>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>>>> long as >>>>>>>>> on s390 (32-bit)." >>>>>>>> >>>>>>>> Please clarify what the sizes of uintx (i.e. 
uintptr_t) and size_t >>>>>>>> are >>>>>>>> on s390? >>>>>>> See Dan's explanation. >>>>>>>> >>>>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>>>> missing? >>>>>>>> >>>>>>> >>>>>>> By changing the type, to its actual usage, we avoid the >>>>>>> necessity of patching in >>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>>> around line 617, since its consistent usage and local I patched >>>>>>> at the >>>>>>> definition. >>>>>>> >>>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>>>> _entries_removed); >>>>>>> >>>>>>> percent_of will complain about types otherwise. >>>>>> >>>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>>> current patch has ripple effects that you fail to take into account. >>>>>> For >>>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>>> against other uintx variables. You're now mixing types in an unsound >>>>>> way. >>>>> >>>>> Hmm missed that, so will do the cast instead as you suggest. >>>>> (Fixing at the defn is what was suggested the last time around so I >>>>> tried to do that where it was consistent, obviously this is not. >>>>> Thanks. >>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>>> >>>>>>> >>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>>> @@ -120,11 +120,11 @@ >>>>>>>> // Cache for reuse and fast alloc/free of table entries. >>>>>>>> static G1StringDedupEntryCache* _entry_cache; >>>>>>>> >>>>>>>> G1StringDedupEntry** _buckets; >>>>>>>> size_t _size; >>>>>>>> - uintx _entries; >>>>>>>> + size_t _entries; >>>>>>>> uintx _shrink_threshold; >>>>>>>> uintx _grow_threshold; >>>>>>>> bool _rehash_needed; >>>>>>>> >>>>>>>> cheers, >>>>>>>> Per >>>>>>>> >>>>>>>>> >>>>>>>>> Hope that helps, >>>>>>>>> Chris >>>>>>>>> >>>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>>> review thread mostly) >>>>>>>>> See: >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>> and: >>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>>> For more info. >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>> >>>>>> >>>>> Cheers! >>>>> Chris >>>>> >>>>> >>>>> >>>> >>>> Finally through testing and submit run again after Per's requested >>>> change, here's the knew webrev: >>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>>> attached is the passing run fron the submit queue. >>>> >>>> Please review... >>> >>> >>> src/hotspot/share/gc/cms/cms_globals.hpp >>> ---------------------------------------- >>> Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd >>> suggest you just change the single place where you need a cast, in >>> ParScanThreadState::take_from_overflow_stack(). >>> >>> If you change the type of ParGCDesiredObjsFromOverflowList, but you >>> otherwise have to clean up a number of places where it's already >>> explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. >>> >> Done >> >>> >>> src/hotspot/share/gc/parallel/parallel_globals.hpp >>> -------------------------------------------------- >>> Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. 
>>> >>> >> Done >> >>> src/hotspot/share/gc/parallel/psParallelCompact.cpp >>> --------------------------------------------------- >>> No need to cast ParallelOldDeadWoodLimiterStdDev, you're already >>> changed >>> its type. And if you change ParallelOldDeadWoodLimiterMean to also >>> being >>> size_t you don't need to touch this file at all. >>> >>> >> Done >> >>> src/hotspot/share/runtime/globals.hpp >>> ------------------------------------- >>> -define_pd_global(uintx, InitialCodeCacheSize, 160*K); >>> -define_pd_global(uintx, ReservedCodeCacheSize, 32*M); >>> +define_pd_global(size_t, InitialCodeCacheSize, 160*K); >>> +define_pd_global(size_t, ReservedCodeCacheSize, 32*M); >>> >>> I would avoid changing these types, otherwise you need to go around and >>> clean up a number of other places where it's says it's an uintx, >>> like here: >>> >>> 1909 product_pd(uintx, InitialCodeCacheSize, \ >>> 1910 "Initial code cache size (in bytes)") \ >>> 1911 range(os::vm_page_size(), max_uintx) \ >>> >>> Also, it seems you've already added the cast you need for >>> InitialCodeCacheSize in codeCache.cpp, so that type change looks >>> unnecessary. >>> >> Done >> >> >>> Btw, patch no longer applies to the latest jdk/jdk. >> >> Should work now, again. >> Tested on s390 31 bit and x86_64 64 bit, testing on s390x underway. >>> >>> >>> cheers, >>> Per >>> >>>> >>>> Chris >>>> >>> >>> >>> >> >> Cheers! >> Chris >> > > > Cheers! Chris From ChrisPhi at LGonQn.Org Tue Jun 19 15:21:31 2018 From: ChrisPhi at LGonQn.Org ("Chris Phillips"@T O) Date: Tue, 19 Jun 2018 11:21:31 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <2770673c-3765-0680-feef-d2bed0f59426@LGonQn.Org> Message-ID: <7b7959f1-488d-0342-3645-cc241c35d4fb@LGonQn.Org> Hi Thomas Thanks for your comment! On 18/06/18 08:20 AM, Thomas St?fe wrote: > Hi Chris, > > it may be just me, but I dislike a bit the usage of "size_t" for > "number of things". size_t, to me, will always mean a memory range. > > Best Regards, Thomas Agreed , the POSIX definition of size_t is "the result of a sizeof op"[1]. But I think that's a different issue than the current bug (just making types cop-ascetic for s390 31 bit where they made size_t unsigned long int which is technically compatible but causes issues with compilers). Do you want to file a bug for that? Cheers, Chris [1] - The Open Group Library pubs.opengroup.org/onlinepubs/9699919799/basedefs/stddef.h.html This volume of /POSIX/.1-2017 defers to the ISO C standard. [Option End]. The ... /size_t/: Unsigned integer type of the result of the /sizeof/ operator. > > On Fri, Jun 15, 2018 at 5:36 PM, Chris Phillips wrote: >> On 14/06/18 11:01 AM, Chris Phillips wrote: >>> Hi >>> Any further comments or changes? 
>>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>> Hi Per, >>>> >>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>> Hi Chris, >>>>> >>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>> Hi Per, >>>>>> >>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>> Hi Chris, >>>>>>> >>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match failures. >>>>>>>>>>> >>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>> webrev: http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>> Can you explain this a little more? What is the type of size_t on >>>>>>>>>> s390x? What is the type of uintptr_t? What are the errors? >>>>>>>>> I would like to understand this too. >>>>>>>>> >>>>>>>>> cheers, >>>>>>>>> Per >>>>>>>>> >>>>>>>>> >>>>>>>> Quoting from the original bug review request: >>>>>>>> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>> "This >>>>>>>> is a problem when one parameter is of size_t type and the second of >>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >> long as >>>>>>>> on s390 (32-bit)." >>>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t are >>>>>>> on s390? >>>>>> See Dan's explanation. >>>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>>> missing? >>>>>>> >>>>>> By changing the type, to its actual usage, we avoid the >>>>>> necessity of patching in src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>> around line 617, since its consistent usage and local I patched at the >>>>>> definition. >>>>>> >>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>> _table->_size), _entry_cache->size(), _entries_added, >> _entries_removed); >>>>>> percent_of will complain about types otherwise. >>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>> current patch has ripple effects that you fail to take into account. For >>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>> against other uintx variables. You're now mixing types in an unsound >> way. >>>> Hmm missed that, so will do the cast instead as you suggest. >>>> (Fixing at the defn is what was suggested the last time around so I >>>> tried to do that where it was consistent, obviously this is not. >>>> Thanks. >>>> >>>>> cheers, >>>>> Per >>>>> >>>>>> >>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>> @@ -120,11 +120,11 @@ >>>>>>> // Cache for reuse and fast alloc/free of table entries. 
>>>>>>> static G1StringDedupEntryCache* _entry_cache; >>>>>>> >>>>>>> G1StringDedupEntry** _buckets; >>>>>>> size_t _size; >>>>>>> - uintx _entries; >>>>>>> + size_t _entries; >>>>>>> uintx _shrink_threshold; >>>>>>> uintx _grow_threshold; >>>>>>> bool _rehash_needed; >>>>>>> >>>>>>> cheers, >>>>>>> Per >>>>>>> >>>>>>>> Hope that helps, >>>>>>>> Chris >>>>>>>> >>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>> review thread mostly) >>>>>>>> See: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>> and: >>>>>>>> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>> For more info. >>>>>>>> >>>>>>> >>>>> >>>> Cheers! >>>> Chris >>>> >>>> >>>> >>> Finally through testing and submit run again after Per's requested >>> change, here's the knew webrev: >>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>> attached is the passing run fron the submit queue. >>> >>> Please review... >>> >>> Chris >>> >> Hi >> Please may I have another review >> and someone to push ? >> >> Thanks! >> Chris >> >> Hmm attachments stripped... >> >> Here it is inline: >> >> Build Details: 2018-06-14-1347454.chrisphi.source >> 0 Failed Tests >> Mach5 Tasks Results Summary >> >> PASSED: 75 >> KILLED: 0 >> FAILED: 0 >> UNABLE_TO_RUN: 0 >> EXECUTED_WITH_FAILURE: 0 >> NA: 0 >> > From rkennke at redhat.com Tue Jun 19 17:14:15 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 19 Jun 2018 19:14:15 +0200 Subject: RFR: JDK-8205336: Modularize allocations in assembler Message-ID: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> Much like GC should own allocation paths in the runtime, we need to do the same in assembler (which covers interpreter and C1). See also: https://bugs.openjdk.java.net/browse/JDK-8204803 In Shenandoah we need to allocate some extra space and initialize the fwd ptr of each object. More generally, I am of the opinion that it should be up to the GC implementation how and where to allocate objects. The proposed change achieves that for interpreter and C1. Only tlab allocation and eden allocation paths are relevant here, everything else goes via slowpath/runtime allocation, which is already covered by JDK-8204803. The change does (for x86 and aarch64): - fold incr_allocated_bytes() into MA::eden_allocate(). The two always come in pairs, they can just as well call incr_allocate() from eden_allocate(). This has the additional advantage that it doesn't generate dead code for GCs that don't support eden allocations at all. (Before it used to generate code for incr_allocated_bytes() but jump over it unconditionally). - Move implementations of both tlab_allocate() and eden_allocate() to BarrierSetAssembler. The methods in MacroAssembler now call the corresponding BSA methods. http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.00/ Can I please get reviews? Thanks, Roman From aph at redhat.com Tue Jun 19 17:56:27 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 Jun 2018 18:56:27 +0100 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> Message-ID: On 06/19/2018 06:14 PM, Roman Kennke wrote: > Can I please get reviews? AArch64 looks OK, but it makes no sense for Register thread to be an argument to eden_allocate: it's rthread. Otherwise fine. 
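For readers outside the GC code, a very reduced sketch of the delegation the RFR describes may help. Everything below is invented for illustration (the class names, the elided register and label parameters, the ForwardingWordBSA example); the real interfaces are in the webrev. The point is that MacroAssembler no longer emits an allocation fast path itself, it forwards to the active GC's BarrierSetAssembler, which can emit a different fast path, extra initialization, or nothing but the slow-path jump:

  #include <cstdio>

  struct BarrierSetAssemblerSketch {
    virtual void tlab_allocate() { std::puts("generic TLAB bump-pointer fast path"); }
    virtual void eden_allocate() { std::puts("generic eden fast path + allocated-bytes update"); }
    virtual ~BarrierSetAssemblerSketch() {}
  };

  // A GC that needs extra space (say, a forwarding word) overrides only the hooks.
  struct ForwardingWordBSA : BarrierSetAssemblerSketch {
    virtual void tlab_allocate() {
      std::puts("reserve one extra word and initialize it");
      BarrierSetAssemblerSketch::tlab_allocate();
    }
  };

  struct MacroAssemblerSketch {
    explicit MacroAssemblerSketch(BarrierSetAssemblerSketch* bsa) : _bsa(bsa) {}
    void tlab_allocate() { _bsa->tlab_allocate(); }  // delegate instead of hard-coding policy
    void eden_allocate() { _bsa->eden_allocate(); }
    BarrierSetAssemblerSketch* _bsa;
  };

  int main() {
    ForwardingWordBSA bsa;
    MacroAssemblerSketch masm(&bsa);
    masm.tlab_allocate();
    masm.eden_allocate();
    return 0;
  }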
Unless you've messed up copying some of the code, but it'd be hard to check all that by hand without half a day to spare... :-( -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From jiangli.zhou at oracle.com Tue Jun 19 18:10:24 2018 From: jiangli.zhou at oracle.com (Jiangli Zhou) Date: Tue, 19 Jun 2018 11:10:24 -0700 Subject: RFR(M): 8204965: Fix '--disable-cds' and disable CDS on AIX by default In-Reply-To: References: <67efe9b2-bd7a-e23a-74d6-786cbc135d16@oracle.com> <8ada6cad-09f4-cde2-69c8-561a142bc3bb@oracle.com> Message-ID: > On Jun 19, 2018, at 5:21 AM, Volker Simonis wrote: > > On Tue, Jun 19, 2018 at 9:25 AM, David Holmes wrote: >> Hi Volker, >> >> On 19/06/2018 4:50 PM, Volker Simonis wrote: >>> >>> On Tue, Jun 19, 2018 at 6:54 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> v3 looks much cleaner - thanks. >>>> >>>> But AFAICS the change to jvmtiEnv.cpp is also not needed. >>>> ClassLoaderExt::append_boot_classpath exists regardless of INCLUDE_CDS >>>> but >>>> operates differently (just calling >>>> ClassLoader::add_to_boot_append_entries). >>>> >>> >>> That's not entirely true because the whole compilation unit (i.e. >>> classLoaderExt.cpp) which contains >>> 'ClassLoaderExt::append_boot_classpath()' is excluded from the >>> compilation if CDS is disabled (see make/hotspot/lib/JvmFeatures.gmk). >> >> >> Hmmm. There's a CDS bug there. Either classLoaderExt.cpp should not be >> excluded from a non-CDS build, or it should not contain any INCLUDE_CDS >> guards! I suspect it should not be excluded. >> >>> So I can either move the whole implementation of >>> 'ClassLoaderExt::append_boot_classpath()' into classLoaderExt.hpp in >>> which case things would work as you explained and my changes to >>> jvmtiEnv.cpp could be removed or leave the whole code and change as >>> is. Please let me know what you think? >> >> >> In the interest of moving forward you can push what you have and I will file >> a bug against CDS to sort out classLoaderExt.cpp. >> > > Thanks! As the current version also passed the submit-repo tests I've pushed it. > > Regarding classLoaderExt.cpp, I think it is OK to exclude it for > non-CDS builds. If my IDE doesn't cheat on me (see [1]), > ClassLoaderExt is mostly used from other CDS-only files > (classListParser.cpp, systemDictionaryShared.cpp, filemap.cpp, > metaspaceShared.cpp). The only references from non-CDS files are from > classLoader.cpp an jvmtiEnv.cpp. The ones from classLoader.cpp are all > guarded with 'INCLUDE_CDS' or they only use functions defined in > classLoaderExt.hpp. The single remaining reference from jvmtiEnv.cpp > has been guarded with 'INCLUDE_CDS' by my change. > > I think it is a matter of taste if we leave this as is or move the > offending function from classLoaderExt.cpp to classLoaderExt.hpp and > remove the new guard from jvmtiEnv.cpp. For the classLoaderExt.cpp bug, we could use a private function, ClassLoaderExt::disable_shared_platform_and_app_classes, which does the logic in the original ClassLoaderExt::append_boot_classpath #INCLUDE_CDS. ClassLoaderExt::append_boot_classpath could be defined in classLoaderExt.hpp as: void disable_shared_platform_and_app_classes() NOT_CDS_RETURN; void append_boot_classpath(ClassPathEntry* new_entry) { disable_shared_platform_and_app_classes(); ClassLoader::add_to_boot_append_entries(new_entry); } The new guard can be removed from jvmtiEnv.cpp with those. Reducing CDS specifics from general code probably is cleaner. 
Thanks, Jiangli > > Regards, > Volker > > [1] http://cr.openjdk.java.net/~simonis/webrevs/2018/ClassLoaderExt.html > >> Thanks, >> David >> >> >>> Regards, >>> Volker >>> >>>> Thanks, >>>> David >>>> >>>> >>>> On 19/06/2018 2:04 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> On Mon, Jun 18, 2018 at 8:17 AM, David Holmes >>>>> wrote: >>>>>> >>>>>> >>>>>> Hi Volker, >>>>>> >>>>>> src/hotspot/share/runtime/globals.hpp >>>>>> >>>>>> This change should not be needed! We do minimal VM builds without CDS >>>>>> and >>>>>> we >>>>>> don't have to touch the UseSharedSpaces defaults (unless recent change >>>>>> have >>>>>> broken this - in which case that needs to be addressed in its own >>>>>> right!) >>>>>> >>>>> >>>>> Yes, you're right, CDS_ONLY/NOT_CDS isn't really required here, >>>>> because UseSharedSpaces is reseted later on at the end of >>>>> Arguments::parse(). I just thought it would be cleaner to disable it >>>>> statically, if the VM doesn't support it. But anyway I don't really >>>>> mind and I've reverted that change in globals.hpp. >>>>> >>>>>> src/hotspot/share/classfile/javaClasses.cpp >>>>>> >>>>>> AFAICS you should be using INCLUDE_CDS in the ifdefs not >>>>>> INCLUDE_CDS_JAVA_HEAP. But again I'm unclear (as was Thomas) why this >>>>>> should >>>>>> be needed as we have not needed it before. As Thomas notes we have: >>>>>> >>>>>> ./hotspot/share/memory/metaspaceShared.hpp: static bool >>>>>> is_archive_object(oop p) NOT_CDS_JAVA_HEAP_RETURN_(false); >>>>>> ./hotspot/share/classfile/stringTable.hpp: static oop >>>>>> create_archived_string(oop s, Thread* THREAD) >>>>>> NOT_CDS_JAVA_HEAP_RETURN_(NULL); >>>>>> >>>>>> so these methods should be defined when CDS is not available. >>>>>> >>>>> >>>>> Thomas and you are right. Must have been a mis-configuration on AIX >>>>> where I saw undefined symbols at link time. I've removed the ifdefs >>>>> from javaClasses.cpp now. >>>>> >>>>> Finally, I've also wrapped all the FileMapInfo fields in vmStructs.cpp >>>>> into CDS_ONLY macros as suggested by Jiangli because the really only >>>>> make sense for a CDS-enabled VM. >>>>> >>>>> Here's the new webrev: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965.v3/ >>>>> >>>>> Please let me know if you think there's still something missing. >>>>> >>>>> Regards, >>>>> Volker >>>>> >>>>> >>>>>> ?? >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> ----- >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 15/06/2018 12:26 AM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> can I please have a review for the following fix: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8204965/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8204965 >>>>>>> >>>>>>> CDS does currently not work on AIX because of the way how we >>>>>>> reserve/commit memory on AIX. The problem is that we're using a >>>>>>> combination of shmat/mmap depending on the page size and the size of >>>>>>> the memory chunk to reserve. This makes it impossible to reliably >>>>>>> reserve the memory for the CDS archive and later on map the various >>>>>>> parts of the archive into these regions. >>>>>>> >>>>>>> In order to fix this we would have to completely rework the memory >>>>>>> reserve/commit/uncommit logic on AIX which is currently out of our >>>>>>> scope because of resource limitations. 
>>>>>>> >>>>>>> Unfortunately, I could not simply disable CDS in the configure step >>>>>>> because some of the shared code apparently relies on parts of the CDS >>>>>>> code which gets excluded from the build when CDS is disabled. So I >>>>>>> also fixed the offending parts in hotspot and cleaned up the configure >>>>>>> logic for CDS. >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> PS: I did run the job through the submit forest >>>>>>> (mach5-one-simonis-JDK-8204965-20180613-1946-26349) but the results >>>>>>> weren't really useful because they mention build failures on linux-x64 >>>>>>> which I can't reproduce locally. >>>>>>> >>>>>> >>>> >> From rkennke at redhat.com Tue Jun 19 19:46:34 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 19 Jun 2018 21:46:34 +0200 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> Message-ID: Am 19.06.2018 um 19:56 schrieb Andrew Haley: > On 06/19/2018 06:14 PM, Roman Kennke wrote: >> Can I please get reviews? > > AArch64 looks OK, but it makes no sense for Register thread to be an > argument to eden_allocate: it's rthread. Otherwise fine. > > Unless you've messed up copying some of the code, but it'd be hard > to check all that by hand without half a day to spare... :-( Right. It was pre-existing (probably modeled after x86), and I wanted to make it a 1:1 copy/refactoring job as much as possible, but yeah this is pointless in aarch64. Let's remove it: Incremental: http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01/ Good now? Thanks for reviewing! Roman From erik.joelsson at oracle.com Tue Jun 19 21:27:14 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 19 Jun 2018 14:27:14 -0700 Subject: RFR: JDK-8200115: System property java.vm.vendor value includes quotation marks Message-ID: Hello, Since JDK-8189761, the java.vm.vendor string for an OracleJDK build contains quotation marks. This would also be true for any other build of OpenJDK where the user set the --with-vendor-name configure option. I found the culprit of this to be the combination of the build setting the preprocessor flag as -DVENDOR='"value"' (double quotes inside single quotes) and vm_version.cpp using the XSTR macro to convert the value of VENDOR into a C string. The -DVENDOR argument is also given to libjava, where System.c does not apply a similar conversion, so it requires the double quotes to be supplied on the command line. I found the simplest solution for unifying the behavior between libjvm and libjava to be to remove the XSTR macro call in vm_version.cpp. Bug: https://bugs.openjdk.java.net/browse/JDK-8189761 Webrev: http://cr.openjdk.java.net/~erikj/8200115/webrev.01/ Testing: mach5 tier1 /Erik From tim.bell at oracle.com Tue Jun 19 21:40:58 2018 From: tim.bell at oracle.com (Tim Bell) Date: Tue, 19 Jun 2018 14:40:58 -0700 Subject: RFR: JDK-8200115: System property java.vm.vendor value includes quotation marks In-Reply-To: References: Message-ID: <5B29786A.8020007@oracle.com> Erik: > Since JDK-8189761, the java.vm.vendor string for an OracleJDK build > contains quotation marks. This would also be true for any other build of > OpenJDK where the user set the --with-vendor-name configure option. 
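The double-quoting effect is easy to reproduce in isolation. The snippet below is only an illustration: VENDOR_DEMO stands in for the -DVENDOR='"..."' definition from the build, and the STR/XSTR pair is written out locally rather than taken from the HotSpot headers. Stringizing a value that already carries its own quotes bakes literal quote characters into the runtime string, which is why dropping the extra stringization (or the extra quotes) fixes the property value:

  #include <cstdio>

  #define VENDOR_DEMO "Oracle Corporation"   // stands in for -DVENDOR='"Oracle Corporation"'

  #define STR(a)  #a
  #define XSTR(a) STR(a)

  int main() {
    std::printf("used directly   : %s\n", VENDOR_DEMO);        // Oracle Corporation
    std::printf("stringized again: %s\n", XSTR(VENDOR_DEMO));  // "Oracle Corporation", quotes included
    return 0;
  }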
> > I found the culprit of this to be the combination of the build setting > the preprocessor flag as -DVENDOR='"value"' (double quotes inside single > quotes) and vm_version.cpp using the XSTR macro to convert the value of > VENDOR into a C string. The -DVENDOR argument is also given to libjava, > where System.c does not apply a similar conversion, so it requires the > double quotes to be supplied on the command line. > > I found the simplest solution for unifying the behavior between libjvm > and libjava to be to remove the XSTR macro call in vm_version.cpp. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8189761 Er ... the follow-on bug report is: https://bugs.openjdk.java.net/browse/JDK-8200115 > Webrev: http://cr.openjdk.java.net/~erikj/8200115/webrev.01/ > > Testing: mach5 tier1 Looks good. /Tim From paul.sandoz at oracle.com Wed Jun 20 00:08:30 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 19 Jun 2018 17:08:30 -0700 Subject: RFR 8195650 Method references to VarHandle accessors Message-ID: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> Hi, Please review the following fix to ensure method references to VarHandle signature polymorphic methods are supported at runtime (specifically the method handle to a signature polymorphic method can be loaded from the constant pool): http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195650-varhandle-mref/webrev/ I also added a ?belts and braces? test to ensure a constant method handle to MethodHandle::invokeBasic cannot be loaded if outside of the j.l.invoke package. Paul. From ChrisPhi at LGonQn.Org Wed Jun 20 09:38:57 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 20 Jun 2018 05:38:57 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <76545408-082f-f0c1-1da2-7c0609c5dfd9@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> <76545408-082f-f0c1-1da2-7c0609c5dfd9@LGonQn.Org> Message-ID: <0cd1dd17-522e-3b7a-955c-283dd30788c8@LGonQn.Org> Hi Another reviewer? On 19/06/18 10:58 AM, "Chris Phillips"@T O wrote: > Hi Per, > Thanks! > > On 19/06/18 10:41 AM, Per Liden wrote: >> Hi Chris, >> >> On 06/19/2018 03:49 PM, Chris Phillips wrote: >>> Hi Per >>> Thanks! >>> New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 >> >> Looks good to me! >> >> /Per >> >>> All suggested changes made (see below inline). >>> >>> On 18/06/18 05:55 AM, Per Liden wrote: >>>> On 06/14/2018 05:01 PM, Chris Phillips wrote: >>>>> Hi >>>>> Any further comments or changes? 
>>>>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>>>> Hi Per, >>>>>> >>>>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>>>> Hi Chris, >>>>>>> >>>>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>>>> Hi Per, >>>>>>>> >>>>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>>>> Hi Chris, >>>>>>>>> >>>>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match >>>>>>>>>>>>> failures. >>>>>>>>>>>>> >>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>>>> webrev: >>>>>>>>>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>>>> >>>>>>>>>>>> Can you explain this a little more?? What is the type of >>>>>>>>>>>> size_t on >>>>>>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>>>>>> >>>>>>>>>>> I would like to understand this too. >>>>>>>>>>> >>>>>>>>>>> cheers, >>>>>>>>>>> Per >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> Quoting from the original bug? review request: >>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> "This >>>>>>>>>> is a problem when one parameter is of size_t type and the >>>>>>>>>> second of >>>>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>>>>> long as >>>>>>>>>> on s390 (32-bit)." >>>>>>>>> >>>>>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t >>>>>>>>> are >>>>>>>>> on s390? >>>>>>>> See Dan's explanation. >>>>>>>>> >>>>>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>>>>> missing? >>>>>>>>> >>>>>>>> >>>>>>>> By changing the type, to its actual usage, we avoid the >>>>>>>> necessity of patching in >>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>>>> around line 617, since its consistent usage and local I patched >>>>>>>> at the >>>>>>>> definition. >>>>>>>> >>>>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>>>>> _entries_removed); >>>>>>>> >>>>>>>> percent_of will complain about types otherwise. >>>>>>> >>>>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>>>> current patch has ripple effects that you fail to take into account. >>>>>>> For >>>>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>>>> against other uintx variables. You're now mixing types in an unsound >>>>>>> way. >>>>>> >>>>>> Hmm missed that, so will do the cast instead as you suggest. >>>>>> (Fixing at the defn is what was suggested the last time around so I >>>>>> tried to do that where it was consistent, obviously this is not. >>>>>> Thanks. >>>>>> >>>>>>> cheers, >>>>>>> Per >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>>>> @@ -120,11 +120,11 @@ >>>>>>>>> ????? // Cache for reuse and fast alloc/free of table entries. >>>>>>>>> ????? static G1StringDedupEntryCache* _entry_cache; >>>>>>>>> >>>>>>>>> ????? G1StringDedupEntry**??????????? _buckets; >>>>>>>>> ????? size_t????????????????????????? _size; >>>>>>>>> -? uintx?????????????????????????? 
_entries; >>>>>>>>> +? size_t????????????????????????? _entries; >>>>>>>>> ????? uintx _shrink_threshold; >>>>>>>>> ????? uintx _grow_threshold; >>>>>>>>> ????? bool _rehash_needed; >>>>>>>>> >>>>>>>>> cheers, >>>>>>>>> Per >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Hope that helps, >>>>>>>>>> Chris >>>>>>>>>> >>>>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>>>> review thread mostly) >>>>>>>>>> See: >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>> and: >>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>>>> For more info. >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> >>>>>> Cheers! >>>>>> Chris >>>>>> >>>>>> >>>>>> >>>>> >>>>> Finally through testing and submit run again after Per's requested >>>>> change, here's the knew webrev: >>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>>>> attached is the passing run fron the submit queue. >>>>> >>>>> Please review... >>>> >>>> >>>> src/hotspot/share/gc/cms/cms_globals.hpp >>>> ---------------------------------------- >>>> Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd >>>> suggest you just change the single place where you need a cast, in >>>> ParScanThreadState::take_from_overflow_stack(). >>>> >>>> If you change the type of ParGCDesiredObjsFromOverflowList, but you >>>> otherwise have to clean up a number of places where it's already >>>> explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. >>>> >>> Done >>> >>>> >>>> src/hotspot/share/gc/parallel/parallel_globals.hpp >>>> -------------------------------------------------- >>>> Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. >>>> >>>> >>> Done >>> >>>> src/hotspot/share/gc/parallel/psParallelCompact.cpp >>>> --------------------------------------------------- >>>> No need to cast ParallelOldDeadWoodLimiterStdDev, you're already >>>> changed >>>> its type. And if you change ParallelOldDeadWoodLimiterMean to also >>>> being >>>> size_t you don't need to touch this file at all. >>>> >>>> >>> Done >>> >>>> src/hotspot/share/runtime/globals.hpp >>>> ------------------------------------- >>>> -define_pd_global(uintx,? InitialCodeCacheSize,?????? 160*K); >>>> -define_pd_global(uintx,? ReservedCodeCacheSize,????? 32*M); >>>> +define_pd_global(size_t,? InitialCodeCacheSize,?????? 160*K); >>>> +define_pd_global(size_t,? ReservedCodeCacheSize,????? 32*M); >>>> >>>> I would avoid changing these types, otherwise you need to go around and >>>> clean up a number of other places where it's says it's an uintx, >>>> like here: >>>> >>>> 1909?? product_pd(uintx, InitialCodeCacheSize,???????? \ >>>> 1910?????????? "Initial code cache size (in bytes)")???????? \ >>>> 1911?????????? range(os::vm_page_size(), max_uintx)???????? \ >>>> >>>> Also, it seems you've already added the cast you need for >>>> InitialCodeCacheSize in codeCache.cpp, so that type change looks >>>> unnecessary. >>>> >>> Done >>> >>> >>>> Btw, patch no longer applies to the latest jdk/jdk. >>> >>> Should work now, again. >>> Tested on s390 31 bit and x86_64? 64 bit, testing on s390x underway. >>>> >>>> >>>> cheers, >>>> Per >>>> >>>>> >>>>> Chris >>>>> >>>> >>>> >>>> >>> >>> Cheers! >>> Chris >>> >> >> >> > Cheers! 
> > Chris > > > > Ready to go : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.4 Build Details: 2018-06-19-1559053.chrisphi.source 0 Failed Tests Mach5 Tasks Results Summary FAILED: 0 KILLED: 0 UNABLE_TO_RUN: 0 NA: 0 PASSED: 75 EXECUTED_WITH_FAILURE: 0 Chris From stefan.karlsson at oracle.com Wed Jun 20 10:50:00 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 20 Jun 2018 12:50:00 +0200 Subject: RFR: 8204540: Automatic oop closure devirtualization Message-ID: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> Hi all, Please review this patch to get rid of the macro based oop_iterate devirtualization layer, and replace it with a new implementation based on templates that automatically determines when the closure function calls can be devirtualized. http://cr.openjdk.java.net/~stefank/8204540/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8204540 The ExtendedOopClosure is the base class of GC closures that are usable by the oopDesc::oop_iterate and friends. The oop_iterate functions call the implementation of the virtual ExtendedOopClosure::do_oop, of the passed in closure. As part of this patch, this closure is renamed OopIterateClosure to aid readers of the code to better understand the purpose of this closure. Today there exists a devirtualization layer, which looks at the static type of the closure and turns the call to the virtual do_oop into direct, inlinable calls. This used to be fully implemented by macros, but was partly replaced by templates in JDK-8075955. Because oopDesc::oop_iterate calls the *virtual* Klass::oop_oop_iterate functions, it's non-trivial to pass the closure type all the way through to the do_oop function calls. Therefore, overloads of Klass::oop_oop_iterate for all devirtualized closures were generated by macros. The generation of these overloads is finicky and requires the developer to split the closure up into different files and get them registered correctly in the specialized_oops_closure.hpp. It also means that we generate implementations of some of the oop_oop_iterate for all closures, even when they are not used. There's even code to exclude compilation of some of these functions when only SerialGC is built. I propose a new way to automatically devirtualize these do_oop calls when the compiler detects that static closure type is known to have the do_oop function implemented. This will get rid of all these macros, get rid of the *_specialized_oop_closure.* files, stop generating GC specific overloads into oopDesc and the *Klass classes, get rid of virtual calls for more closures, remove the _nv suffixes and template parameters. For this to work we need to impose a contract/convention that no OopIterateClosure sub class overrides an implementation of do_oop in any of it's ancestors. Going forward, when we move to a newer C++ standard, we'll be able to mark the do_oop functions as final, to prevent accidental overrides. There might also be a way to automatically detect such breaches with template metaprogramming, but that has not been implemented. The discussion above has been about the do_oop functions, but this also applies to the metadata functions as well (do_metadata, do_cld, do_klass). The proposed patch implements one dispatch table per OopIterateClosure (and oop_iterate variant) that gets used and devirtualized. Each such table has one entry per sub class of Klass. All these tables get generated during compile time, and installed when the static initializers are run. 
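As a rough sketch of that mechanism (toy stand-in types only, not the code in the webrev; the names are placeholders), the table generated for a given closure type looks something like this:

// Illustration only -- toy types, not the webrev code.
struct Klass   { int id; };
struct oopDesc { Klass* klass; };
typedef oopDesc* oop;

struct InstanceKlass : public Klass {
  template <typename ClosureType>
  void oop_oop_iterate(oop obj, ClosureType* cl) { /* walk fields, calling cl->do_oop(...) */ }
};
struct ObjArrayKlass : public Klass {
  template <typename ClosureType>
  void oop_oop_iterate(oop obj, ClosureType* cl) { /* walk elements, calling cl->do_oop(...) */ }
};

static const int KLASS_ID_COUNT = 2;   // the real enum has one id per Klass sub class

// One table per concrete closure type. Every entry is instantiated with that
// closure type, so the cl->do_oop(...) calls inside the entry are non-virtual
// and can be inlined by the C++ compiler.
template <typename ClosureType>
struct OopIterateDispatch {
  typedef void (*IterateFn)(oop, ClosureType*);
  IterateFn _table[KLASS_ID_COUNT];

  template <typename KlassType>
  static void invoke(oop obj, ClosureType* cl) {
    static_cast<KlassType*>(obj->klass)->oop_oop_iterate(obj, cl);
  }

  OopIterateDispatch() {                // filled in once, during static initialization
    _table[0] = invoke<InstanceKlass>;
    _table[1] = invoke<ObjArrayKlass>;
  }

  void oop_iterate(oop obj, ClosureType* cl) {
    _table[obj->klass->id](obj, cl);    // single indirect call
  }
};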
This gives us a single-call dispatch over the static type of the oop closure type and the dynamic type of the Klass sub class. The oop iteration functions also dispatch on UseCompressedOops, and the patch replaces those checks with an initialization-time replacement of the dispatch table, which installs the correct implementation (with or without compressed oops), making this a single-call dispatch over three dimensions. This is similar to how the RuntimeDispatch is done for the Access API. With this RFE the developers will be able to do the following to get devirtualized the code in do_oop inlineable into *Klass::oop_oop_iterate: class MarkingClosure : public MetadataVisitingOopIterateClosure { public: virtual void do_oop(oop* p) { marking(p); } virtual void do_oop(narrowOop* p) { marking(p); } }; ... MarkingClosure cl; obj->oop_iterate(&cl); I've described the mechanism in more detail in iterate.inline.hpp. See the two larger comment blocks in: http://cr.openjdk.java.net/~stefank/8204540/webrev.01/src/hotspot/share/memory/iterator.inline.hpp.udiff.html This has been tested with mach5 tier{1,2,3,4,5,6,7}. There are a few cleanup changes that would be good to handle after this change. For example: * Minimize OopIterateClosure and get rid of idempotent() * Don't automatically call verify() from oop_oop_iterate, but let the closures do their own verification. * Fix include cycle between functions in objArrayKlass.inline.hpp and objArrayOop.inline.hpp * Remove (or at least rename) NoHeaderExtendedOopClosure and oop_iterate_no_header() * Remove the UseCompressedOops checks that were pushed to the Parallel GC object visitors (oop_pc_* and oop_ps_* functions). This is already being worked on in JDK-8201436. Thanks, StefanK From david.holmes at oracle.com Wed Jun 20 11:15:55 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 20 Jun 2018 21:15:55 +1000 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <0cd1dd17-522e-3b7a-955c-283dd30788c8@LGonQn.Org> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> <76545408-082f-f0c1-1da2-7c0609c5dfd9@LGonQn.Org> <0cd1dd17-522e-3b7a-955c-283dd30788c8@LGonQn.Org> Message-ID: <022d4881-1062-4c5a-94f1-89d1d9b3aa53@oracle.com> Hi Chris, Reviewed. I think there is an underlying problem in mixing uintx types and size_t types, but that's not your problem to fix here. The casts deal with the immediate issue. Thanks, David On 20/06/2018 7:38 PM, Chris Phillips wrote: > > Hi > Another reviewer? > > On 19/06/18 10:58 AM, "Chris Phillips"@T O wrote: >> Hi Per, >> Thanks! >> >> On 19/06/18 10:41 AM, Per Liden wrote: >>> Hi Chris, >>> >>> On 06/19/2018 03:49 PM, Chris Phillips wrote: >>>> Hi Per >>>> Thanks! >>>> New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 >>> >>> Looks good to me! >>> >>> /Per >>> >>>> All suggested changes made (see below inline). >>>> >>>> On 18/06/18 05:55 AM, Per Liden wrote: >>>>> On 06/14/2018 05:01 PM, Chris Phillips wrote: >>>>>> Hi >>>>>> Any further comments or changes? 
>>>>>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>>>>> Hi Per, >>>>>>> >>>>>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>>>>> Hi Chris, >>>>>>>> >>>>>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>>>>> Hi Per, >>>>>>>>> >>>>>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>>>>> Hi Chris, >>>>>>>>>> >>>>>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match >>>>>>>>>>>>>> failures. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>>>>> webrev: >>>>>>>>>>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>>>>> >>>>>>>>>>>>> Can you explain this a little more?? What is the type of >>>>>>>>>>>>> size_t on >>>>>>>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>>>>>>> >>>>>>>>>>>> I would like to understand this too. >>>>>>>>>>>> >>>>>>>>>>>> cheers, >>>>>>>>>>>> Per >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> Quoting from the original bug? review request: >>>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> "This >>>>>>>>>>> is a problem when one parameter is of size_t type and the >>>>>>>>>>> second of >>>>>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>>>>>> long as >>>>>>>>>>> on s390 (32-bit)." >>>>>>>>>> >>>>>>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and size_t >>>>>>>>>> are >>>>>>>>>> on s390? >>>>>>>>> See Dan's explanation. >>>>>>>>>> >>>>>>>>>> I fail to see how any of this matters to _entries here? What am I >>>>>>>>>> missing? >>>>>>>>>> >>>>>>>>> >>>>>>>>> By changing the type, to its actual usage, we avoid the >>>>>>>>> necessity of patching in >>>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>>>>> around line 617, since its consistent usage and local I patched >>>>>>>>> at the >>>>>>>>> definition. >>>>>>>>> >>>>>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>>>>>> _entries_removed); >>>>>>>>> >>>>>>>>> percent_of will complain about types otherwise. >>>>>>>> >>>>>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>>>>> current patch has ripple effects that you fail to take into account. >>>>>>>> For >>>>>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>>>>> against other uintx variables. You're now mixing types in an unsound >>>>>>>> way. >>>>>>> >>>>>>> Hmm missed that, so will do the cast instead as you suggest. >>>>>>> (Fixing at the defn is what was suggested the last time around so I >>>>>>> tried to do that where it was consistent, obviously this is not. >>>>>>> Thanks. >>>>>>> >>>>>>>> cheers, >>>>>>>> Per >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>>>>> @@ -120,11 +120,11 @@ >>>>>>>>>> ????? // Cache for reuse and fast alloc/free of table entries. >>>>>>>>>> ????? static G1StringDedupEntryCache* _entry_cache; >>>>>>>>>> >>>>>>>>>> ????? G1StringDedupEntry**??????????? _buckets; >>>>>>>>>> ????? 
size_t????????????????????????? _size; >>>>>>>>>> -? uintx?????????????????????????? _entries; >>>>>>>>>> +? size_t????????????????????????? _entries; >>>>>>>>>> ????? uintx _shrink_threshold; >>>>>>>>>> ????? uintx _grow_threshold; >>>>>>>>>> ????? bool _rehash_needed; >>>>>>>>>> >>>>>>>>>> cheers, >>>>>>>>>> Per >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Hope that helps, >>>>>>>>>>> Chris >>>>>>>>>>> >>>>>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>>>>> review thread mostly) >>>>>>>>>>> See: >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>> and: >>>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>>>>> For more info. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> Cheers! >>>>>>> Chris >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> Finally through testing and submit run again after Per's requested >>>>>> change, here's the knew webrev: >>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>>>>> attached is the passing run fron the submit queue. >>>>>> >>>>>> Please review... >>>>> >>>>> >>>>> src/hotspot/share/gc/cms/cms_globals.hpp >>>>> ---------------------------------------- >>>>> Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd >>>>> suggest you just change the single place where you need a cast, in >>>>> ParScanThreadState::take_from_overflow_stack(). >>>>> >>>>> If you change the type of ParGCDesiredObjsFromOverflowList, but you >>>>> otherwise have to clean up a number of places where it's already >>>>> explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. >>>>> >>>> Done >>>> >>>>> >>>>> src/hotspot/share/gc/parallel/parallel_globals.hpp >>>>> -------------------------------------------------- >>>>> Please also change to type of ParallelOldDeadWoodLimiterMean to size_t. >>>>> >>>>> >>>> Done >>>> >>>>> src/hotspot/share/gc/parallel/psParallelCompact.cpp >>>>> --------------------------------------------------- >>>>> No need to cast ParallelOldDeadWoodLimiterStdDev, you're already >>>>> changed >>>>> its type. And if you change ParallelOldDeadWoodLimiterMean to also >>>>> being >>>>> size_t you don't need to touch this file at all. >>>>> >>>>> >>>> Done >>>> >>>>> src/hotspot/share/runtime/globals.hpp >>>>> ------------------------------------- >>>>> -define_pd_global(uintx,? InitialCodeCacheSize,?????? 160*K); >>>>> -define_pd_global(uintx,? ReservedCodeCacheSize,????? 32*M); >>>>> +define_pd_global(size_t,? InitialCodeCacheSize,?????? 160*K); >>>>> +define_pd_global(size_t,? ReservedCodeCacheSize,????? 32*M); >>>>> >>>>> I would avoid changing these types, otherwise you need to go around and >>>>> clean up a number of other places where it's says it's an uintx, >>>>> like here: >>>>> >>>>> 1909?? product_pd(uintx, InitialCodeCacheSize,???????? \ >>>>> 1910?????????? "Initial code cache size (in bytes)")???????? \ >>>>> 1911?????????? range(os::vm_page_size(), max_uintx)???????? \ >>>>> >>>>> Also, it seems you've already added the cast you need for >>>>> InitialCodeCacheSize in codeCache.cpp, so that type change looks >>>>> unnecessary. >>>>> >>>> Done >>>> >>>> >>>>> Btw, patch no longer applies to the latest jdk/jdk. >>>> >>>> Should work now, again. >>>> Tested on s390 31 bit and x86_64? 64 bit, testing on s390x underway. 
>>>>> >>>>> >>>>> cheers, >>>>> Per >>>>> >>>>>> >>>>>> Chris >>>>>> >>>>> >>>>> >>>>> >>>> >>>> Cheers! >>>> Chris >>>> >>> >>> >>> >> Cheers! >> >> Chris >> >> >> >> > > Ready to go : > > http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.4 > > Build Details: 2018-06-19-1559053.chrisphi.source > 0 Failed Tests > Mach5 Tasks Results Summary > > FAILED: 0 > KILLED: 0 > UNABLE_TO_RUN: 0 > NA: 0 > PASSED: 75 > EXECUTED_WITH_FAILURE: 0 > > > Chris > From thomas.schatzl at oracle.com Wed Jun 20 12:34:24 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 20 Jun 2018 14:34:24 +0200 Subject: 8006742: Initial TLAB sizing heuristics might provoke premature GCs In-Reply-To: References: Message-ID: <52529941fe1cf7b34a5e37a1c5f4f51166692c48.camel@oracle.com> Hi Roshan, On Wed, 2018-06-13 at 16:53 +0530, roshan mangal wrote: > Hi Everyone, > > This is my first patch as a new member of OpenJDK community. > > I have looked into minor bug > https://bugs.openjdk.java.net/browse/JDK-8006742 ( Initial TLAB > sizing > heuristics might provoke premature GCs ) > > Issue: - > The issue is due to late update of average threads count > "global_stats()->allocating_threads_avg". > The method "global_stats()->allocating_threads_avg()" always returns > 1 > until first young GC happens. > > ThreadLocalAllocBuffer::initial_desired_size() returns "init_sz= > tlab_capacity/ allocating_threads_avg *target_refills" > > i.e init_sz = tlab_capacity/1*50. > > Due to above calculation young GC happens before creating first 50 > threads. > > Issue happens with below command in jdk11 :- > $java -Xmn3520m -Xms3584m -Xmx3584m -XX:+PrintGC > -XX:+UseParallelOldGC > -XX:+UseParallelGC Threads 64 > [0.001s][warning][gc] -XX:+PrintGC is deprecated. Will use -Xlog:gc > instead. > [0.004s][info ][gc] Using Parallel > [0.209s][info ][gc] GC(0) Pause Young (Allocation Failure) > 2640M->1M(3144M) 14.863ms > > Proposed Solution: > The variable "GlobalTLABStats:: _allocating_threads" should be > updated > with each thread creation. > So incremented "GlobalTLABStats:: _allocating_threads" inside > ThreadLocalAllocBuffer::initialize ( call stack:- Thread -> > initialize_tlab() -> tlab().initialize() ) . > > Please find the patch below. > > ======================== PATCH > ========================================== > > diff -r d12828b7cd64 > src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp > --- a/src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp Wed > Jun 13 > 10:15:35 2018 +0200 > +++ b/src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp Wed > Jun 13 > 05:08:01 2018 -0500 > @@ -192,7 +192,8 @@ > initialize(NULL, // start > NULL, // top > NULL); // end > - > + global_stats()->update_allocating_threads(); > + global_stats()->publish(); > set_desired_size(initial_desired_size()); > > // Following check is needed because at startup the main > > ===================================================================== > === the change does solve the issue, but the problem is that the global data that is updated and changed here is updated/changed here without proper synchronization (at least GlobalTLABStats::_allocating_threads) from what I understand the code. I.e. Threads::initialize_tlab() as called by the Java thread entry point JavaThread::run() and also by the attach_current_thread() method is not synchronized in any way, so this may result in garbage further along. 
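Spelled out as a stand-alone toy (illustrative only, this is not the HotSpot code), the failure mode is the classic lost update:

#include <thread>
#include <iostream>

// Stand-in for GlobalTLABStats::_allocating_threads.
static unsigned int allocating_threads = 0;

void thread_entry() {
  // What the proposed hunk effectively does on every thread start:
  // a plain, unsynchronized read-modify-write.
  allocating_threads++;
}

int main() {
  const int N = 1000;
  std::thread threads[N];
  for (int i = 0; i < N; i++) threads[i] = std::thread(thread_entry);
  for (int i = 0; i < N; i++) threads[i].join();
  // May print less than N: two starting threads can read the same old value
  // and both store old + 1, losing an increment. The added publish() call
  // touches the same global state with the same problem.
  std::cout << allocating_threads << " of " << N << std::endl;
  return 0;
}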
Unfortunately I do not have a quick other solution for this - also because you probably do not want some heavyweight synchronization (a lock) in that path, as it would likely decrease Java thread creation throughput. Thanks, Thomas From mikael.vidstedt at oracle.com Wed Jun 20 17:46:53 2018 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 20 Jun 2018 10:46:53 -0700 Subject: RFR (XS): Obsolete support for commercial features Message-ID: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ Cheers, Mikael From vladimir.kozlov at oracle.com Wed Jun 20 17:54:42 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 20 Jun 2018 10:54:42 -0700 Subject: RFR (XS): Obsolete support for commercial features In-Reply-To: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> References: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> Message-ID: <18e679c0-8351-708d-b599-a149fecf3ed9@oracle.com> Looks good. thanks, Vladimir On 6/20/18 10:46 AM, Mikael Vidstedt wrote: > > Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ > > Cheers, > Mikael > From chris.plummer at oracle.com Wed Jun 20 18:17:36 2018 From: chris.plummer at oracle.com (Chris Plummer) Date: Wed, 20 Jun 2018 11:17:36 -0700 Subject: RFR (XS): Obsolete support for commercial features In-Reply-To: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> References: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> Message-ID: <8b76173b-a641-27a2-c09c-3ae9f9652ae1@oracle.com> Hi Mikael, Looks good other than copyrights needing updating. thanks, Chris On 6/20/18 10:46 AM, Mikael Vidstedt wrote: > Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ > > Cheers, > Mikael > From igor.ignatyev at oracle.com Wed Jun 20 19:33:39 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 20 Jun 2018 12:33:39 -0700 Subject: RFR(XXS) : 8205433 : clean up hotspot ProblemList Message-ID: http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html > 10 lines changed: 1 ins; 8 del; 1 mod; Hi all, could you please review this tiny clean up of ProblemList.txt? compiler/c2/Test8007294.java hasn't been removed from the problem list when JDK-8192992 was fixed. The same test is also marked as failing due to JDK-8194310 (Java EE modules removal), but this test doesn't depend on any java ee modules.
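For reference, an exclusion in ProblemList.txt is a line pairing a test path with the bug id(s) it is quarantined under and the platforms it applies to, roughly of this shape (illustrative only; the exact lines being removed are in the webrev):

# test                          bug id(s)          platforms
compiler/c2/Test8007294.java    8192992,8194310    generic-all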
testing: compiler/c2/Test8007294.java on linux-x64,windows-x64,solaris-sparcv9,macos-x64 x {product,fastdebug} webrev: http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8205433 Thanks, -- Igor From vladimir.kozlov at oracle.com Wed Jun 20 19:38:56 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 20 Jun 2018 12:38:56 -0700 Subject: RFR(XXS) : 8205433 : clean up hotspot ProblemList In-Reply-To: References: Message-ID: <36ed5b48-cfee-9377-6522-203f7f76c62c@oracle.com> Looks good. Thanks, Vladimir On 6/20/18 12:33 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html >> 10 lines changed: 1 ins; 8 del; 1 mod; > > Hi all, > > could you please review this tiny clean up of ProblemList.txt? > > compiler/c2/Test8007294.java hasn't been removed from the problem list when JDK-8192992 was fixed. the same test s also marked as failing due to JDK-8194310 (Java EE modules removal), but this test doesn't depend on any java ee modules. > > testing: compiler/c2/Test8007294.java on linux-x64,windows-x64,solaris-sparcv9,macos-x64 x {product,fastdebug} > webrev: http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8205433 > > Thanks, > -- Igor > From igor.ignatyev at oracle.com Wed Jun 20 19:39:56 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 20 Jun 2018 12:39:56 -0700 Subject: RFR(XXS) : 8205433 : clean up hotspot ProblemList In-Reply-To: <36ed5b48-cfee-9377-6522-203f7f76c62c@oracle.com> References: <36ed5b48-cfee-9377-6522-203f7f76c62c@oracle.com> Message-ID: <6627A300-16CE-4D9B-93E7-20BFB4ED349D@oracle.com> Thanks Vladimir! -- Igor > On Jun 20, 2018, at 12:38 PM, Vladimir Kozlov wrote: > > Looks good. > > Thanks, > Vladimir > > On 6/20/18 12:33 PM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html >>> 10 lines changed: 1 ins; 8 del; 1 mod; >> Hi all, >> could you please review this tiny clean up of ProblemList.txt? >> compiler/c2/Test8007294.java hasn't been removed from the problem list when JDK-8192992 was fixed. the same test s also marked as failing due to JDK-8194310 (Java EE modules removal), but this test doesn't depend on any java ee modules. 
>> testing: compiler/c2/Test8007294.java on linux-x64,windows-x64,solaris-sparcv9,macos-x64 x {product,fastdebug} >> webrev: http://cr.openjdk.java.net/~iignatyev//8205433/webrev.00/index.html >> JBS: https://bugs.openjdk.java.net/browse/JDK-8205433 >> Thanks, >> -- Igor From ChrisPhi at LGonQn.Org Wed Jun 20 20:24:42 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Wed, 20 Jun 2018 16:24:42 -0400 Subject: [11]RFR: 8203030: Zero s390 31 bit size_t type conflicts in shared code In-Reply-To: <022d4881-1062-4c5a-94f1-89d1d9b3aa53@oracle.com> References: <2950748a-c7bc-390c-8357-7952a8601aa4@redhat.com> <3fc9038b-68c3-a3bf-7b45-b01a82e92213@oracle.com> <8c3edae8-e370-1ef4-2b90-bc3b4ceb295f@LGonQn.Org> <034bbb1e-5f79-c429-878c-79e8f824dc3d@oracle.com> <7c8571dc-8781-cfdf-1bb9-d0c3b3297c17@oracle.com> <82ac960d-5794-0bfb-8d5b-ff0be858230a@LGonQn.Org> <8632cbab-75e8-7945-1b11-339e471b8ae2@oracle.com> <2cdbd728-9016-a334-1612-e75f1bc4d595@oracle.com> <76545408-082f-f0c1-1da2-7c0609c5dfd9@LGonQn.Org> <0cd1dd17-522e-3b7a-955c-283dd30788c8@LGonQn.Org> <022d4881-1062-4c5a-94f1-89d1d9b3aa53@oracle.com> Message-ID: <34852e64-73d0-5ecd-abcb-f8215a48b8e8@LGonQn.Org> Hi David, On 20/06/18 07:15 AM, David Holmes wrote: > Hi Chris, > > Reviewed. > Thanks! > I think there is an underlying problem in mixing uintx types and size_t > types, but that's not your problem to fix here. The casts deal with the > immediate issue. > I agree, thanks again. > Thanks, > David Cheers! Chris > > On 20/06/2018 7:38 PM, Chris Phillips wrote: >> >> Hi >> Another reviewer? >> >> On 19/06/18 10:58 AM, "Chris Phillips"@T O wrote: >>> Hi Per, >>> Thanks! >>> >>> On 19/06/18 10:41 AM, Per Liden wrote: >>>> Hi Chris, >>>> >>>> On 06/19/2018 03:49 PM, Chris Phillips wrote: >>>>> Hi Per >>>>> Thanks! >>>>> New webrev : http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.3 >>>> >>>> Looks good to me! >>>> >>>> /Per >>>> >>>>> All suggested changes made (see below inline). >>>>> >>>>> On 18/06/18 05:55 AM, Per Liden wrote: >>>>>> On 06/14/2018 05:01 PM, Chris Phillips wrote: >>>>>>> Hi >>>>>>> Any further comments or changes? >>>>>>> On 06/06/18 05:56 PM, Chris Phillips wrote: >>>>>>>> Hi Per, >>>>>>>> >>>>>>>> On 06/06/18 05:48 PM, Per Liden wrote: >>>>>>>>> Hi Chris, >>>>>>>>> >>>>>>>>> On 06/06/2018 11:15 PM, Chris Phillips wrote: >>>>>>>>>> Hi Per, >>>>>>>>>> >>>>>>>>>> On 06/06/18 04:47 PM, Per Liden wrote: >>>>>>>>>>> Hi Chris, >>>>>>>>>>> >>>>>>>>>>> On 06/06/2018 09:36 PM, Chris Phillips wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> On 06/06/18 02:23 PM, Per Liden wrote: >>>>>>>>>>>>> On 2018-06-06 18:29, Andrew Haley wrote: >>>>>>>>>>>>>> On 06/06/2018 04:47 PM, Chris Phillips wrote: >>>>>>>>>>>>>>> Please review this set of changes to shared code >>>>>>>>>>>>>>> related to S390 (31bit) Zero self-build type mis-match >>>>>>>>>>>>>>> failures. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>>>>>> webrev: >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.0 >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you explain this a little more?? What is the type of >>>>>>>>>>>>>> size_t on >>>>>>>>>>>>>> s390x?? What is the type of uintptr_t?? What are the errors? >>>>>>>>>>>>> >>>>>>>>>>>>> I would like to understand this too. >>>>>>>>>>>>> >>>>>>>>>>>>> cheers, >>>>>>>>>>>>> Per >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> Quoting from the original bug? 
review request: >>>>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> "This >>>>>>>>>>>> is a problem when one parameter is of size_t type and the >>>>>>>>>>>> second of >>>>>>>>>>>> uintx type and the platform has size_t defined as eg. unsigned >>>>>>>>>>>> long as >>>>>>>>>>>> on s390 (32-bit)." >>>>>>>>>>> >>>>>>>>>>> Please clarify what the sizes of uintx (i.e. uintptr_t) and >>>>>>>>>>> size_t >>>>>>>>>>> are >>>>>>>>>>> on s390? >>>>>>>>>> See Dan's explanation. >>>>>>>>>>> >>>>>>>>>>> I fail to see how any of this matters to _entries here? What >>>>>>>>>>> am I >>>>>>>>>>> missing? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> By changing the type, to its actual usage, we avoid the >>>>>>>>>> necessity of patching in >>>>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.cpp >>>>>>>>>> around line 617, since its consistent usage and local I patched >>>>>>>>>> at the >>>>>>>>>> definition. >>>>>>>>>> >>>>>>>>>> - _table->_entries, percent_of(_table->_entries, _table->_size), >>>>>>>>>> _entry_cache->size(), _entries_added, _entries_removed); >>>>>>>>>> + _table->_entries, percent_of( (size_t)(_table->_entries), >>>>>>>>>> _table->_size), _entry_cache->size(), _entries_added, >>>>>>>>>> _entries_removed); >>>>>>>>>> >>>>>>>>>> percent_of will complain about types otherwise. >>>>>>>>> >>>>>>>>> Ok, so why don't you just cast it in the call to percent_of? Your >>>>>>>>> current patch has ripple effects that you fail to take into >>>>>>>>> account. >>>>>>>>> For >>>>>>>>> example, _entries is still printed using UINTX_FORMAT and compared >>>>>>>>> against other uintx variables. You're now mixing types in an >>>>>>>>> unsound >>>>>>>>> way. >>>>>>>> >>>>>>>> Hmm missed that, so will do the cast instead as you suggest. >>>>>>>> (Fixing at the defn is what was suggested the last time around so I >>>>>>>> tried to do that where it was consistent, obviously this is not. >>>>>>>> Thanks. >>>>>>>> >>>>>>>>> cheers, >>>>>>>>> Per >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> src/hotspot/share/gc/g1/g1StringDedupTable.hpp >>>>>>>>>>> @@ -120,11 +120,11 @@ >>>>>>>>>>> ?????? // Cache for reuse and fast alloc/free of table entries. >>>>>>>>>>> ?????? static G1StringDedupEntryCache* _entry_cache; >>>>>>>>>>> >>>>>>>>>>> ?????? G1StringDedupEntry**??????????? _buckets; >>>>>>>>>>> ?????? size_t????????????????????????? _size; >>>>>>>>>>> -? uintx?????????????????????????? _entries; >>>>>>>>>>> +? size_t????????????????????????? _entries; >>>>>>>>>>> ?????? uintx _shrink_threshold; >>>>>>>>>>> ?????? uintx _grow_threshold; >>>>>>>>>>> ?????? bool _rehash_needed; >>>>>>>>>>> >>>>>>>>>>> cheers, >>>>>>>>>>> Per >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Hope that helps, >>>>>>>>>>>> Chris >>>>>>>>>>>> >>>>>>>>>>>> (I'll answer further if needed but the info is in the bugs and >>>>>>>>>>>> review thread mostly) >>>>>>>>>>>> See: >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8203030 >>>>>>>>>>>> and: >>>>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-June/014254.html >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8046938 >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8074459 >>>>>>>>>>>> For more info. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> Cheers! 
>>>>>>>> Chris >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> Finally through testing and submit run again after Per's requested >>>>>>> change, here's the knew webrev: >>>>>>> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.2 >>>>>>> attached is the passing run fron the submit queue. >>>>>>> >>>>>>> Please review... >>>>>> >>>>>> >>>>>> src/hotspot/share/gc/cms/cms_globals.hpp >>>>>> ---------------------------------------- >>>>>> Instead of changing the type of ParGCDesiredObjsFromOverflowList, I'd >>>>>> suggest you just change the single place where you need a cast, in >>>>>> ParScanThreadState::take_from_overflow_stack(). >>>>>> >>>>>> If you change the type of ParGCDesiredObjsFromOverflowList, but you >>>>>> otherwise have to clean up a number of places where it's already >>>>>> explicitly cast to size_t in concurrentMaskSweepGeneration.cpp. >>>>>> >>>>> Done >>>>> >>>>>> >>>>>> src/hotspot/share/gc/parallel/parallel_globals.hpp >>>>>> -------------------------------------------------- >>>>>> Please also change to type of ParallelOldDeadWoodLimiterMean to >>>>>> size_t. >>>>>> >>>>>> >>>>> Done >>>>> >>>>>> src/hotspot/share/gc/parallel/psParallelCompact.cpp >>>>>> --------------------------------------------------- >>>>>> No need to cast ParallelOldDeadWoodLimiterStdDev, you're already >>>>>> changed >>>>>> its type. And if you change ParallelOldDeadWoodLimiterMean to also >>>>>> being >>>>>> size_t you don't need to touch this file at all. >>>>>> >>>>>> >>>>> Done >>>>> >>>>>> src/hotspot/share/runtime/globals.hpp >>>>>> ------------------------------------- >>>>>> -define_pd_global(uintx,? InitialCodeCacheSize,?????? 160*K); >>>>>> -define_pd_global(uintx,? ReservedCodeCacheSize,????? 32*M); >>>>>> +define_pd_global(size_t,? InitialCodeCacheSize,?????? 160*K); >>>>>> +define_pd_global(size_t,? ReservedCodeCacheSize,????? 32*M); >>>>>> >>>>>> I would avoid changing these types, otherwise you need to go >>>>>> around and >>>>>> clean up a number of other places where it's says it's an uintx, >>>>>> like here: >>>>>> >>>>>> 1909?? product_pd(uintx, InitialCodeCacheSize,???????? \ >>>>>> 1910?????????? "Initial code cache size (in bytes)")???????? \ >>>>>> 1911?????????? range(os::vm_page_size(), max_uintx)???????? \ >>>>>> >>>>>> Also, it seems you've already added the cast you need for >>>>>> InitialCodeCacheSize in codeCache.cpp, so that type change looks >>>>>> unnecessary. >>>>>> >>>>> Done >>>>> >>>>> >>>>>> Btw, patch no longer applies to the latest jdk/jdk. >>>>> >>>>> Should work now, again. >>>>> Tested on s390 31 bit and x86_64? 64 bit, testing on s390x underway. >>>>>> >>>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>>> >>>>>>> Chris >>>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> Cheers! >>>>> Chris >>>>> >>>> >>>> >>>> >>> Cheers! >>> >>> Chris >>> >>> >>> >>> >> >> Ready to? go : >> >> http://cr.openjdk.java.net/~chrisphi/JDK-8203030/webrev.4 >> >> Build Details: 2018-06-19-1559053.chrisphi.source >> 0 Failed Tests >> Mach5 Tasks Results Summary >> >> ???? FAILED: 0 >> ???? KILLED: 0 >> ???? UNABLE_TO_RUN: 0 >> ???? NA: 0 >> ???? PASSED: 75 >> ???? 
EXECUTED_WITH_FAILURE: 0 >> >> >> Chris >> > > > From lois.foltan at oracle.com Thu Jun 21 00:34:20 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 20 Jun 2018 20:34:20 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages Message-ID: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> Please review this change to introduce a new utility method Klass::class_in_module_of_loader() to uniformly provide a way to add a class' module name and class loader's name_and_id to error messages and potentially logging. The primary focus of this change was to remove the former method Klass::class_loader_and_module_name() and change any error messages currently using that functionality since it followed the StackTraceElement (https://docs.oracle.com/javase/9/docs/api/java/lang/StackTraceElement.html#toString--) format which is intended for stack traces not for use within error messages.? This change also includes a change to one IllegalAccessError message to demonstrate how an IAE would be formatted with the additional module and class loader information. This may conflict with the current review of JDK-8199940: Print more information about class loaders in IllegalAccessErrors. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ bug link at https://bugs.openjdk.java.net/browse/JDK-8169559 JDK-8166633 outlines a new proposal where error messages follow a format of ERRROR: PROBLEM (REASON) where the PROBLEM is aggressively simple (and definitely avoids arbitrary-length loader names) so the REASON bears all the cost of explaining the PROBLEM with more specifics.? See the proposal in more detail at https://bugs.openjdk.java.net/browse/JDK-8166633. The new utility method Klass::class_in_module_of_loader() implements the proposed format. Testing: hs-tier(1-2), jdk-tier(1-2) complete ?????????????? hs-tier(3-5) in progress ?????????????? JCK vm, lang in progress Thanks, Lois From kim.barrett at oracle.com Thu Jun 21 01:52:52 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 20 Jun 2018 21:52:52 -0400 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> Message-ID: <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> > On Jun 20, 2018, at 6:50 AM, Stefan Karlsson wrote: > > Hi all, > > Please review this patch to get rid of the macro based oop_iterate devirtualization layer, and replace it with a new implementation based on templates that automatically determines when the closure function calls can be devirtualized. > > http://cr.openjdk.java.net/~stefank/8204540/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8204540 Generally looks good, just a few minor comments and nits. ------------------------------------------------------------------------------ Any libjvm.so size comparison? We ought to get some benefit from not generating code we don't use. OTOH, it's possible this change might result in more inlining than previously. Any performance comparisons? My guess would be perhaps a small improvement, as we might get some inlining we weren't previously getting, in some (supposedly) not performance critical places. 
------------------------------------------------------------------------------ src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp 60 inline void cls::do_oop(oop* p) { cls::do_oop_work(p); } \ 61 inline void cls::do_oop(narrowOop* p) { cls::do_oop_work(p); } [pre-existing] I think the "cls::" qualifiers in the body should be unnecessary. ------------------------------------------------------------------------------ 58 #define DO_OOP_WORK_NV_IMPL(cls) \ This is no longer defining "_nv" functions, so the "_NV" in the name seems odd. ------------------------------------------------------------------------------ src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp 3321 MetadataVisitingOopIterateClosure(collector->ref_processor()), Indentation messed up. ------------------------------------------------------------------------------ src/hotspot/share/memory/iterator.hpp 370 class OopClosureDispatch { OopIterateClosureDispatch? ------------------------------------------------------------------------------ src/hotspot/share/gc/shared/genOopClosures.inline.hpp I think it might be better to put the trivial forwarding do_oop definitions that have been moved to here instead directly into the class declarations. I think doing so gives better / earlier error messages when forgetting to include the associated .inline.hpp file by callers. ------------------------------------------------------------------------------ src/hotspot/share/oops/instanceMirrorKlass.hpp 111 public: Unnecessary, we're already in public section. ------------------------------------------------------------------------------ src/hotspot/share/memory/iterator.inline.hpp 231 static const int NUM_KLASSES = 6; The value 6 is derived from the number of entries in enum KlassId, which is far away. How about defining KLASS_ID_COUNT with that enum? It might be that the enum and that constant need to be somewhere other than in klass.hpp though, to avoid include circularities. But see next comment too. ------------------------------------------------------------------------------ src/hotspot/share/memory/iterator.inline.hpp 106 const int _id; Maybe this should be of type KlassId? And similarly the accessor. Similarly the static const ID members in the various klass types. I'm surprised it can be const, because of the no-arg constructor. ------------------------------------------------------------------------------ src/hotspot/share/oops/typeArrayKlass.inline.hpp 38 // Performance tweak: We skip iterating over the klass pointer since we 39 // know that Universe::TypeArrayKlass never moves. [pre-existing] The wording of this comment seems like it might be left-over from permgen, and ought to be re-worded. ------------------------------------------------------------------------------ src/hotspot/share/memory/iterator.inline.hpp 77 // - If &OopClosureType::do_oop is resolved to &Base::do_oop, then there are no 78 // implementation of do_oop between Base and OopClosureType. 
However, there Either "is no implementation of" or "are no implementations of" ------------------------------------------------------------------------------ From gnu.andrew at redhat.com Thu Jun 21 03:56:43 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Thu, 21 Jun 2018 04:56:43 +0100 Subject: [8u] [RFR] Request for Review of Backport of JDK-8179887: Build failure with glibc >= 2.24: error: 'int readdir_r(DIR*, dirent*, dirent**)' is deprecated Message-ID: [CCing hotspot list for review] Bug: https://bugs.openjdk.java.net/browse/JDK-8179887 Webrev: http://cr.openjdk.java.net/~andrew/openjdk8/8179887/webrev.01/ Review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-April/031746.html Patch is basically the same as for OpenJDK 11, except we don't have to revert 8187667, which isn't present in OpenJDK 8. Thanks, -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From kim.barrett at oracle.com Thu Jun 21 08:25:25 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 Jun 2018 04:25:25 -0400 Subject: RFR: 8205459: Rename Access API flag decorators Message-ID: Please review these name and behavioral changes for some Access API decorators. We want to recategorize these as orthogonal boolean flags. Specifically, OOP_NOT_NULL => IS_NOT_NULL AS_DEST_NOT_INITIALIZED => IS_DEST_UNINITIALIZED IN_HEAP_ARRAY => IN_ARRAY In addition: Remove OOP_DECORATOR_MASK. Change IN_ARRAY (formerly IN_HEAP_ARRAY) to no longer imply IN_HEAP. Note that all of the places where IN_HEAP_ARRAY was being specified already also specified IN_HEAP. Some cleanups, such as renaming local variables from "on_array" to "is_array", for consistency with the associated decorator name. To aid reviewing, separate webrevs for each renaming are provided. Each of these renamings involves a lot of mechanical text replacement, with some further manual changes. Separate webrevs are also provided for the mechanical and manual parts of each renaming. CR: https://bugs.openjdk.java.net/browse/JDK-8205459 Testing: Mach5 tier1,2,3,hs-tier4,5. Webrevs: The entire set of changes: http://cr.openjdk.java.net/~kbarrett/8205459/open.00/ That's made up of the following three changes, applied in order: (1) Renaming OOP_NOT_NULL => IS_NOT_NULL http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null/ which consists of these: http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_mechanical/ http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_manual/ The mechanical part was made using the following bash commands, executed from src/hotspot: find . -type f -name "*.[ch]pp" \ -exec grep -q OOP_NOT_NULL {} \; -print \ | xargs sed -i 's/OOP_NOT_NULL/IS_NOT_NULL/' sed -i '/const DecoratorSet OOP_DECORATOR_MASK/d' share/oops/accessDecorators.hpp find . 
-type f -name "*.[ch]pp" \ -exec grep -q OOP_DECORATOR_MASK {} \; -print \ | xargs sed -i 's/OOP_DECORATOR_MASK/IS_NOT_NULL/' (2) Renaming AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized/ which consists of these: http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_mechanical/ http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_manual/ The mechanical part was made using the following bash command, executed from src/hotspot: find . -type f -name "*.[ch]pp" \ -exec grep -q AS_DEST_NOT_INITIALIZED {} \; -print \ | xargs sed -i 's/AS_DEST_NOT_INITIALIZED/IS_DEST_UNINITIALIZED/' (3) Renaming IN_HEAP_ARRAY to IN_ARRAY http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array/ which consists of these: http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_mechanical/ http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_manual/ The mechanical part was made using the following bash commands, executed from src/hotspot: find . -type f -name "*.[ch]pp" \ -exec grep -q IN_HEAP_ARRAY {} \; -print \ | xargs sed -i 's/IN_HEAP_ARRAY/IS_ARRAY/' cd cpu find . -type f -name "*.[ch]pp" \ -exec grep -q "bool on_array = " {} \; -print \ | xargs sed -i 's/on_array/is_array/' From aph at redhat.com Thu Jun 21 08:32:23 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 21 Jun 2018 09:32:23 +0100 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> Message-ID: On 06/19/2018 08:46 PM, Roman Kennke wrote: > Incremental: > http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01/ > > Good now? Looks reasonable. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From erik.osterlund at oracle.com Thu Jun 21 09:14:54 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 21 Jun 2018 11:14:54 +0200 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> Message-ID: <5B2B6C8E.2050604@oracle.com> Hi Roman, I'm looking at the x86 code. I noticed that We used to always call incr_allocated_bytes after calling eden_allocate. However, now you moved incr_allocated_bytes into eden_allocate, which makes sense, and I like that. But it is only incremented if inline contig alloc is supported, otherwise it calls the slowpath without incrementing allocation bytes. So that is seemingly changing the behaviour from before. Is that a bug in your code, a bug in the current code, or am I misunderstanding this code? It looks to me that your code is fixing a bug in the case when not using TLAB and not using contig allocation. Today it seems to me that we will always call the slowpath that always increments allocated bytes, and then always account it one more time in the interpreter. There might be more subtle differences as well that I have not considered yet. Thoughts? Otherwise, I think this looks reasonable. Thanks, /Erik Other than that, it looks reasonable. On 2018-06-19 21:46, Roman Kennke wrote: > Am 19.06.2018 um 19:56 schrieb Andrew Haley: >> On 06/19/2018 06:14 PM, Roman Kennke wrote: >>> Can I please get reviews? >> AArch64 looks OK, but it makes no sense for Register thread to be an >> argument to eden_allocate: it's rthread. Otherwise fine. 
>> >> Unless you've messed up copying some of the code, but it'd be hard >> to check all that by hand without half a day to spare... :-( > Right. It was pre-existing (probably modeled after x86), and I wanted to > make it a 1:1 copy/refactoring job as much as possible, but yeah this is > pointless in aarch64. Let's remove it: > > Incremental: > http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01/ > > Good now? > > Thanks for reviewing! > Roman > From rkennke at redhat.com Thu Jun 21 09:16:23 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 21 Jun 2018 11:16:23 +0200 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: <5B2B6C8E.2050604@oracle.com> References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> <5B2B6C8E.2050604@oracle.com> Message-ID: <3b25f020-7b40-5d88-49a3-955a115c4f0d@redhat.com> Am 21.06.2018 um 11:14 schrieb Erik ?sterlund: > Hi Roman, > > I'm looking at the x86 code. I noticed that We used to always call > incr_allocated_bytes after calling eden_allocate. However, now you moved > incr_allocated_bytes into eden_allocate, which makes sense, and I like > that. But it is only incremented if inline contig alloc is supported, > otherwise it calls the slowpath without incrementing allocation bytes. > So that is seemingly changing the behaviour from before. Is that a bug > in your code, a bug in the current code, or am I misunderstanding this > code? It did generate the code for incr_allocated_bytes(), but always unconditionally jumped over it if inline contig alloc is not supported. I wouldn't call it a bug, but it's bogus code. It looks to me that your code is fixing a bug in the case when not > using TLAB and not using contig allocation. Today it seems to me that we > will always call the slowpath that always increments allocated bytes, > and then always account it one more time in the interpreter. There might > be more subtle differences as well that I have not considered yet. > Thoughts? As far as I can tell, the incr_allocated_bytes() issue is the only difference. And it's not really a bug, just a slight inefficiency in that it generates some dead instructions. > Otherwise, I think this looks reasonable. Ok, thanks for reviewing. Can you verify that what I said above is correct? I'll push it through Mach5 in the meantime. Thanks, Roman > > Thanks, > /Erik > > Other than that, it looks reasonable. > > On 2018-06-19 21:46, Roman Kennke wrote: >> Am 19.06.2018 um 19:56 schrieb Andrew Haley: >>> On 06/19/2018 06:14 PM, Roman Kennke wrote: >>>> Can I please get reviews? >>> AArch64 looks OK, but it makes no sense for Register thread to be an >>> argument to eden_allocate: it's rthread.? Otherwise fine. >>> >>> Unless you've messed up copying some of the code, but it'd be hard >>> to check all that by hand without half a day to spare...? :-( >> Right. It was pre-existing (probably modeled after x86), and I wanted to >> make it a 1:1 copy/refactoring job as much as possible, but yeah this is >> pointless in aarch64. Let's remove it: >> >> Incremental: >> http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01/ >> >> Good now? >> >> Thanks for reviewing! 
>> Roman >> > From per.liden at oracle.com Thu Jun 21 09:32:55 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 21 Jun 2018 11:32:55 +0200 Subject: RFR: 8205459: Rename Access API flag decorators In-Reply-To: References: Message-ID: <78f4af5f-0347-7124-856b-2c6f92a837ba@oracle.com> Hi Kim, On 06/21/2018 10:25 AM, Kim Barrett wrote: > Please review these name and behavioral changes for some Access API > decorators. We want to recategorize these as orthogonal boolean > flags. Specifically, > > OOP_NOT_NULL => IS_NOT_NULL > AS_DEST_NOT_INITIALIZED => IS_DEST_UNINITIALIZED > IN_HEAP_ARRAY => IN_ARRAY > > In addition: > > Remove OOP_DECORATOR_MASK. > > Change IN_ARRAY (formerly IN_HEAP_ARRAY) to no longer imply IN_HEAP. > Note that all of the places where IN_HEAP_ARRAY was being specified > already also specified IN_HEAP. > > Some cleanups, such as renaming local variables from "on_array" to > "is_array", for consistency with the associated decorator name. > > To aid reviewing, separate webrevs for each renaming are provided. > > Each of these renamings involves a lot of mechanical text replacement, > with some further manual changes. Separate webrevs are also provided > for the mechanical and manual parts of each renaming. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205459 > > Testing: > Mach5 tier1,2,3,hs-tier4,5. > > Webrevs: > > The entire set of changes: > > http://cr.openjdk.java.net/~kbarrett/8205459/open.00/ Looks good, just two comments: src/hotspot/share/oops/accessDecorators.hpp ------------------------------------------- Looks like the decorator numbers got a bit mixed up: [...] 155 const DecoratorSet AS_RAW = UCONST64(1) << 12; 156 const DecoratorSet AS_NO_KEEPALIVE = UCONST64(1) << 14; [...] 185 const DecoratorSet IN_HEAP = UCONST64(1) << 20; 186 const DecoratorSet IN_NATIVE = UCONST64(1) << 22; [...] 196 const DecoratorSet IS_ARRAY = UCONST64(1) << 21; 197 const DecoratorSet IS_DEST_UNINITIALIZED = UCONST64(1) << 13; 198 const DecoratorSet IS_NOT_NULL = UCONST64(1) << 25; [...] Stefan just told me you planned to fix this in a different patch? src/hotspot/share/oops/access.hpp --------------------------------- Could we turn this: 123 IS_ARRAY | IS_NOT_NULL | 124 IN_HEAP; Into a single line, with IN_HEAP first, like: 123 IN_HEAP | IS_ARRAY | IS_NOT_NULL; cheers, Per > > That's made up of the following three changes, applied in order: > > (1) Renaming OOP_NOT_NULL => IS_NOT_NULL > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_manual/ > > The mechanical part was made using the following bash commands, > executed from src/hotspot: > > find . -type f -name "*.[ch]pp" \ > -exec grep -q OOP_NOT_NULL {} \; -print \ > | xargs sed -i 's/OOP_NOT_NULL/IS_NOT_NULL/' > > sed -i '/const DecoratorSet OOP_DECORATOR_MASK/d' share/oops/accessDecorators.hpp > > find . 
-type f -name "*.[ch]pp" \ > -exec grep -q OOP_DECORATOR_MASK {} \; -print \ > | xargs sed -i 's/OOP_DECORATOR_MASK/IS_NOT_NULL/' > > (2) Renaming AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_manual/ > > The mechanical part was made using the following bash command, > executed from src/hotspot: > > find . -type f -name "*.[ch]pp" \ > -exec grep -q AS_DEST_NOT_INITIALIZED {} \; -print \ > | xargs sed -i 's/AS_DEST_NOT_INITIALIZED/IS_DEST_UNINITIALIZED/' > > (3) Renaming IN_HEAP_ARRAY to IN_ARRAY > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_manual/ > > The mechanical part was made using the following bash commands, > executed from src/hotspot: > > find . -type f -name "*.[ch]pp" \ > -exec grep -q IN_HEAP_ARRAY {} \; -print \ > | xargs sed -i 's/IN_HEAP_ARRAY/IS_ARRAY/' > > cd cpu > find . -type f -name "*.[ch]pp" \ > -exec grep -q "bool on_array = " {} \; -print \ > | xargs sed -i 's/on_array/is_array/' > > From stefan.karlsson at oracle.com Thu Jun 21 09:30:52 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Jun 2018 11:30:52 +0200 Subject: RFR: 8205459: Rename Access API flag decorators In-Reply-To: References: Message-ID: <2a5d924f-2809-7c29-0711-b7da8b364093@oracle.com> Looks good. StefanK On 2018-06-21 10:25, Kim Barrett wrote: > Please review these name and behavioral changes for some Access API > decorators. We want to recategorize these as orthogonal boolean > flags. Specifically, > > OOP_NOT_NULL => IS_NOT_NULL > AS_DEST_NOT_INITIALIZED => IS_DEST_UNINITIALIZED > IN_HEAP_ARRAY => IN_ARRAY > > In addition: > > Remove OOP_DECORATOR_MASK. > > Change IN_ARRAY (formerly IN_HEAP_ARRAY) to no longer imply IN_HEAP. > Note that all of the places where IN_HEAP_ARRAY was being specified > already also specified IN_HEAP. > > Some cleanups, such as renaming local variables from "on_array" to > "is_array", for consistency with the associated decorator name. > > To aid reviewing, separate webrevs for each renaming are provided. > > Each of these renamings involves a lot of mechanical text replacement, > with some further manual changes. Separate webrevs are also provided > for the mechanical and manual parts of each renaming. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205459 > > Testing: > Mach5 tier1,2,3,hs-tier4,5. > > Webrevs: > > The entire set of changes: > > http://cr.openjdk.java.net/~kbarrett/8205459/open.00/ > > That's made up of the following three changes, applied in order: > > (1) Renaming OOP_NOT_NULL => IS_NOT_NULL > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_oop_not_null_manual/ > > The mechanical part was made using the following bash commands, > executed from src/hotspot: > > find . 
-type f -name "*.[ch]pp" \ > -exec grep -q OOP_NOT_NULL {} \; -print \ > | xargs sed -i 's/OOP_NOT_NULL/IS_NOT_NULL/' > > sed -i '/const DecoratorSet OOP_DECORATOR_MASK/d' share/oops/accessDecorators.hpp > > find . -type f -name "*.[ch]pp" \ > -exec grep -q OOP_DECORATOR_MASK {} \; -print \ > | xargs sed -i 's/OOP_DECORATOR_MASK/IS_NOT_NULL/' > > (2) Renaming AS_DEST_NOT_INITIALIZED to IS_DEST_UNINITIALIZED > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_as_dest_not_initialized_manual/ > > The mechanical part was made using the following bash command, > executed from src/hotspot: > > find . -type f -name "*.[ch]pp" \ > -exec grep -q AS_DEST_NOT_INITIALIZED {} \; -print \ > | xargs sed -i 's/AS_DEST_NOT_INITIALIZED/IS_DEST_UNINITIALIZED/' > > (3) Renaming IN_HEAP_ARRAY to IN_ARRAY > > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array/ > > which consists of these: > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_mechanical/ > http://cr.openjdk.java.net/~kbarrett/8205459/rename_in_heap_array_manual/ > > The mechanical part was made using the following bash commands, > executed from src/hotspot: > > find . -type f -name "*.[ch]pp" \ > -exec grep -q IN_HEAP_ARRAY {} \; -print \ > | xargs sed -i 's/IN_HEAP_ARRAY/IS_ARRAY/' > > cd cpu > find . -type f -name "*.[ch]pp" \ > -exec grep -q "bool on_array = " {} \; -print \ > | xargs sed -i 's/on_array/is_array/' > > From erik.osterlund at oracle.com Thu Jun 21 09:48:40 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 21 Jun 2018 11:48:40 +0200 Subject: RFR: JDK-8205336: Modularize allocations in assembler In-Reply-To: <3b25f020-7b40-5d88-49a3-955a115c4f0d@redhat.com> References: <3e8b2df9-9182-5c99-475e-da64a7305c67@redhat.com> <5B2B6C8E.2050604@oracle.com> <3b25f020-7b40-5d88-49a3-955a115c4f0d@redhat.com> Message-ID: <5B2B7478.5060306@oracle.com> Hi Roman, On 2018-06-21 11:16, Roman Kennke wrote: > Am 21.06.2018 um 11:14 schrieb Erik ?sterlund: >> Hi Roman, >> >> I'm looking at the x86 code. I noticed that We used to always call >> incr_allocated_bytes after calling eden_allocate. However, now you moved >> incr_allocated_bytes into eden_allocate, which makes sense, and I like >> that. But it is only incremented if inline contig alloc is supported, >> otherwise it calls the slowpath without incrementing allocation bytes. >> So that is seemingly changing the behaviour from before. Is that a bug >> in your code, a bug in the current code, or am I misunderstanding this >> code? > It did generate the code for incr_allocated_bytes(), but always > unconditionally jumped over it if inline contig alloc is not supported. > I wouldn't call it a bug, but it's bogus code. I see. So we previously generated dead accounting code that we always jumped over. Eww. > It looks to me that your code is fixing a bug in the case when not >> using TLAB and not using contig allocation. Today it seems to me that we >> will always call the slowpath that always increments allocated bytes, >> and then always account it one more time in the interpreter. There might >> be more subtle differences as well that I have not considered yet. >> Thoughts? > As far as I can tell, the incr_allocated_bytes() issue is the only > difference. 
And it's not really a bug, just a slight inefficiency in > that it generates some dead instructions. Yeah, your code makes more sense. It is very confusing to have incorrect accounting code generated, and jump over it, and I'd rather not generate that code. >> Otherwise, I think this looks reasonable. > Ok, thanks for reviewing. Can you verify that what I said above is > correct? I'll push it through Mach5 in the meantime. Confirmed. Looks good. Thanks, /Erik > Thanks, > Roman > > >> Thanks, >> /Erik >> >> Other than that, it looks reasonable. >> >> On 2018-06-19 21:46, Roman Kennke wrote: >>> Am 19.06.2018 um 19:56 schrieb Andrew Haley: >>>> On 06/19/2018 06:14 PM, Roman Kennke wrote: >>>>> Can I please get reviews? >>>> AArch64 looks OK, but it makes no sense for Register thread to be an >>>> argument to eden_allocate: it's rthread. Otherwise fine. >>>> >>>> Unless you've messed up copying some of the code, but it'd be hard >>>> to check all that by hand without half a day to spare... :-( >>> Right. It was pre-existing (probably modeled after x86), and I wanted to >>> make it a 1:1 copy/refactoring job as much as possible, but yeah this is >>> pointless in aarch64. Let's remove it: >>> >>> Incremental: >>> http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8205336/webrev.01/ >>> >>> Good now? >>> >>> Thanks for reviewing! >>> Roman >>> > From stefan.karlsson at oracle.com Thu Jun 21 09:44:32 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Jun 2018 11:44:32 +0200 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> Message-ID: Hi Kim, Thanks for reviewing this! Updated webrevs: http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta http://cr.openjdk.java.net/~stefank/8204540/webrev.02 Comments below: On 2018-06-21 03:52, Kim Barrett wrote: >> On Jun 20, 2018, at 6:50 AM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this patch to get rid of the macro based oop_iterate devirtualization layer, and replace it with a new implementation based on templates that automatically determines when the closure function calls can be devirtualized. >> >> http://cr.openjdk.java.net/~stefank/8204540/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8204540 > > Generally looks good, just a few minor comments and nits. > > ------------------------------------------------------------------------------ > > Any libjvm.so size comparison? We ought to get some benefit from not > generating code we don't use. OTOH, it's possible this change might > result in more inlining than previously. Before patch: 22833008 bytes After patch: 22790160 bytes > > Any performance comparisons? My guess would be perhaps a small > improvement, as we might get some inlining we weren't previously > getting, in some (supposedly) not performance critical places. I did runs on our internal perf system, and the scores were the same. I looked at pause times for some of the runs and couldn't find a difference. I'll run more runs over the weekend. 
> > ------------------------------------------------------------------------------ > src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp > 60 inline void cls::do_oop(oop* p) { cls::do_oop_work(p); } \ > 61 inline void cls::do_oop(narrowOop* p) { cls::do_oop_work(p); } > > [pre-existing] > I think the "cls::" qualifiers in the body should be unnecessary. Removed. > > ------------------------------------------------------------------------------ > 58 #define DO_OOP_WORK_NV_IMPL(cls) \ > > This is no longer defining "_nv" functions, so the "_NV" in the name > seems odd. OK. I merged DO_OOP_WORK_NV_IMPL and DO_OOP_WORK_IMPL. > > ------------------------------------------------------------------------------ > src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp > 3321 MetadataVisitingOopIterateClosure(collector->ref_processor()), > > Indentation messed up. Fixed. > > ------------------------------------------------------------------------------ > src/hotspot/share/memory/iterator.hpp > 370 class OopClosureDispatch { > > OopIterateClosureDispatch? Yes. Fixed. > > ------------------------------------------------------------------------------ > src/hotspot/share/gc/shared/genOopClosures.inline.hpp > > I think it might be better to put the trivial forwarding do_oop > definitions that have been moved to here instead directly into the > class declarations. I think doing so gives better / earlier error > messages when forgetting to include the associated .inline.hpp file by > callers. I tried your proposal. It has the unfortunate effect that whenever you include genOopClosures.hpp you get a compile error, even when the functions are not used. I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. I'm not sure this should be done for this RFE, though? > > ------------------------------------------------------------------------------ > src/hotspot/share/oops/instanceMirrorKlass.hpp > 111 public: > > Unnecessary, we're already in public section. Removed. > > ------------------------------------------------------------------------------ > src/hotspot/share/memory/iterator.inline.hpp > 231 static const int NUM_KLASSES = 6; > > The value 6 is derived from the number of entries in enum KlassId, > which is far away. How about defining KLASS_ID_COUNT with that enum? > It might be that the enum and that constant need to be somewhere other > than in klass.hpp though, to avoid include circularities. But see > next comment too. I was thinking about that as well. > > ------------------------------------------------------------------------------ > src/hotspot/share/memory/iterator.inline.hpp > 106 const int _id; > > Maybe this should be of type KlassId? And similarly the accessor. > > Similarly the static const ID members in the various klass types. Done. > > I'm surprised it can be const, because of the no-arg constructor. A dummy value is set in the non-arg Klass constructor. Klasses generated with this constructor is only used in CDS to copy the vtables. > > ------------------------------------------------------------------------------ > src/hotspot/share/oops/typeArrayKlass.inline.hpp > 38 // Performance tweak: We skip iterating over the klass pointer since we > 39 // know that Universe::TypeArrayKlass never moves. > > [pre-existing] > > The wording of this comment seems like it might be left-over from > permgen, and ought to be re-worded. OK. Updated the text. 
> > ------------------------------------------------------------------------------ > src/hotspot/share/memory/iterator.inline.hpp > 77 // - If &OopClosureType::do_oop is resolved to &Base::do_oop, then there are no > 78 // implementation of do_oop between Base and OopClosureType. However, there > > Either "is no implementation of" or "are no implementations of" Updated. Thanks, StefanK > > ------------------------------------------------------------------------------ > From david.holmes at oracle.com Thu Jun 21 10:57:19 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 21 Jun 2018 20:57:19 +1000 Subject: RFR (XS): Obsolete support for commercial features In-Reply-To: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> References: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> Message-ID: Hi Mikael, Looks good! Thanks, David On 21/06/2018 3:46 AM, Mikael Vidstedt wrote: > > Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ > > Cheers, > Mikael > From adinn at redhat.com Thu Jun 21 14:20:21 2018 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 21 Jun 2018 15:20:21 +0100 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error Message-ID: The following patch fixes a problem in the AArch64 port which was caused by the introduction of the GC API which delegates responsibility for barrier generation to GC code (JDK-8202377) webrev: http://cr.openjdk.java.net/~adinn/8204331/webrev.00/ JIRA: https://bugs.openjdk.java.net/browse/JDK-8204331 The new G1 barrier assembler in combination with the C2 fence generator introduced a few small changes to the the memory flow layout in ideal subgraphs arising from (normal and Unsafe) volatile stores and (Unsafe) volatile CASes. This affects the results returned by the AARch64 predicates employed by ad file instructions to suppress generation of certain hw memory barriers instructions and, instead, generate acquiring loads + releasing stores. The change was caught by asserts in the CAS code which detected the new, unexpected graph shape. Reviews would be very welcome! The Patch: The fix involves several small changes to reflect each change introduced by the GC code. 1) Re-ordering of trailing volatile/cpu order barriers The C2 fence generator class reversed the order of the volatile and cpu order pair generated at the end of an Unsafe volatile store subgraph (from Volatile=>CPUOrder to CPUOrder=>Volatile). This is now uniform with the order of the trailing acquire/cpu order barriers (CPUOrder=>Acquire). The relevant predicates have been updated to expect and check for barriers in this order. 2) Extra memory flows in CAS graphs. Unsafe CASObject graphs for GCs which employ a conditional card mark (G1GC or CMS+USeCondCardMark) may now include an extra If=>IfTrue/IfFalse ===> Region+Phi(BotIdx) between the card mark memory barrier and the trailing cpu order/acquire barriers. The new IfNode models a test of the boolean result returned by the CAS operation. It gets wrapped up in the barrier flow because of the different order of generation of the GC post barrier and the (SCMemProj) memory projection from the CAS itself. 
Previously the CAS and SCMemProj simply fed direct into the trailing barriers so there was no need to check the GC post-barrier subgraph. Now a CASObject looks more like a volatile StoreN/P, both of them feeding their memory flow into the card mark membar. In consequence, testing for a CASObject now requires two stages as per a StoreN/P -- matching the pair of memory subgraphs from leading to card mark membar and from card mark to trailing membar. The predicates which traverse the memory flow between leading and trailing/card mark membars have been updated so they now look for the same pattern in either case, modulo the presence of a Store or a CAS+SCMemProj pair. In the case where a card mark member is present this new test is now combined with a check on the trailing graph in the CAS case, just as with a StoreN/P. The predicates which traverse the memory flow between card mark and trailing membars have been updated to search through (potentially) one more Phi(BotIdx) link. So, the maximum number of Phis to indirect trough has been increased to 4 for G1GC and 1 for ConcMarkSweep+UseCondCardMark. Testing: This has been tested by running tier1 tests. All the tests which were previously failing are now working. There were 5 failures due to timeouts which do not appear to relate to this issue. The code has also been tested by printing generated code for some simple usages of volatile load/store and CAS, checking the generated code by eyeball. This was repeated for SerialGC, ParallelGC, ConcMarkSweep-UseCondCardMark, ConcMarkSweep+UseCondCardMark and G1GC. It would be good to also run further tests (esp jcstress). However, since this breakage is stopping testing of other changes and since it seems to remedy the problem in common cases at least I suggest committing this fix now and running these tests afterwards. Additional test specific to this issue?: There are currently no proper tests for this code as it was difficult to know how to identify what code gets generated. Detection of failures is left to asserts in the predicates. That approach has worked in the present case but only after a breaking change has been pushed and only by detecting one of several breaking changes. So the current regime is rather fragile. It would be good to have some jtreg tests to help detect breakages like this more quickly and effectively. A slowdebug build with an available hsdis-aarch64.so could be used to print out generated assembly. Usefully, in debug mode the C2 back end will print out block comments indicating both where membars have been placed and where they have been elided. If a jtreg test was run in an otherjvm with CompileCommand options to compile and print target methods then the output could be scanned to look for the relevant membar/membar elided comments and for presence or absence of ldar/stlr and ldaxr/stlxr instructions. This would be enough to test that basic transformations are sound. Is it possible to rely on an hsdis library being available in jtreg tests? Alternatively, is it possible to make execution of the test conditional on the library being present? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From aph at redhat.com Thu Jun 21 14:24:42 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 21 Jun 2018 15:24:42 +0100 Subject: RFR (XS): Obsolete support for commercial features In-Reply-To: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> References: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> Message-ID: <69d4ee9e-8819-c6ed-1f4e-1d629dae98b7@redhat.com> On 06/20/2018 06:46 PM, Mikael Vidstedt wrote: > Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 > Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ I very much approve. :-) -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From patric.hedlin at oracle.com Thu Jun 21 15:26:08 2018 From: patric.hedlin at oracle.com (Patric Hedlin) Date: Thu, 21 Jun 2018 17:26:08 +0200 Subject: RFR(S): JDK-8191339: [JVMCI] BigInteger compiler intrinsics on Graal. Message-ID: <28011331-bd43-2c32-dba4-e41879ffe28a@oracle.com> Dear all, I would like to ask for help to review the following change/update: Issue: https://bugs.openjdk.java.net/browse/JDK-8191339 Webrev: http://cr.openjdk.java.net/~phedlin/tr8191339/ 8191339: [JVMCI] BigInteger compiler intrinsics on Graal. Enabling BigInteger intrinsics via JVMCI. Best regards, Patric From kim.barrett at oracle.com Thu Jun 21 16:18:18 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 Jun 2018 12:18:18 -0400 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> Message-ID: <70EF48AF-676A-4E0A-9A6B-501BC4257A54@oracle.com> > On Jun 21, 2018, at 5:44 AM, Stefan Karlsson wrote: > > Hi Kim, > > Thanks for reviewing this! > > Updated webrevs: > http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta > http://cr.openjdk.java.net/~stefank/8204540/webrev.02 Looks good, other than a few tiny nits. I don't need another webrev. ------------------------------------------------------------------------------ src/hotspot/share/memory/iterator.hpp Devirtualizer and OopIteratorClosureDispatch should be AllStatic. Sorry I missed this in the first round. ------------------------------------------------------------------------------ 50 const int KLASS_ID_COUNT = 6; KLASS_ID_COUNT should be an unsigned type, like uint. ------------------------------------------------------------------------------ rc/hotspot/share/oops/klass.hpp 47 ObjArrayKlassID, Trailing comma for last enumerator in KlassId is a C99/C++11 feature and not valid in C++98, though some compilers may allow it in some modes. ------------------------------------------------------------------------------ src/hotspot/share/oops/typeArrayKlass.inline.hpp I think the key point here is that these klasses are guaranteed to be processed via the null class loader. That klasses don't move is kind of obvious, since they are not Java objects themselves. How about something like: Performance tweak: We skip processing the klass pointer since all TypeArrayKlasses are guaranteed processed via the null class loader. [And now I wonder if this remains true with Value Types.] 
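Purely as an illustration of the enumerator/count nits above (an unsigned count, no trailing comma), the count could also be derived from the enum itself; the names below follow the review comments and are not the actual klass.hpp declarations:

  enum KlassID {
    InstanceKlassID,
    InstanceRefKlassID,
    InstanceMirrorKlassID,
    InstanceClassLoaderKlassID,
    TypeArrayKlassID,
    ObjArrayKlassID          // last enumerator, no trailing comma, so valid C++98
  };

  // Derived from the enum rather than hard-coded, and unsigned as suggested.
  const unsigned int KLASS_ID_COUNT = ObjArrayKlassID + 1;
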
------------------------------------------------------------------------------ A couple more inline comments below: >> Any performance comparisons? My guess would be perhaps a small >> improvement, as we might get some inlining we weren't previously >> getting, in some (supposedly) not performance critical places. > > I did runs on our internal perf system, and the scores were the same. I looked at pause times for some of the runs and couldn't find a difference. I'll run more runs over the weekend. I was hoping that merging the UseCompressedOops dispatch into the other dispatches might provide some measurable benefit. Oh well, the code improvement is well worth the change, even if it?s performance neutral. > ----------------------------------------------------------------------------- >> src/hotspot/share/gc/shared/genOopClosures.inline.hpp >> I think it might be better to put the trivial forwarding do_oop >> definitions that have been moved to here instead directly into the >> class declarations. I think doing so gives better / earlier error >> messages when forgetting to include the associated .inline.hpp file by >> callers. > > I tried your proposal. It has the unfortunate effect that whenever you include genOopClosures.hpp you get a compile error, even when the functions are not used. > > I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. I'm not sure this should be done for this RFE, though? I?m fine with deferring. We can discuss offline. I?m curious about the compile error. >> I'm surprised it can be const, because of the no-arg constructor. > > A dummy value is set in the non-arg Klass constructor. Klasses generated with this constructor is only used in CDS to copy the vtables. Thanks for the explanation. From kim.barrett at oracle.com Thu Jun 21 16:24:19 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 Jun 2018 12:24:19 -0400 Subject: RFR: 8205459: Rename Access API flag decorators In-Reply-To: <2a5d924f-2809-7c29-0711-b7da8b364093@oracle.com> References: <2a5d924f-2809-7c29-0711-b7da8b364093@oracle.com> Message-ID: > On Jun 21, 2018, at 5:30 AM, Stefan Karlsson wrote: > > Looks good. > > StefanK Thanks. From kim.barrett at oracle.com Thu Jun 21 16:26:39 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 Jun 2018 12:26:39 -0400 Subject: RFR: 8205459: Rename Access API flag decorators In-Reply-To: <78f4af5f-0347-7124-856b-2c6f92a837ba@oracle.com> References: <78f4af5f-0347-7124-856b-2c6f92a837ba@oracle.com> Message-ID: > On Jun 21, 2018, at 5:32 AM, Per Liden wrote: > > Looks good, just two comments: Thanks. > > src/hotspot/share/oops/accessDecorators.hpp > ------------------------------------------- > Looks like the decorator numbers got a bit mixed up: > > [?] > Stefan just told me you planned to fix this in a different patch? Yes. There?s still one more change to go that affects the numbering, the removal of IN_CONCURRENT_ROOT. I should have mentioned that in the RFR email though. > src/hotspot/share/oops/access.hpp > --------------------------------- > Could we turn this: > > 123 IS_ARRAY | IS_NOT_NULL | > 124 IN_HEAP; > > Into a single line, with IN_HEAP first, like: > > 123 IN_HEAP | IS_ARRAY | IS_NOT_NULL; Will do. 
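As a rough standalone sketch of what recategorizing these decorators as orthogonal boolean flags buys us -- simplified stand-in bit values, not the real accessDecorators.hpp assignments:

  #include <stdint.h>

  typedef uint64_t DecoratorSet;     // the real set is built with UCONST64(1) << n

  const DecoratorSet IN_HEAP     = DecoratorSet(1) << 0;
  const DecoratorSet IN_NATIVE   = DecoratorSet(1) << 1;
  const DecoratorSet IS_ARRAY    = DecoratorSet(1) << 2;
  const DecoratorSet IS_NOT_NULL = DecoratorSet(1) << 3;

  // Each property is its own bit, so call sites can OR any combination together...
  const DecoratorSet oop_arraycopy_decorators = IN_HEAP | IS_ARRAY | IS_NOT_NULL;

  // ...and barrier code can test each property independently of the others.
  inline bool has_decorator(DecoratorSet set, DecoratorSet flag) {
    return (set & flag) != 0;
  }

Because none of the bits implies another, the array flag no longer has to imply IN_HEAP, which matches the behavioral change described in the RFR.
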
From erik.osterlund at oracle.com Thu Jun 21 16:33:09 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Thu, 21 Jun 2018 18:33:09 +0200 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> Message-ID: Hi Stefan, Looks amazing. I have wanted this to happen in one way or another for years. Thank you for doing this. Thanks, /Erik > On 21 Jun 2018, at 11:44, Stefan Karlsson wrote: > > Hi Kim, > > Thanks for reviewing this! > > Updated webrevs: > http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta > http://cr.openjdk.java.net/~stefank/8204540/webrev.02 > > Comments below: > > On 2018-06-21 03:52, Kim Barrett wrote: >>> On Jun 20, 2018, at 6:50 AM, Stefan Karlsson wrote: >>> >>> Hi all, >>> >>> Please review this patch to get rid of the macro based oop_iterate devirtualization layer, and replace it with a new implementation based on templates that automatically determines when the closure function calls can be devirtualized. >>> >>> http://cr.openjdk.java.net/~stefank/8204540/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8204540 >> Generally looks good, just a few minor comments and nits. >> ------------------------------------------------------------------------------ >> Any libjvm.so size comparison? We ought to get some benefit from not >> generating code we don't use. OTOH, it's possible this change might >> result in more inlining than previously. > > Before patch: 22833008 bytes > After patch: 22790160 bytes > >> Any performance comparisons? My guess would be perhaps a small >> improvement, as we might get some inlining we weren't previously >> getting, in some (supposedly) not performance critical places. > > I did runs on our internal perf system, and the scores were the same. I looked at pause times for some of the runs and couldn't find a difference. I'll run more runs over the weekend. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp >> 60 inline void cls::do_oop(oop* p) { cls::do_oop_work(p); } \ >> 61 inline void cls::do_oop(narrowOop* p) { cls::do_oop_work(p); } >> [pre-existing] >> I think the "cls::" qualifiers in the body should be unnecessary. > > Removed. > >> ------------------------------------------------------------------------------ >> 58 #define DO_OOP_WORK_NV_IMPL(cls) \ >> This is no longer defining "_nv" functions, so the "_NV" in the name >> seems odd. > > OK. I merged DO_OOP_WORK_NV_IMPL and DO_OOP_WORK_IMPL. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp >> 3321 MetadataVisitingOopIterateClosure(collector->ref_processor()), >> Indentation messed up. > > Fixed. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/memory/iterator.hpp >> 370 class OopClosureDispatch { >> OopIterateClosureDispatch? > > Yes. Fixed. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/shared/genOopClosures.inline.hpp >> I think it might be better to put the trivial forwarding do_oop >> definitions that have been moved to here instead directly into the >> class declarations. I think doing so gives better / earlier error >> messages when forgetting to include the associated .inline.hpp file by >> callers. > > I tried your proposal. 
It has the unfortunate effect that whenever you include genOopClosures.hpp you get a compile error, even when the functions are not used. > > I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. I'm not sure this should be done for this RFE, though? > >> ------------------------------------------------------------------------------ >> src/hotspot/share/oops/instanceMirrorKlass.hpp >> 111 public: >> Unnecessary, we're already in public section. > > Removed. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/memory/iterator.inline.hpp >> 231 static const int NUM_KLASSES = 6; >> The value 6 is derived from the number of entries in enum KlassId, >> which is far away. How about defining KLASS_ID_COUNT with that enum? >> It might be that the enum and that constant need to be somewhere other >> than in klass.hpp though, to avoid include circularities. But see >> next comment too. > > I was thinking about that as well. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/memory/iterator.inline.hpp >> 106 const int _id; >> Maybe this should be of type KlassId? And similarly the accessor. >> Similarly the static const ID members in the various klass types. > > > Done. > > >> I'm surprised it can be const, because of the no-arg constructor. > > A dummy value is set in the non-arg Klass constructor. Klasses generated with this constructor is only used in CDS to copy the vtables. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/oops/typeArrayKlass.inline.hpp >> 38 // Performance tweak: We skip iterating over the klass pointer since we >> 39 // know that Universe::TypeArrayKlass never moves. >> [pre-existing] >> The wording of this comment seems like it might be left-over from >> permgen, and ought to be re-worded. > > OK. Updated the text. > >> ------------------------------------------------------------------------------ >> src/hotspot/share/memory/iterator.inline.hpp >> 77 // - If &OopClosureType::do_oop is resolved to &Base::do_oop, then there are no >> 78 // implementation of do_oop between Base and OopClosureType. However, there >> Either "is no implementation of" or "are no implementations of" > > Updated. > > Thanks, > StefanK > >> ------------------------------------------------------------------------------ From per.liden at oracle.com Thu Jun 21 16:46:32 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 21 Jun 2018 18:46:32 +0200 Subject: RFR: 8205459: Rename Access API flag decorators In-Reply-To: References: <78f4af5f-0347-7124-856b-2c6f92a837ba@oracle.com> Message-ID: <6675f675-fedd-66b5-58f0-28a46606c37b@oracle.com> On 2018-06-21 18:26, Kim Barrett wrote: >> On Jun 21, 2018, at 5:32 AM, Per Liden wrote: >> >> Looks good, just two comments: > > Thanks. Btw, I don't need to see a new webrev. /Per > >> >> src/hotspot/share/oops/accessDecorators.hpp >> ------------------------------------------- >> Looks like the decorator numbers got a bit mixed up: >> >> [?] >> Stefan just told me you planned to fix this in a different patch? > > Yes. There?s still one more change to go that affects the numbering, > the removal of IN_CONCURRENT_ROOT. I should have mentioned > that in the RFR email though. 
> >> src/hotspot/share/oops/access.hpp >> --------------------------------- >> Could we turn this: >> >> 123 IS_ARRAY | IS_NOT_NULL | >> 124 IN_HEAP; >> >> Into a single line, with IN_HEAP first, like: >> >> 123 IN_HEAP | IS_ARRAY | IS_NOT_NULL; > > Will do. > From rkennke at redhat.com Thu Jun 21 17:49:26 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 21 Jun 2018 19:49:26 +0200 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: <75b946c0-44ee-109e-471d-32ba2cdf142e@redhat.com> Hi Andrew, thanks so much for fixing this! Also, very nice ASCII drawings and explanations! :-) As far as I can tell, the patch is good. I'm wondering why it's worth to go through that effort to verify the correct shape of CAS and similar nodes, and keep maintaining this code in light of changes (Shenandoah will be the next headache to fix this). Do the instruction only match those particular shapes, and if it changes, it'd throw off the matcher badly? Thanks, Roman > The following patch fixes a problem in the AArch64 port which was caused > by the introduction of the GC API which delegates responsibility for > barrier generation to GC code (JDK-8202377) > > webrev: http://cr.openjdk.java.net/~adinn/8204331/webrev.00/ > > JIRA: https://bugs.openjdk.java.net/browse/JDK-8204331 > > The new G1 barrier assembler in combination with the C2 fence generator > introduced a few small changes to the the memory flow layout in ideal > subgraphs arising from (normal and Unsafe) volatile stores and (Unsafe) > volatile CASes. This affects the results returned by the AARch64 > predicates employed by ad file instructions to suppress generation of > certain hw memory barriers instructions and, instead, generate acquiring > loads + releasing stores. The change was caught by asserts in the CAS > code which detected the new, unexpected graph shape. > > Reviews would be very welcome! > > The Patch: > > The fix involves several small changes to reflect each change introduced > by the GC code. > > 1) Re-ordering of trailing volatile/cpu order barriers > The C2 fence generator class reversed the order of the volatile and cpu > order pair generated at the end of an Unsafe volatile store subgraph > (from Volatile=>CPUOrder to CPUOrder=>Volatile). This is now uniform > with the order of the trailing acquire/cpu order barriers > (CPUOrder=>Acquire). > > The relevant predicates have been updated to expect and check for > barriers in this order. > > 2) Extra memory flows in CAS graphs. Unsafe CASObject graphs for GCs > which employ a conditional card mark (G1GC or CMS+USeCondCardMark) may > now include an extra If=>IfTrue/IfFalse ===> Region+Phi(BotIdx) between > the card mark memory barrier and the trailing cpu order/acquire > barriers. The new IfNode models a test of the boolean result returned by > the CAS operation. It gets wrapped up in the barrier flow because of the > different order of generation of the GC post barrier and the (SCMemProj) > memory projection from the CAS itself. > > Previously the CAS and SCMemProj simply fed direct into the trailing > barriers so there was no need to check the GC post-barrier subgraph. Now > a CASObject looks more like a volatile StoreN/P, both of them feeding > their memory flow into the card mark membar. In consequence, testing for > a CASObject now requires two stages as per a StoreN/P -- matching the > pair of memory subgraphs from leading to card mark membar and from card > mark to trailing membar. 
> > The predicates which traverse the memory flow between leading and > trailing/card mark membars have been updated so they now look for the > same pattern in either case, modulo the presence of a Store or a > CAS+SCMemProj pair. In the case where a card mark member is present this > new test is now combined with a check on the trailing graph in the CAS > case, just as with a StoreN/P. > > The predicates which traverse the memory flow between card mark and > trailing membars have been updated to search through (potentially) one > more Phi(BotIdx) link. So, the maximum number of Phis to indirect trough > has been increased to 4 for G1GC and 1 for ConcMarkSweep+UseCondCardMark. > > Testing: > > This has been tested by running tier1 tests. All the tests which were > previously failing are now working. There were 5 failures due to > timeouts which do not appear to relate to this issue. > > The code has also been tested by printing generated code for some simple > usages of volatile load/store and CAS, checking the generated code by > eyeball. This was repeated for SerialGC, ParallelGC, > ConcMarkSweep-UseCondCardMark, ConcMarkSweep+UseCondCardMark and G1GC. > > It would be good to also run further tests (esp jcstress). However, > since this breakage is stopping testing of other changes and since it > seems to remedy the problem in common cases at least I suggest > committing this fix now and running these tests afterwards. > > Additional test specific to this issue?: > > There are currently no proper tests for this code as it was difficult to > know how to identify what code gets generated. Detection of failures is > left to asserts in the predicates. That approach has worked in the > present case but only after a breaking change has been pushed and only > by detecting one of several breaking changes. So the current regime is > rather fragile. It would be good to have some jtreg tests to help detect > breakages like this more quickly and effectively. > > A slowdebug build with an available hsdis-aarch64.so could be used to > print out generated assembly. Usefully, in debug mode the C2 back end > will print out block comments indicating both where membars have been > placed and where they have been elided. If a jtreg test was run in an > otherjvm with CompileCommand options to compile and print target methods > then the output could be scanned to look for the relevant membar/membar > elided comments and for presence or absence of ldar/stlr and ldaxr/stlxr > instructions. This would be enough to test that basic transformations > are sound. > > Is it possible to rely on an hsdis library being available in jtreg > tests? Alternatively, is it possible to make execution of the test > conditional on the library being present? > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 
03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander > From stefan.karlsson at oracle.com Thu Jun 21 18:39:28 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Jun 2018 20:39:28 +0200 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: <70EF48AF-676A-4E0A-9A6B-501BC4257A54@oracle.com> References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> <70EF48AF-676A-4E0A-9A6B-501BC4257A54@oracle.com> Message-ID: On 2018-06-21 18:18, Kim Barrett wrote: >> On Jun 21, 2018, at 5:44 AM, Stefan Karlsson wrote: >> >> Hi Kim, >> >> Thanks for reviewing this! >> >> Updated webrevs: >> http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta >> http://cr.openjdk.java.net/~stefank/8204540/webrev.02 > Looks good, other than a few tiny nits. I don't need another webrev. Thanks. Here's one anyway: ?http://cr.openjdk.java.net/~stefank/8204540/webrev.03.delta ?http://cr.openjdk.java.net/~stefank/8204540/webrev.03 Inlined: > > ------------------------------------------------------------------------------ > src/hotspot/share/memory/iterator.hpp > > Devirtualizer and OopIteratorClosureDispatch should be AllStatic. > Sorry I missed this in the first round. Done. > > ------------------------------------------------------------------------------ > 50 const int KLASS_ID_COUNT = 6; > > KLASS_ID_COUNT should be an unsigned type, like uint. OK. > > ------------------------------------------------------------------------------ > rc/hotspot/share/oops/klass.hpp > 47 ObjArrayKlassID, > > Trailing comma for last enumerator in KlassId is a C99/C++11 feature > and not valid in C++98, though some compilers may allow it in some > modes. Right. This was unintentional left-overs from some intermediate code. Removed. > > ------------------------------------------------------------------------------ > src/hotspot/share/oops/typeArrayKlass.inline.hpp > > I think the key point here is that these klasses are guaranteed to be > processed via the null class loader. That klasses don't move is kind > of obvious, since they are not Java objects themselves. Right. > How about > something like: > > Performance tweak: We skip processing the klass pointer since all > TypeArrayKlasses are guaranteed processed via the null class loader. Copy-n-pasted your comment. > > [And now I wonder if this remains true with Value Types.] > > ------------------------------------------------------------------------------ > > A couple more inline comments below: > >>> Any performance comparisons? My guess would be perhaps a small >>> improvement, as we might get some inlining we weren't previously >>> getting, in some (supposedly) not performance critical places. >> I did runs on our internal perf system, and the scores were the same. I looked at pause times for some of the runs and couldn't find a difference. I'll run more runs over the weekend. > I was hoping that merging the UseCompressedOops dispatch into the > other dispatches might provide some measurable benefit. Oh well, the > code improvement is well worth the change, even if it?s performance > neutral. > >> ----------------------------------------------------------------------------- >>> src/hotspot/share/gc/shared/genOopClosures.inline.hpp >>> I think it might be better to put the trivial forwarding do_oop >>> definitions that have been moved to here instead directly into the >>> class declarations. 
I think doing so gives better / earlier error >>> messages when forgetting to include the associated .inline.hpp file by >>> callers. >> I tried your proposal. It has the unfortunate effect that whenever you include genOopClosures.hpp you get a compile error, even when the functions are not used. >> >> I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. I'm not sure this should be done for this RFE, though? > I?m fine with deferring. We can discuss offline. I?m curious about the compile error. With: diff --git a/src/hotspot/share/gc/shared/aaaa.cpp b/src/hotspot/share/gc/shared/aaaa.cpp new file mode 100644 --- /dev/null +++ b/src/hotspot/share/gc/shared/aaaa.cpp @@ -0,0 +1,2 @@ +#include "precompiled.hpp" +#include "genOopClosures.hpp" diff --git a/src/hotspot/share/gc/shared/genOopClosures.hpp b/src/hotspot/share/gc/shared/genOopClosures.hpp --- a/src/hotspot/share/gc/shared/genOopClosures.hpp +++ b/src/hotspot/share/gc/shared/genOopClosures.hpp @@ -118,8 +118,8 @@ ?? template inline void do_oop_work(T* p); ? public: ?? ScanClosure(DefNewGeneration* g, bool gc_barrier); -? virtual void do_oop(oop* p); -? virtual void do_oop(narrowOop* p); +? virtual void do_oop(oop* p) { do_oop_work(p); } +? virtual void do_oop(narrowOop* p) { do_oop_work(p); } ?}; ?// Closure for scanning DefNewGeneration. diff --git a/src/hotspot/share/gc/shared/genOopClosures.inline.hpp b/src/hotspot/share/gc/shared/genOopClosures.inline.hpp --- a/src/hotspot/share/gc/shared/genOopClosures.inline.hpp +++ b/src/hotspot/share/gc/shared/genOopClosures.inline.hpp @@ -108,8 +108,10 @@ ?? } ?} +/* ?inline void ScanClosure::do_oop(oop* p)?????? { ScanClosure::do_oop_work(p); } ?inline void ScanClosure::do_oop(narrowOop* p) { ScanClosure::do_oop_work(p); } +*/ ?// NOTE! Any changes made here should also be made ?// in ScanClosure::do_oop_work() and without PCH, I get: $ bash /home/stefank/hg/jdk/jdk/build/slowdebug/hotspot/variant-server/libjvm/objs/aaa.o.cmdline In file included from /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/z/aaa.cpp:2:0: /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/shared/genOopClosures.hpp:118:34: error: inline function 'void ScanClosure::do_oop_work(T*) [with T = oopDesc*]' used but never defined [-Werror] ?? template inline void do_oop_work(T* p); ????????????????????????????????? ^~~~~~~~~~~ /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/shared/genOopClosures.hpp:118:34: error: inline function 'void ScanClosure::do_oop_work(T*) [with T = unsigned int]' used but never defined [-Werror] cc1plus: all warnings being treated as errors StefanK > >>> I'm surprised it can be const, because of the no-arg constructor. >> A dummy value is set in the non-arg Klass constructor. Klasses generated with this constructor is only used in CDS to copy the vtables. > Thanks for the explanation. > From stefan.karlsson at oracle.com Thu Jun 21 19:46:15 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Jun 2018 21:46:15 +0200 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> Message-ID: <7d20f3c7-1a3e-d271-1f25-68d7ec5c51b2@oracle.com> On 2018-06-21 18:33, Erik Osterlund wrote: > Hi Stefan, > > Looks amazing. I have wanted this to happen in one way or another for years. Thank you for doing this. Thanks for the review! 
And thanks to both you and Kim for teaching me some of the techniques used to implement this. StefanK > > Thanks, > /Erik > >> On 21 Jun 2018, at 11:44, Stefan Karlsson wrote: >> >> Hi Kim, >> >> Thanks for reviewing this! >> >> Updated webrevs: >> http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta >> http://cr.openjdk.java.net/~stefank/8204540/webrev.02 >> >> Comments below: >> >> On 2018-06-21 03:52, Kim Barrett wrote: >>>> On Jun 20, 2018, at 6:50 AM, Stefan Karlsson wrote: >>>> >>>> Hi all, >>>> >>>> Please review this patch to get rid of the macro based oop_iterate devirtualization layer, and replace it with a new implementation based on templates that automatically determines when the closure function calls can be devirtualized. >>>> >>>> http://cr.openjdk.java.net/~stefank/8204540/webrev.01/ >>>> https://bugs.openjdk.java.net/browse/JDK-8204540 >>> Generally looks good, just a few minor comments and nits. >>> ------------------------------------------------------------------------------ >>> Any libjvm.so size comparison? We ought to get some benefit from not >>> generating code we don't use. OTOH, it's possible this change might >>> result in more inlining than previously. >> Before patch: 22833008 bytes >> After patch: 22790160 bytes >> >>> Any performance comparisons? My guess would be perhaps a small >>> improvement, as we might get some inlining we weren't previously >>> getting, in some (supposedly) not performance critical places. >> I did runs on our internal perf system, and the scores were the same. I looked at pause times for some of the runs and couldn't find a difference. I'll run more runs over the weekend. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp >>> 60 inline void cls::do_oop(oop* p) { cls::do_oop_work(p); } \ >>> 61 inline void cls::do_oop(narrowOop* p) { cls::do_oop_work(p); } >>> [pre-existing] >>> I think the "cls::" qualifiers in the body should be unnecessary. >> Removed. >> >>> ------------------------------------------------------------------------------ >>> 58 #define DO_OOP_WORK_NV_IMPL(cls) \ >>> This is no longer defining "_nv" functions, so the "_NV" in the name >>> seems odd. >> OK. I merged DO_OOP_WORK_NV_IMPL and DO_OOP_WORK_IMPL. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp >>> 3321 MetadataVisitingOopIterateClosure(collector->ref_processor()), >>> Indentation messed up. >> Fixed. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/memory/iterator.hpp >>> 370 class OopClosureDispatch { >>> OopIterateClosureDispatch? >> Yes. Fixed. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/gc/shared/genOopClosures.inline.hpp >>> I think it might be better to put the trivial forwarding do_oop >>> definitions that have been moved to here instead directly into the >>> class declarations. I think doing so gives better / earlier error >>> messages when forgetting to include the associated .inline.hpp file by >>> callers. >> I tried your proposal. It has the unfortunate effect that whenever you include genOopClosures.hpp you get a compile error, even when the functions are not used. >> >> I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. 
I'm not sure this should be done for this RFE, though? >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/oops/instanceMirrorKlass.hpp >>> 111 public: >>> Unnecessary, we're already in public section. >> Removed. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/memory/iterator.inline.hpp >>> 231 static const int NUM_KLASSES = 6; >>> The value 6 is derived from the number of entries in enum KlassId, >>> which is far away. How about defining KLASS_ID_COUNT with that enum? >>> It might be that the enum and that constant need to be somewhere other >>> than in klass.hpp though, to avoid include circularities. But see >>> next comment too. >> I was thinking about that as well. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/memory/iterator.inline.hpp >>> 106 const int _id; >>> Maybe this should be of type KlassId? And similarly the accessor. >>> Similarly the static const ID members in the various klass types. >> >> Done. >> >> >>> I'm surprised it can be const, because of the no-arg constructor. >> A dummy value is set in the non-arg Klass constructor. Klasses generated with this constructor is only used in CDS to copy the vtables. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/oops/typeArrayKlass.inline.hpp >>> 38 // Performance tweak: We skip iterating over the klass pointer since we >>> 39 // know that Universe::TypeArrayKlass never moves. >>> [pre-existing] >>> The wording of this comment seems like it might be left-over from >>> permgen, and ought to be re-worded. >> OK. Updated the text. >> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/memory/iterator.inline.hpp >>> 77 // - If &OopClosureType::do_oop is resolved to &Base::do_oop, then there are no >>> 78 // implementation of do_oop between Base and OopClosureType. However, there >>> Either "is no implementation of" or "are no implementations of" >> Updated. >> >> Thanks, >> StefanK >> >>> ------------------------------------------------------------------------------ From harold.seigel at oracle.com Thu Jun 21 20:07:31 2018 From: harold.seigel at oracle.com (Harold David Seigel) Date: Thu, 21 Jun 2018 16:07:31 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> Message-ID: <579e0a23-d406-3404-c7b8-66894501be56@oracle.com> Hi Lois, The change looks good.? I just have a couple of comments about the tests. It looks like there are characters in the c1.jasm and c2.jasm files. Also, in ExpQualToM1PrivateMethodIAE.java, I think you want "||" not "&&" operators here: 108 if (!message.contains("IllegalAccessError") && 109 !message.contains("tried to access method p2.c2.method2()V from class p1.c1 (p2.c2 is in module m2x of loader myloaders.MySameClassLoader @") && 110 !message.contains("; p1.c1 is in module m1x of loader myloaders.MySameClassLoader @")) { I don't need a new webrev. Thanks, Harold On 6/20/2018 8:34 PM, Lois Foltan wrote: > Please review this change to introduce a new utility method > Klass::class_in_module_of_loader() to uniformly provide a way to add a > class' module name and class loader's name_and_id to error messages > and potentially logging. 
> > The primary focus of this change was to remove the former method > Klass::class_loader_and_module_name() and change any error messages > currently using that functionality since it followed the > StackTraceElement > (https://docs.oracle.com/javase/9/docs/api/java/lang/StackTraceElement.html#toString--) > format which is intended for stack traces not for use within error > messages.? This change also includes a change to one > IllegalAccessError message to demonstrate how an IAE would be > formatted with the additional module and class loader information. > This may conflict with the current review of JDK-8199940: Print more > information about class loaders in IllegalAccessErrors. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ > bug link at https://bugs.openjdk.java.net/browse/JDK-8169559 > > JDK-8166633 outlines a new proposal where error messages follow a > format of ERRROR: PROBLEM (REASON) where the PROBLEM is aggressively > simple (and definitely avoids arbitrary-length loader names) so the > REASON bears all the cost of explaining the PROBLEM with more > specifics.? See the proposal in more detail at > https://bugs.openjdk.java.net/browse/JDK-8166633. The new utility > method Klass::class_in_module_of_loader() implements the proposed format. > > Testing: hs-tier(1-2), jdk-tier(1-2) complete > ?????????????? hs-tier(3-5) in progress > ?????????????? JCK vm, lang in progress > > Thanks, > Lois > > > > > From lois.foltan at oracle.com Thu Jun 21 20:16:08 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 21 Jun 2018 16:16:08 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <579e0a23-d406-3404-c7b8-66894501be56@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <579e0a23-d406-3404-c7b8-66894501be56@oracle.com> Message-ID: On 6/21/2018 4:07 PM, Harold David Seigel wrote: > Hi Lois, > > The change looks good.? I just have a couple of comments about the tests. Thanks Harold for the review! > > It looks like there are characters in the c1.jasm and c2.jasm > files. Will fix. > > Also, in ExpQualToM1PrivateMethodIAE.java, I think you want "||" not > "&&" operators here: > > ???? 108???????????? if (!message.contains("IllegalAccessError") && > ???? 109???????????????? !message.contains("tried to access method > p2.c2.method2()V from class p1.c1 (p2.c2 is in module m2x of loader > myloaders.MySameClassLoader @") && > ???? 110???????????????? !message.contains("; p1.c1 is in module m1x > of loader myloaders.MySameClassLoader @")) { Good catch!? Will correct that. Thanks, Lois > > I don't need a new webrev. > > Thanks, Harold > > > On 6/20/2018 8:34 PM, Lois Foltan wrote: >> Please review this change to introduce a new utility method >> Klass::class_in_module_of_loader() to uniformly provide a way to add >> a class' module name and class loader's name_and_id to error messages >> and potentially logging. >> >> The primary focus of this change was to remove the former method >> Klass::class_loader_and_module_name() and change any error messages >> currently using that functionality since it followed the >> StackTraceElement >> (https://docs.oracle.com/javase/9/docs/api/java/lang/StackTraceElement.html#toString--) >> format which is intended for stack traces not for use within error >> messages.? This change also includes a change to one >> IllegalAccessError message to demonstrate how an IAE would be >> formatted with the additional module and class loader information. 
>> This may conflict with the current review of JDK-8199940: Print more >> information about class loaders in IllegalAccessErrors. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ >> bug link at https://bugs.openjdk.java.net/browse/JDK-8169559 >> >> JDK-8166633 outlines a new proposal where error messages follow a >> format of ERRROR: PROBLEM (REASON) where the PROBLEM is aggressively >> simple (and definitely avoids arbitrary-length loader names) so the >> REASON bears all the cost of explaining the PROBLEM with more >> specifics.? See the proposal in more detail at >> https://bugs.openjdk.java.net/browse/JDK-8166633. The new utility >> method Klass::class_in_module_of_loader() implements the proposed >> format. >> >> Testing: hs-tier(1-2), jdk-tier(1-2) complete >> ?????????????? hs-tier(3-5) in progress >> ?????????????? JCK vm, lang in progress >> >> Thanks, >> Lois >> >> >> >> >> > From kim.barrett at oracle.com Thu Jun 21 20:22:48 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 Jun 2018 16:22:48 -0400 Subject: RFR: 8204540: Automatic oop closure devirtualization In-Reply-To: References: <9609014c-4523-e944-05e1-80386aa7088a@oracle.com> <321F445D-4D92-47EB-9B81-61C8ED0BFC8B@oracle.com> <70EF48AF-676A-4E0A-9A6B-501BC4257A54@oracle.com> Message-ID: <184A647A-1862-454B-8621-B6A8047D3D3E@oracle.com> > On Jun 21, 2018, at 2:39 PM, Stefan Karlsson wrote: > > On 2018-06-21 18:18, Kim Barrett wrote: >>> On Jun 21, 2018, at 5:44 AM, Stefan Karlsson wrote: >>> >>> Hi Kim, >>> >>> Thanks for reviewing this! >>> >>> Updated webrevs: >>> http://cr.openjdk.java.net/~stefank/8204540/webrev.02.delta >>> http://cr.openjdk.java.net/~stefank/8204540/webrev.02 >> Looks good, other than a few tiny nits. I don't need another webrev. > > Thanks. Here's one anyway: > http://cr.openjdk.java.net/~stefank/8204540/webrev.03.delta > http://cr.openjdk.java.net/~stefank/8204540/webrev.03 Looks good. >>> I think we can get what you are looking for by changing 'virtual void do_oop(oop* p)' to 'inline virtual void do_oop(oop* p)'. I'm not sure this should be done for this RFE, though? >> I?m fine with deferring. We can discuss offline. I?m curious about the compile error. > With: > diff --git a/src/hotspot/share/gc/shared/aaaa.cpp b/src/hotspot/share/gc/shared/aaaa.cpp > new file mode 100644 > --- /dev/null > +++ b/src/hotspot/share/gc/shared/aaaa.cpp > [?] > $ bash /home/stefank/hg/jdk/jdk/build/slowdebug/hotspot/variant-server/libjvm/objs/aaa.o.cmdline > In file included from /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/z/aaa.cpp:2:0: > /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/shared/genOopClosures.hpp:118:34: error: inline function 'void ScanClosure::do_oop_work(T*) [with T = oopDesc*]' used but never defined [-Werror] > template inline void do_oop_work(T* p); > ^~~~~~~~~~~ > /home/stefank/hg/jdk/jdk/open/src/hotspot/share/gc/shared/genOopClosures.hpp:118:34: error: inline function 'void ScanClosure::do_oop_work(T*) [with T = unsigned int]' used but never defined [-Werror] > cc1plus: all warnings being treated as errors Drat! I might look at this later, but this seems like it makes what I was suggesting just not work. 
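For readers following along, the devirtualization trick this thread revolves around can be sketched in isolation roughly as follows; the names are hypothetical and this is heavily simplified compared to the real iterator.inline.hpp dispatch code:

  class oopDesc;
  typedef oopDesc* oop;

  class OopClosureBase {
   public:
    virtual void do_oop(oop* p) = 0;
  };

  // If &OopClosureType::do_oop still resolves to &OopClosureBase::do_oop, the
  // closure type provides no override of its own, so only a virtual call is
  // safe (a further subclass might override it). Otherwise the call can be
  // statically bound to OopClosureType::do_oop and inlined. The real code
  // documents exactly this "resolves to &Base::do_oop" check; the C++ standard
  // technically leaves comparison of pointers to virtual members unspecified,
  // so this is a sketch of the idea, not a portability guarantee.
  template <typename OopClosureType>
  inline void dispatch_do_oop(OopClosureType* cl, oop* p) {
    if (&OopClosureType::do_oop == &OopClosureBase::do_oop) {
      cl->do_oop(p);                    // virtual dispatch
    } else {
      cl->OopClosureType::do_oop(p);    // devirtualized, inlinable
    }
  }

  // Example: this closure declares its own do_oop, so dispatch_do_oop binds it
  // statically and the increment can be inlined at the iteration site.
  class CountingClosure : public OopClosureBase {
   public:
    int count;
    CountingClosure() : count(0) {}
    virtual void do_oop(oop* p) { count++; }
  };
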
From mandy.chung at oracle.com Thu Jun 21 20:26:32 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 21 Jun 2018 13:26:32 -0700 Subject: RFR 8195650 Method references to VarHandle accessors In-Reply-To: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> References: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> Message-ID: This looks good to me AFAICT. Mandy On 6/19/18 5:08 PM, Paul Sandoz wrote: > Hi, > > Please review the following fix to ensure method references to > VarHandle signature polymorphic methods are supported at runtime > (specifically the method handle to a signature polymorphic method can > be loaded from the constant pool): > > http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195650-varhandle-mref/webrev/ > > I also added a ?belts and braces? test to ensure a constant method > handle to MethodHandle::invokeBasic cannot be loaded if outside of > the j.l.invoke package. > > Paul. > From mandy.chung at oracle.com Thu Jun 21 20:37:00 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 21 Jun 2018 13:37:00 -0700 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> Message-ID: <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> Hi Lois, On 6/20/18 5:34 PM, Lois Foltan wrote: > Please review this change to introduce a new utility method > Klass::class_in_module_of_loader() to uniformly provide a way to add a > class' module name and class loader's name_and_id to error messages and > potentially logging. > : > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ Thanks for doing this. src/hotspot/share/classfile/moduleEntry.hpp 'M' is meant to be lower case in "unnamed Module", right? src/hotspot/share/interpreter/linkResolver.cpp + "tried to access method %s.%s%s from class %s (%s%s%s)", Since you are on this file, it may read better if rephrased to: "class %s tried to access method %s.%s%s (%s%s%s)", Thanks Mandy From mikael.vidstedt at oracle.com Fri Jun 22 00:17:38 2018 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Thu, 21 Jun 2018 17:17:38 -0700 Subject: RFR (XS): Obsolete support for commercial features In-Reply-To: <69d4ee9e-8819-c6ed-1f4e-1d629dae98b7@redhat.com> References: <5190D144-D836-45B5-AE69-6DCAF2911064@oracle.com> <69d4ee9e-8819-c6ed-1f4e-1d629dae98b7@redhat.com> Message-ID: <20B15E20-3BF2-4655-AF0D-89BBEBA55F2A@oracle.com> > On Jun 21, 2018, at 7:24 AM, Andrew Haley wrote: > > On 06/20/2018 06:46 PM, Mikael Vidstedt wrote: >> Please review the following change which obsoletes/removes the support for commercial features - both the concept of commercial VM flags, and also some other references to the concept of commercial features. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8202331 >> Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8202331/webrev.00/open/webrev/ > > I very much approve. :-) I knew you?d be hard to convince. :) Cheers, Mikael From aph at redhat.com Fri Jun 22 07:45:03 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 22 Jun 2018 08:45:03 +0100 Subject: RFR(S): JDK-8191339: [JVMCI] BigInteger compiler intrinsics on Graal. 
In-Reply-To: <28011331-bd43-2c32-dba4-e41879ffe28a@oracle.com> References: <28011331-bd43-2c32-dba4-e41879ffe28a@oracle.com> Message-ID: <2c6a8eec-3253-8790-0d49-544842baf994@redhat.com> On 06/21/2018 04:26 PM, Patric Hedlin wrote: > I would like to ask for help to review the following change/update: Can you tell me what tests to use for this functionality? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rwestrel at redhat.com Fri Jun 22 07:54:24 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Fri, 22 Jun 2018 09:54:24 +0200 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: > webrev: http://cr.openjdk.java.net/~adinn/8204331/webrev.00/ That looks good to me. Roland. From aph at redhat.com Fri Jun 22 08:09:03 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 22 Jun 2018 09:09:03 +0100 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: On 06/21/2018 03:20 PM, Andrew Dinn wrote: > There are currently no proper tests for this code as it was difficult to > know how to identify what code gets generated. Detection of failures is > left to asserts in the predicates. That approach has worked in the > present case but only after a breaking change has been pushed and only > by detecting one of several breaking changes. So the current regime is > rather fragile. It would be good to have some jtreg tests to help detect > breakages like this more quickly and effectively. > > A slowdebug build with an available hsdis-aarch64.so could be used to > print out generated assembly. Usefully, in debug mode the C2 back end > will print out block comments indicating both where membars have been > placed and where they have been elided. If a jtreg test was run in an > otherjvm with CompileCommand options to compile and print target methods > then the output could be scanned to look for the relevant membar/membar > elided comments and for presence or absence of ldar/stlr and ldaxr/stlxr > instructions. This would be enough to test that basic transformations > are sound. > > Is it possible to rely on an hsdis library being available in jtreg > tests? Alternatively, is it possible to make execution of the test > conditional on the library being present? Here's an idea: we could scan the generated code. We could also annotate the assembler so that it asserted that all ldar/stlr were generated. We could certainly do that even in a product build. If your tests force C2 compilation and disable inlining we can make the tests repeatable. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rwestrel at redhat.com Fri Jun 22 08:26:03 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Fri, 22 Jun 2018 10:26:03 +0200 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: > Is it possible to rely on an hsdis library being available in jtreg > tests? Alternatively, is it possible to make execution of the test > conditional on the library being present? What about -XX:+PrintOptoAssembly? It doesn't need hsdis. Roland. 
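As a concrete illustration of the approach proposed just above (not code from any webrev in this thread): a minimal, self-contained driver that relaunches the VM with -XX:+PrintOptoAssembly and scans the child's output. It assumes an AArch64 debug build, since PrintOptoAssembly is only available there, and the class name, warm-up count and the "ldar"/"stlr" match strings are illustrative guesses that would need checking against real output.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.List;

public class VolatileCodeGenCheck {

    // Target method: on AArch64, C2 is expected to compile the volatile store
    // as stlr(w) and the volatile load as ldar(w), with the surrounding MemBar
    // nodes elided (expectation taken from this thread, not verified here).
    static volatile int field;

    static int readWrite(int v) {
        field = v;        // expected: releasing store, no trailing dmb
        return field;     // expected: acquiring load, no leading dmb
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 1 && args[0].equals("run")) {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) {   // warm up so C2 compiles readWrite
                sum += readWrite(i);
            }
            System.out.println("sum=" + sum);
            return;
        }
        // Driver: relaunch this class in a child VM and scan its stdout/stderr.
        String java = System.getProperty("java.home") + "/bin/java";
        List<String> cmd = List.of(
            java,
            "-XX:-TieredCompilation",            // make sure C2 compiles the target
            "-XX:+PrintOptoAssembly",            // debug build only
            "-XX:CompileCommand=dontinline,VolatileCodeGenCheck::readWrite",
            "-cp", System.getProperty("java.class.path"),
            "VolatileCodeGenCheck", "run");
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        boolean sawLdar = false, sawStlr = false;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains("ldar")) sawLdar = true;   // match strings are guesses
                if (line.contains("stlr")) sawStlr = true;
            }
        }
        p.waitFor();
        if (!sawLdar || !sawStlr) {
            throw new RuntimeException("expected ldar/stlr in -XX:+PrintOptoAssembly output");
        }
    }
}

In a real jtreg test the relaunch and scanning would presumably go through jdk.test.lib's ProcessTools/OutputAnalyzer instead of a raw ProcessBuilder, and the test would be restricted to AArch64 with an @requires clause.
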
From adinn at redhat.com Fri Jun 22 08:30:16 2018 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 22 Jun 2018 09:30:16 +0100 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: <75b946c0-44ee-109e-471d-32ba2cdf142e@redhat.com> References: <75b946c0-44ee-109e-471d-32ba2cdf142e@redhat.com> Message-ID: <26eaabda-4b45-d9fa-004d-f23738995799@redhat.com> Hi Roman, On 21/06/18 18:49, Roman Kennke wrote: > thanks so much for fixing this! > > Also, very nice ASCII drawings and explanations! :-) > > As far as I can tell, the patch is good. Ok, thanks for the review. > I'm wondering why it's worth to go through that effort to verify the > correct shape of CAS and similar nodes, and keep maintaining this code > in light of changes (Shenandoah will be the next headache to fix this). > Do the instruction only match those particular shapes, and if it > changes, it'd throw off the matcher badly? I would very much like to find a better, less fragile solution. The problem is finding another way to identify whether or not a Store/Load/CAS needs to translate to acquiring/releasing instructions and (at the same time) whether memory barrier operations in the graph need, respectively, to be elided or generated. The specific, complex graph shapes the predicates test for are indeed unique to volatile operations and so the presence of these shapes enables the former strategy to be adopted. If nodes are found not to be embedded in such a subgraph then they need to be translated via the latter strategy. Perhaps the relevant nodes could be clearly labelled at create time as belonging to a volatile load/store/CAS operation. At present there is no such info in a MemBar node and the memory order property on a load/store node can indicate that it is an ordered operation even when it has not been planted because of a volatile load/store/CAS. The danger here would be losing this info if any nodes were merged or elided by compiler phases although I am not sure that would ever happen. For Graal, I managed to finesse this problem by adding extra graph links from leading membar to Load/Store/CAS to trailing membar. That was easier to do than it is for C2 because in Graal you can annotate links with type tags, indicating that they have a specific semantic. Choosing a special link type makes it possible to avoid breaking the many algorithms which analyze the graph linkage. I don't really know how to achieve that in C2. Roland or Vladimir (take your pick of them ;-) should be in a much better position to address the merits of these two alternatives than I am. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From rwestrel at redhat.com Fri Jun 22 08:44:45 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Fri, 22 Jun 2018 10:44:45 +0200 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: <26eaabda-4b45-d9fa-004d-f23738995799@redhat.com> References: <75b946c0-44ee-109e-471d-32ba2cdf142e@redhat.com> <26eaabda-4b45-d9fa-004d-f23738995799@redhat.com> Message-ID: > Roland or Vladimir (take your pick of them ;-) should be in a much > better position to address the merits of these two alternatives than I am. Let me take a look. In any case, for JDK 11, I think we should move forward with your fix. Roland. 
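To make the graph shapes under discussion more concrete, here is a hedged sketch of the kind of Java target such a test could force-compile to produce a volatile CAS for the backend to match. The class and field names are invented, and the comments about CompareAndSwapI and the expected ldaxr/stlxr sequence restate the thread above rather than verified output.

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class CasShapeTarget {

    private static final VarHandle COUNT;
    static {
        try {
            COUNT = MethodHandles.lookup()
                    .findVarHandle(CasShapeTarget.class, "count", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private volatile int count;

    boolean bump(int expected) {
        // C2 intrinsifies this into a CompareAndSwapI node; on AArch64 the .ad
        // predicates discussed above decide whether it becomes a bare
        // ldaxr/stlxr loop or a dmb-fenced one (expectation from the thread,
        // not verified here).
        return COUNT.compareAndSet(this, expected, expected + 1);
    }

    public static void main(String[] args) {
        CasShapeTarget t = new CasShapeTarget();
        int hits = 0;
        for (int i = 0; i < 200_000; i++) {   // enough iterations for C2 to compile bump()
            if (t.bump(t.count)) {
                hits++;
            }
        }
        System.out.println("hits=" + hits);
    }
}

Driven by a harness like the one sketched earlier in this thread, the PrintOptoAssembly output for bump() could then be checked for ldaxr/stlxr and for the "elided" membar comments rather than for dmb instructions.
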
From adinn at redhat.com Fri Jun 22 09:23:24 2018 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 22 Jun 2018 10:23:24 +0100 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: <3e26f4e2-9afa-23b6-78e7-65a830de0cc7@redhat.com> On 22/06/18 09:26, Roland Westrelin wrote: >> Is it possible to rely on an hsdis library being available in jtreg >> tests? Alternatively, is it possible to make execution of the test >> conditional on the library being present? > > What about -XX:+PrintOptoAssembly? It doesn't need hsdis. Ah yes, I forgot that was not dependent on hsdis. This would allow us to generate assembler output with comments showing the presence or absence of membars and the appropriate sequence of ld(x)r/st(x)r or lda(x)r or stl(x)r instructions. The test driver could set things up to compile and print a suitable test method in an othervm and check the othervm output to ensure that the membar comments and generated instructions match up. I'll work on implementing a jtreg test based on that but: I'd liek to do that as a follow up to this fix. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Fri Jun 22 09:24:50 2018 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 22 Jun 2018 10:24:50 +0100 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: <9758cf9b-6388-e73c-532a-4a7cdbf72ff7@redhat.com> On 22/06/18 09:09, Andrew Haley wrote: > Here's an idea: we could scan the generated code. We could also annotate > the assembler so that it asserted that all ldar/stlr were generated. We > could certainly do that even in a product build. If your tests force C2 > compilation and disable inlining we can make the tests repeatable. When you say "we could scan the generated code" the obvious question is: "How?". As in how to do that from a jtreg test? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Fri Jun 22 10:52:41 2018 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 22 Jun 2018 11:52:41 +0100 Subject: [aarch64-port-dev ] RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: References: Message-ID: <01d1e054-7ad8-ab0c-905b-eddfdccfced1@redhat.com> On 22/06/18 08:54, Roland Westrelin wrote: > >> webrev: http://cr.openjdk.java.net/~adinn/8204331/webrev.00/ > > That looks good to me. Ok, I pushed the patch with Roman and Roland as reviewers. Roman, enjoy tweaking this to work for Shenandoah :-) regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From poonam.bajaj at oracle.com Fri Jun 22 13:07:50 2018 From: poonam.bajaj at oracle.com (Poonam Parhar) Date: Fri, 22 Jun 2018 06:07:50 -0700 Subject: RFR (8u): JDK-8146115: Improve docker container detection and resource configuration usage In-Reply-To: <456bd0f8-c89b-1cd0-8369-9ecf0ac3b0e4@oracle.com> References: <3968d009-c1a2-87e7-bc22-70c348ee5b69@oracle.com> <69915730-5963-41AE-AB75-B0C223E39035@oracle.com> <456bd0f8-c89b-1cd0-8369-9ecf0ac3b0e4@oracle.com> Message-ID: <0889f79a-ad65-1796-053f-dd8e34cf0267@oracle.com> Hello, Could I get one more review for the docker backport changes, please! The latest webrev including the suggestions from Bob's review is here: http://cr.openjdk.java.net/~poonam/8146115/webrev.02/ Thanks, Poonam On 5/18/18 6:30 AM, Poonam Parhar wrote: > Hello Bob, > > Thanks a lot for reviewing the changes! > > On 5/17/18 11:12 AM, Bob Vandette wrote: >> The backport of my changes look pretty good. >> >> If the new PrintContainerInfo option is only referenced on Linux >> platforms, you might >> want to move it to globals_linux.hpp. > Yes, currently it is being used only on Linux platforms, but I think > it is a general option and might be used on other platforms at a later > date.? So I think we can leave it in globals.hpp. >> >> Is there a reason PrintActiveCpus is a diagnostic flag but >> PrintContainerInfo is not? > Yes, PrintContainerInfo should also be a diagnostic option. I have > changed it. > Updated webrev: http://cr.openjdk.java.net/~poonam/8146115/webrev.01/ > >> >> Is it acceptable to add these new VM flags in a backport that won?t >> be supported in the latest release? > Since JDK 9 and later releases use Unified JVM logging, and we don't > have that in JDK8, a new option is required in 8 to log the > information which is being logged under 'container' tag in 10 and 11. > We will need to have a CSR request approved for the new JVM options > added as part of this backport. > > Thanks, > Poonam > >> >> Bob. >> >> >>> On May 15, 2018, at 4:46 PM, Poonam Parhar >>> wrote: >>> >>> Hello, >>> >>> Please review the docker container support changes backported to JDK >>> 8u. These changes include the backport of the following enhancement >>> and the follow-on bug fixes done on top of that in jdk 10 and 11. >>> >>> Webrev: http://cr.openjdk.java.net/~poonam/8146115/webrev.00/ >>> >>> Enhancement:JDK-8146115: Improve docker container detection and >>> resource configuration usage >>> >>> >>> The changes also include the fixes for the following two bugs: >>> Bug JDK-8186248: Allow more flexibility in selecting Heap % of >>> available RAM >>> >>> BugJDK-8190283: >>> Default heap sizing options select a MaxHeapSize larger than >>> available physical memory in some cases >>> >>> >>> These changes add a new JVM option 'PrintContainerInfo' for tracing >>> container related information which is specific to jdk 8. >>> >>> Testing results (with -XX:+UnlockDiagnosticVMOptions >>> -XX:+PrintContainerInfo -XX:+PrintActiveCpus JVM options): >>> -------------- >>> poonam at poonam-VirtualBox:~/docker-image$ java TestCPUsMemory >>> Number of Processors: 3 >>> Max Memory: 921174016 >>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run --cpus 1 >>> -m1024m? --rm myimage >>> [sudo] password for poonam: >>> WARNING: Your kernel does not support swap limit capabilities or the >>> cgroup is not mounted. Memory limited without swap. 
>>> OSContainer::init: Initializing Container Support >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: 100000 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> CPU Quota count based on quota/period: 1 >>> OSContainer::active_processor_count: 1 >>> active_processor_count: determined by OSContainer: 1 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: 100000 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> CPU Quota count based on quota/period: 1 >>> OSContainer::active_processor_count: 1 >>> active_processor_count: determined by OSContainer: 1 >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> total container memory: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: 100000 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> CPU Quota count based on quota/period: 1 >>> OSContainer::active_processor_count: 1 >>> active_processor_count: determined by OSContainer: 1 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: 100000 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> CPU Quota count based on quota/period: 1 >>> OSContainer::active_processor_count: 1 >>> active_processor_count: determined by OSContainer: 1 >>> >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: 100000 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> CPU Quota count based on quota/period: 1 >>> OSContainer::active_processor_count: 1 >>> active_processor_count: determined by OSContainer: 1 >>> Number of Processors: 1 >>> Max Memory: 259522560 >>> >>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run >>> --cpu-shares 2048 -m1024m? --rm myimage >>> WARNING: Your kernel does not support swap limit capabilities or the >>> cgroup is not mounted. Memory limited without swap. 
>>> OSContainer::init: Initializing Container Support >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 2048 >>> CPU Share count based on shares: 2 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 2048 >>> CPU Share count based on shares: 2 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> total container memory: 1073741824 >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> total container memory: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 2048 >>> CPU Share count based on shares: 2 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 2048 >>> CPU Share count based on shares: 2 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> >>> active_processor_count: sched_getaffinity processor count: 3 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 2048 >>> CPU Share count based on shares: 2 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> Number of Processors: 2 >>> Max Memory: 259522560 >>> >>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run >>> --cpuset-cpus 0-1 -m1024m? --rm myimage >>> WARNING: Your kernel does not support swap limit capabilities or the >>> cgroup is not mounted. Memory limited without swap. 
>>> OSContainer::init: Initializing Container Support >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 2 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> active_processor_count: sched_getaffinity processor count: 2 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> total container memory: 1073741824 >>> Path to /memory.limit_in_bytes is >>> /sys/fs/cgroup/memory/memory.limit_in_bytes >>> Memory Limit is: 1073741824 >>> total container memory: 1073741824 >>> active_processor_count: sched_getaffinity processor count: 2 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> active_processor_count: sched_getaffinity processor count: 2 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> >>> active_processor_count: sched_getaffinity processor count: 2 >>> Path to /cpu.cfs_quota_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>> CPU Quota is: -1 >>> Path to /cpu.cfs_period_us is >>> /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>> CPU Period is: 100000 >>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>> CPU Shares is: 1024 >>> OSContainer::active_processor_count: 2 >>> active_processor_count: determined by OSContainer: 2 >>> Number of Processors: 2 >>> Max Memory: 259522560 >>> ------------------ >>> >>> Thanks, >>> Poonam >>> > From aph at redhat.com Fri Jun 22 13:40:06 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 22 Jun 2018 14:40:06 +0100 Subject: RFR: 8204331: AArch64: fix CAS not embedded in normal graph error In-Reply-To: <9758cf9b-6388-e73c-532a-4a7cdbf72ff7@redhat.com> References: <9758cf9b-6388-e73c-532a-4a7cdbf72ff7@redhat.com> Message-ID: <1bf1f08f-8793-2c37-7974-04d42474a58b@redhat.com> On 06/22/2018 10:24 AM, Andrew Dinn wrote: > On 22/06/18 09:09, Andrew Haley wrote: >> Here's an idea: we could scan the generated code. We could also annotate >> the assembler so that it asserted that all ldar/stlr were generated. 
We >> could certainly do that even in a product build. If your tests force C2 >> compilation and disable inlining we can make the tests repeatable. > When you say "we could scan the generated code" the obvious question is: > "How?". As in how to do that from a jtreg test? We could add a command-line option to dump the hex. Or even to scan for certain patterns. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Fri Jun 22 14:14:20 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 22 Jun 2018 16:14:20 +0200 Subject: RFR: JDK-8205523: Explicit barriers for interpreter Message-ID: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> Hi all, A number of operations cannot reasonably make use of the Access API but require explicit read- and write-barriers for GCs like Shenandoah that need to ensure to-space consistency. Examples are monitor-enter/-exit and some intrinsics. The change adds APIs to BarrierSetAssembler (x86 and aarch64) to support these kinds of explicit barriers, and the necessary calls in relevant places. The default implementation does nothing. These barriers have been found and tested over several years in Shenandoah. Bug: https://bugs.openjdk.java.net/browse/JDK-8205523 Webrev: http://cr.openjdk.java.net/~rkennke/JDK-8205523/webrev.00/ Testing: hotspot/tier1, will submit into Mach5 after reviews. Can I please get reviews? Thanks, Roman From bob.vandette at oracle.com Fri Jun 22 15:08:59 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Fri, 22 Jun 2018 11:08:59 -0400 Subject: RFR (8u): JDK-8146115: Improve docker container detection and resource configuration usage In-Reply-To: <0889f79a-ad65-1796-053f-dd8e34cf0267@oracle.com> References: <3968d009-c1a2-87e7-bc22-70c348ee5b69@oracle.com> <69915730-5963-41AE-AB75-B0C223E39035@oracle.com> <456bd0f8-c89b-1cd0-8369-9ecf0ac3b0e4@oracle.com> <0889f79a-ad65-1796-053f-dd8e34cf0267@oracle.com> Message-ID: The latest webrev looks fine. Bob. > On Jun 22, 2018, at 9:07 AM, Poonam Parhar wrote: > > Hello, > > Could I get one more review for the docker backport changes, please! > > The latest webrev including the suggestions from Bob's review is here: > http://cr.openjdk.java.net/~poonam/8146115/webrev.02/ > > Thanks, > Poonam > > > On 5/18/18 6:30 AM, Poonam Parhar wrote: >> Hello Bob, >> >> Thanks a lot for reviewing the changes! >> >> On 5/17/18 11:12 AM, Bob Vandette wrote: >>> The backport of my changes look pretty good. >>> >>> If the new PrintContainerInfo option is only referenced on Linux platforms, you might >>> want to move it to globals_linux.hpp. >> Yes, currently it is being used only on Linux platforms, but I think it is a general option and might be used on other platforms at a later date. So I think we can leave it in globals.hpp. >>> >>> Is there a reason PrintActiveCpus is a diagnostic flag but PrintContainerInfo is not? >> Yes, PrintContainerInfo should also be a diagnostic option. I have changed it. >> Updated webrev: http://cr.openjdk.java.net/~poonam/8146115/webrev.01/ >> >>> >>> Is it acceptable to add these new VM flags in a backport that won?t be supported in the latest release? >> Since JDK 9 and later releases use Unified JVM logging, and we don't have that in JDK8, a new option is required in 8 to log the information which is being logged under 'container' tag in 10 and 11. We will need to have a CSR request approved for the new JVM options added as part of this backport. >> >> Thanks, >> Poonam >> >>> >>> Bob. 
>>> >>> >>>> On May 15, 2018, at 4:46 PM, Poonam Parhar wrote: >>>> >>>> Hello, >>>> >>>> Please review the docker container support changes backported to JDK 8u. These changes include the backport of the following enhancement and the follow-on bug fixes done on top of that in jdk 10 and 11. >>>> >>>> Webrev: http://cr.openjdk.java.net/~poonam/8146115/webrev.00/ >>>> >>>> Enhancement:JDK-8146115: Improve docker container detection and resource configuration usage >>>> >>>> The changes also include the fixes for the following two bugs: >>>> Bug JDK-8186248: Allow more flexibility in selecting Heap % of available RAM >>>> BugJDK-8190283: Default heap sizing options select a MaxHeapSize larger than available physical memory in some cases >>>> >>>> These changes add a new JVM option 'PrintContainerInfo' for tracing container related information which is specific to jdk 8. >>>> >>>> Testing results (with -XX:+UnlockDiagnosticVMOptions -XX:+PrintContainerInfo -XX:+PrintActiveCpus JVM options): >>>> -------------- >>>> poonam at poonam-VirtualBox:~/docker-image$ java TestCPUsMemory >>>> Number of Processors: 3 >>>> Max Memory: 921174016 >>>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run --cpus 1 -m1024m --rm myimage >>>> [sudo] password for poonam: >>>> WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. >>>> OSContainer::init: Initializing Container Support >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: 100000 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> CPU Quota count based on quota/period: 1 >>>> OSContainer::active_processor_count: 1 >>>> active_processor_count: determined by OSContainer: 1 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: 100000 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> CPU Quota count based on quota/period: 1 >>>> OSContainer::active_processor_count: 1 >>>> active_processor_count: determined by OSContainer: 1 >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> total container memory: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: 100000 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> CPU Quota count based on quota/period: 1 >>>> OSContainer::active_processor_count: 1 >>>> active_processor_count: determined by OSContainer: 1 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: 100000 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 
>>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> CPU Quota count based on quota/period: 1 >>>> OSContainer::active_processor_count: 1 >>>> active_processor_count: determined by OSContainer: 1 >>>> >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: 100000 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> CPU Quota count based on quota/period: 1 >>>> OSContainer::active_processor_count: 1 >>>> active_processor_count: determined by OSContainer: 1 >>>> Number of Processors: 1 >>>> Max Memory: 259522560 >>>> >>>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run --cpu-shares 2048 -m1024m --rm myimage >>>> WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. >>>> OSContainer::init: Initializing Container Support >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 2048 >>>> CPU Share count based on shares: 2 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 2048 >>>> CPU Share count based on shares: 2 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> total container memory: 1073741824 >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> total container memory: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 2048 >>>> CPU Share count based on shares: 2 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 2048 >>>> CPU Share count based on shares: 2 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by 
OSContainer: 2 >>>> >>>> active_processor_count: sched_getaffinity processor count: 3 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 2048 >>>> CPU Share count based on shares: 2 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> Number of Processors: 2 >>>> Max Memory: 259522560 >>>> >>>> poonam at poonam-VirtualBox:~/docker-image$ sudo docker run --cpuset-cpus 0-1 -m1024m --rm myimage >>>> WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. >>>> OSContainer::init: Initializing Container Support >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 2 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> active_processor_count: sched_getaffinity processor count: 2 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> total container memory: 1073741824 >>>> Path to /memory.limit_in_bytes is /sys/fs/cgroup/memory/memory.limit_in_bytes >>>> Memory Limit is: 1073741824 >>>> total container memory: 1073741824 >>>> active_processor_count: sched_getaffinity processor count: 2 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> active_processor_count: sched_getaffinity processor count: 2 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> >>>> active_processor_count: sched_getaffinity processor count: 2 >>>> Path to /cpu.cfs_quota_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us >>>> CPU Quota is: -1 >>>> Path to /cpu.cfs_period_us is /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us >>>> CPU Period is: 100000 >>>> Path to /cpu.shares is /sys/fs/cgroup/cpu,cpuacct/cpu.shares >>>> CPU Shares is: 1024 >>>> 
OSContainer::active_processor_count: 2 >>>> active_processor_count: determined by OSContainer: 2 >>>> Number of Processors: 2 >>>> Max Memory: 259522560 >>>> ------------------ >>>> >>>> Thanks, >>>> Poonam >>>> >> > From vladimir.kozlov at oracle.com Fri Jun 22 16:04:45 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 22 Jun 2018 09:04:45 -0700 Subject: RFR(S): JDK-8191339: [JVMCI] BigInteger compiler intrinsics on Graal. In-Reply-To: <28011331-bd43-2c32-dba4-e41879ffe28a@oracle.com> References: <28011331-bd43-2c32-dba4-e41879ffe28a@oracle.com> Message-ID: <02f34a26-2a97-6a30-384f-115327781aac@oracle.com> Hi Patric, Do you need Graal changes for this? Or it already has these intrinsics and the only problem is these flags were not set in vm_version_x86.cpp? Small note. In vm_version_x86.cpp previous code has already COMPILER2_OR_JVMCI check. You can remove previous #endif and new #ifdef. Also change comment for closing #endif at line 1080 to // COMPILER2_OR_JVMCI 1080 #endif // COMPILER2 What testing you did? Thanks, Vladimir On 6/21/18 8:26 AM, Patric Hedlin wrote: > Dear all, > > I would like to ask for help to review the following change/update: > > Issue:? https://bugs.openjdk.java.net/browse/JDK-8191339 > > Webrev: http://cr.openjdk.java.net/~phedlin/tr8191339/ > > > 8191339: [JVMCI] BigInteger compiler intrinsics on Graal. > > ??? Enabling BigInteger intrinsics via JVMCI. > > > > Best regards, > Patric From aph at redhat.com Fri Jun 22 16:16:46 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 22 Jun 2018 17:16:46 +0100 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode In-Reply-To: <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> References: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> Message-ID: On 06/18/2018 06:07 PM, Aleksey Shipilev wrote: > On 06/18/2018 07:02 PM, Andrew Haley wrote: >> My recent patch to re-enable the printing of code comments in >> PrintStubCode revealed a latent bug in CodeStrings::copy(). >> VerifyOops uses CodeStrings to hold its assertion strings, and these >> are distinguished from code comments by an offset of -1. (Presumably >> to make sure they're not interpreted as code comments by the >> disassembler.) Unfortunately, CodeStrings::copy() triggers an >> assertion failure when it sees any of the assertion strings. >> >> The best fix, IMO, is to correct CodeStrings::copy(): it shouldn't >> fail whatever the code strings are. >> >> http://cr.openjdk.java.net/~aph/8205118-1/ http://cr.openjdk.java.net/~aph/8205118-2/ -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From lois.foltan at oracle.com Fri Jun 22 23:29:37 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 22 Jun 2018 19:29:37 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> Message-ID: <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> On 6/21/2018 4:37 PM, mandy chung wrote: > Hi Lois, > > On 6/20/18 5:34 PM, Lois Foltan wrote: >> Please review this change to introduce a new utility method >> Klass::class_in_module_of_loader() to uniformly provide a way to add >> a class' module name and class loader's name_and_id to error messages >> and potentially logging. 
>> : open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ > > Thanks for doing this. > > src/hotspot/share/classfile/moduleEntry.hpp > ?? 'M' is meant to be lower case in "unnamed Module", right? > > src/hotspot/share/interpreter/linkResolver.cpp > +????? "tried to access method %s.%s%s from class %s (%s%s%s)", > > Since you are on this file, it may read better if rephrased to: > ? "class %s tried to access method %s.%s%s (%s%s%s)", Hi Mandy, Thanks for the review.? All your comments have been addressed, see new webrev at: http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559.1/webrev/ Lois > > Thanks > Mandy From mandy.chung at oracle.com Fri Jun 22 23:36:51 2018 From: mandy.chung at oracle.com (mandy chung) Date: Fri, 22 Jun 2018 16:36:51 -0700 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> Message-ID: +1 Mandy On 6/22/18 4:29 PM, Lois Foltan wrote: > Hi Mandy, > Thanks for the review.? All your comments have been addressed, see new > webrev at: > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559.1/webrev/ > > Lois From lois.foltan at oracle.com Fri Jun 22 23:40:01 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 22 Jun 2018 19:40:01 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> Message-ID: Thanks Mandy! Lois On 6/22/2018 7:36 PM, mandy chung wrote: > +1 > > Mandy > > On 6/22/18 4:29 PM, Lois Foltan wrote: > >> Hi Mandy, >> Thanks for the review.? All your comments have been addressed, see >> new webrev at: >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559.1/webrev/ >> >> Lois From kim.barrett at oracle.com Sun Jun 24 21:51:23 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sun, 24 Jun 2018 17:51:23 -0400 Subject: RFR: 8205559: Remove IN_CONCURRENT_ROOT Access decorator Message-ID: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> Please review the removal of the IN_CONCURRENT_ROOT Access decorator. All non-AS_RAW IN_NATIVE accesses will be treated as potentially concurrent, and so will require barriers appropriate to the selected collector. Renumbered the Access decorators, eliminating gaps and mis-orderings that had accrued from previous removals and renamings. Fixed StringDedupTable::lookup usage of the Access API. It was using IN_CONCURRENT_ROOT incorrectly, and was not using AS_NO_KEEPALIVE where it should have been. There are other places where this class is not using the Access API but should be, involving the _obj field of StringDedupTableEntry; addressing those will be part of a future CR, since currently only ZGC would fail because of these and ZGC does not yet support string deduplication. CR: https://bugs.openjdk.java.net/browse/JDK-8205559 Webrev: http://cr.openjdk.java.net/~kbarrett/8205559/open.00/ Testing: Mach5 tier1,2,3, hs-tier4,5 Aurora perf testing for G1 and Parallel detected no performance regressions. 
From aph at redhat.com Mon Jun 25 07:32:15 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 25 Jun 2018 08:32:15 +0100 Subject: [aarch64-port-dev ] RFR(M): 8196402: AARCH64: create intrinsic for Math.log In-Reply-To: <24cf3572-e2c8-6419-9c81-1da0e283b20a@redhat.com> References: <296818a4-2e9c-6e60-70b3-cd5a52e49cb1@bell-sw.com> <87f69722-0a1f-dfbc-e6a5-10ea7c0bbd4b@redhat.com> <5dbf89dd-fc06-8a36-039a-7b5a18178050@bell-sw.com> <12856099-ecfb-c77b-dcc6-c57b5b3dbf72@bell-sw.com> <55835b6e-844c-c858-1c73-8609d155a619@redhat.com> <2f1a75b8-717c-c7ee-214a-70644e54012b@redhat.com> <51f95a09-9bc0-0be5-8302-4a797e31e029@bell-sw.com> <24cf3572-e2c8-6419-9c81-1da0e283b20a@redhat.com> Message-ID: On 06/25/2018 08:18 AM, Andrew Haley wrote: > On 06/20/2018 03:26 PM, Dmitrij Pochepko wrote: >> Here, the original code has table in HEX and also the article has HEX >> representation, so I kept HEX in the table. >> >> Please take a look at updated webrev: >> http://cr.openjdk.java.net/~dpochepk/8196402/webrev.06/ > > Looks good. Thanks. > This should have gone to hotspot-dev too. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From goetz.lindenmaier at sap.com Mon Jun 25 07:45:25 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 25 Jun 2018 07:45:25 +0000 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> Message-ID: <901b61d1ab1146659e75e55fd3aadfee@sap.com> Hi Lois, thanks for doing this. Also that there is joint_in_module_of_loader(). One comment about this method: class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (jit.t.t113.kid1 and jit.t.t113.kid2 are in unnamed Module of loader 'app') This seems to be very redundant. Why not skip the class names in the module information: class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (both in unnamed Module of loader 'app') or class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (classes are in unnamed Module of loader 'app') I'll wait until this is pushed, and then see what is left of my change 8199940. We should have tests where there are custom modules and loaders printed. Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Lois Foltan > Sent: Samstag, 23. Juni 2018 01:30 > To: mandy chung > Cc: hotspot-dev developers > Subject: Re: RFR (M) JDK-8169559: Add class loader names to relevant VM > messages > > On 6/21/2018 4:37 PM, mandy chung wrote: > > Hi Lois, > > > > On 6/20/18 5:34 PM, Lois Foltan wrote: > >> Please review this change to introduce a new utility method > >> Klass::class_in_module_of_loader() to uniformly provide a way to add > >> a class' module name and class loader's name_and_id to error messages > >> and potentially logging. > >> : open webrev at > >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ > > > > Thanks for doing this. > > > > src/hotspot/share/classfile/moduleEntry.hpp > > ?? 'M' is meant to be lower case in "unnamed Module", right? > > > > src/hotspot/share/interpreter/linkResolver.cpp > > +????? "tried to access method %s.%s%s from class %s (%s%s%s)", > > > > Since you are on this file, it may read better if rephrased to: > > ? 
"class %s tried to access method %s.%s%s (%s%s%s)", > Hi Mandy, > Thanks for the review.? All your comments have been addressed, see new > webrev at: > > http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559.1/webrev/ > > Lois > > > > > Thanks > > Mandy From kevin.walls at oracle.com Mon Jun 25 12:36:37 2018 From: kevin.walls at oracle.com (Kevin Walls) Date: Mon, 25 Jun 2018 13:36:37 +0100 Subject: [8u] RFR(XS): 8205440: [8u] DWORD64 required for later Windows compilers In-Reply-To: <0f7c7a61-2886-1376-82b4-b6e1464f60d0@oracle.com> References: <0f7c7a61-2886-1376-82b4-b6e1464f60d0@oracle.com> Message-ID: <5336532c-558f-96bb-3c90-22e07a3add37@oracle.com> Hi, I'd? like to get a review of a small change which will help enable compilation on Windows with later Visual Studio compilers: 8205440: [8u] DWORD64 required for later Windows compilers https://bugs.openjdk.java.net/browse/JDK-8205440 The change is: src/os/windows/vm/os_windows.cpp @@ -2261,9 +2296,9 @@ ?? assert((pc[1] & ~0x7) == 0xF8, "cannot handle non-register operands"); ?? assert(ctx->Rax == min_jint, "unexpected idiv exception"); ?? // set correct result values and continue after idiv instruction -? ctx->Rip = (DWORD)pc + 2;??????? // idiv reg, reg? is 2 bytes -? ctx->Rax = (DWORD)min_jint;????? // result -? ctx->Rdx = (DWORD)0;???????????? // remainder +? ctx->Rip = (DWORD64)pc + 2;??????? // idiv reg, reg? is 2 bytes +? ctx->Rax = (DWORD64)min_jint;????? // result +? ctx->Rdx = (DWORD64)0;???????????? // remainder ?? // Continue the execution ?? #else ?? PCONTEXT ctx = exceptionInfo->ContextRecord; This change is inside Handle_IDiv_Exception, and is within an #ifdef _M_AMD64 (there is use of DWORD in the #else).? At other points in the same file we correctly use DWORD64 already.? If that's unclear of needs a webrev I'll produce one... In JDK9, these DWORD changed to DWORD64 as a minor byproduct of: 8136421: JEP 243: Java-Level JVM Compiler Interface ...which we aren't implementing in jdk8 right now. I've been running with this change in my local builds with VS2017 and jprt builds with the regular 8u compiler. Thanks Kevin From per.liden at oracle.com Mon Jun 25 14:49:34 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 25 Jun 2018 16:49:34 +0200 Subject: RFR: 8205559: Remove IN_CONCURRENT_ROOT Access decorator In-Reply-To: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> References: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> Message-ID: On 06/24/2018 11:51 PM, Kim Barrett wrote: > Please review the removal of the IN_CONCURRENT_ROOT Access decorator. > > All non-AS_RAW IN_NATIVE accesses will be treated as potentially > concurrent, and so will require barriers appropriate to the selected > collector. > > Renumbered the Access decorators, eliminating gaps and mis-orderings > that had accrued from previous removals and renamings. > > Fixed StringDedupTable::lookup usage of the Access API. It was using > IN_CONCURRENT_ROOT incorrectly, and was not using AS_NO_KEEPALIVE > where it should have been. There are other places where this class is > not using the Access API but should be, involving the _obj field of > StringDedupTableEntry; addressing those will be part of a future CR, > since currently only ZGC would fail because of these and ZGC does not > yet support string deduplication. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205559 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8205559/open.00/ Looks good to me! 
/Per > > Testing: > Mach5 tier1,2,3, hs-tier4,5 > > Aurora perf testing for G1 and Parallel detected no performance regressions. > From lois.foltan at oracle.com Mon Jun 25 14:54:15 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 25 Jun 2018 10:54:15 -0400 Subject: RFR (M) JDK-8169559: Add class loader names to relevant VM messages In-Reply-To: <901b61d1ab1146659e75e55fd3aadfee@sap.com> References: <76623c90-6254-e37d-f45f-1ea49f3b7f55@oracle.com> <35409b6d-00f3-53df-427d-5c32bfdd3ce0@oracle.com> <2c74beb2-3b12-a562-3099-c54a0471571c@oracle.com> <901b61d1ab1146659e75e55fd3aadfee@sap.com> Message-ID: On 6/25/2018 3:45 AM, Lindenmaier, Goetz wrote: > Hi Lois, > > thanks for doing this. Also that there is joint_in_module_of_loader(). > One comment about this method: > class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (jit.t.t113.kid1 and jit.t.t113.kid2 are in unnamed Module of loader 'app') > This seems to be very redundant. Why not skip the class names in the module information: > class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (both in unnamed Module of loader 'app') > or > class jit.t.t113.kid1 cannot be cast to class jit.t.t113.kid2 (classes are in unnamed Module of loader 'app') Thanks Goetz for the review!? I like your suggestion above but would like to include it within an RFE, I anticipate we will continue to improve the wording of error messages going forward.? I have created, https://bugs.openjdk.java.net/browse/JDK-8205611 for follow on work. > > I'll wait until this is pushed, and then see what is left of my change 8199940. > We should have tests where there are custom modules and loaders printed. I couldn't agree more, I'm hoping by including the new test ExpQualToM1PrivateMethodIAE.java, it might provide an example of how to do this for other error messages. Thanks, Lois > > Best regards, > Goetz. > > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Lois Foltan >> Sent: Samstag, 23. Juni 2018 01:30 >> To: mandy chung >> Cc: hotspot-dev developers >> Subject: Re: RFR (M) JDK-8169559: Add class loader names to relevant VM >> messages >> >> On 6/21/2018 4:37 PM, mandy chung wrote: >>> Hi Lois, >>> >>> On 6/20/18 5:34 PM, Lois Foltan wrote: >>>> Please review this change to introduce a new utility method >>>> Klass::class_in_module_of_loader() to uniformly provide a way to add >>>> a class' module name and class loader's name_and_id to error messages >>>> and potentially logging. >>>> : open webrev at >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559/webrev/ >>> Thanks for doing this. >>> >>> src/hotspot/share/classfile/moduleEntry.hpp >>> ?? 'M' is meant to be lower case in "unnamed Module", right? >>> >>> src/hotspot/share/interpreter/linkResolver.cpp >>> +????? "tried to access method %s.%s%s from class %s (%s%s%s)", >>> >>> Since you are on this file, it may read better if rephrased to: >>> ? "class %s tried to access method %s.%s%s (%s%s%s)", >> Hi Mandy, >> Thanks for the review.? 
All your comments have been addressed, see new >> webrev at: >> >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8169559.1/webrev/ >> >> Lois >> >>> Thanks >>> Mandy From paul.sandoz at oracle.com Mon Jun 25 16:11:14 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 25 Jun 2018 09:11:14 -0700 Subject: RFR 8195650 Method references to VarHandle accessors In-Reply-To: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> References: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> Message-ID: <355600B2-AB78-4170-8B2B-4C7F754B6A85@oracle.com> Gentle reminder. I would like to get this reviews and pushed before the ramp down phase one kicks in this week. Paul. > On Jun 19, 2018, at 5:08 PM, Paul Sandoz wrote: > > Hi, > > Please review the following fix to ensure method references to VarHandle signature polymorphic methods are supported at runtime (specifically the method handle to a signature polymorphic method can be loaded from the constant pool): > > http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195650-varhandle-mref/webrev/ > > I also added a ?belts and braces? test to ensure a constant method handle to MethodHandle::invokeBasic cannot be loaded if outside of the j.l.invoke package. > > Paul. > From gromero at linux.vnet.ibm.com Mon Jun 25 17:38:46 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 25 Jun 2018 14:38:46 -0300 Subject: Build failure after "8204540: Automatic oop closure devirtualization" Message-ID: Hi, I'm facing the following build error on jdk/jdk tip (on Power and x86_64): ERROR: Build failed for target 'default (exploded-image)' in configuration 'linux-x86_64-normal-server-release' (exit code 2) === Output from failing command(s) repeated here === * For target hotspot_variant-server_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: In function `void MarkSweep::mark_and_push(unsigned int*)': /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: In function `void MarkSweep::mark_and_push(oopDesc**)': /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' collect2: error: ld returned 1 exit status * For target hotspot_variant-server_libjvm_objs_BUILD_LIBJVM_link: /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: In function `void MarkSweep::mark_and_push(unsigned int*)': /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: In function `void MarkSweep::mark_and_push(oopDesc**)': /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: undefined reference to `Stack::push(oopDesc*)' collect2: error: ld returned 1 exit status * All command lines available in /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. 
=== End of repeated output === === Make failed targets repeated here === lib/CompileJvm.gmk:149: recipe for target '/home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/support/modules_libs/java.base/server/libjvm.so' failed lib/CompileGtest.gmk:58: recipe for target '/home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/gtest/libjvm.so' failed make/Main.gmk:257: recipe for target 'hotspot-server-libs' failed === End of repeated output === And a quick bisect seems to point to change: 8204540: Automatic oop closure devirtualization http://hg.openjdk.java.net/jdk/jdk/rev/9d62da00bf15 I'm wondering if somebody is experiencing the same issue? Thanks. Regards, Gustavo From stefan.karlsson at oracle.com Mon Jun 25 19:37:15 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 25 Jun 2018 21:37:15 +0200 Subject: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: References: Message-ID: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> Hi Gustavo, On 2018-06-25 19:38, Gustavo Romero wrote: > Hi, > > I'm facing the following build error on jdk/jdk tip > (on Power and x86_64): > > ERROR: Build failed for target 'default (exploded-image)' in > configuration 'linux-x86_64-normal-server-release' (exit code 2) > > === Output from failing command(s) repeated here === > * For target > hotspot_variant-server_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: > /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: > In function `void MarkSweep::mark_and_push(unsigned int*)': > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: > In function `void MarkSweep::mark_and_push(oopDesc**)': > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > collect2: error: ld returned 1 exit status > * For target hotspot_variant-server_libjvm_objs_BUILD_LIBJVM_link: > /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: > In function `void MarkSweep::mark_and_push(unsigned int*)': > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/objs/blockOffsetTable.o: > In function `void MarkSweep::mark_and_push(oopDesc**)': > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > /home/gromero/hg/jdk/jdk/src/hotspot/share/gc/serial/markSweep.inline.hpp:54: > undefined reference to `Stack::push(oopDesc*)' > collect2: error: ld returned 1 exit status > > * All command lines available in > /home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/make-support/failure-logs. 
> === End of repeated output === > > === Make failed targets repeated here === > lib/CompileJvm.gmk:149: recipe for target > '/home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/support/modules_libs/java.base/server/libjvm.so' > failed > lib/CompileGtest.gmk:58: recipe for target > '/home/gromero/hg/jdk/jdk/build/linux-x86_64-normal-server-release/hotspot/variant-server/libjvm/gtest/libjvm.so' > failed > make/Main.gmk:257: recipe for target 'hotspot-server-libs' failed > === End of repeated output === > > > And a quick bisect seems to point to change: > > 8204540: Automatic oop closure devirtualization > http://hg.openjdk.java.net/jdk/jdk/rev/9d62da00bf15 > > I'm wondering if somebody is experiencing the same issue? This doesn't happen in our build farm, but I see that markSweep.inline.hpp is missing an include of stack.inline.hpp. Could you try with the following?: diff --git a/src/hotspot/share/gc/serial/markSweep.inline.hpp b/src/hotspot/share/gc/serial/markSweep.inline.hpp --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp @@ -33,6 +33,7 @@ ?#include "oops/access.inline.hpp" ?#include "oops/compressedOops.inline.hpp" ?#include "oops/oop.inline.hpp" +#include "utilities/stack.inline.hpp" ?inline void MarkSweep::mark_object(oop obj) { ?? // some marks may contain information we need to preserve so we store them away Thanks, StefanK > > > Thanks. > > Regards, > Gustavo > From gromero at linux.vnet.ibm.com Mon Jun 25 19:54:14 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 25 Jun 2018 16:54:14 -0300 Subject: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> Message-ID: <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> Hi Stefan, On 06/25/2018 04:37 PM, Stefan Karlsson wrote: > This doesn't happen in our build farm, but I see that markSweep.inline.hpp is missing an include of stack.inline.hpp. > > Could you try with the following?: > > diff --git a/src/hotspot/share/gc/serial/markSweep.inline.hpp b/src/hotspot/share/gc/serial/markSweep.inline.hpp > --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp > +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp > @@ -33,6 +33,7 @@ > ?#include "oops/access.inline.hpp" > ?#include "oops/compressedOops.inline.hpp" > ?#include "oops/oop.inline.hpp" > +#include "utilities/stack.inline.hpp" > > ?inline void MarkSweep::mark_object(oop obj) { > ?? // some marks may contain information we need to preserve so we store them away Yes, including that header fixed the issue at my side. Thanks! Best regards, Gustavo From gromero at linux.vnet.ibm.com Mon Jun 25 20:17:03 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 25 Jun 2018 17:17:03 -0300 Subject: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> Message-ID: <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> Hi Stefan, On 06/25/2018 04:54 PM, Gustavo Romero wrote: > Hi Stefan, > > On 06/25/2018 04:37 PM, Stefan Karlsson wrote: >> This doesn't happen in our build farm, but I see that markSweep.inline.hpp is missing an include of stack.inline.hpp. 
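For readers outside HotSpot, the failure mode in this thread comes straight from the *.inline.hpp convention: the plain header only declares the template member, the definition lives in a separate inline header, and a translation unit that instantiates the call without ever seeing that definition compiles cleanly but leaves the symbol unresolved at link time. A minimal, generic sketch of the same effect (hypothetical file names, not the actual HotSpot sources):

  // mystack.hpp (hypothetical) -- declaration only
  template <class T>
  class MyStack {
   public:
    void push(T value);          // definition lives in mystack.inline.hpp
   private:
    T _slot;                     // trivial storage, purely for illustration
  };

  // mystack.inline.hpp (hypothetical) -- the out-of-line definition
  // #include "mystack.hpp"
  template <class T>
  inline void MyStack<T>::push(T value) { _slot = value; }

  // marker.cpp (hypothetical) -- includes only mystack.hpp
  // #include "mystack.hpp"      // <-- forgot mystack.inline.hpp
  void mark(MyStack<int>& s) {
    s.push(1);   // compiles; but if no translation unit in the program ever
                 // sees the definition, the link fails with
                 // "undefined reference to MyStack<int>::push(int)"
  }

The one-line #include of stack.inline.hpp that Stefan suggests makes the Stack<>::push definition visible in every translation unit that instantiates MarkSweep::mark_and_push through markSweep.inline.hpp, which is why it fixes the link error without touching any code.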
>> >> Could you try with the following?: >> >> diff --git a/src/hotspot/share/gc/serial/markSweep.inline.hpp b/src/hotspot/share/gc/serial/markSweep.inline.hpp >> --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp >> +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp >> @@ -33,6 +33,7 @@ >> #include "oops/access.inline.hpp" >> #include "oops/compressedOops.inline.hpp" >> #include "oops/oop.inline.hpp" >> +#include "utilities/stack.inline.hpp" >> >> inline void MarkSweep::mark_object(oop obj) { >> // some marks may contain information we need to preserve so we store them away > > Yes, including that header fixed the issue at my side. Thanks! I'm not sure how you would like to handle this: should I open a bug or are you planning to include that a next change? Please, let me know. Thanks. Gustavo From stefan.karlsson at oracle.com Mon Jun 25 20:26:04 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 25 Jun 2018 22:26:04 +0200 Subject: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> Message-ID: On 2018-06-25 22:17, Gustavo Romero wrote: > Hi Stefan, > > On 06/25/2018 04:54 PM, Gustavo Romero wrote: >> Hi Stefan, >> >> On 06/25/2018 04:37 PM, Stefan Karlsson wrote: >>> This doesn't happen in our build farm, but I see that >>> markSweep.inline.hpp is missing an include of stack.inline.hpp. >>> >>> Could you try with the following?: >>> >>> diff --git a/src/hotspot/share/gc/serial/markSweep.inline.hpp >>> b/src/hotspot/share/gc/serial/markSweep.inline.hpp >>> --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp >>> +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp >>> @@ -33,6 +33,7 @@ >>> ? #include "oops/access.inline.hpp" >>> ? #include "oops/compressedOops.inline.hpp" >>> ? #include "oops/oop.inline.hpp" >>> +#include "utilities/stack.inline.hpp" >>> >>> ? inline void MarkSweep::mark_object(oop obj) { >>> ??? // some marks may contain information we need to preserve so we >>> store them away >> >> Yes, including that header fixed the issue at my side. Thanks! > > I'm not sure how you would like to handle this: should I open a bug or > are you > planning to include that a next change? > > Please, let me know. I can send out an RFR. Thanks, StefanK > > > Thanks. > Gustavo > From john.r.rose at oracle.com Mon Jun 25 20:26:37 2018 From: john.r.rose at oracle.com (John Rose) Date: Mon, 25 Jun 2018 13:26:37 -0700 Subject: RFR 8195650 Method references to VarHandle accessors In-Reply-To: <355600B2-AB78-4170-8B2B-4C7F754B6A85@oracle.com> References: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> <355600B2-AB78-4170-8B2B-4C7F754B6A85@oracle.com> Message-ID: Good fix. Reviewed. > On Jun 25, 2018, at 9:11 AM, Paul Sandoz wrote: > > Gentle reminder. > > I would like to get this reviews and pushed before the ramp down phase one kicks in this week. > > Paul. > >> On Jun 19, 2018, at 5:08 PM, Paul Sandoz wrote: >> >> Hi, >> >> Please review the following fix to ensure method references to VarHandle signature polymorphic methods are supported at runtime (specifically the method handle to a signature polymorphic method can be loaded from the constant pool): >> >> http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195650-varhandle-mref/webrev/ >> >> I also added a ?belts and braces? 
test to ensure a constant method handle to MethodHandle::invokeBasic cannot be loaded if outside of the j.l.invoke package. >> >> Paul. >> > From karen.kinnear at oracle.com Mon Jun 25 21:17:19 2018 From: karen.kinnear at oracle.com (Karen Kinnear) Date: Mon, 25 Jun 2018 17:17:19 -0400 Subject: RFR 8195650 Method references to VarHandle accessors In-Reply-To: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> References: <086A1684-4D97-4B6D-94F2-16A1261057B5@oracle.com> Message-ID: <1E2D21BC-58DC-4B74-B2C8-3A430064F330@oracle.com> Looks good. Matches the existing JVMS. Thanks for the tests. thanks, Karen > On Jun 19, 2018, at 8:08 PM, Paul Sandoz wrote: > > Hi, > > Please review the following fix to ensure method references to VarHandle signature polymorphic methods are supported at runtime (specifically the method handle to a signature polymorphic method can be loaded from the constant pool): > > http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195650-varhandle-mref/webrev/ > > I also added a ?belts and braces? test to ensure a constant method handle to MethodHandle::invokeBasic cannot be loaded if outside of the j.l.invoke package. > > Paul. > From stefan.karlsson at oracle.com Mon Jun 25 21:41:55 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 25 Jun 2018 23:41:55 +0200 Subject: RFR: 8205632: Include stack.inline.hpp in markSweep.inline.hpp - Re: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> Message-ID: <7b09e1d1-c8b7-829d-0be5-a7867559c050@oracle.com> Hi all, Please review this trivial patch to fix the build issue below. http://cr.openjdk.java.net/~stefank/8205632/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8205632 Thanks, StefanK On 2018-06-25 22:26, Stefan Karlsson wrote: > On 2018-06-25 22:17, Gustavo Romero wrote: >> Hi Stefan, >> >> On 06/25/2018 04:54 PM, Gustavo Romero wrote: >>> Hi Stefan, >>> >>> On 06/25/2018 04:37 PM, Stefan Karlsson wrote: >>>> This doesn't happen in our build farm, but I see that >>>> markSweep.inline.hpp is missing an include of stack.inline.hpp. >>>> >>>> Could you try with the following?: >>>> >>>> diff --git a/src/hotspot/share/gc/serial/markSweep.inline.hpp >>>> b/src/hotspot/share/gc/serial/markSweep.inline.hpp >>>> --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp >>>> +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp >>>> @@ -33,6 +33,7 @@ >>>> ? #include "oops/access.inline.hpp" >>>> ? #include "oops/compressedOops.inline.hpp" >>>> ? #include "oops/oop.inline.hpp" >>>> +#include "utilities/stack.inline.hpp" >>>> >>>> ? inline void MarkSweep::mark_object(oop obj) { >>>> ??? // some marks may contain information we need to preserve so we >>>> store them away >>> >>> Yes, including that header fixed the issue at my side. Thanks! >> >> I'm not sure how you would like to handle this: should I open a bug >> or are you >> planning to include that a next change? >> >> Please, let me know. > > I can send out an RFR. > > Thanks, > StefanK > >> >> >> Thanks. 
>> Gustavo >> > From kim.barrett at oracle.com Mon Jun 25 22:26:36 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 25 Jun 2018 18:26:36 -0400 Subject: RFR: 8205632: Include stack.inline.hpp in markSweep.inline.hpp - Re: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: <7b09e1d1-c8b7-829d-0be5-a7867559c050@oracle.com> References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> <7b09e1d1-c8b7-829d-0be5-a7867559c050@oracle.com> Message-ID: > On Jun 25, 2018, at 5:41 PM, Stefan Karlsson wrote: > > Hi all, > > Please review this trivial patch to fix the build issue below. > > http://cr.openjdk.java.net/~stefank/8205632/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8205632 Looks good, and trivial. From kim.barrett at oracle.com Mon Jun 25 22:27:10 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 25 Jun 2018 18:27:10 -0400 Subject: RFR: 8205559: Remove IN_CONCURRENT_ROOT Access decorator In-Reply-To: References: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> Message-ID: <087BC9F8-3DE4-4F57-A721-579860512B1B@oracle.com> > On Jun 25, 2018, at 10:49 AM, Per Liden wrote: > > On 06/24/2018 11:51 PM, Kim Barrett wrote: >> Please review the removal of the IN_CONCURRENT_ROOT Access decorator. >> All non-AS_RAW IN_NATIVE accesses will be treated as potentially >> concurrent, and so will require barriers appropriate to the selected >> collector. >> Renumbered the Access decorators, eliminating gaps and mis-orderings >> that had accrued from previous removals and renamings. >> Fixed StringDedupTable::lookup usage of the Access API. It was using >> IN_CONCURRENT_ROOT incorrectly, and was not using AS_NO_KEEPALIVE >> where it should have been. There are other places where this class is >> not using the Access API but should be, involving the _obj field of >> StringDedupTableEntry; addressing those will be part of a future CR, >> since currently only ZGC would fail because of these and ZGC does not >> yet support string deduplication. >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8205559 >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8205559/open.00/ > > Looks good to me! > > /Per > >> Testing: >> Mach5 tier1,2,3, hs-tier4,5 >> Aurora perf testing for G1 and Parallel detected no performance regressions. Thanks. From david.holmes at oracle.com Mon Jun 25 22:54:16 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Jun 2018 08:54:16 +1000 Subject: [8u] RFR(XS): 8205440: [8u] DWORD64 required for later Windows compilers In-Reply-To: <5336532c-558f-96bb-3c90-22e07a3add37@oracle.com> References: <0f7c7a61-2886-1376-82b4-b6e1464f60d0@oracle.com> <5336532c-558f-96bb-3c90-22e07a3add37@oracle.com> Message-ID: <6117cf82-c68c-2982-0b54-5f5208aa09ea@oracle.com> Seems fine Kevin. Thanks, David On 25/06/2018 10:36 PM, Kevin Walls wrote: > Hi, > > I'd? like to get a review of a small change which will help enable > compilation on Windows with later Visual Studio compilers: > > 8205440: [8u] DWORD64 required for later Windows compilers > https://bugs.openjdk.java.net/browse/JDK-8205440 > > The change is: > > src/os/windows/vm/os_windows.cpp > @@ -2261,9 +2296,9 @@ > ?? assert((pc[1] & ~0x7) == 0xF8, "cannot handle non-register operands"); > ?? assert(ctx->Rax == min_jint, "unexpected idiv exception"); > ?? // set correct result values and continue after idiv instruction > -? ctx->Rip = (DWORD)pc + 2;??????? 
// idiv reg, reg? is 2 bytes > -? ctx->Rax = (DWORD)min_jint;????? // result > -? ctx->Rdx = (DWORD)0;???????????? // remainder > +? ctx->Rip = (DWORD64)pc + 2;??????? // idiv reg, reg? is 2 bytes > +? ctx->Rax = (DWORD64)min_jint;????? // result > +? ctx->Rdx = (DWORD64)0;???????????? // remainder > ?? // Continue the execution > ?? #else > ?? PCONTEXT ctx = exceptionInfo->ContextRecord; > > This change is inside Handle_IDiv_Exception, and is within an #ifdef > _M_AMD64 (there is use of DWORD in the #else).? At other points in the > same file we correctly use DWORD64 already.? If that's unclear of needs > a webrev I'll produce one... > > In JDK9, these DWORD changed to DWORD64 as a minor byproduct of: > 8136421: JEP 243: Java-Level JVM Compiler Interface > ...which we aren't implementing in jdk8 right now. > > I've been running with this change in my local builds with VS2017 and > jprt builds with the regular 8u compiler. > > Thanks > Kevin > > From stefan.karlsson at oracle.com Mon Jun 25 23:28:40 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 26 Jun 2018 01:28:40 +0200 Subject: RFR: 8205632: Include stack.inline.hpp in markSweep.inline.hpp - Re: Build failure after "8204540: Automatic oop closure devirtualization" In-Reply-To: References: <4f858622-f79e-48ff-53be-e9fa3487a542@oracle.com> <9117ead8-f616-15a0-c504-eaa7bf9cf222@linux.vnet.ibm.com> <9043c6a3-5fad-6afe-0655-8d298684aa92@linux.vnet.ibm.com> <7b09e1d1-c8b7-829d-0be5-a7867559c050@oracle.com> Message-ID: Thanks, Kim. StefanK On 2018-06-26 00:26, Kim Barrett wrote: >> On Jun 25, 2018, at 5:41 PM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this trivial patch to fix the build issue below. >> >> http://cr.openjdk.java.net/~stefank/8205632/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8205632 > Looks good, and trivial. > From mikael.vidstedt at oracle.com Mon Jun 25 23:44:26 2018 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Mon, 25 Jun 2018 16:44:26 -0700 Subject: RFR(S): 8205615: Start of release updates for JDK 12 / 8205619: Bump maximum recognized class file version to 56 for JDK 12 Message-ID: <46215D79-8F5C-4C12-BB3A-541CD0B475A5@oracle.com> All, Shamelessly stealing the background/intro from Joe?s email to build-dev[1] earlier today: With the JDK 11 and 12 split fast approaching [1], it is time to work on the various start of release update tasks for JDK 12. Those tasks are being tracked under the umbrella bug JDK-8205615: "Start of release updates for JDK 12". This thread is to review the hotspot-related portions of the work including JDK-8205619: "Bump maximum recognized class file version to 56 for JDK 12?. Bug: https://bugs.openjdk.java.net/browse/JDK-8205619 CSR: https://bugs.openjdk.java.net/browse/JDK-8205642 Webrev: http://cr.openjdk.java.net/~darcy/8205615.4/ Of specific interest for hotspot are the following files: * src/hotspot/share/classfile/classFileParser.cpp Added a JAVA_12_VERSION definition which is, for now, unused. * src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.replacements/src/org/graalvm/compiler/replacements/classfile/Classfile.java Bumped the maximum recognized version. Doug has promised to help upstream this to Graal. * test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java Removed verification of the previously deprecated, now obsolete, VM flags. * test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java Removed the use of the previously deprecated, now obsolete, PrintSafepointStatistics flag. 
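For context on the class file version part of this change: a recent JDK feature release N maps to class file major version 44 + N, so JDK 12 corresponds to major version 56. A rough, self-contained sketch of the kind of check a class file reader performs on the 8-byte header -- illustrative only, the names and structure are not those of the real parser:

  #include <cstdint>
  #include <cstddef>

  const uint32_t CLASSFILE_MAGIC     = 0xCAFEBABE;
  const uint16_t MAX_SUPPORTED_MAJOR = 56;        // JDK 12 (44 + 12)

  static uint16_t read_u2(const uint8_t* p) { return uint16_t((p[0] << 8) | p[1]); }
  static uint32_t read_u4(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16)
         | (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
  }

  // Accepts the stream only if the magic matches and the major version is
  // not newer than the newest release this runtime recognizes.
  bool version_recognized(const uint8_t* buf, size_t len) {
    if (len < 8 || read_u4(buf) != CLASSFILE_MAGIC) return false;
    uint16_t minor = read_u2(buf + 4);   // preview/minor-version handling omitted
    uint16_t major = read_u2(buf + 6);
    (void)minor;
    return major <= MAX_SUPPORTED_MAJOR;
  }

In the webrev this constant shows up as the new (for now unused) JAVA_12_VERSION in classFileParser.cpp and as the bumped maximum recognized version in Graal's Classfile.java.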
Testing: The webrev as a whole has been tested using tier1-3. There are no hotspot related failures. Cheers, Mikael [1] http://mail.openjdk.java.net/pipermail/build-dev/2018-June/022528.html [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-June/001462.html From david.holmes at oracle.com Tue Jun 26 05:38:43 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Jun 2018 15:38:43 +1000 Subject: RFR(S): 8205615: Start of release updates for JDK 12 / 8205619: Bump maximum recognized class file version to 56 for JDK 12 In-Reply-To: <46215D79-8F5C-4C12-BB3A-541CD0B475A5@oracle.com> References: <46215D79-8F5C-4C12-BB3A-541CD0B475A5@oracle.com> Message-ID: <3597a949-bda6-8d2e-651a-ee34b4b6beda@oracle.com> Hi Mikael, Generally looks good, only one issue ... On 26/06/2018 9:44 AM, Mikael Vidstedt wrote: > > All, > > Shamelessly stealing the background/intro from Joe?s email to build-dev[1] earlier today: > > With the JDK 11 and 12 split fast approaching [1], it is time to work on the various start of release update tasks for JDK 12. Those tasks are being tracked under the umbrella bug JDK-8205615: "Start of release updates for JDK 12". > > This thread is to review the hotspot-related portions of the work including JDK-8205619: "Bump maximum recognized class file version to 56 for JDK 12?. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8205619 > CSR: https://bugs.openjdk.java.net/browse/JDK-8205642 > Webrev: http://cr.openjdk.java.net/~darcy/8205615.4/ > > > Of specific interest for hotspot are the following files: > > > * src/hotspot/share/classfile/classFileParser.cpp > > Added a JAVA_12_VERSION definition which is, for now, unused. Good. > > * src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.replacements/src/org/graalvm/compiler/replacements/classfile/Classfile.java > > Bumped the maximum recognized version. Doug has promised to help upstream this to Graal. Good. > > * test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java > > Removed verification of the previously deprecated, now obsolete, VM flags. Good. > > * test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java > > Removed the use of the previously deprecated, now obsolete, PrintSafepointStatistics flag. Bad. Sorry you need to replace the obsolete flag with something non-obsolete (and preferably not deprecated) and leave the test case in place. I suggest the innocuous PrintVMQWaitTime. Thanks, David > Testing: > > The webrev as a whole has been tested using tier1-3. There are no hotspot related failures. > > Cheers, > Mikael > > [1] http://mail.openjdk.java.net/pipermail/build-dev/2018-June/022528.html > [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-June/001462.html > From mikael.vidstedt at oracle.com Tue Jun 26 06:23:34 2018 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Mon, 25 Jun 2018 23:23:34 -0700 Subject: RFR(S): 8205615: Start of release updates for JDK 12 / 8205619: Bump maximum recognized class file version to 56 for JDK 12 In-Reply-To: <3597a949-bda6-8d2e-651a-ee34b4b6beda@oracle.com> References: <46215D79-8F5C-4C12-BB3A-541CD0B475A5@oracle.com> <3597a949-bda6-8d2e-651a-ee34b4b6beda@oracle.com> Message-ID: > On Jun 25, 2018, at 10:38 PM, David Holmes wrote: > > Hi Mikael, > > Generally looks good, only one issue ... 
> > On 26/06/2018 9:44 AM, Mikael Vidstedt wrote: >> All, >> Shamelessly stealing the background/intro from Joe?s email to build-dev[1] earlier today: >> With the JDK 11 and 12 split fast approaching [1], it is time to work on the various start of release update tasks for JDK 12. Those tasks are being tracked under the umbrella bug JDK-8205615: "Start of release updates for JDK 12". >> This thread is to review the hotspot-related portions of the work including JDK-8205619: "Bump maximum recognized class file version to 56 for JDK 12?. >> Bug: https://bugs.openjdk.java.net/browse/JDK-8205619 >> CSR: https://bugs.openjdk.java.net/browse/JDK-8205642 >> Webrev: http://cr.openjdk.java.net/~darcy/8205615.4/ >> Of specific interest for hotspot are the following files: >> * src/hotspot/share/classfile/classFileParser.cpp >> Added a JAVA_12_VERSION definition which is, for now, unused. > > Good. > >> * src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.replacements/src/org/graalvm/compiler/replacements/classfile/Classfile.java >> Bumped the maximum recognized version. Doug has promised to help upstream this to Graal. > > Good. > >> * test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java >> Removed verification of the previously deprecated, now obsolete, VM flags. > > Good. > >> * test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java >> Removed the use of the previously deprecated, now obsolete, PrintSafepointStatistics flag. > > Bad. Sorry you need to replace the obsolete flag with something non-obsolete (and preferably not deprecated) and leave the test case in place. I suggest the innocuous PrintVMQWaitTime. Oh, I seeeee. Thanks for catching! I?ll fix by reverting the changes and simply changing PrintSafepointStatistics to PrintVMQWaitTime as you suggest instead: diff -r 356eaea05bf0 test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java --- a/test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java Mon Jun 25 21:22:16 2018 +0300 +++ b/test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java Mon Jun 25 23:21:49 2018 -0700 @@ -58,7 +58,7 @@ File flagsFile = File.createTempFile("CheckOriginFlags", null); try (PrintWriter pw = new PrintWriter(new FileWriter(flagsFile))) { - pw.println("+PrintSafepointStatistics"); + pw.println("+PrintVMQWaitTime"); } ProcessBuilder pb = ProcessTools. @@ -108,7 +108,7 @@ checkOrigin("IgnoreUnrecognizedVMOptions", Origin.ENVIRON_VAR); checkOrigin("PrintVMOptions", Origin.ENVIRON_VAR); // Set in -XX:Flags file - checkOrigin("PrintSafepointStatistics", Origin.CONFIG_FILE); + checkOrigin("PrintVMQWaitTime", Origin.CONFIG_FILE); // Set through j.l.m checkOrigin("HeapDumpOnOutOfMemoryError", Origin.MANAGEMENT); // Should be set by the VM, when we set UseConcMarkSweepGC With that change the test (still) passes locally. Cheers, Mikael > > Thanks, > David > >> Testing: >> The webrev as a whole has been tested using tier1-3. There are no hotspot related failures. 
>> Cheers, >> Mikael >> [1] http://mail.openjdk.java.net/pipermail/build-dev/2018-June/022528.html >> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-June/001462.html From david.holmes at oracle.com Tue Jun 26 06:30:14 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Jun 2018 16:30:14 +1000 Subject: RFR(S): 8205615: Start of release updates for JDK 12 / 8205619: Bump maximum recognized class file version to 56 for JDK 12 In-Reply-To: References: <46215D79-8F5C-4C12-BB3A-541CD0B475A5@oracle.com> <3597a949-bda6-8d2e-651a-ee34b4b6beda@oracle.com> Message-ID: <4a615552-020d-b168-3ec3-aa201945436b@oracle.com> On 26/06/2018 4:23 PM, Mikael Vidstedt wrote: > > >> On Jun 25, 2018, at 10:38 PM, David Holmes wrote: >> >> Hi Mikael, >> >> Generally looks good, only one issue ... >> >> On 26/06/2018 9:44 AM, Mikael Vidstedt wrote: >>> All, >>> Shamelessly stealing the background/intro from Joe?s email to build-dev[1] earlier today: >>> With the JDK 11 and 12 split fast approaching [1], it is time to work on the various start of release update tasks for JDK 12. Those tasks are being tracked under the umbrella bug JDK-8205615: "Start of release updates for JDK 12". >>> This thread is to review the hotspot-related portions of the work including JDK-8205619: "Bump maximum recognized class file version to 56 for JDK 12?. >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8205619 >>> CSR: https://bugs.openjdk.java.net/browse/JDK-8205642 >>> Webrev: http://cr.openjdk.java.net/~darcy/8205615.4/ >>> Of specific interest for hotspot are the following files: >>> * src/hotspot/share/classfile/classFileParser.cpp >>> Added a JAVA_12_VERSION definition which is, for now, unused. >> >> Good. >> >>> * src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.replacements/src/org/graalvm/compiler/replacements/classfile/Classfile.java >>> Bumped the maximum recognized version. Doug has promised to help upstream this to Graal. >> >> Good. >> >>> * test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java >>> Removed verification of the previously deprecated, now obsolete, VM flags. >> >> Good. >> >>> * test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java >>> Removed the use of the previously deprecated, now obsolete, PrintSafepointStatistics flag. >> >> Bad. Sorry you need to replace the obsolete flag with something non-obsolete (and preferably not deprecated) and leave the test case in place. I suggest the innocuous PrintVMQWaitTime. > > Oh, I seeeee. Thanks for catching! I?ll fix by reverting the changes and simply changing PrintSafepointStatistics to PrintVMQWaitTime as you suggest instead: > > diff -r 356eaea05bf0 test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java > --- a/test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java Mon Jun 25 21:22:16 2018 +0300 > +++ b/test/jdk/com/sun/management/HotSpotDiagnosticMXBean/CheckOrigin.java Mon Jun 25 23:21:49 2018 -0700 > @@ -58,7 +58,7 @@ > File flagsFile = File.createTempFile("CheckOriginFlags", null); > try (PrintWriter pw = > new PrintWriter(new FileWriter(flagsFile))) { > - pw.println("+PrintSafepointStatistics"); > + pw.println("+PrintVMQWaitTime"); > } > > ProcessBuilder pb = ProcessTools. 
> @@ -108,7 +108,7 @@ > checkOrigin("IgnoreUnrecognizedVMOptions", Origin.ENVIRON_VAR); > checkOrigin("PrintVMOptions", Origin.ENVIRON_VAR); > // Set in -XX:Flags file > - checkOrigin("PrintSafepointStatistics", Origin.CONFIG_FILE); > + checkOrigin("PrintVMQWaitTime", Origin.CONFIG_FILE); > // Set through j.l.m > checkOrigin("HeapDumpOnOutOfMemoryError", Origin.MANAGEMENT); > // Should be set by the VM, when we set UseConcMarkSweepGC > > With that change the test (still) passes locally. Looks good. Thanks, David > Cheers, > Mikael > > >> >> Thanks, >> David >> >>> Testing: >>> The webrev as a whole has been tested using tier1-3. There are no hotspot related failures. >>> Cheers, >>> Mikael >>> [1] http://mail.openjdk.java.net/pipermail/build-dev/2018-June/022528.html >>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-June/001462.html > From per.liden at oracle.com Tue Jun 26 07:51:39 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 26 Jun 2018 09:51:39 +0200 Subject: RFR: 8205664: Move detailed metaspace logging from debug to trace Message-ID: Using -Xlog:gc*=debug typically provides extended GC information useful for GC debugging. Metaspace recently(?) started to log a lot of very detailed information (using gc+metaspace+...) on the debug level, making gc*=debug overly verbose. I propose that we move some of the detailed metaspace logging to the trace level. Bug: https://bugs.openjdk.java.net/browse/JDK-8205664 Webrev: http://cr.openjdk.java.net/~pliden/8205664/webrev.0 /Per From thomas.stuefe at gmail.com Tue Jun 26 08:02:48 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 26 Jun 2018 10:02:48 +0200 Subject: RFR: 8205664: Move detailed metaspace logging from debug to trace In-Reply-To: References: Message-ID: Hi Per, this makes sense. Change is good (imho trivial too). Thanks, Thomas On Tue, Jun 26, 2018 at 9:51 AM, Per Liden wrote: > Using -Xlog:gc*=debug typically provides extended GC information useful for > GC debugging. Metaspace recently(?) started to log a lot of very detailed > information (using gc+metaspace+...) on the debug level, making gc*=debug > overly verbose. I propose that we move some of the detailed metaspace > logging to the trace level. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8205664 > Webrev: http://cr.openjdk.java.net/~pliden/8205664/webrev.0 > > /Per From stefan.karlsson at oracle.com Tue Jun 26 08:02:47 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 26 Jun 2018 10:02:47 +0200 Subject: RFR: 8205664: Move detailed metaspace logging from debug to trace In-Reply-To: References: Message-ID: <9add441a-aec5-98e5-9f15-8eb088b1e2d6@oracle.com> Looks good. StefanK On 2018-06-26 09:51, Per Liden wrote: > Using -Xlog:gc*=debug typically provides extended GC information useful > for GC debugging. Metaspace recently(?) started to log a lot of very > detailed information (using gc+metaspace+...) on the debug level, making > gc*=debug overly verbose. I propose that we move some of the detailed > metaspace logging to the trace level. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8205664 > Webrev: http://cr.openjdk.java.net/~pliden/8205664/webrev.0 > > /Per From per.liden at oracle.com Tue Jun 26 08:07:28 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 26 Jun 2018 10:07:28 +0200 Subject: RFR: 8205664: Move detailed metaspace logging from debug to trace In-Reply-To: References: Message-ID: <4edf2292-5322-8dc5-791e-32f177f8eb3c@oracle.com> Thanks for reviewing Thomas! 
cheers, Per On 06/26/2018 10:02 AM, Thomas St?fe wrote: > Hi Per, > > this makes sense. Change is good (imho trivial too). > > Thanks, Thomas > > On Tue, Jun 26, 2018 at 9:51 AM, Per Liden wrote: >> Using -Xlog:gc*=debug typically provides extended GC information useful for >> GC debugging. Metaspace recently(?) started to log a lot of very detailed >> information (using gc+metaspace+...) on the debug level, making gc*=debug >> overly verbose. I propose that we move some of the detailed metaspace >> logging to the trace level. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8205664 >> Webrev: http://cr.openjdk.java.net/~pliden/8205664/webrev.0 >> >> /Per From per.liden at oracle.com Tue Jun 26 08:08:01 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 26 Jun 2018 10:08:01 +0200 Subject: RFR: 8205664: Move detailed metaspace logging from debug to trace In-Reply-To: <9add441a-aec5-98e5-9f15-8eb088b1e2d6@oracle.com> References: <9add441a-aec5-98e5-9f15-8eb088b1e2d6@oracle.com> Message-ID: <19f151ce-1891-e701-80a2-d74b7c33a450@oracle.com> Thanks Stefan! /Per On 06/26/2018 10:02 AM, Stefan Karlsson wrote: > Looks good. > > StefanK > > On 2018-06-26 09:51, Per Liden wrote: >> Using -Xlog:gc*=debug typically provides extended GC information >> useful for GC debugging. Metaspace recently(?) started to log a lot of >> very detailed information (using gc+metaspace+...) on the debug level, >> making gc*=debug overly verbose. I propose that we move some of the >> detailed metaspace logging to the trace level. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8205664 >> Webrev: http://cr.openjdk.java.net/~pliden/8205664/webrev.0 >> >> /Per From stefan.karlsson at oracle.com Tue Jun 26 11:50:09 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 26 Jun 2018 13:50:09 +0200 Subject: RFR: 8205559: Remove IN_CONCURRENT_ROOT Access decorator In-Reply-To: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> References: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> Message-ID: Looks good. StefanK On 2018-06-24 23:51, Kim Barrett wrote: > Please review the removal of the IN_CONCURRENT_ROOT Access decorator. > > All non-AS_RAW IN_NATIVE accesses will be treated as potentially > concurrent, and so will require barriers appropriate to the selected > collector. > > Renumbered the Access decorators, eliminating gaps and mis-orderings > that had accrued from previous removals and renamings. > > Fixed StringDedupTable::lookup usage of the Access API. It was using > IN_CONCURRENT_ROOT incorrectly, and was not using AS_NO_KEEPALIVE > where it should have been. There are other places where this class is > not using the Access API but should be, involving the _obj field of > StringDedupTableEntry; addressing those will be part of a future CR, > since currently only ZGC would fail because of these and ZGC does not > yet support string deduplication. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205559 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8205559/open.00/ > > Testing: > Mach5 tier1,2,3, hs-tier4,5 > > Aurora perf testing for G1 and Parallel detected no performance regressions. 
> From gnu.andrew at redhat.com Tue Jun 26 17:39:07 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Tue, 26 Jun 2018 18:39:07 +0100 Subject: [8u] [RFR] Request for Review of Backport of JDK-8179887: Build failure with glibc >= 2.24: error: 'int readdir_r(DIR*, dirent*, dirent**)' is deprecated In-Reply-To: References: Message-ID: On 21 June 2018 at 04:56, Andrew Hughes wrote: > [CCing hotspot list for review] > > Bug: https://bugs.openjdk.java.net/browse/JDK-8179887 > Webrev: http://cr.openjdk.java.net/~andrew/openjdk8/8179887/webrev.01/ > Review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-April/031746.html > > Patch is basically the same as for OpenJDK 11, except we don't have to > revert 8187667, which isn't present in OpenJDK 8. > > Thanks, > -- > Andrew :) > > Senior Free Java Software Engineer > Red Hat, Inc. (http://www.redhat.com) > > Web Site: http://fuseyism.com > Twitter: https://twitter.com/gnu_andrew_java > PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) > Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 Ping? -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From kim.barrett at oracle.com Tue Jun 26 18:26:30 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 26 Jun 2018 14:26:30 -0400 Subject: RFR: 8205559: Remove IN_CONCURRENT_ROOT Access decorator In-Reply-To: References: <6B77E769-9614-403F-AA14-D83EB70222D1@oracle.com> Message-ID: <8C775F92-A40B-4A6E-8339-EDE59A582BCF@oracle.com> > On Jun 26, 2018, at 7:50 AM, Stefan Karlsson wrote: > > Looks good. > > StefanK Thanks. > > On 2018-06-24 23:51, Kim Barrett wrote: >> Please review the removal of the IN_CONCURRENT_ROOT Access decorator. >> All non-AS_RAW IN_NATIVE accesses will be treated as potentially >> concurrent, and so will require barriers appropriate to the selected >> collector. >> Renumbered the Access decorators, eliminating gaps and mis-orderings >> that had accrued from previous removals and renamings. >> Fixed StringDedupTable::lookup usage of the Access API. It was using >> IN_CONCURRENT_ROOT incorrectly, and was not using AS_NO_KEEPALIVE >> where it should have been. There are other places where this class is >> not using the Access API but should be, involving the _obj field of >> StringDedupTableEntry; addressing those will be part of a future CR, >> since currently only ZGC would fail because of these and ZGC does not >> yet support string deduplication. >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8205559 >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8205559/open.00/ >> Testing: >> Mach5 tier1,2,3, hs-tier4,5 >> Aurora perf testing for G1 and Parallel detected no performance regressions. From kevin.walls at oracle.com Tue Jun 26 20:35:45 2018 From: kevin.walls at oracle.com (Kevin Walls) Date: Tue, 26 Jun 2018 21:35:45 +0100 Subject: [8u] RFR(S): 8204872: [8u] VS2017: more instances of "error C3680: cannot concatenate user-defined string literals with mismatched literal suffix identifiers" Message-ID: Hi, I'd like to get a review of... 
8204872: [8u] VS2017: more instances of "error C3680: cannot concatenate user-defined string literals with mismatched literal suffix identifiers" https://bugs.openjdk.java.net/browse/JDK-8204872 These are whitespace additions, in the pattern of https://bugs.openjdk.java.net/browse/JDK-8081202 ...? I think these are the last of the literal suffix spacing issues when using a later Windows VS compiler on 8u. 8u webrev: http://cr.openjdk.java.net/~kevinw/8204872/webrev.00/ Thanks! Kevin From igor.ignatyev at oracle.com Wed Jun 27 01:14:16 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 26 Jun 2018 18:14:16 -0700 Subject: RFR(S) : 8199265 : java/util/Arrays/TimSortStackSize2.java fails with OOM Message-ID: <45DA249C-AC00-4E59-A1E7-1D1BF9176F21@oracle.com> http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ > 10 lines changed: 0 ins; 2 del; 8 mod; Hi all could you please review another attempt to make TimSortStackSize2 more robust? 8190679 changed the test to set both Xmx and Xms to avoid failures caused by Xmx being smaller than Xms (when Xmx is specified as external vm flag). it appears that current values of heap size might not be enough in some cases, therefore setting Xmx equal to Xms leads to OOM. The proposed fix sets Xmx two times more than Xms, this should be enough for the test to pass and also ensures that Xmx is always bigger than Xms (since the test specific flags are appended, external Xms/Xmx flags will be ignored). JBS: https://bugs.openjdk.java.net/browse/JDK-8199265 webrev: http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ testing: java/util/Arrays/TimSortStackSize2.java multiple times on different platforms Thanks, -- Igor From david.holmes at oracle.com Wed Jun 27 01:50:40 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Jun 2018 11:50:40 +1000 Subject: [8u] RFR(S): 8204872: [8u] VS2017: more instances of "error C3680: cannot concatenate user-defined string literals with mismatched literal suffix identifiers" In-Reply-To: References: Message-ID: <96386946-80cd-7813-345d-95159611b2e2@oracle.com> Looks good! Thanks, David On 27/06/2018 6:35 AM, Kevin Walls wrote: > Hi, > > I'd like to get a review of... > > 8204872: [8u] VS2017: more instances of "error C3680: cannot concatenate > user-defined string literals with mismatched literal suffix identifiers" > https://bugs.openjdk.java.net/browse/JDK-8204872 > > These are whitespace additions, in the pattern of > https://bugs.openjdk.java.net/browse/JDK-8081202 ...? I think these are > the last of the literal suffix spacing issues when using a later Windows > VS compiler on 8u. > > 8u webrev: http://cr.openjdk.java.net/~kevinw/8204872/webrev.00/ > > Thanks! > Kevin > From kim.barrett at oracle.com Wed Jun 27 05:03:59 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 27 Jun 2018 01:03:59 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion Message-ID: Please review this fix of an assertion failure during ParallelGC young collections. We have a task (thread) walking the code cache, copying and forwarding oop references in those cached nmethods containing scavengable (e.g. young) oops. After the oops in an nmethod are processed, a second pass is made to determine whether any of the oops are still scavengable, removing the nmethod from the list containing scavengable oops. That speeds up future collection cycles. That second pass check logs the values of any scavengable oops referred to by the nmethod. This is where things go wrong. 
If the object being printed is a j.l.Class, deep in the guts of the printing there is an assert that the mirror of the klass of the object is that object. But if the klass mirror has not yet been forwarded, we can be comparing a forward object with the not yet forward version, and the assertion fails. There is another task that is running in parallel with the code cache walk that is processing the CLD graph, but it may not have reached the relevant klass yet, so the klass's mirror hasn't been forwarded yet. But there's another complication that makes this crash even more difficult to trigger. The ParallelGC young collection's code cache walk promotes nmethod-referenced oops, and that promotion makes the object no longer scavengable. So to trigger the crash, some third thread must have copied a young class to survivor space, then the code cache walk processes an nmethod referring to that class and prints the still young class, all before the CLD graph walk has forwarded the klass's mirror. This problem has been lurking all along, but JDK-8203837 changed the printing from only occurring when TraceScavenge was true to printing when gc+nmethod=trace logging is enabled. As there were no tests that enabled TraceScavenge, we didn't previously notice the problem. But there are tests that enable gc+nmethod=trace, enabling the problematic printing. The solution being taken here is to change that logging to no longer attempt to print the object. Printing an object while in the middle of a copying GC, especially one that is parallel, seems quite risky, so let's just not do that. CR: https://bugs.openjdk.java.net/browse/JDK-8205577 Webrev: http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ Testing: mach5 tier1,2,3 I haven't been able to reproduce the failure; as discussed above, it's very timing sensitive. But the scenario described above accounts for both the problem and the timing of its appearance. From david.holmes at oracle.com Wed Jun 27 05:44:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Jun 2018 15:44:36 +1000 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: Hi Kim, Not printing seems extremely reasonable. Thanks, David On 27/06/2018 3:03 PM, Kim Barrett wrote: > Please review this fix of an assertion failure during ParallelGC young > collections. > > We have a task (thread) walking the code cache, copying and forwarding > oop references in those cached nmethods containing scavengable > (e.g. young) oops. After the oops in an nmethod are processed, a > second pass is made to determine whether any of the oops are still > scavengable, removing the nmethod from the list containing scavengable > oops. That speeds up future collection cycles. > > That second pass check logs the values of any scavengable oops > referred to by the nmethod. This is where things go wrong. If the > object being printed is a j.l.Class, deep in the guts of the printing > there is an assert that the mirror of the klass of the object is that > object. But if the klass mirror has not yet been forwarded, we can be > comparing a forward object with the not yet forward version, and the > assertion fails. > > There is another task that is running in parallel with the code cache > walk that is processing the CLD graph, but it may not have reached the > relevant klass yet, so the klass's mirror hasn't been forwarded yet. > > But there's another complication that makes this crash even more > difficult to trigger. 
The ParallelGC young collection's code cache > walk promotes nmethod-referenced oops, and that promotion makes the > object no longer scavengable. So to trigger the crash, some third > thread must have copied a young class to survivor space, then the code > cache walk processes an nmethod referring to that class and prints the > still young class, all before the CLD graph walk has forwarded the > klass's mirror. > > This problem has been lurking all along, but JDK-8203837 changed the > printing from only occurring when TraceScavenge was true to printing > when gc+nmethod=trace logging is enabled. As there were no tests that > enabled TraceScavenge, we didn't previously notice the problem. But > there are tests that enable gc+nmethod=trace, enabling the problematic > printing. > > The solution being taken here is to change that logging to no longer > attempt to print the object. Printing an object while in the middle > of a copying GC, especially one that is parallel, seems quite risky, > so let's just not do that. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205577 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ > > Testing: > mach5 tier1,2,3 > > I haven't been able to reproduce the failure; as discussed above, it's > very timing sensitive. But the scenario described above accounts for > both the problem and the timing of its appearance. > > From david.holmes at oracle.com Wed Jun 27 06:03:19 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Jun 2018 16:03:19 +1000 Subject: RFR(S) : 8199265 : java/util/Arrays/TimSortStackSize2.java fails with OOM In-Reply-To: <45DA249C-AC00-4E59-A1E7-1D1BF9176F21@oracle.com> References: <45DA249C-AC00-4E59-A1E7-1D1BF9176F21@oracle.com> Message-ID: <44434e17-2c72-d96e-cce7-ecff479a19d1@oracle.com> Hi Igor, Using 2x max heap seems a reasonable approach. AFAICT the other changes are just cleanup and don't make any difference to the way the test is run - right? Only nit: ProcessTools.executeTestJvm is deprecated in favor of ProcessTools.executeTestJava. Thanks, David On 27/06/2018 11:14 AM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ >> 10 lines changed: 0 ins; 2 del; 8 mod; > > Hi all > > could you please review another attempt to make TimSortStackSize2 more robust? > > 8190679 changed the test to set both Xmx and Xms to avoid failures caused by Xmx being smaller than Xms (when Xmx is specified as external vm flag). it appears that current values of heap size might not be enough in some cases, therefore setting Xmx equal to Xms leads to OOM. > > The proposed fix sets Xmx two times more than Xms, this should be enough for the test to pass and also ensures that Xmx is always bigger than Xms (since the test specific flags are appended, external Xms/Xmx flags will be ignored). > > JBS: https://bugs.openjdk.java.net/browse/JDK-8199265 > webrev: http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ > testing: java/util/Arrays/TimSortStackSize2.java multiple times on different platforms > > Thanks, > -- Igor > From igor.ignatyev at oracle.com Wed Jun 27 06:42:39 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 26 Jun 2018 23:42:39 -0700 Subject: RFR(S) : 8199265 : java/util/Arrays/TimSortStackSize2.java fails with OOM In-Reply-To: <44434e17-2c72-d96e-cce7-ecff479a19d1@oracle.com> References: <45DA249C-AC00-4E59-A1E7-1D1BF9176F21@oracle.com> <44434e17-2c72-d96e-cce7-ecff479a19d1@oracle.com> Message-ID: Hi David, thanks for reviewing. please see my answers inline. 
-- Igor > On Jun 26, 2018, at 11:03 PM, David Holmes wrote: > > Hi Igor, > > Using 2x max heap seems a reasonable approach. > > AFAICT the other changes are just cleanup and don't make any difference to the way the test is run - right? right. > > Only nit: ProcessTools.executeTestJvm is deprecated in favor of ProcessTools.executeTestJava. if you look at their javadocs it looks over way around, executeTestJvm has a proper javadoc, while executeTestJava just says see #executeTestJvm; at least it's so in /test/lib testlibrary. the library in /test/jdk/test/lib is(should be) deprecated in favor of one in /test/lib. I don't have a strong preference b/w ProcessTools::executeTestJava and executeTestJvm though, so if executeTestJava sounds better for more people, I'm fine w/ totally fine w/ using it instead of executeTestJvm. > > > Thanks, > David > > On 27/06/2018 11:14 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ >>> 10 lines changed: 0 ins; 2 del; 8 mod; >> Hi all >> could you please review another attempt to make TimSortStackSize2 more robust? >> 8190679 changed the test to set both Xmx and Xms to avoid failures caused by Xmx being smaller than Xms (when Xmx is specified as external vm flag). it appears that current values of heap size might not be enough in some cases, therefore setting Xmx equal to Xms leads to OOM. >> The proposed fix sets Xmx two times more than Xms, this should be enough for the test to pass and also ensures that Xmx is always bigger than Xms (since the test specific flags are appended, external Xms/Xmx flags will be ignored). >> JBS: https://bugs.openjdk.java.net/browse/JDK-8199265 >> webrev: http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ >> testing: java/util/Arrays/TimSortStackSize2.java multiple times on different platforms >> Thanks, >> -- Igor From david.holmes at oracle.com Wed Jun 27 06:50:23 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Jun 2018 16:50:23 +1000 Subject: RFR(S) : 8199265 : java/util/Arrays/TimSortStackSize2.java fails with OOM In-Reply-To: References: <45DA249C-AC00-4E59-A1E7-1D1BF9176F21@oracle.com> <44434e17-2c72-d96e-cce7-ecff479a19d1@oracle.com> Message-ID: <5adb47c3-24a5-92d3-b0da-97b774293e43@oracle.com> On 27/06/2018 4:42 PM, Igor Ignatyev wrote: > Hi David, > > thanks for reviewing. > > please see my answers inline. > > -- Igor > >> On Jun 26, 2018, at 11:03 PM, David Holmes wrote: >> >> Hi Igor, >> >> Using 2x max heap seems a reasonable approach. >> >> AFAICT the other changes are just cleanup and don't make any difference to the way the test is run - right? > right. >> >> Only nit: ProcessTools.executeTestJvm is deprecated in favor of ProcessTools.executeTestJava. > if you look at their javadocs it looks over way around, executeTestJvm has a proper javadoc, while executeTestJava just says see #executeTestJvm; at least it's so in /test/lib testlibrary. the library in /test/jdk/test/lib is(should be) deprecated in favor of one in /test/lib. I don't have a strong preference b/w ProcessTools::executeTestJava and executeTestJvm though, so if executeTestJava sounds better for more people, I'm fine w/ totally fine w/ using it instead of executeTestJvm. Okay I didn't know we had two versions of the test library still. That's not very good. I was looking at the test/jdk/lib one which has: /** * @deprecated Use executeTestJava instead */ public static OutputAnalyzer executeTestJvm(String... 
options) throws Exception { If you're using the other one then I guess it doesn't matter. Having both methods seems pointless regardless. Cheers, David >> >> >> Thanks, >> David >> >> On 27/06/2018 11:14 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ >>>> 10 lines changed: 0 ins; 2 del; 8 mod; >>> Hi all >>> could you please review another attempt to make TimSortStackSize2 more robust? >>> 8190679 changed the test to set both Xmx and Xms to avoid failures caused by Xmx being smaller than Xms (when Xmx is specified as external vm flag). it appears that current values of heap size might not be enough in some cases, therefore setting Xmx equal to Xms leads to OOM. >>> The proposed fix sets Xmx two times more than Xms, this should be enough for the test to pass and also ensures that Xmx is always bigger than Xms (since the test specific flags are appended, external Xms/Xmx flags will be ignored). >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8199265 >>> webrev: http://cr.openjdk.java.net/~iignatyev/8199265/webrev.00/ >>> testing: java/util/Arrays/TimSortStackSize2.java multiple times on different platforms >>> Thanks, >>> -- Igor > From thomas.schatzl at oracle.com Wed Jun 27 07:30:38 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 27 Jun 2018 09:30:38 +0200 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: <9302a4bace800093fe38d8f9af34c64dccbe9e43.camel@oracle.com> Hi, On Wed, 2018-06-27 at 01:03 -0400, Kim Barrett wrote: > Please review this fix of an assertion failure during ParallelGC > young collections. > > We have a task (thread) walking the code cache, copying and > forwarding oop references in those cached nmethods containing > scavengable (e.g. young) oops. After the oops in an nmethod are > processed, a second pass is made to determine whether any of the oops > are still scavengable, removing the nmethod from the list containing > scavengable oops. That speeds up future collection cycles. > [...] > The solution being taken here is to change that logging to no longer > attempt to print the object. Printing an object while in the middle > of a copying GC, especially one that is parallel, seems quite risky, > so let's just not do that. Looks good. Thomas From rkennke at redhat.com Wed Jun 27 08:45:47 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 27 Jun 2018 10:45:47 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> Message-ID: <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> > Hi all, > > A number of operations cannot reasonably make use of the Access API but > require explicit read- and write-barriers for GCs like Shenandoah that > need to ensure to-space consistency. Examples are monitor-enter/-exit > and some intrinsics. > > The change adds APIs to BarrierSetAssembler (x86 and aarch64) to support > these kinds of explicit barriers, and the necessary calls in relevant > places. The default implementation does nothing. These barriers have > been found and tested over several years in Shenandoah. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8205523 > Webrev: > http://cr.openjdk.java.net/~rkennke/JDK-8205523/webrev.00/ > > Testing: hotspot/tier1, will submit into Mach5 after reviews. > > Can I please get reviews? 
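As a rough illustration of the kind of hook being discussed, here is a small standalone C++ sketch (not the actual webrev): a stand-in base class exposes no-op resolve hooks, and a to-space GC like Shenandoah would override them, while every other collector inherits the empty default and pays nothing. The class and method names are illustrative assumptions, not HotSpot's real BarrierSetAssembler/MacroAssembler API.

    #include <cstdio>

    // Stand-in for an assembler; the real hooks would emit machine code
    // through MacroAssembler instead of printing pseudo-assembly.
    struct AsmSketch {
      void emit(const char* line) { printf("    %s\n", line); }
    };

    class BarrierSetAsmSketch {
    public:
      virtual ~BarrierSetAsmSketch() {}
      // Default: collectors without to-space invariants emit nothing.
      virtual void resolve_for_read(AsmSketch*)  {}
      virtual void resolve_for_write(AsmSketch*) {}
    };

    class ToSpaceGCAsmSketch : public BarrierSetAsmSketch {
    public:
      void resolve_for_read(AsmSketch* a) override {
        a->emit("load forwarding pointer (cheap: a single load)");
      }
      void resolve_for_write(AsmSketch* a) override {
        a->emit("test GC state; branch to slow path; evacuate if needed");
      }
    };

    int main() {
      AsmSketch masm;
      ToSpaceGCAsmSketch bs;
      printf("monitor enter/exit object:\n");
      bs.resolve_for_write(&masm);
      printf("intrinsic input buffer:\n");
      bs.resolve_for_read(&masm);
      return 0;
    }

Because the default implementation is an empty virtual, call sites in the interpreter can invoke the hooks unconditionally and only a GC that needs to-space consistency emits any code.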
> > Thanks, Roman > From erik.osterlund at oracle.com Wed Jun 27 10:02:08 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 27 Jun 2018 12:02:08 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> Message-ID: <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> Hi Roman, How important is it for you to distinguish resolve for write vs read? What would the impact be for you if we were to stick with just "resolve", the way we did in runtime? I think that for the non-GC developers, it gets more tricky if they have to remember not to perform any writes because somewhere further up in the call hierarchy, the oop was only resolved for reading. Or that any reads used have to be non-volatile, or you risk running into a subtle IRIW situation (even on TSO hardware) due to having multiple reads riding on the same read resolve, potentially causing inconsistencies. It won't be a problem in the places you inserted the read resolves to now. However, by letting resolve always do what you refer to as write resolve (which we do in the runtime now), the user does not need to think about this, and the conceptual overhead for the non-GC expert is lower. The cost of keeping it simpler seems low in the interpreter. In my experience, this kind of optimization does not pay off in the interpreter. So how would you feel about sticking with just "resolve"? Thanks, /Erik On 2018-06-27 10:45, Roman Kennke wrote: >> Hi all, >> >> A number of operations cannot reasonably make use of the Access API but >> require explicit read- and write-barriers for GCs like Shenandoah that >> need to ensure to-space consistency. Examples are monitor-enter/-exit >> and some intrinsics. >> >> The change adds APIs to BarrierSetAssembler (x86 and aarch64) to support >> these kinds of explicit barriers, and the necessary calls in relevant >> places. The default implementation does nothing. These barriers have >> been found and tested over several years in Shenandoah. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8205523 >> Webrev: >> http://cr.openjdk.java.net/~rkennke/JDK-8205523/webrev.00/ >> >> Testing: hotspot/tier1, will submit into Mach5 after reviews. >> >> Can I please get reviews? >> >> Thanks, Roman >> > From rkennke at redhat.com Wed Jun 27 10:29:04 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 27 Jun 2018 12:29:04 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> Message-ID: Read barriers in Shenandoah are just a single load instruction. It's either loading from L1 cache (self-reference) or warming up for subsequent read. Write barriers OTOH are always testing+ branching on fast-path, and possibly copying on slow-path. This can be particularly bad if it encounters a large-ish array. Given that reads are much more frequent than writes, we'd like to avoid that. Consider array copy. Normally we need RB on src and WB on dst. If we were to follow your suggestion, we'd probably first copy src (via WB), then copy dst (via WB, usually doesn't happen because dst is new) and then copy src *again*. I was already cringing when we did that in runtime. 
I don't think the overhead API-wise and brain-wise is as bad as you make it (or maybe it's just me because I'm so into it..?). We could *probably* do this in interpreter. I would not want to have this discussion again in c1 and c2 though, in compilers it's provably performance relevant. Roman Am 27. Juni 2018 12:02:08 MESZ schrieb "Erik ?sterlund" : >Hi Roman, > >How important is it for you to distinguish resolve for write vs read? >What would the impact be for you if we were to stick with just >"resolve", the way we did in runtime? >I think that for the non-GC developers, it gets more tricky if they >have >to remember not to perform any writes because somewhere further up in >the call hierarchy, the oop was only resolved for reading. Or that any >reads used have to be non-volatile, or you risk running into a subtle >IRIW situation (even on TSO hardware) due to having multiple reads >riding on the same read resolve, potentially causing inconsistencies. >It >won't be a problem in the places you inserted the read resolves to now. > >However, by letting resolve always do what you refer to as write >resolve >(which we do in the runtime now), the user does not need to think about > >this, and the conceptual overhead for the non-GC expert is lower. The >cost of keeping it simpler seems low in the interpreter. In my >experience, this kind of optimization does not pay off in the >interpreter. > >So how would you feel about sticking with just "resolve"? > >Thanks, >/Erik > >On 2018-06-27 10:45, Roman Kennke wrote: >>> Hi all, >>> >>> A number of operations cannot reasonably make use of the Access API >but >>> require explicit read- and write-barriers for GCs like Shenandoah >that >>> need to ensure to-space consistency. Examples are >monitor-enter/-exit >>> and some intrinsics. >>> >>> The change adds APIs to BarrierSetAssembler (x86 and aarch64) to >support >>> these kinds of explicit barriers, and the necessary calls in >relevant >>> places. The default implementation does nothing. These barriers have >>> been found and tested over several years in Shenandoah. >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8205523 >>> Webrev: >>> http://cr.openjdk.java.net/~rkennke/JDK-8205523/webrev.00/ >>> >>> Testing: hotspot/tier1, will submit into Mach5 after reviews. >>> >>> Can I please get reviews? >>> >>> Thanks, Roman >>> >> -- Diese Nachricht wurde von meinem Android-Ger?t mit K-9 Mail gesendet. From aph at redhat.com Wed Jun 27 10:31:26 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 27 Jun 2018 11:31:26 +0100 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> Message-ID: <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> On 06/27/2018 11:02 AM, Erik ?sterlund wrote: > I think that for the non-GC developers, it gets more tricky if they > have to remember not to perform any writes because somewhere further > up in the call hierarchy, the oop was only resolved for reading. Or > that any reads used have to be non-volatile, or you risk running > into a subtle IRIW situation (even on TSO hardware) due to having > multiple reads riding on the same read resolve, potentially causing > inconsistencies. It won't be a problem in the places you inserted > the read resolves to now. 
It would be correct to resolve everything for writes, but would not help performance, and we're trying to improve performance, not make it worse. With respect to volatility, read and write barriers mean that developers have to pay attention to volatility. But that's already true even for TSO: for example, you can't hoist the result of a volatile read into a register and use it again. Given that non-GC developers have to be aware of volatility anyway, what would be hurt by having to mark volatile reads? I would argue that it would make code more explicit and thus more maintainable, and we have to do it in all of the C++ code anyway. Sure, this is a new discipline for x86 HotSpot developers. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Wed Jun 27 10:42:35 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 27 Jun 2018 12:42:35 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> Message-ID: Am 27. Juni 2018 12:31:26 MESZ schrieb Andrew Haley : >On 06/27/2018 11:02 AM, Erik ?sterlund wrote: > >> I think that for the non-GC developers, it gets more tricky if they >> have to remember not to perform any writes because somewhere further >> up in the call hierarchy, the oop was only resolved for reading. Or >> that any reads used have to be non-volatile, or you risk running >> into a subtle IRIW situation (even on TSO hardware) due to having >> multiple reads riding on the same read resolve, potentially causing >> inconsistencies. It won't be a problem in the places you inserted >> the read resolves to now. > >It would be correct to resolve everything for writes, but would not >help performance, and we're trying to improve performance, not make it >worse. > >With respect to volatility, read and write barriers mean that >developers have to pay attention to volatility. But that's already >true even for TSO: for example, you can't hoist the result of a >volatile read into a register and use it again. Given that non-GC >developers have to be aware of volatility anyway, what would be hurt >by having to mark volatile reads? I would argue that it would make >code more explicit and thus more maintainable, and we have to do it in >all of the C++ code anyway. Sure, this is a new discipline for x86 >HotSpot developers. It should be noted that normal loads and stores are already covered by the Access API, and we emit the correct read- and write barriers. This is about a few places that don't easily fit that model and still require barriers. That is monitor enter/exit (need WBs anyway) and a few intrinsics, in the interpreter only CRC32. This requires an RB on the buffer array. Yeah, it's probably OK to emit WB there too. If it's really hot, it'd be compiled by C1 or C2. Roman -- Diese Nachricht wurde von meinem Android-Ger?t mit K-9 Mail gesendet. 
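To make the CRC32 case concrete, here is a deliberately simplified, standalone sketch: the only object access the intrinsic needs is the base address of the byte[] buffer, so one resolve of the buffer oop before the address arithmetic is enough. resolve_for_read() below is an identity stand-in for whatever fixup the GC would apply, and the CRC kernel is a plain bitwise CRC-32 rather than HotSpot's accelerated stub; none of this is the actual interpreter code.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    typedef unsigned char jbyte;   // stand-in for the JVM's jbyte

    // Identity stand-in: a to-space collector would return the forwardee.
    static const void* resolve_for_read(const void* obj) { return obj; }

    // Plain bitwise CRC-32 (reflected polynomial), standing in for the stub.
    static uint32_t crc32_update(uint32_t crc, const jbyte* data, size_t len) {
      for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++) {
          crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
        }
      }
      return crc;
    }

    int main() {
      jbyte buf[] = { 't', 'e', 's', 't' };
      // Barrier first, then address arithmetic on the (possibly forwarded) base.
      const jbyte* base = (const jbyte*) resolve_for_read(buf);
      uint32_t crc = ~crc32_update(0xFFFFFFFFu, base, sizeof(buf));
      printf("crc32 = %08x\n", (unsigned) crc);
      return 0;
    }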
From aph at redhat.com Wed Jun 27 10:44:58 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 27 Jun 2018 11:44:58 +0100 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> Message-ID: <1e211420-7f77-f0d3-8e08-3b499ff66899@redhat.com> On 06/27/2018 11:42 AM, Roman Kennke wrote: > It should be noted that normal loads and stores are already covered > by the Access API, and we emit the correct read- and write > barriers. This is about a few places that don't easily fit that > model and still require barriers. That is monitor enter/exit (need > WBs anyway) and a few intrinsics, in the interpreter only > CRC32. This requires an RB on the buffer array. Yeah, it's probably > OK to emit WB there too. If it's really hot, it'd be compiled by C1 > or C2. Right, but as you correctly note we'll have exactly the same discussion about C1, with the same points made. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From erik.osterlund at oracle.com Wed Jun 27 11:13:58 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 27 Jun 2018 13:13:58 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: <1e211420-7f77-f0d3-8e08-3b499ff66899@redhat.com> References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> <1e211420-7f77-f0d3-8e08-3b499ff66899@redhat.com> Message-ID: Hi Andrew and Roman, I am a fan of profile guided optimization. I would definitely not mind introducing these concepts in the compilers where they are with no doubt necessary (and we also have the right tools for dealing with this better). In fact, they already have read/write decorators that could be used for resolve barriers in our compilers, and can use algorithms to safely elide barriers where provably correct, so it makes perfect sense for me to use such concepts there. I'm just not sure that the interpreter needs to be polluted with this conceptual overhead, unless there is at least one benchmark that can show that we are solving an actual problem with this. Remember, premature optimizations are the root of all evil. In you experience, have you ever observed a difference in any application or benchmark, due to the less than handful paths in the interpreter having a slightly suboptimal barrier being used? If so, I could change my mind. Thanks, /Erik On 2018-06-27 12:44, Andrew Haley wrote: > On 06/27/2018 11:42 AM, Roman Kennke wrote: > >> It should be noted that normal loads and stores are already covered >> by the Access API, and we emit the correct read- and write >> barriers. This is about a few places that don't easily fit that >> model and still require barriers. That is monitor enter/exit (need >> WBs anyway) and a few intrinsics, in the interpreter only >> CRC32. This requires an RB on the buffer array. Yeah, it's probably >> OK to emit WB there too. If it's really hot, it'd be compiled by C1 >> or C2. > Right, but as you correctly note we'll have exactly the same > discussion about C1, with the same points made. > From daniel.daugherty at oracle.com Wed Jun 27 13:15:27 2018 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Wed, 27 Jun 2018 09:15:27 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: <90c22ea2-4acd-a4ea-ba9a-2faccf64888b@oracle.com> Wonderful analysis Kim! > http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ src/hotspot/share/code/nmethod.cpp (old) L1694: (*p)->print_value_on(&ls); I agree with not printing, but you might want to leave a comment behind. Something like: // Do not try "(*p)->print_value_on()" here because it // is racy with parallel operations. Thumbs up in any case. Dan On 6/27/18 1:03 AM, Kim Barrett wrote: > Please review this fix of an assertion failure during ParallelGC young > collections. > > We have a task (thread) walking the code cache, copying and forwarding > oop references in those cached nmethods containing scavengable > (e.g. young) oops. After the oops in an nmethod are processed, a > second pass is made to determine whether any of the oops are still > scavengable, removing the nmethod from the list containing scavengable > oops. That speeds up future collection cycles. > > That second pass check logs the values of any scavengable oops > referred to by the nmethod. This is where things go wrong. If the > object being printed is a j.l.Class, deep in the guts of the printing > there is an assert that the mirror of the klass of the object is that > object. But if the klass mirror has not yet been forwarded, we can be > comparing a forward object with the not yet forward version, and the > assertion fails. > > There is another task that is running in parallel with the code cache > walk that is processing the CLD graph, but it may not have reached the > relevant klass yet, so the klass's mirror hasn't been forwarded yet. > > But there's another complication that makes this crash even more > difficult to trigger. The ParallelGC young collection's code cache > walk promotes nmethod-referenced oops, and that promotion makes the > object no longer scavengable. So to trigger the crash, some third > thread must have copied a young class to survivor space, then the code > cache walk processes an nmethod referring to that class and prints the > still young class, all before the CLD graph walk has forwarded the > klass's mirror. > > This problem has been lurking all along, but JDK-8203837 changed the > printing from only occurring when TraceScavenge was true to printing > when gc+nmethod=trace logging is enabled. As there were no tests that > enabled TraceScavenge, we didn't previously notice the problem. But > there are tests that enable gc+nmethod=trace, enabling the problematic > printing. > > The solution being taken here is to change that logging to no longer > attempt to print the object. Printing an object while in the middle > of a copying GC, especially one that is parallel, seems quite risky, > so let's just not do that. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205577 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ > > Testing: > mach5 tier1,2,3 > > I haven't been able to reproduce the failure; as discussed above, it's > very timing sensitive. But the scenario described above accounts for > both the problem and the timing of its appearance.
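For readers following along without the webrev, a minimal standalone sketch of the safer logging shape, with the kind of comment Dan suggests folded in; the oop typedef and the scavengable check are simplified stand-ins, not the code in nmethod.cpp.

    #include <cstdio>

    typedef void* oop;   // stand-in for HotSpot's oop

    // Hypothetical predicate, present only to make the example complete.
    static bool is_scavengable(oop o) { return o != nullptr; }

    // Log only the slot address and the decision.
    // Do not try to print the object itself (e.g. via print_value_on()):
    // during a parallel copying GC the object and its klass mirror may not
    // have been forwarded consistently yet, so printing races with the GC.
    static void log_scavengable_check(oop* p) {
      printf("oop slot %p: %s\n", (void*) p,
             is_scavengable(*p) ? "still scavengable" : "no longer scavengable");
    }

    int main() {
      int heap_word = 0;
      oop obj = &heap_word;      // pretend this is a young object
      log_scavengable_check(&obj);
      return 0;
    }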
> > From lois.foltan at oracle.com Wed Jun 27 13:20:25 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 27 Jun 2018 09:20:25 -0400 Subject: [12] RFR (M) JDK-8205611: Improve the wording of LinkageErrors to include module and class loader information Message-ID: <3e39812a-2f18-8dad-55df-92c7e28aac45@oracle.com> Please review this change to migrate existing loader constraint LinkageErrors to the new error message format proposal. The actual wording of the loader constraint messages has not changed. Module and class loader information have been moved into the error message's REASON section. This change also removes the method java_lang_ClassLoader::describe_external() in favor of Klass::class_in_module_of_loader(). open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8205611/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8205611 JDK-8166633 outlines a new proposal where error messages follow a format of ERROR: PROBLEM (REASON) where the PROBLEM is aggressively simple (and definitely avoids arbitrary-length loader names) so the REASON bears all the cost of explaining the PROBLEM with more specifics. See the proposal in more detail at https://bugs.openjdk.java.net/browse/JDK-8166633. The new utility method Klass::class_in_module_of_loader() implements the proposed format. Some example text: (JDK 11) 'DuplicateLE_Test_Loader_IF' @ (instance of PreemptingClassLoader, child of 'app' jdk.internal.loader.ClassLoaders$AppClassLoader) attempted duplicate interface definition for test.J. to (JDK 12) loader 'DuplicateLE_Test_Loader_IF' @6eeee674 attempted duplicate interface definition for test.J. (test.J is in unnamed module of loader 'DuplicateLE_Test_Loader_IF' @6eeee674, parent loader 'app') (JDK 11) loader constraint violation: loader PreemptingClassLoader @ (instance of PreemptingClassLoader, child of 'app' jdk.internal.loader.ClassLoaders$AppClassLoader) wants to load class test.D_ambgs. A different class with the same name was previously loaded by 'app' (instance of jdk.internal.loader.ClassLoaders$AppClassLoader). to (JDK 12) loader constraint violation: loader PreemptingClassLoader @5bc79a1c wants to load class test.D_ambgs. A different class with the same name was previously loaded by 'app'. (test.D_ambgs is in unnamed module of loader 'app') (JDK 11) loader constraint violation for class test.Task: when selecting overriding method test.Task.m()Ltest/Foo; the class loader PreemptingClassLoader @ (instance of PreemptingClassLoader, child of 'app' jdk.internal.loader.ClassLoaders$AppClassLoader) of the selected method's type test.Task, and the class loader 'app' (instance of jdk.internal.loader.ClassLoaders$AppClassLoader) for its super type test.J have different Class objects for the type test.Foo used in the signature to (JDK 12) loader constraint violation for class test.Task: when selecting overriding method test.Task.m()Ltest/Foo; the class loader PreemptingClassLoader @7884e077 of the selected method's type test.Task, and the class loader 'app' for its super type test.J have different Class objects for the type test.Foo used in the signature (test.Task is in unnamed module of loader PreemptingClassLoader @7884e077, parent loader 'app'; test.J is in unnamed module of loader 'app') Testing: hs-tier(1-3), jdk-tier(1-3) complete hs-tier(4,5) in progress
JCK vm, lang in progress Thanks, Lois From coleen.phillimore at oracle.com Wed Jun 27 13:43:20 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 27 Jun 2018 09:43:20 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: Kim, This looks good.? Thank you for the detailed analysis of how this failed. Coleen On 6/27/18 1:03 AM, Kim Barrett wrote: > Please review this fix of an assertion failure during ParallelGC young > collections. > > We have a task (thread) walking the code cache, copying and forwarding > oop references in those cached nmethods containing scavengable > (e.g. young) oops. After the oops in an nmethod are processed, a > second pass is made to determine whether any of the oops are still > scavengable, removing the nmethod from the list containing scavengable > oops. That speeds up future collection cycles. > > That second pass check logs the values of any scavengable oops > referred to by the nmethod. This is where things go wrong. If the > object being printed is a j.l.Class, deep in the guts of the printing > there is an assert that the mirror of the klass of the object is that > object. But if the klass mirror has not yet been forwarded, we can be > comparing a forward object with the not yet forward version, and the > assertion fails. > > There is another task that is running in parallel with the code cache > walk that is processing the CLD graph, but it may not have reached the > relevant klass yet, so the klass's mirror hasn't been forwarded yet. > > But there's another complication that makes this crash even more > difficult to trigger. The ParallelGC young collection's code cache > walk promotes nmethod-referenced oops, and that promotion makes the > object no longer scavengable. So to trigger the crash, some third > thread must have copied a young class to survivor space, then the code > cache walk processes an nmethod referring to that class and prints the > still young class, all before the CLD graph walk has forwarded the > klass's mirror. > > This problem has been lurking all along, but JDK-8203837 changed the > printing from only occurring when TraceScavenge was true to printing > when gc+nmethod=trace logging is enabled. As there were no tests that > enabled TraceScavenge, we didn't previously notice the problem. But > there are tests that enable gc+nmethod=trace, enabling the problematic > printing. > > The solution being taken here is to change that logging to no longer > attempt to print the object. Printing an object while in the middle > of a copying GC, especially one that is parallel, seems quite risky, > so let's just not do that. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8205577 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ > > Testing: > mach5 tier1,2,3 > > I haven't been able to reproduce the failure; as discussed above, it's > very timing sensitive. But the scenario described above accounts for > both the problem and the timing of its appearance. > > From igor.ignatyev at oracle.com Wed Jun 27 22:53:39 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 27 Jun 2018 15:53:39 -0700 Subject: RFR(XS) : 8205954 : clean up hotspot ProblemList Message-ID: <808B3811-1D61-44AA-9023-DD5406FD5192@oracle.com> http://cr.openjdk.java.net/~iignatyev//8205954/webrev.00/index.html > 1 line changed: 0 ins; 1 del; 0 mod; Hi all, could you please review this clean up for hotspot problem list? 
JDK-8199578 has been fixed but the problem list still contains "vmTestbase/vm/mlvm/indy/func/jdi/breakpoint". testing: vmTestbase/vm/mlvm/indy/func/jdi/breakpoint multiple times webrev: http://cr.openjdk.java.net/~iignatyev//8205954/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8205954 Thanks, -- Igor From vladimir.kozlov at oracle.com Wed Jun 27 22:59:34 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 27 Jun 2018 15:59:34 -0700 Subject: RFR(XS) : 8205954 : clean up hotspot ProblemList In-Reply-To: <808B3811-1D61-44AA-9023-DD5406FD5192@oracle.com> References: <808B3811-1D61-44AA-9023-DD5406FD5192@oracle.com> Message-ID: Good. Thanks, Vladimir On 6/27/18 3:53 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8205954/webrev.00/index.html >> 1 line changed: 0 ins; 1 del; 0 mod; > > Hi all, > > could you please review this clean up for hotspot problem list? JDK-8199578 has been fixed but the problem list still contains "vmTestbase/vm/mlvm/indy/func/jdi/breakpoint". > > testing: vmTestbase/vm/mlvm/indy/func/jdi/breakpoint multiple times > webrev: http://cr.openjdk.java.net/~iignatyev//8205954/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8205954 > > Thanks, > -- Igor > From kim.barrett at oracle.com Thu Jun 28 04:29:26 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 Jun 2018 00:29:26 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: <90c22ea2-4acd-a4ea-ba9a-2faccf64888b@oracle.com> References: <90c22ea2-4acd-a4ea-ba9a-2faccf64888b@oracle.com> Message-ID: <8B2271C9-3927-488C-9F85-3FEA46B5F125@oracle.com> > On Jun 27, 2018, at 9:15 AM, Daniel D. Daugherty wrote: > > Wonderful analysis Kim! > > > > http://cr.openjdk.java.net/~kbarrett/8205577/open.00/ > > src/hotspot/share/code/nmethod.cpp > (old) L1694: (*p)->print_value_on(&ls); > I agree with not printing, but you might want to leave a > comment behind. Something like: > > // Do not try "(*p)->print_value_on()" here because it > // is racy with parallel operations. > > Thumbs up in any case. Thanks. I thought about adding a comment, but one such comment seems odd; there are tons of places where we shouldn?t do things like that. From kim.barrett at oracle.com Thu Jun 28 04:29:41 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 Jun 2018 00:29:41 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: <105749FA-7291-488F-BD37-FFEE7E794C4B@oracle.com> > On Jun 27, 2018, at 1:44 AM, David Holmes wrote: > > Hi Kim, > > Not printing seems extremely reasonable. Thanks. From kim.barrett at oracle.com Thu Jun 28 04:29:52 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 Jun 2018 00:29:52 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: <9302a4bace800093fe38d8f9af34c64dccbe9e43.camel@oracle.com> References: <9302a4bace800093fe38d8f9af34c64dccbe9e43.camel@oracle.com> Message-ID: > On Jun 27, 2018, at 3:30 AM, Thomas Schatzl wrote: > > Hi, > > On Wed, 2018-06-27 at 01:03 -0400, Kim Barrett wrote: >> Please review this fix of an assertion failure during ParallelGC >> young collections. >> >> We have a task (thread) walking the code cache, copying and >> forwarding oop references in those cached nmethods containing >> scavengable (e.g. young) oops. 
After the oops in an nmethod are >> processed, a second pass is made to determine whether any of the oops >> are still scavengable, removing the nmethod from the list containing >> scavengable oops. That speeds up future collection cycles. >> > [...] >> The solution being taken here is to change that logging to no longer >> attempt to print the object. Printing an object while in the middle >> of a copying GC, especially one that is parallel, seems quite risky, >> so let's just not do that. > > Looks good. > > Thomas Thanks. From kim.barrett at oracle.com Thu Jun 28 04:30:07 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 Jun 2018 00:30:07 -0400 Subject: RFR: 8205577: parallel/TestPrintGCDetailsVerbose.java fails assertion In-Reply-To: References: Message-ID: <93A7E80E-598E-4182-9FE4-356B1DA8B445@oracle.com> > On Jun 27, 2018, at 9:43 AM, coleen.phillimore at oracle.com wrote: > > > Kim, This looks good. Thank you for the detailed analysis of how this failed. > Coleen Thanks. From rkennke at redhat.com Thu Jun 28 09:06:19 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 28 Jun 2018 11:06:19 +0200 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> <1e211420-7f77-f0d3-8e08-3b499ff66899@redhat.com> Message-ID: There are other reasons why I would like to distinguish reads and writes: - write-barriers in Shenandoah generate *much* more code - I am not sure we can actually easily generate a write-barrier for the CRC32 intrinsic. Do we have an interpreter frame? I thought about this a little more. We could trim the API down, and still retain the flexibility to differentiate between reads and writes. Instead of: oop resolve_read_read(oop obj); oop resolve_read_write(oop obj); we could do: oop resolve(DecoratorSet decorators, oop obj); and pass something like ACCESS_READ or ACCESS_WRITE (kindof what we already have in C1... probably simply rename those C1-specific decorators). If backend doesn't see any decorators, it would do the safe thing. What do you think? Maybe even extend the runtime-access-interface to take the same? Roman > Hi Andrew and Roman, > > I am a fan of profile guided optimization. I would definitely not mind > introducing these concepts in the compilers where they are with no doubt > necessary (and we also have the right tools for dealing with this > better). In fact, they already have read/write decorators that could be > used for resolve barriers in our compilers, and can use algorithms to > safely elide barriers where provably correct, so it makes perfect sense > for me to use such concepts there. > I'm just not sure that the interpreter needs to be polluted with this > conceptual overhead, unless there is at least one benchmark that can > show that we are solving an actual problem with this. Remember, > premature optimizations are the root of all evil. In you experience, > have you ever observed a difference in any application or benchmark, due > to the less than handful paths in the interpreter having a slightly > suboptimal barrier being used? If so, I could change my mind. 
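A minimal sketch of the single-entry-point variant proposed above, using plain stand-ins rather than HotSpot's real DecoratorSet machinery: the caller states its intent with a decorator, and a backend that does not want to distinguish the two cases can always treat everything as a write and stay correct. The names and bit values here are illustrative assumptions.

    #include <cstdio>

    typedef void* oop;                    // stand-in
    typedef unsigned DecoratorSetSketch;  // stand-in for DecoratorSet

    const DecoratorSetSketch ACCESS_READ  = 1u << 0;
    const DecoratorSetSketch ACCESS_WRITE = 1u << 1;

    // One entry point; the decorator says how strong a resolve is needed.
    // A conservative backend may ignore ACCESS_READ and emit the write
    // variant everywhere, which is always safe, just more expensive.
    static oop resolve(DecoratorSetSketch decorators, oop obj) {
      if (decorators & ACCESS_WRITE) {
        printf("write resolve: test-and-branch, maybe evacuate\n");
      } else {
        printf("read resolve: single load of the forwarding pointer\n");
      }
      return obj;   // identity in this sketch
    }

    int main() {
      int o = 0;
      resolve(ACCESS_READ,  &o);   // e.g. an intrinsic's input buffer
      resolve(ACCESS_WRITE, &o);   // e.g. the object of a monitor enter
      return 0;
    }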
> > Thanks, > /Erik > > On 2018-06-27 12:44, Andrew Haley wrote: >> On 06/27/2018 11:42 AM, Roman Kennke wrote: >> >>> It should be noted that normal loads and stores are already covered >>> by the Access API, and we emit the correct read- and write >>> barriers. This is about a few places that don't easily fit that >>> model and still require barriers. That is monitor enter/exit (need >>> WBs anyway) and a few intrinsics, in the interpreter only >>> CRC32. This requires an RB on the buffer array. Yeah, it's probably >>> OK to emit WB there too. If it's really hot, it'd be compiled by C1 >>> or C2. >> Right, but as you correctly note we'll have exactly the same >> discussion about C1, with the same points made. >> > From markus.gronlund at oracle.com Thu Jun 28 11:23:39 2018 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Thu, 28 Jun 2018 04:23:39 -0700 (PDT) Subject: FW: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac In-Reply-To: <4c927e10-e0ec-4d5b-ba44-9ac5759a5589@default> References: <4c927e10-e0ec-4d5b-ba44-9ac5759a5589@default> Message-ID: <74011707-b197-49e9-8b8e-76edca8d811c@default> Widening this a bit for some quicker reviews (since this is a P1). Thanks in advance Markus -----Original Message----- From: Markus Gronlund Sent: den 28 juni 2018 13:07 To: hotspot-jfr-dev at openjdk.java.net; Tobias Hartmann Subject: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac Greetings, Looks like the check-in for https://bugs.openjdk.java.net/browse/JDK-8205906 broke the product build when running on Mac (builds fine using the "debug" target that I tested before pushing. There is a P1 filed now to restore the Mach builds, so I would appreciate a quick ok. Webrev: http://cr.openjdk.java.net/~mgronlun/8205996/webrev00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8205996 Summary: inlining char constants and lengths. Currently running a verification job in MACH5 (product build) Markus From tobias.hartmann at oracle.com Thu Jun 28 11:38:39 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 28 Jun 2018 13:38:39 +0200 Subject: FW: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac In-Reply-To: <74011707-b197-49e9-8b8e-76edca8d811c@default> References: <4c927e10-e0ec-4d5b-ba44-9ac5759a5589@default> <74011707-b197-49e9-8b8e-76edca8d811c@default> Message-ID: Hi Markus, looks good to me. Thanks, Tobias On 28.06.2018 13:23, Markus Gronlund wrote: > Widening this a bit for some quicker reviews (since this is a P1). > > Thanks in advance > > Markus > > -----Original Message----- > From: Markus Gronlund > Sent: den 28 juni 2018 13:07 > To: hotspot-jfr-dev at openjdk.java.net; Tobias Hartmann > Subject: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac > > Greetings, > > Looks like the check-in for https://bugs.openjdk.java.net/browse/JDK-8205906 broke the product build when running on Mac (builds fine using the "debug" target that I tested before pushing. > > There is a P1 filed now to restore the Mach builds, so I would appreciate a quick ok. > > Webrev: http://cr.openjdk.java.net/~mgronlun/8205996/webrev00/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8205996 > > Summary: inlining char constants and lengths. 
> > Currently running a verification job in MACH5 (product build) > > Markus > From erik.helin at oracle.com Thu Jun 28 11:46:54 2018 From: erik.helin at oracle.com (Erik Helin) Date: Thu, 28 Jun 2018 13:46:54 +0200 Subject: FW: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac In-Reply-To: <74011707-b197-49e9-8b8e-76edca8d811c@default> References: <4c927e10-e0ec-4d5b-ba44-9ac5759a5589@default> <74011707-b197-49e9-8b8e-76edca8d811c@default> Message-ID: <92dd4c25-8db8-8d52-8592-bff87f5a2efb@oracle.com> On 06/28/2018 01:23 PM, Markus Gronlund wrote: > Widening this a bit for some quicker reviews (since this is a P1). Given that the build is broken, I think you better just push this. The patch can (and should) be improved IMO, please use strlen instead of "24" and "25". Two of the changed places are asserts, so the performance of strlen doesn't matter. The remaining place is part of parse_start_flight_recording_option which I would presume is only called when a jcmd is executed (or at JVM startup), so it can't be a hot path (please correct me if I'm wrong). Thanks, Erik > Thanks in advance > > Markus > > -----Original Message----- > From: Markus Gronlund > Sent: den 28 juni 2018 13:07 > To: hotspot-jfr-dev at openjdk.java.net; Tobias Hartmann > Subject: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac > > Greetings, > > Looks like the check-in for https://bugs.openjdk.java.net/browse/JDK-8205906 broke the product build when running on Mac (builds fine using the "debug" target that I tested before pushing. > > There is a P1 filed now to restore the Mach builds, so I would appreciate a quick ok. > > Webrev: http://cr.openjdk.java.net/~mgronlun/8205996/webrev00/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8205996 > > Summary: inlining char constants and lengths. > > Currently running a verification job in MACH5 (product build) > > Markus > From markus.gronlund at oracle.com Thu Jun 28 12:09:00 2018 From: markus.gronlund at oracle.com (Markus Gronlund) Date: Thu, 28 Jun 2018 05:09:00 -0700 (PDT) Subject: FW: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac In-Reply-To: <92dd4c25-8db8-8d52-8592-bff87f5a2efb@oracle.com> References: <4c927e10-e0ec-4d5b-ba44-9ac5759a5589@default> <74011707-b197-49e9-8b8e-76edca8d811c@default> <92dd4c25-8db8-8d52-8592-bff87f5a2efb@oracle.com> Message-ID: <0d6a7765-8673-4611-89c1-71c9df300d48@default> Erik and Tobias, Thank you very much for your quick reviews. Erik, You are right, I was stressing a bit getting a quick fix in place. I will rework this section along the lines you suggest. Thanks again Markus -----Original Message----- From: Erik Helin Sent: den 28 juni 2018 13:47 To: Markus Gronlund ; hotspot-dev developers Subject: Re: FW: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac On 06/28/2018 01:23 PM, Markus Gronlund wrote: > Widening this a bit for some quicker reviews (since this is a P1). Given that the build is broken, I think you better just push this. The patch can (and should) be improved IMO, please use strlen instead of "24" and "25". Two of the changed places are asserts, so the performance of strlen doesn't matter. The remaining place is part of parse_start_flight_recording_option which I would presume is only called when a jcmd is executed (or at JVM startup), so it can't be a hot path (please correct me if I'm wrong). 
Thanks, Erik > Thanks in advance > > Markus > > -----Original Message----- > From: Markus Gronlund > Sent: den 28 juni 2018 13:07 > To: hotspot-jfr-dev at openjdk.java.net; Tobias Hartmann > > Subject: RFR(XXXS): 8205996: JDK-8205906 broke the build on Mac > > Greetings, > > Looks like the check-in for https://bugs.openjdk.java.net/browse/JDK-8205906 broke the product build when running on Mac (builds fine using the "debug" target that I tested before pushing. > > There is a P1 filed now to restore the Mach builds, so I would appreciate a quick ok. > > Webrev: http://cr.openjdk.java.net/~mgronlun/8205996/webrev00/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8205996 > > Summary: inlining char constants and lengths. > > Currently running a verification job in MACH5 (product build) > > Markus > From stuart.monteith at linaro.org Thu Jun 28 14:37:10 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Thu, 28 Jun 2018 15:37:10 +0100 Subject: 8205118: CodeStrings::copy() assertion caused by -XX:+VerifyOops -XX:+PrintStubCode In-Reply-To: References: <0da3b5fa-cb47-2803-f5b0-959ddc30c667@redhat.com> <7c93f042-d668-6764-10cc-c74eb6b07d51@redhat.com> Message-ID: Hi, It looks good to me. Running with the following on aarch64 and x86_64 didn't throw up any issues: java -XX:+VerifyOops -XX:+PrintStubCode -XX:+PrintInterpreter -XX:+PrintAssembly As well as with jtreg tier1 tests on aarch64 fastdebug build. BR, StuartThanks Andrew, it checks On Fri, 22 Jun 2018 at 17:27, Andrew Haley wrote: > > On 06/18/2018 06:07 PM, Aleksey Shipilev wrote: > > On 06/18/2018 07:02 PM, Andrew Haley wrote: > >> My recent patch to re-enable the printing of code comments in > >> PrintStubCode revealed a latent bug in CodeStrings::copy(). > >> VerifyOops uses CodeStrings to hold its assertion strings, and these > >> are distinguished from code comments by an offset of -1. (Presumably > >> to make sure they're not interpreted as code comments by the > >> disassembler.) Unfortunately, CodeStrings::copy() triggers an > >> assertion failure when it sees any of the assertion strings. > >> > >> The best fix, IMO, is to correct CodeStrings::copy(): it shouldn't > >> fail whatever the code strings are. > >> > >> http://cr.openjdk.java.net/~aph/8205118-1/ > > http://cr.openjdk.java.net/~aph/8205118-2/ > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From igor.ignatyev at oracle.com Thu Jun 28 18:09:50 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 28 Jun 2018 11:09:50 -0700 Subject: RFR(S/M) : 8202561 : clean up TEST.groups file Message-ID: <19F6C0D7-EA7D-456D-BB3E-68EE786C3125@oracle.com> http://cr.openjdk.java.net/~iignatyev//8202561/webrev.00/index.html > 2262 lines changed: 2 ins; 2258 del; 2 mod Hi all, could you please review the clean up for hotspot TEST.groups file? the patch moves all vmTestbase_*_quick groups to a separate file TEST.quick-groups. 
webrev: http://cr.openjdk.java.net/~iignatyev//8202561/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8202561 Thanks, -- Igor From ekaterina.pavlova at oracle.com Thu Jun 28 20:43:19 2018 From: ekaterina.pavlova at oracle.com (Ekaterina Pavlova) Date: Thu, 28 Jun 2018 13:43:19 -0700 Subject: RFR(S/M) : 8202561 : clean up TEST.groups file In-Reply-To: <19F6C0D7-EA7D-456D-BB3E-68EE786C3125@oracle.com> References: <19F6C0D7-EA7D-456D-BB3E-68EE786C3125@oracle.com> Message-ID: <5cf5b46a-1ae7-3286-1a5f-9f2141312401@oracle.com> Looks good, thanks, -katya On 6/28/18 11:09 AM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8202561/webrev.00/index.html >> 2262 lines changed: 2 ins; 2258 del; 2 mod > > Hi all, > > could you please review the clean up for hotspot TEST.groups file? > > the patch moves all vmTestbase_*_quick groups to a separate file TEST.quick-groups. > > webrev: http://cr.openjdk.java.net/~iignatyev//8202561/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8202561 > > Thanks, > -- Igor > From igor.ignatyev at oracle.com Fri Jun 29 04:42:21 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 28 Jun 2018 21:42:21 -0700 Subject: RFR(XXS) : 8206088 : 8205207 broke builds Message-ID: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html > 1 line changed: 0 ins; 0 del; 1 mod; Hi all, could you please review this one liner fix? webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB Thanks, -- Igor From ekaterina.pavlova at oracle.com Fri Jun 29 04:47:54 2018 From: ekaterina.pavlova at oracle.com (Ekaterina Pavlova) Date: Thu, 28 Jun 2018 21:47:54 -0700 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> Message-ID: Good, thanks for quick response and webrev! -katya On 6/28/18 9:42 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >> 1 line changed: 0 ins; 0 del; 1 mod; > > Hi all, > > could you please review this one liner fix? > > webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 > testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB > > Thanks, > -- Igor > From erik.helin at oracle.com Fri Jun 29 04:50:32 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 29 Jun 2018 06:50:32 +0200 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> Message-ID: <392fe894-88a4-0934-fc02-b3171e02d074@oracle.com> On 06/29/2018 06:42 AM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >> 1 line changed: 0 ins; 0 del; 1 mod; > > Hi all, > > could you please review this one liner fix? > > webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html Hmmm, it seems like you are missing an end-of-string quotation mark in + $(info "Skip building of Graal unit tests because 3rd party libraries directory is not specified) Or did I misunderstand the patch? 
Thanks, Erik > JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 > testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB > > Thanks, > -- Igor > From igor.ignatyev at oracle.com Fri Jun 29 04:55:09 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 28 Jun 2018 21:55:09 -0700 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: <392fe894-88a4-0934-fc02-b3171e02d074@oracle.com> References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> <392fe894-88a4-0934-fc02-b3171e02d074@oracle.com> Message-ID: Hi Erik, actually, I have a redundant quotation mark at the begging, removed. thanks for spotting this. Thanks, -- Igor > On Jun 28, 2018, at 9:50 PM, Erik Helin wrote: > > On 06/29/2018 06:42 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >>> 1 line changed: 0 ins; 0 del; 1 mod; >> Hi all, >> could you please review this one liner fix? >> webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html > > Hmmm, it seems like you are missing an end-of-string quotation mark in > > + $(info "Skip building of Graal unit tests because 3rd party libraries directory is not specified) > > Or did I misunderstand the patch? > Thanks, > Erik > >> JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 >> testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB >> Thanks, >> -- Igor From erik.helin at oracle.com Fri Jun 29 04:56:26 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 29 Jun 2018 06:56:26 +0200 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> <392fe894-88a4-0934-fc02-b3171e02d074@oracle.com> Message-ID: On 06/29/2018 06:55 AM, Igor Ignatyev wrote: > Hi Erik, > > actually, I have a redundant quotation mark at the begging, removed. thanks for spotting this. Yes, I realized that just as I had sent my email :) Anyways, patch looks good now, Reviewed. Thanks, Erik > Thanks, > -- Igor > >> On Jun 28, 2018, at 9:50 PM, Erik Helin wrote: >> >> On 06/29/2018 06:42 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >>>> 1 line changed: 0 ins; 0 del; 1 mod; >>> Hi all, >>> could you please review this one liner fix? >>> webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >> >> Hmmm, it seems like you are missing an end-of-string quotation mark in >> >> + $(info "Skip building of Graal unit tests because 3rd party libraries directory is not specified) >> >> Or did I misunderstand the patch? >> Thanks, >> Erik >> >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 >>> testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB >>> Thanks, >>> -- Igor > From igor.ignatyev at oracle.com Fri Jun 29 05:00:11 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 28 Jun 2018 22:00:11 -0700 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> <392fe894-88a4-0934-fc02-b3171e02d074@oracle.com> Message-ID: <912AACE2-FF01-4199-B2C6-C018F83888E1@oracle.com> Erik, Katya, thank you for such fast reviews. Cheers, -- Igor > On Jun 28, 2018, at 9:56 PM, Erik Helin wrote: > > On 06/29/2018 06:55 AM, Igor Ignatyev wrote: >> Hi Erik, >> actually, I have a redundant quotation mark at the begging, removed. thanks for spotting this. > > Yes, I realized that just as I had sent my email :) Anyways, patch looks good now, Reviewed. 
> > Thanks, > Erik > >> Thanks, >> -- Igor >>> On Jun 28, 2018, at 9:50 PM, Erik Helin wrote: >>> >>> On 06/29/2018 06:42 AM, Igor Ignatyev wrote: >>>> http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >>>>> 1 line changed: 0 ins; 0 del; 1 mod; >>>> Hi all, >>>> could you please review this one liner fix? >>>> webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >>> >>> Hmmm, it seems like you are missing an end-of-string quotation mark in >>> >>> + $(info "Skip building of Graal unit tests because 3rd party libraries directory is not specified) >>> >>> Or did I misunderstand the patch? >>> Thanks, >>> Erik >>> >>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 >>>> testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB >>>> Thanks, >>>> -- Igor From david.holmes at oracle.com Fri Jun 29 05:32:55 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 29 Jun 2018 15:32:55 +1000 Subject: RFR(XXS) : 8206088 : 8205207 broke builds In-Reply-To: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> References: <2B0324FD-5550-497E-8736-A3AF1FE5C66C@oracle.com> Message-ID: <1eda34bc-5028-46cf-6e47-bcc73a6b84de@oracle.com> So now I only get spammed with this if LOG=info+ ? Not sure why I need to be told about this if not asking for anything Graal related ... David On 29/06/2018 2:42 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html >> 1 line changed: 0 ins; 0 del; 1 mod; > > Hi all, > > could you please review this one liner fix? > > webrev: http://cr.openjdk.java.net/~iignatyev//8206088/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8206088 > testing: make build-test-hotspot-jtreg-graal w/ empty GRAALUNIT_LIB > > Thanks, > -- Igor > From aph at redhat.com Fri Jun 29 12:59:56 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 29 Jun 2018 13:59:56 +0100 Subject: Ping: RFR: JDK-8205523: Explicit barriers for interpreter In-Reply-To: References: <3aae894d-dae1-f4b3-7702-d99621382864@redhat.com> <75ea0ba0-6990-9e18-c724-355aad788cb6@redhat.com> <9904470c-4a10-48c3-2533-c5f8c3fe94e0@oracle.com> <0a85ec44-5487-c11f-1436-8912958dc9ad@redhat.com> <1e211420-7f77-f0d3-8e08-3b499ff66899@redhat.com> Message-ID: <38724390-8f6b-45ac-8654-e79082e820c6@redhat.com> Hi, On 06/27/2018 12:13 PM, Erik ?sterlund wrote: > I am a fan of profile guided optimization. I would definitely not mind > introducing these concepts in the compilers where they are with no doubt > necessary (and we also have the right tools for dealing with this > better). In fact, they already have read/write decorators that could be > used for resolve barriers in our compilers, and can use algorithms to > safely elide barriers where provably correct, so it makes perfect sense > for me to use such concepts there. > I'm just not sure that the interpreter needs to be polluted with this > conceptual overhead, unless there is at least one benchmark that can > show that we are solving an actual problem with this. Remember, > premature optimizations are the root of all evil. but efficient systems are made from thousands of tiny optimizations, each one too small to be measured above the noise on its own. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From ChrisPhi at LGonQn.Org Fri Jun 29 15:48:38 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Fri, 29 Jun 2018 11:48:38 -0400 Subject: Fwd: [Mach5] mach5-one-chrisphi-JDK-8203030-3-20180629-0508-29134: Build tasks UNSTABLE. Test tasks UNSTABLE. In-Reply-To: <348780983.18.1530253571674.JavaMail.root@sca00lvx.us.oracle.com> References: <348780983.18.1530253571674.JavaMail.root@sca00lvx.us.oracle.com> Message-ID: Hi, Though the JDK8203030 patch was last submitted for Mach5 testing over a week ago it still seems to be being run, but now failing? Last night and this morning (see attached)... any idea whats going on? Chris PS Builds of http://hg.openjdk.java.net/jdk/jdk11 tip: changeset: 50897:9816d7cc655e tag: tip user: thartmann date: Fri Jun 29 11:10:47 2018 +0200 summary: 8205940: LoadNode::find_previous_arraycopy fails with "broken allocation" assert which contain the fix , seem to work just fine? From rkennke at redhat.com Fri Jun 29 15:53:24 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 29 Jun 2018 17:53:24 +0200 Subject: Fwd: [Mach5] mach5-one-chrisphi-JDK-8203030-3-20180629-0508-29134: Build tasks UNSTABLE. Test tasks UNSTABLE. In-Reply-To: References: <348780983.18.1530253571674.JavaMail.root@sca00lvx.us.oracle.com> Message-ID: <5a57bd01-df4d-7852-7498-f153d1678711@redhat.com> Hi Chris, as far as I know, you might need to merge from latest default branch into your testing branch. Also, be aware of repo-fork jdk/jdk -> jdk/jdk11 and corresponding submit-repo forks. If you no longer need it tested, close the branch. Roman > Hi, > > Though the JDK8203030 patch was last submitted for Mach5 testing over a > week ago it still seems to be being run, but now failing? Last night and > this morning (see attached)... any idea whats going on? > > Chris > PS > > Builds of > http://hg.openjdk.java.net/jdk/jdk11 tip: > > changeset: 50897:9816d7cc655e > tag: tip > user: thartmann > date: Fri Jun 29 11:10:47 2018 +0200 > summary: 8205940: LoadNode::find_previous_arraycopy fails with > "broken allocation" assert > > which contain the fix , seem to work just fine? > From ChrisPhi at LGonQn.Org Fri Jun 29 15:55:55 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Fri, 29 Jun 2018 11:55:55 -0400 Subject: Fwd: [Mach5] mach5-one-chrisphi-JDK-8203030-3-20180629-0508-29134: Build tasks UNSTABLE. Test tasks UNSTABLE. In-Reply-To: References: <348780983.18.1530253571674.JavaMail.root@sca00lvx.us.oracle.com> Message-ID: Hi Oops [Mach5] runs were attached... but stripped of course... inline below: On 29/06/18 11:48 AM, Chris Phillips wrote: > Hi, > > Though the JDK8203030 patch was last submitted for Mach5 testing over a > week ago it still seems to be being run, but now failing? Last night and > this morning (see attached)... any idea whats going on? > > Chris > PS > > Builds of > http://hg.openjdk.java.net/jdk/jdk11 tip: > > changeset: 50897:9816d7cc655e > tag: tip > user: thartmann > date: Fri Jun 29 11:10:47 2018 +0200 > summary: 8205940: LoadNode::find_previous_arraycopy fails with > "broken allocation" assert > > which contain the fix , seem to work just fine? 
> > > Build Details: 2018-06-29-0505306.chrisphi.source
0 Failed Tests
Mach5 Tasks Results Summary
FAILED: 0 EXECUTED_WITH_FAILURE: 2 PASSED: 55 NA: 0 UNABLE_TO_RUN: 18 KILLED: 0
Build
3 Not run
linux-x64-linux-x64-build-0 error while building, return value: 2
linux-x64-debug-linux-x64-build-1 error while building, return value: 2
linux-x64-install-linux-x64-build-14 Dependency task failed: mach5...180629-0508-29134-linux-x64-linux-x64-build-0
Test
17 Not run
tier1-product-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-18 Dependency task failed: mach5...180629-0508-29134-linux-x64-linux-x64-build-0
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
tier1-product-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-21 Dependency task failed: mach5...180629-0508-29134-linux-x64-linux-x64-build-0
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 Dependency task failed: mach5...-0508-29134-linux-x64-debug-linux-x64-build-1
See all 17...
Build Details: 2018-06-29-1311224.chrisphi.source
0 Failed Tests
Mach5 Tasks Results Summary
FAILED: 0 EXECUTED_WITH_FAILURE: 2 PASSED: 55 NA: 0 UNABLE_TO_RUN: 18 KILLED: 0
Build
3 Not run
linux-x64-linux-x64-build-0 error while building, return value: 2
linux-x64-debug-linux-x64-build-1 error while building, return value: 2
linux-x64-install-linux-x64-build-14 Dependency task failed: mach5...180629-1314-29182-linux-x64-linux-x64-build-0
Test
17 Not run
tier1-product-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-18 Dependency task failed: mach5...180629-1314-29182-linux-x64-linux-x64-build-0
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_common-linux-x64-debug-24 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_1-linux-x64-debug-27 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-linux-x64-debug-30 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-linux-x64-debug-33 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-36 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_1-linux-x64-debug-39 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_2-linux-x64-debug-42 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
tier1-product-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-21 Dependency task failed: mach5...180629-1314-29182-linux-x64-linux-x64-build-0
tier1-debug-jdk_open_test_hotspot_jtreg_tier1_gc_gcbasher-linux-x64-debug-45 Dependency task failed: mach5...-1314-29182-linux-x64-debug-linux-x64-build-1
See all 17...

From swatibits14 at gmail.com Fri Jun 29 16:46:20 2018
From: swatibits14 at gmail.com (Swati Sharma)
Date: Fri, 29 Jun 2018 22:16:20 +0530
Subject: [ping] Re: [11] RFR(M): 8189922: UseNUMA memory interleaving vs membind
Message-ID:

Hi,

Could I get a review for this change that affects the JVM when there are pinned memory nodes please? It's already reviewed and tested on PPC64 and on AARCH64 by Gustavo and Derek; however, both are not Reviewers, so I need additional reviews for that change.

Thanks in advance.
Swati

On Tue, Jun 19, 2018 at 5:58 PM, Swati Sharma wrote:

> Hi All,
> 
> Here is the numa information of the system :
> swati at java-diesel1:~$ numactl -H
> available: 8 nodes (0-7)
> node 0 cpus: 0 1 2 3 4 5 6 7 64 65 66 67 68 69 70 71
> node 0 size: 64386 MB
> node 0 free: 64134 MB
> node 1 cpus: 8 9 10 11 12 13 14 15 72 73 74 75 76 77 78 79
> node 1 size: 64509 MB
> node 1 free: 64232 MB
> node 2 cpus: 16 17 18 19 20 21 22 23 80 81 82 83 84 85 86 87
> node 2 size: 64509 MB
> node 2 free: 64215 MB
> node 3 cpus: 24 25 26 27 28 29 30 31 88 89 90 91 92 93 94 95
> node 3 size: 64509 MB
> node 3 free: 64157 MB
> node 4 cpus: 32 33 34 35 36 37 38 39 96 97 98 99 100 101 102 103
> node 4 size: 64509 MB
> node 4 free: 64336 MB
> node 5 cpus: 40 41 42 43 44 45 46 47 104 105 106 107 108 109 110 111
> node 5 size: 64509 MB
> node 5 free: 64352 MB
> node 6 cpus: 48 49 50 51 52 53 54 55 112 113 114 115 116 117 118 119
> node 6 size: 64509 MB
> node 6 free: 64359 MB
> node 7 cpus: 56 57 58 59 60 61 62 63 120 121 122 123 124 125 126 127
> node 7 size: 64508 MB
> node 7 free: 64350 MB
> node distances:
> node 0 1 2 3 4 5 6 7
> 0: 10 16 16 16 32 32 32 32
> 1: 16 10 16 16 32 32 32 32
> 2: 16 16 10 16 32 32 32 32
> 3: 16 16 16 10 32 32 32 32
> 4: 32 32 32 32 10 16 16 16
> 5: 32 32 32 32 16 10 16 16
> 6: 32 32 32 32 16 16 10 16
> 7: 32 32 32 32 16 16 16 10
> 
> Thanks,
> Swati
> 
> On Tue, Jun 19, 2018 at 12:00 AM, Gustavo Romero <
> gromero at linux.vnet.ibm.com> wrote:
> 
>> Hi Swati,
>> 
>> On 06/16/2018 02:52 PM, Swati Sharma wrote:
>> 
>>> Hi All,
>>> 
>>> This is my first patch, I would appreciate if anyone can review the fix:
>>> 
>>> Bug : https://bugs.openjdk.java.net/browse/JDK-8189922 <
>>> https://bugs.openjdk.java.net/browse/JDK-8189922>
>>> Webrev : http://cr.openjdk.java.net/~gromero/8189922/v1
>>> 
>>> The bug is about JVM flag UseNUMA which bypasses the user specified
>>> numactl --membind option and divides the whole heap in lgrps according to
>>> available numa nodes.
>>> 
>>> The proposed solution is to disable UseNUMA if bound to single numa
>>> node. In case more than one numa node binding, create the lgrps according
>>> to bound nodes. If there is no binding, then JVM will divide the whole heap
>>> based on the number of NUMA nodes available on the system.
>>> 
>>> I appreciate Gustavo's help for fixing the thread allocation based on
>>> numa distance for membind which was a dangling issue associated with main
>>> patch.
>>> 
>> 
>> Thanks. I have no further comments on it. LGTM.
>> 
>> 
>> Best regards,
>> Gustavo
>> 
>> PS: Please, provide numactl -H information when possible. It helps to
>> grasp
>> promptly the actual NUMA topology in question :)
>> 
>> Tested the fix by running specjbb2015 composite workload on 8 NUMA node
>>> system.
>>> Case 1 : Single NUMA node bind
>>> numactl --cpunodebind=0 --membind=0 java -Xmx24g -Xms24g -Xmn22g
>>> -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis
>>> 
>>> Before Patch: gc.log
>>> eden space 22511616K(22GB), 12% used
>>> lgrp 0 space 2813952K, 100% used
>>> lgrp 1 space 2813952K, 0% used
>>> lgrp 2 space 2813952K, 0% used
>>> lgrp 3 space 2813952K, 0% used
>>> lgrp 4 space 2813952K, 0% used
>>> lgrp 5 space 2813952K, 0% used
>>> lgrp 6 space 2813952K, 0% used
>>> lgrp 7 space 2813952K, 0% used
>>> After Patch : gc.log
>>> eden space 46718976K(45GB), 99% used(NUMA disabled)
>>> 
>>> Case 2 : Multiple NUMA node bind
>>> numactl --cpunodebind=0,7 --membind=0,7 java -Xms50g -Xmx50g -Xmn45g
>>> -XX:+UseNUMA -Xlog:gc*=debug:file=gc.log:time,uptimemillis
>>> 
>>> Before Patch : gc.log
>>> eden space 46718976K, 6% used
>>> lgrp 0 space 5838848K, 14% used
>>> lgrp 1 space 5838848K, 0% used
>>> lgrp 2 space 5838848K, 0% used
>>> lgrp 3 space 5838848K, 0% used
>>> lgrp 4 space 5838848K, 0% used
>>> lgrp 5 space 5838848K, 0% used
>>> lgrp 6 space 5838848K, 0% used
>>> lgrp 7 space 5847040K, 35% used
>>> After Patch : gc.log
>>> eden space 46718976K(45GB), 99% used
>>> lgrp 0 space 23359488K(23.5GB), 100% used
>>> lgrp 7 space 23359488K(23.5GB), 99% used
>>> 
>>> 
>>> Note: The proposed solution is only for numactl membind option. The fix
>>> is not for --cpunodebind and localalloc which is a separate bug
>>> https://bugs.openjdk.java.net/browse/JDK-8205051 and fix is in progress
>>> on this.
>>> 
>>> Thanks,
>>> Swati Sharma
>>> Software Engineer -2 at AMD
>>> 
>>> 
>> 
> 

From igor.ignatyev at oracle.com Fri Jun 29 19:25:16 2018
From: igor.ignatyev at oracle.com (Igor Ignatyev)
Date: Fri, 29 Jun 2018 12:25:16 -0700
Subject: RFR(S) : 8206117 : failed to get JDK properties for JVM w/o JVMCI
Message-ID: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com>

http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
> 39 lines changed: 0 ins; 0 del; 39 mod;

Hi all,

could you please review this small fix? if JVM has been built w/o JVMCI, it won't have EnableJVMCI flag, as a result jtreg-ext/requires/VMProps will throw NPE trying to set 'vm.opt.final.EnableJVMCI' property. w/ this fix, we will get 'null' value for nonexistent flags. compiler/graalunit/graalunit/generateTests.sh has been updated to generate tests which can handle such values, compiler/graalunit/ tests have been regenerated.

webrev: http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
JBS: https://bugs.openjdk.java.net/browse/JDK-8206117
testing: compiler/graalunit tests on builds w/ and w/o JVMCI

Thanks,
-- Igor

From vladimir.kozlov at oracle.com Fri Jun 29 19:34:00 2018
From: vladimir.kozlov at oracle.com (Vladimir Kozlov)
Date: Fri, 29 Jun 2018 12:34:00 -0700
Subject: RFR(S) : 8206117 : failed to get JDK properties for JVM w/o JVMCI
In-Reply-To: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com>
References: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com>
Message-ID: <793b7ff3-3407-a8df-105f-ac59754da04c@oracle.com>

Why not to check for 'null' in VMProps.java and assign 'false' value in such case?

Thanks,
Vladimir

On 6/29/18 12:25 PM, Igor Ignatyev wrote:
> http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>> 39 lines changed: 0 ins; 0 del; 39 mod;
> 
> Hi all,
> 
> could you please review this small fix? if JVM has been built w/o JVMCI, it won't have EnableJVMCI flag, as a result jtreg-ext/requires/VMProps will throw NPE trying to set 'vm.opt.final.EnableJVMCI' property. w/ this fix, we will get 'null' value for nonexistent flags. compiler/graalunit/graalunit/generateTests.sh has been updated to generate tests which can handle such values, compiler/graalunit/ tests have been regenerated.
> 
> webrev: http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
> JBS: https://bugs.openjdk.java.net/browse/JDK-8206117
> testing: compiler/graalunit tests on builds w/ and w/o JVMCI
> 
> Thanks,
> -- Igor
> 

From igor.ignatyev at oracle.com Fri Jun 29 19:51:09 2018
From: igor.ignatyev at oracle.com (Igor Ignatyev)
Date: Fri, 29 Jun 2018 12:51:09 -0700
Subject: RFR(S) : 8206117 : failed to get JDK properties for JVM w/o JVMCI
In-Reply-To: <793b7ff3-3407-a8df-105f-ac59754da04c@oracle.com>
References: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com> <793b7ff3-3407-a8df-105f-ac59754da04c@oracle.com>
Message-ID:

I ain't sure setting 'false' to nonexistent flags would be the right choice in all cases, and I don't want real 'false' and 'false' from nonexistent flags to be mixed up.

-- Igor

> On Jun 29, 2018, at 12:34 PM, Vladimir Kozlov wrote:
> 
> Why not to check for 'null' in VMProps.java and assign 'false' value in such case?
> 
> Thanks,
> Vladimir
> 
> On 6/29/18 12:25 PM, Igor Ignatyev wrote:
>> http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>>> 39 lines changed: 0 ins; 0 del; 39 mod;
>> Hi all,
>> could you please review this small fix? if JVM has been built w/o JVMCI, it won't have EnableJVMCI flag, as a result jtreg-ext/requires/VMProps will throw NPE trying to set 'vm.opt.final.EnableJVMCI' property. w/ this fix, we will get 'null' value for nonexistent flags. compiler/graalunit/graalunit/generateTests.sh has been updated to generate tests which can handle such values, compiler/graalunit/ tests have been regenerated.
>> webrev: http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>> JBS: https://bugs.openjdk.java.net/browse/JDK-8206117
>> testing: compiler/graalunit tests on builds w/ and w/o JVMCI
>> Thanks,
>> -- Igor

From vladimir.kozlov at oracle.com Fri Jun 29 19:56:57 2018
From: vladimir.kozlov at oracle.com (Vladimir Kozlov)
Date: Fri, 29 Jun 2018 12:56:57 -0700
Subject: RFR(S) : 8206117 : failed to get JDK properties for JVM w/o JVMCI
In-Reply-To:
References: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com> <793b7ff3-3407-a8df-105f-ac59754da04c@oracle.com>
Message-ID: <64e5f211-0e59-2917-3784-d1d6e348f2c1@oracle.com>

Okay.

thanks,
Vladimir

On 6/29/18 12:51 PM, Igor Ignatyev wrote:
> I ain't sure setting 'false' to nonexistent flags would be the right choice in all cases, and I don't want real 'false' and 'false' from nonexistent flags to be mixed up.
> 
> -- Igor
> 
>> On Jun 29, 2018, at 12:34 PM, Vladimir Kozlov wrote:
>> 
>> Why not to check for 'null' in VMProps.java and assign 'false' value in such case?
>> 
>> Thanks,
>> Vladimir
>> 
>> On 6/29/18 12:25 PM, Igor Ignatyev wrote:
>>> http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>>>> 39 lines changed: 0 ins; 0 del; 39 mod;
>>> Hi all,
>>> could you please review this small fix? if JVM has been built w/o JVMCI, it won't have EnableJVMCI flag, as a result jtreg-ext/requires/VMProps will throw NPE trying to set 'vm.opt.final.EnableJVMCI' property. w/ this fix, we will get 'null' value for nonexistent flags. compiler/graalunit/graalunit/generateTests.sh has been updated to generate tests which can handle such values, compiler/graalunit/ tests have been regenerated.
>>> webrev: http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8206117
>>> testing: compiler/graalunit tests on builds w/ and w/o JVMCI
>>> Thanks,
>>> -- Igor
> 

From igor.ignatyev at oracle.com Fri Jun 29 20:15:54 2018
From: igor.ignatyev at oracle.com (Igor Ignatyev)
Date: Fri, 29 Jun 2018 13:15:54 -0700
Subject: RFR(S) : 8206117 : failed to get JDK properties for JVM w/o JVMCI
In-Reply-To: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com>
References: <501E9895-80C7-4CE1-AAB0-9E5B19420844@oracle.com>
Message-ID:

for the sake of history, the right webrev is http://cr.openjdk.java.net/~iignatyev//8206117/webrev.00/index.html

-- Igor

> On Jun 29, 2018, at 12:25 PM, Igor Ignatyev wrote:
> 
> http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
>> 39 lines changed: 0 ins; 0 del; 39 mod;
> 
> Hi all,
> 
> could you please review this small fix? if JVM has been built w/o JVMCI, it won't have EnableJVMCI flag, as a result jtreg-ext/requires/VMProps will throw NPE trying to set 'vm.opt.final.EnableJVMCI' property. w/ this fix, we will get 'null' value for nonexistent flags. compiler/graalunit/graalunit/generateTests.sh has been updated to generate tests which can handle such values, compiler/graalunit/ tests have been regenerated.
> 
> webrev: http://cr.openjdk.java.net/~iignatyev//8204517/webrev.00/index.html
> JBS: https://bugs.openjdk.java.net/browse/JDK-8206117
> testing: compiler/graalunit tests on builds w/ and w/o JVMCI
> 
> Thanks,
> -- Igor
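A minimal sketch of the null-safe flag lookup discussed in the 8206117 thread above. It is not the actual jtreg-ext/requires/VMProps change; the class and method names here are made up for illustration, and it uses the standard com.sun.management.HotSpotDiagnosticMXBean API to show how a flag such as EnableJVMCI can be reported as null when the JVM was built without JVMCI, instead of triggering a NullPointerException:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class VMFlagProbe {
        // Returns the value of a -XX flag, or null if no flag with that name
        // exists in this JVM (e.g. EnableJVMCI on a build configured w/o JVMCI).
        static String vmFlagValue(String name) {
            HotSpotDiagnosticMXBean diagnostic =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            try {
                return diagnostic.getVMOption(name).getValue();
            } catch (IllegalArgumentException absent) {
                return null; // no VM option with this name in this build
            }
        }

        public static void main(String[] args) {
            // A property generator can then record the string "null" (or skip
            // the property) rather than dereferencing a missing flag.
            System.out.println("vm.opt.final.EnableJVMCI = " + vmFlagValue("EnableJVMCI"));
        }
    }

Keeping null distinct from false, as argued in the thread, preserves the difference between "flag absent in this build" and "flag present and set to false".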