From paul.sandoz at oracle.com Thu Feb 1 00:28:06 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 31 Jan 2018 16:28:06 -0800 Subject: [11] RFR 8196533 Update CondyNestedTest.java to compile jcod file Message-ID: Hi, Please review an update to a constant dynamic test that updates to compile the jcod file rather than loading a class from class file bytes encoded in a base64 string: http://cr.openjdk.java.net/~psandoz/jdk/JDK-8196533-CondyNestedTest-compile-jcod/webrev/ This will be pushed to the hs repo. Thanks, Paul. From mandy.chung at oracle.com Thu Feb 1 00:44:36 2018 From: mandy.chung at oracle.com (mandy chung) Date: Wed, 31 Jan 2018 16:44:36 -0800 Subject: [11] RFR 8196533 Update CondyNestedTest.java to compile jcod file In-Reply-To: References: Message-ID: Looks okay. Mandy On 1/31/18 4:28 PM, Paul Sandoz wrote: > Hi, > > Please review an update to a constant dynamic test that updates to compile the jcod file rather than loading a class from class file bytes encoded in a base64 string: > > http://cr.openjdk.java.net/~psandoz/jdk/JDK-8196533-CondyNestedTest-compile-jcod/webrev/ > > This will be pushed to the hs repo. > > Thanks, > Paul. From paul.sandoz at oracle.com Thu Feb 1 00:45:54 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 31 Jan 2018 16:45:54 -0800 Subject: [11] RFR 8195694: ConstantBootstraps.invoke does not preserve variable arity In-Reply-To: <476DA321-6EAB-4C27-A5CE-5776E29F7646@oracle.com> References: <6A5707B7-518C-43C2-98C9-6F52AAF3FE6F@oracle.com> <476DA321-6EAB-4C27-A5CE-5776E29F7646@oracle.com> Message-ID: <6CBF6FD0-66EB-47A0-89E4-92307F45586B@oracle.com> > On Jan 31, 2018, at 3:49 PM, John Rose wrote: > > On second thought, you should also use invokeWithArguments to support jumbo arities. > It does, but non-selectively based on the arity: 245 return handle.invokeWithArguments(args); > This tricky idiom should be put into a utility method, package private for starters. 
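[Editorial sketch] The utility John suggests — unpacking small argument counts through invoke() and falling back to invokeWithArguments for jumbo arities — might look roughly like the following. The class and method names here are hypothetical illustrations, not the actual ConstantBootstraps code:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokeArity {
    // Hypothetical utility sketching the idiom under discussion: unpack
    // small argument counts through invoke() and fall back to
    // invokeWithArguments for larger (including jumbo) arities.
    static Object invokeWithArgs(MethodHandle handle, Object[] args) throws Throwable {
        switch (args.length) {
            case 0:  return handle.invoke();
            case 1:  return handle.invoke(args[0]);
            case 2:  return handle.invoke(args[0], args[1]);
            case 3:  return handle.invoke(args[0], args[1], args[2]);
            default: return handle.invokeWithArguments(args);
        }
    }

    // Small demo: String.concat invoked through the utility.
    static String demo() {
        try {
            MethodHandle concat = MethodHandles.lookup().findVirtual(
                String.class, "concat",
                MethodType.methodType(String.class, String.class));
            return (String) invokeWithArgs(concat, new Object[] { "con", "dy" });
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "condy"
    }
}
```

invoke() applies asType conversions per call, so the receiver and arguments may be passed as plain Objects in each small-arity case.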
A version of it also appears in BSM invocation code. > Are you in part referring to the approach of switching on the number of arguments and using invoke with unpacking for small cases? If you don't object I would like to follow up on that with another issue. >> On Jan 31, 2018, at 3:23 PM, John Rose wrote: >> >> If you remove the old asType call it's good! >> Ah! something went wrong when importing the patch from the amber repo. Updated in place. Paul. >>> On Jan 31, 2018, at 3:15 PM, Paul Sandoz wrote: >>> >>> Hi, >>> >>> Please review this fix to the invoke BSM so that it preserves variable arity, if any: >>> >>> http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195694-constant-bsms-invoke-arity/webrev/ >>> >>> This will be pushed to the hs repo. >>> >>> Thanks, >>> Paul. >> > From mikhailo.seledtsov at oracle.com Thu Feb 1 00:55:14 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Wed, 31 Jan 2018 16:55:14 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <64a3268575d14ddcad90f7d46bab64dd@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> Message-ID: <5A726572.7030905@oracle.com> Changes look good to me, Thank you, Misha On 1/31/18, 6:15 AM, Baesken, Matthias wrote: > Hello , I created a second webrev : > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webrev/ > > - changed DockerTestUtils.buildJdkDockerImage in the suggested way (this should be extendable to linux s390x soon) > >>>>> Can you add "return;" in each test for subsystem not found messages > - added returns in the tests for the subsystems in osContainer_linux.cpp > > - moved some checks at the beginning of subsystem_file_contents (suggested by Dmitry) > > > Best regards, Matthias > > > >> -----Original Message----- >> From: mikhailo [mailto:mikhailo.seledtsov
at oracle.com] >> Sent: Donnerstag, 25. Januar 2018 18:43 >> To: Baesken, Matthias; Bob Vandette >> >> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >> ; Langer, Christoph >> ; Doerr, Martin >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Hi Matthias, >> >> >> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>> that does the following or similar: >>>> 1. Construct a name for platform-specific docker file: >>>> String platformSpecificDockerfile = dockerfile + "-" + >>>> Platform.getOsArch(); >>>> (Platform is jdk.test.lib.Platform) >>>> >>> Hello, the doc says : >>> >>> * Build a docker image that contains JDK under test. >>> * The jdk will be placed under the "/jdk/" folder inside the docker file >> system. >>> ..... >>> param dockerfile name of the dockerfile residing in the test source >>> ..... >>> public static void buildJdkDockerImage(String imageName, String >> dockerfile, String buildDirName) >>> >>> >>> It does not say anything about doing hidden insertions of some platform >> names into the dockerfile name. >>> So should the jtreg API doc be changed ? >>> If so who needs to approve this ? >> Thank you for your concerns about the clarity of API and corresponding >> documentation. This is a test library API, so no need to file CCC or CSR. >> >> This API can be changed via a regular RFR/webrev review process, as soon >> as no one objects. I am a VM SQE engineer covering the docker and Linux >> container area, I am OK with this change. >> And I agree with you, we should update the javadoc header on this method >> to reflect this implicit part of API contract. >> >> >> Thank you, >> Misha >> >> >> >>> (as far as I see so far only the test at >> hotspot/jtreg/runtime/containers/docker/ use this so it should not be a big >> deal to change the interface?)
>>> Best regards, Matthias >>> >>> >>> >>>> -----Original Message----- >>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>> To: Bob Vandette; Baesken, Matthias >>>> >>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>> ; Langer, Christoph >>>> ; Doerr, Martin >>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >>>> ppc64le >>>> >>>> Hi Matthias, >>>> >>>> Please see my comments about the test changes inline. >>>> >>>> >>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>> osContainer_linux.cpp: >>>>> >>>>> Can you add "return;" in each test for subsystem not found messages >> and >>>>> remove these 3 lines OR move your tests for NULL & messages inside. >> The >>>> compiler can >>>>> probably optimize this but I'd prefer more compact code. >>>>> >>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == >> NULL) >>>> { >>>>> 342 return; >>>>> 343 } >>>>> >>>>> >>>>> The other changes in osContainer_linux.cpp look ok. >>>>> >>>>> I forwarded your test changes to Misha, who wrote these. >>>>> >>>>> Since it's likely that other platforms, such as aarch64, are going to run >> into >>>> the same problem, >>>>> It would have been better to enable the tests based on the existence of >> an >>>> arch specific >>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch's in >>>> VPProps.java. >>>>> This approach would reduce the number of changes significantly and >> allow >>>> support to >>>>> be added with 1 new file. >>>>> >>>>> You wouldn't need "String dockerFileName = >>>> Common.getDockerFileName();" >>>>> in every test. Just make DockerTestUtils automatically add arch. >>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>> >>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>> that does the following or similar: >>>> 1.
Construct a name for platform-specific docker file: >>>> String platformSpecificDockerfile = dockerfile + "-" + >>>> Platform.getOsArch(); >>>> (Platform is jdk.test.lib.Platform) >>>> >>>> 2. Check if platformSpecificDockerfile file exists in the test >>>> source directory >>>> Files.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile)) >>>> If it does, then use it. Otherwise continue using the >>>> default/original dockerfile name. >>>> >>>> I think this will considerably simplify your change, as well as make it >>>> easy to extend support to other platforms/configurations >>>> in the future. Let us know what you think of this approach ? >>>> >>>> >>>> Once your change gets (R)eviewed and approved, I can sponsor the push. >>>> >>>> >>>> Thank you, >>>> Misha >>>> >>>> >>>> >>>>> Bob. >>>>> >>>>> >>>>> >>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>> wrote: >>>>>> Hello, could you please review the following change : 8196062 : Enable >>>> docker container related tests for linux ppc64le . >>>>>> It adds docker container testing for linux ppc64 le (little endian) . >>>>>> >>>>>> A number of things had to be done : >>>>>> - Add a separate docker file >>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- >> ppc64le >>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used for >>>> x86_64 seems not to be available for ppc64le ) >>>>>> - Fix parsing /proc/self/mountinfo and /proc/self/cgroup in >>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle the >>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker >>>> container) >>>>>> - Add a bit more logging >>>>>> >>>>>> >>>>>> Webrev : >>>>>> >>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>> >>>>>> >>>>>> Bug : >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>> >>>>>> >>>>>> After these adjustments I could run the runtime/containers/docker >> - >>>> jtreg tests successfully .
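[Editorial sketch] Misha's two steps above can be condensed into a small helper. The class and parameter names here are illustrative: in the real code this logic would live in DockerTestUtils, with the test source directory coming from jtreg's Utils.TEST_SRC and the architecture from jdk.test.lib.Platform.getOsArch().

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class DockerfileSelector {
    // Prefer "<dockerfile>-<os.arch>" when such a file exists in the test
    // source directory; otherwise fall back to the default dockerfile name.
    static String selectDockerfile(Path testSrc, String dockerfile, String osArch) {
        String platformSpecific = dockerfile + "-" + osArch; // e.g. Dockerfile-BasicTest-ppc64le
        return Files.exists(testSrc.resolve(platformSpecific))
                ? platformSpecific
                : dockerfile;
    }
}
```

With this inside buildJdkDockerImage, enabling another platform (ppc64le, s390x, aarch64) becomes a matter of dropping in one new Dockerfile rather than touching every test.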
>>>>>> Best regards, Matthias From coleen.phillimore at oracle.com Thu Feb 1 01:41:01 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 31 Jan 2018 20:41:01 -0500 Subject: RFR (S) 8196199: Remove miscellaneous oop comparison operators In-Reply-To: References: <0a17630f-3132-f5fa-fc87-7dd8023b02b1@oracle.com> <63314b97-a727-68cb-f56a-b911e5809ba3@oracle.com> <4041e0ac-fef5-a20c-b249-29a3987a7328@oracle.com> Message-ID: <25c411eb-5a5f-a349-fa2e-9efad10989c4@oracle.com> On 1/31/18 5:11 PM, Kim Barrett wrote: >> On Jan 31, 2018, at 4:05 PM, coleen.phillimore at oracle.com wrote: >> >> >> >> On 1/31/18 4:01 PM, Kim Barrett wrote: >>>> On Jan 31, 2018, at 2:30 PM, harold seigel wrote: >>>> >>>> Hi Coleen, >>>> >>>> This change looks good. >>>> >>>> In jniCheck.cpp, you could use the is_null(oop obj) function defined in oop.hpp instead of 'oop == NULL'. >>> I like this suggestion, and wish I'd thought of it while writing the initial code. I think it should be applied to all of the former uses of operator!. Coleen? >> There are a zillion places where oop is compared with NULL. > I wasn't suggesting a zillion places be changed. Only the half dozen places that formerly used operator! (via '!x') > and are being changed to instead use 'x == NULL' in 8196199.01/webrev. > This doesn't make sense to me. The is_null(oop obj) function isn't (generally) used anywhere else other than inside oop.cpp. It seems confusing to have only these instances use this function. It would be hard to explain why these are special because it's only because they used some wrong negation operator. Coleen From brian.goetz at oracle.com Thu Feb 1 02:39:20 2018 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 31 Jan 2018 21:39:20 -0500 Subject: Constant dynamic pushed to the hs repo In-Reply-To: References: Message-ID: <5fb64511-213d-3133-c531-c6e64372fd1b@oracle.com> Yay!
On 1/31/2018 5:43 PM, Paul Sandoz wrote: > Hi, > > I just pushed the constant dynamic change sets to hs [*]. It took a little longer than I anticipated to work through some of the review process given the holiday break. > > We should now be able to follow up, in the hs repo until the merge in some cases, with dependent issues such as the changes to support AArch64, SPARC, AoT/Graal, additional tests, and some bug/performance fixes. > > Thanks, > Paul. > > [*] I'll delay marking the JEP as integrated until a merge with the jdk master repo. After that we can then garbage collect the condy branch in the amber repo. From david.holmes at oracle.com Thu Feb 1 07:38:43 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Feb 2018 17:38:43 +1000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> Message-ID: <55fbc66b-5f0e-4650-c199-f8fdc525c5d7@oracle.com> On 30/01/2018 9:57 PM, Jini George wrote: > Hi Daniel, David, > > Thanks, Daniel, for bringing this up. The intent of the test is to get > the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, > which can typically be obtained from the stack traces of the > Common-Cleaner or the Finalizer threads. The stack traces which I had > been noticing were typically of the form: > > > "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in > Object.wait() [0x00007f09a18d2000] > java.lang.Thread.State: TIMED_WAITING (on object monitor) > JavaThread state: _thread_blocked > - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, > Method*=0x00007f09acc43d60 (Interpreted frame) >
- waiting on <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > ?- java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, > pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) > ??????? - waiting to re-lock in wait() <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > ... > > I chose 'waiting to re-lock in wait' since that was what I had been > observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. Actually that output is itself a bug: https://bugs.openjdk.java.net/browse/JDK-8150689 David ----- > But I see how with a timing difference, one could get 'waiting to lock' > as in your case. So, a good way to fix might be to check for the line > containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop > address from that line (should be the address appearing immediately > before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to the > 'inspect' command. > > Thanks much, > Jini. > > On 1/30/2018 3:35 AM, David Holmes wrote: >> Hi Daniel, >> >> Serviceability issues should go to serviceability-dev at openjdk.java.net >> - now cc'd. >> >> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>> system (not necessarily exclusive to this system, but it was the >>> system under test). The bug report [2] provides further details. >>> Essentially the line "waiting to re-lock in wait" never actually >>> occurs. Instead I have the line "waiting to lock" which occurs for >>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. >>> Unfortunately the test is written such that only the first "waiting >>> to lock" occurrence is seen (for java/lang/Class), which is already >>> accounted for in the test. 
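[Editorial sketch] Jini's suggestion — find the line containing "(a java.lang.ref.ReferenceQueue$Lock)" and take the address immediately before it — could be sketched as follows. This is a standalone illustration, not the actual ClhsdbInspect test code:

```java
public class LockAddressFinder {
    // Scan jstack output for the ReferenceQueue$Lock line and return the
    // oop address ("0x...") appearing immediately before the "(a ...)" text,
    // whether the thread shows "waiting on", "waiting to lock", or
    // "waiting to re-lock in wait()".
    static String findLockAddress(String jstackOutput) {
        String marker = "(a java.lang.ref.ReferenceQueue$Lock)";
        for (String line : jstackOutput.split("\\R")) { // \R matches any line terminator
            int idx = line.indexOf(marker);
            if (idx < 0) continue;
            String[] tokens = line.substring(0, idx).trim().split("\\s+");
            String last = tokens[tokens.length - 1];    // e.g. "<0x000000072e61f6e0>"
            return last.replaceAll("[<>]", "");
        }
        return null; // lock line not present in this stack dump
    }
}
```

Matching on the lock's type rather than on the wait state makes the test insensitive to the timing-dependent "waiting on" / "waiting to lock" / "waiting to re-lock" wording.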
>> >> I can't tell exactly what the test expects, or why, but it would be >> extremely hard to arrange for "waiting to re-lock in wait" to be seen >> for the ReferenceQueue lock! That requires acquiring the lock >> yourself, issuing a notify() to unblock the wait(), and then issuing >> the jstack command while still holding the lock! >> >> David >> ----- >> >>> I'm not overly happy with this approach as it actually removes a test >>> line. However, the test line does not actually appear in the output >>> (at least on my system) and the test is not currently written to look >>> for the second occurrence of the line "waiting to lock". Perhaps the >>> original author could chime in and provide further guidance as to the >>> intention of the test. >>> >>> I am happy to modify the patch as necessary. >>> >>> Regards, >>> Daniel Stewart >>> >>> >>> [1] -? http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>> From david.holmes at oracle.com Thu Feb 1 07:51:05 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Feb 2018 17:51:05 +1000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> Message-ID: <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> Hi Daniel, On 1/02/2018 2:45 AM, stewartd.qdt wrote: > Hi Jini, David, > > Please have a look at the revised webrev: http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ > > In this webrev I have changed the approach to finding the addresses. This was necessary because in the case of matching for the locks the addresses are before what is matched and in the case of Method the address is after it. 
The existing code only looked for the addresses after the matched string. I've also tried to align what tokens are being looked for in the lock case. I've taken an approach of breaking the jstack output into lines and then searching each line for it containing what we want. Once found, the line is broken into pieces to find the actual address we want. > > Please let me know if this is an unacceptable approach or any changes you would like to see. I'm not clear on the overall approach as I'm unclear exactly how inspect operates or exactly what the test is trying to verify. One comment on breaking things into lines though: 73 String newline = System.getProperty("line.separator"); 74 String[] lines = jstackOutput.split(newline); As split() takes a regex, I suggest using \R to cover all potential line-breaks, rather than the platform specific line-separator. We've been recently bitten by the distinction between output that comes from reading a process's stdout/stderr (and for which a newline \n is translated into the platform line-separator), and output that comes across a socket connection (for which \n is not translated). This could result in failing to parse things correctly on Windows. It's safer/simpler to expect any kind of line-separator. Thanks, David > Thanks, Daniel > > -----Original Message----- > From: Jini George [mailto:jini.george at oracle.com] > Sent: Tuesday, January 30, 2018 6:58 AM > To: David Holmes ; stewartd.qdt > Cc: serviceability-dev ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java > > Hi Daniel, David, > > Thanks, Daniel, for bringing this up. The intent of the test is to get the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, > which can typically be obtained from the stack traces of the Common-Cleaner or the Finalizer threads.
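[Editorial sketch] David's point about \R can be shown in two lines; this is an illustration, not the test's actual code:

```java
public class LineSplitter {
    // Split on \R (any Unicode line terminator) instead of the platform
    // line separator, so the same parsing works for text read from a
    // process's stdout/stderr and for text received over a socket,
    // on Windows as well as on Unix.
    static String[] splitLines(String text) {
        return text.split("\\R");
    }
}
```

Splitting on System.getProperty("line.separator") would silently fail to split "\n"-terminated text on Windows, which is exactly the bug David describes.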
The stack traces which I had been noticing were typically of the form: > > > "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in > Object.wait() [0x00007f09a18d2000] > java.lang.Thread.State: TIMED_WAITING (on object monitor) > JavaThread state: _thread_blocked > - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, > Method*=0x00007f09acc43d60 (Interpreted frame) > - waiting on <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) > - waiting to re-lock in wait() <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > ... > > I chose 'waiting to re-lock in wait' since that was what I had been observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. > But I see how with a timing difference, one could get 'waiting to lock' > as in your case. So, a good way to fix might be to check for the line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop address from that line (should be the address appearing immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to the 'inspect' command. > > Thanks much, > Jini. > > On 1/30/2018 3:35 AM, David Holmes wrote: >> Hi Daniel, >> >> Serviceability issues should go to serviceability-dev at openjdk.java.net >> - now cc'd. >> >> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>> system (not necessarily exclusive to this system, but it was the >>> system under test). The bug report [2] provides further details. >>> Essentially the line "waiting to re-lock in wait" never actually >>> occurs. Instead I have the line "waiting to lock" which occurs for >>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. 
>>> Unfortunately the test is written such that only the first "waiting to lock" >>> occurrence is seen (for java/lang/Class), which is already accounted >>> for in the test. >> >> I can't tell exactly what the test expects, or why, but it would be >> extremely hard to arrange for "waiting to re-lock in wait" to be seen >> for the ReferenceQueue lock! That requires acquiring the lock >> yourself, issuing a notify() to unblock the wait(), and then issuing >> the jstack command while still holding the lock! >> >> David >> ----- >> >>> I'm not overly happy with this approach as it actually removes a test >>> line. However, the test line does not actually appear in the output >>> (at least on my system) and the test is not currently written to look >>> for the second occurrence of the line "waiting to lock". Perhaps the >>> original author could chime in and provide further guidance as to the >>> intention of the test. >>> >>> I am happy to modify the patch as necessary. >>> >>> Regards, >>> Daniel Stewart >>> >>> >>> [1] -? 
http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>> From kim.barrett at oracle.com Thu Feb 1 09:06:54 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Feb 2018 04:06:54 -0500 Subject: RFR (S) 8196199: Remove miscellaneous oop comparison operators In-Reply-To: <25c411eb-5a5f-a349-fa2e-9efad10989c4@oracle.com> References: <0a17630f-3132-f5fa-fc87-7dd8023b02b1@oracle.com> <63314b97-a727-68cb-f56a-b911e5809ba3@oracle.com> <4041e0ac-fef5-a20c-b249-29a3987a7328@oracle.com> <25c411eb-5a5f-a349-fa2e-9efad10989c4@oracle.com> Message-ID: <499382C0-E7BC-4293-9BDA-B6B035EB5410@oracle.com> > On Jan 31, 2018, at 8:41 PM, coleen.phillimore at oracle.com wrote: > > > > On 1/31/18 5:11 PM, Kim Barrett wrote: >>> On Jan 31, 2018, at 4:05 PM, coleen.phillimore at oracle.com wrote: >>> >>> >>> >>> On 1/31/18 4:01 PM, Kim Barrett wrote: >>>>> On Jan 31, 2018, at 2:30 PM, harold seigel wrote: >>>>> >>>>> Hi Coleen, >>>>> >>>>> This change looks good. >>>>> >>>>> In jniCheck.cpp, you could use the is_null(oop obj) function defined in oop.hpp instead of 'oop == NULL'. >>>> I like this suggestion, and wish I'd thought of it while writing the initial code. I think it should be applied to all of the former uses of operator!. Coleen? >>> There are a zillion places where oop is compared with NULL. >> I wasn't suggesting a zillion places be changed. Only the half dozen places that formerly used operator! (via '!x') >> and are being changed to instead use 'x == NULL' in 8196199.01/webrev. >> > This doesn't make sense to me. The is_null(oop obj) function isn't (generally) used anywhere else other than inside oop.cpp. It seems confusing to have only these instances use this function. It would be hard to explain why these are special because it's only because they used some wrong negation operator. Okay. Looks good then.
> > Coleen From aph at redhat.com Thu Feb 1 10:09:20 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 1 Feb 2018 10:09:20 +0000 Subject: Constant dynamic pushed to the hs repo In-Reply-To: References: Message-ID: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> On 31/01/18 22:43, Paul Sandoz wrote: > I just pushed the constant dynamic change sets to hs [*]. It took a little longer than I anticipated to work through some of the review process given the holiday break. > > We should now be able to follow up, in the hs repo until the merge in some cases, with dependent issues such as the changes to support AArch64, SPARC, AoT/Graal, additional tests, and some bug/performance fixes. OK. Can you please send a list of those changesets? I guess they're just everything pushed by you on Jan 31, but I wanted to check. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From goetz.lindenmaier at sap.com Thu Feb 1 10:39:50 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 1 Feb 2018 10:39:50 +0000 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <64a3268575d14ddcad90f7d46bab64dd@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> Message-ID: <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> Hi Matthias, thanks for enabling this test. Looks good. I would appreciate if you would add a line "Summary: also fix cgroup subsystem recognition" to the bug description. Else this might be mistaken for a mere testbug. Best regards, Goetz. > -----Original Message----- > From: Baesken, Matthias > Sent: Mittwoch, 31. 
Januar 2018 15:15 > To: mikhailo ; Bob Vandette > > Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > ; Langer, Christoph > ; Doerr, Martin ; > Dmitry Samersoff > Subject: RE: RFR : 8196062 : Enable docker container related tests for linux > ppc64le > > Hello , I created a second webrev : > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr > ev/ > > - changed DockerTestUtils.buildJdkDockerImage in the suggested way (this > should be extendable to linux s390x soon) > > > >>> Can you add "return;" in each test for subsystem not found messages > > - added returns in the tests for the subsystems in osContainer_linux.cpp > > - moved some checks at the beginning of subsystem_file_contents > (suggested by Dmitry) > > > Best regards, Matthias > > > > > -----Original Message----- > > From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] > > Sent: Donnerstag, 25. Januar 2018 18:43 > > To: Baesken, Matthias ; Bob Vandette > > > > Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > > ; Langer, Christoph > > ; Doerr, Martin > > Subject: Re: RFR : 8196062 : Enable docker container related tests for linux > > ppc64le > > > > Hi Matthias, > > > > > > On 01/25/2018 12:15 AM, Baesken, Matthias wrote: > > >> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() > > >> that does the following or similar: > > >> 1. Construct a name for platform-specific docker file: > > >> String platformSpecificDockerfile = dockerfile + "-" + > > >> Platform.getOsArch(); > > >> (Platform is jdk.test.lib.Platform) > > >> > > > Hello, the doc says : > > > > > > * Build a docker image that contains JDK under test. > > > * The jdk will be placed under the "/jdk/" folder inside the docker file > > system. > > > ..... > > > param dockerfile name of the dockerfile residing in the test source > > > ..... 
> > > public static void buildJdkDockerImage(String imageName, String > > dockerfile, String buildDirName) > > > > > > > > > It does not say anything about doing hidden insertions of some platform > > names into the dockerfile name. > > > So should the jtreg API doc be changed ? > > > If so who needs to approve this ? > > Thank you for your concerns about the clarity of API and corresponding > > documentation. This is a test library API, so no need to file CCC or CSR. > > > > This API can be changed via a regular RFR/webrev review process, as soon > > as no one objects. I am a VM SQE engineer covering the docker and Linux > > container area, I am OK with this change. > > And I agree with you, we should update the javadoc header on this > method > > to reflect this implicit part of API contract. > > > > > > Thank you, > > Misha > > > > > > > > > (as far as I see so far only the test at > > hotspot/jtreg/runtime/containers/docker/ use this so it should not be a > big > > deal to change the interface?) > > > > > > Best regards, Matthias > > > > > > > > > > > > > > >> -----Original Message----- > > >> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] > > >> Sent: Mittwoch, 24. Januar 2018 20:09 > > >> To: Bob Vandette ; Baesken, Matthias > > >> > > >> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > > >> ; Langer, Christoph > > >> ; Doerr, Martin > > >> Subject: Re: RFR : 8196062 : Enable docker container related tests for > linux > > >> ppc64le > > >> > > >> Hi Matthias, > > >> > > >> Please see my comments about the test changes inline. > > >> > > >> > > >> On 01/24/2018 07:13 AM, Bob Vandette wrote: > > >>> osContainer_linux.cpp: > > >>> > > >>> Can you add "return;" in each test for subsystem not found messages > > and > > >>> remove these 3 lines OR move your tests for NULL & messages inside. > > The > > >> compiler can > > >>> probably optimize this but I'd prefer more compact code.
> > >>> > > >>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == > > NULL) > > >> { > > >>> 342 return; > > >>> 343 } > > >>> > > >>> > > >>> The other changes in osContainer_linux.cpp look ok. > > >>> > > >>> I forwarded your test changes to Misha, who wrote these. > > >>> > > >>> Since it's likely that other platforms, such as aarch64, are going to run > > into > > >> the same problem, > > >>> It would have been better to enable the tests based on the existence > of > > an > > >> arch specific > > >>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch's in > > >> VPProps.java. > > >>> This approach would reduce the number of changes significantly and > > allow > > >> support to > > >>> be added with 1 new file. > > >>> > > >>> You wouldn't need "String dockerFileName = > > >> Common.getDockerFileName();" > > >>> in every test. Just make DockerTestUtils automatically add arch. > > >> I like Bob's idea on handling platform-specific Dockerfiles. > > >> > > >> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() > > >> that does the following or similar: > > >> 1. Construct a name for platform-specific docker file: > > >> String platformSpecificDockerfile = dockerfile + "-" + > > >> Platform.getOsArch(); > > >> (Platform is jdk.test.lib.Platform) > > >> > > >> 2. Check if platformSpecificDockerfile file exists in the test > > >> source directory > > >> Files.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile)) > > >> If it does, then use it. Otherwise continue using the > > >> default/original dockerfile name. > > >> > > >> I think this will considerably simplify your change, as well as make it > > >> easy to extend support to other platforms/configurations > > >> in the future. Let us know what you think of this approach ? > > >> > > >> > > >> Once your change gets (R)eviewed and approved, I can sponsor the > push.
> > >> > > >> > > >> Thank you, > > >> Misha > > >> > > >> > > >> > > >>> Bob. > > >>> > > >>> > > >>> > > >>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias > > >> wrote: > > >>>> Hello, could you please review the following change : 8196062 : > Enable > > >> docker container related tests for linux ppc64le . > > >>>> It adds docker container testing for linux ppc64 le (little endian) . > > >>>> > > >>>> A number of things had to be done : > > >>>> - Add a separate docker file > > >> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- > > ppc64le > > >> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used for > > >> x86_64 seems not to be available for ppc64le ) > > >>>> - Fix parsing /proc/self/mountinfo and /proc/self/cgroup > in > > >> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle the > > >> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker > > >> container) > > >>>> - Add a bit more logging > > >>>> > > >>>> > > >>>> Webrev : > > >>>> > > >>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ > > >>>> > > >>>> > > >>>> Bug : > > >>>> > > >>>> https://bugs.openjdk.java.net/browse/JDK-8196062 > > >>>> > > >>>> > > >>>> After these adjustments I could run the runtime/containers/docker > > - > > >> jtreg tests successfully .
> > >>>> > > >>>> Best regards, Matthias From tobias.hartmann at oracle.com Thu Feb 1 10:53:43 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Feb 2018 11:53:43 +0100 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT Message-ID: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8195731 http://cr.openjdk.java.net/~thartmann/8195731/webrev.00/ The TransformSuperSubTwoPckgs test fails with Graal because JVMCI initialization (or Graal compilation) is triggered in SimpleTransformer::transform() which then triggers class loading of the to-be-transformed class, resulting in a ClassCircularityError (see JDK-8164165 [1]) which is ignored. Most transformations fail silently, including the expected transformation of the test classes. I've added code that catches and reports exceptions in transform() similar to what we do in runtime/RedefineTests/RedefineAnnotations.java to avoid silently failing transformations. The problem of Graal interfering with class transformations will be gone once we move to Substrate VM so we should not execute these tests with Graal for now. Thanks, Tobias [1] https://bugs.openjdk.java.net/browse/JDK-8164165 From maurizio.cimadamore at oracle.com Thu Feb 1 11:30:35 2018 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Thu, 1 Feb 2018 11:30:35 +0000 Subject: Constant dynamic pushed to the hs repo In-Reply-To: References: <5fb64511-213d-3133-c531-c6e64372fd1b@oracle.com> Message-ID: That's right. We have some experimental changes in that vein in the amber repo - but in general, support for condy in lambda requires updated ASM support, and we'll get an updated ASM with condy support only _after_ JDK11 is shipped.
So, as a rule of thumb, features that rely on ASM (such as our lambda metafactory) have to wait until the JDK ASM understands the new bytecode goodies. Maurizio On 01/02/18 02:48, Tagir Valeev wrote: > Hello! > > Do I understand correctly that no javac changes were pushed (e.g. > compiling lambdas using condy)? > > With best regards, > Tagir Valeev. > > On Thu, Feb 1, 2018 at 9:39 AM, Brian Goetz wrote: >> Yay! >> >> >> On 1/31/2018 5:43 PM, Paul Sandoz wrote: >>> Hi, >>> >>> I just pushed the constant dynamic change sets to hs [*]. It took a little >>> longer than I anticipated to work through some of the review process given >>> the holiday break. >>> >>> We should now be able to follow up, in the hs repo until the merge in some >>> cases, with dependent issues such as the changes to support AArch64, SPARC, >>> AoT/Graal, additional tests, and some bug/performance fixes. >>> >>> Thanks, >>> Paul. >>> >>> [*] I'll delay marking the JEP as integrated until a merge with the jdk >>> master repo. After that we can then garbage collect the condy branch in the >>> amber repo. >> From john.r.rose at oracle.com Thu Feb 1 12:26:59 2018 From: john.r.rose at oracle.com (John Rose) Date: Thu, 1 Feb 2018 12:26:59 +0000 Subject: [11] RFR 8195694: ConstantBootstraps.invoke does not preserve variable arity In-Reply-To: <6CBF6FD0-66EB-47A0-89E4-92307F45586B@oracle.com> References: <6A5707B7-518C-43C2-98C9-6F52AAF3FE6F@oracle.com> <476DA321-6EAB-4C27-A5CE-5776E29F7646@oracle.com> <6CBF6FD0-66EB-47A0-89E4-92307F45586B@oracle.com> Message-ID: <49545AD4-C5F4-412F-9651-984BC40C1AA1@oracle.com> On Feb 1, 2018, at 12:45 AM, Paul Sandoz wrote: > > > >> On Jan 31, 2018, at 3:49 PM, John Rose wrote: >> >> On second thought, you should also use invokeWithArguments to support jumbo arities. >> > > It does, but non-selectively based on the arity: > > 245 return handle.invokeWithArguments(args); > > >> This tricky idiom should be put into a utility method, package private for starters.
A version of it also appears in BSM invocation code. >> > > Are you in part referring to the approach of switching on the number of arguments and using invoke with unpacking for small cases? That might be worth doing as an internal optimization in a reusable library function. But my main concern is correctness: packaging a tricky idiom instead of having users re-derive it by noticing bugs in the near-miss approximations. We put in withVarargs to capture some of the idiom, but we aren't there yet. The bug you are fixing had happened in several places and the root cause is the lack of an advertised best practice. Hence my interest in a utility method, eventually a public one. > > If you don't object I would like to follow up on that with another issue. Sure. > > >>> On Jan 31, 2018, at 3:23 PM, John Rose wrote: >>> >>> If you remove the old asType call it's good! >>> > > > Ah! something went wrong when importing the patch from the amber repo. Updated in place. > > Paul. > >>>> On Jan 31, 2018, at 3:15 PM, Paul Sandoz wrote: >>>> >>>> Hi, >>>> >>>> Please review this fix to the invoke BSM so that it preserves variable arity, if any: >>>> >>>> http://cr.openjdk.java.net/~psandoz/jdk/JDK-8195694-constant-bsms-invoke-arity/webrev/ >>>> >>>> This will be pushed to the hs repo. >>>> >>>> Thanks, >>>> Paul.
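[Editor's illustration] The tricky idiom under discussion — adapting a handle without losing its variable-arity behavior, then invoking in a way that also covers jumbo arities — could be packaged roughly as below. This is a hypothetical utility sketched for illustration, not the actual JDK code being reviewed:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokeUtil {
    // Adapt 'handle' to 'newType', restore the varargs flag that asType
    // drops, then invoke via invokeWithArguments (which also handles
    // jumbo arities).
    static Object adaptAndInvoke(MethodHandle handle, MethodType newType, Object... args)
            throws Throwable {
        boolean varargs = handle.isVarargsCollector();
        MethodHandle adapted = handle.asType(newType).withVarargs(varargs);
        return adapted.invokeWithArguments(args);
    }

    // No-throw demo: String.format is a varargs method, so findStatic
    // returns a varargs collector; it keeps collecting trailing arguments
    // even after the adaptation above.
    static String demo() {
        try {
            MethodHandle format = MethodHandles.publicLookup().findStatic(
                    String.class, "format",
                    MethodType.methodType(String.class, String.class, Object[].class));
            return (String) adaptAndInvoke(format, format.type(), "%s-%s", "a", "b");
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Without the withVarargs call, the plain asType result would stop collecting trailing arguments — the variable-arity bug described in this thread.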
>>> >> > From coleen.phillimore at oracle.com Thu Feb 1 13:32:34 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 1 Feb 2018 08:32:34 -0500 Subject: RFR (S) 8196199: Remove miscellaneous oop comparison operators In-Reply-To: <499382C0-E7BC-4293-9BDA-B6B035EB5410@oracle.com> References: <0a17630f-3132-f5fa-fc87-7dd8023b02b1@oracle.com> <63314b97-a727-68cb-f56a-b911e5809ba3@oracle.com> <4041e0ac-fef5-a20c-b249-29a3987a7328@oracle.com> <25c411eb-5a5f-a349-fa2e-9efad10989c4@oracle.com> <499382C0-E7BC-4293-9BDA-B6B035EB5410@oracle.com> Message-ID: <7e47a449-09fd-9b9c-aa78-16cdb37e233c@oracle.com> On 2/1/18 4:06 AM, Kim Barrett wrote: >> On Jan 31, 2018, at 8:41 PM, coleen.phillimore at oracle.com wrote: >> >> >> >> On 1/31/18 5:11 PM, Kim Barrett wrote: >>>> On Jan 31, 2018, at 4:05 PM, coleen.phillimore at oracle.com wrote: >>>> >>>> >>>> >>>> On 1/31/18 4:01 PM, Kim Barrett wrote: >>>>>> On Jan 31, 2018, at 2:30 PM, harold seigel wrote: >>>>>> >>>>>> Hi Coleen, >>>>>> >>>>>> This change looks good. >>>>>> >>>>>> In jniCheck.cpp, you could use the is_null(oop obj) function defined in oop.hpp instead of 'oop == NULL'. >>>>> I like this suggestion, and wish I'd thought of it while writing the initial code. I think it should be applied to all of the former uses of operator!. Coleen? >>>> There are a zillion places where oop is compared with NULL. >>> I wasn't suggesting a zillion places be changed. Only the half dozen places that formerly used operator! (via '!x') >>> and are being changed to instead use 'x == NULL' in 8196199.01/webrev. >>> >> This doesn't make sense to me. The is_null(oop obj) function isn't (generally) used anywhere else other than inside oop.cpp. It seems confusing to have only these instances use this function. It would be hard to explain why these are special because it's only because they used some wrong negation operator. > Okay. Looks good then. Thanks!
Coleen > >> Coleen > From tobias.hartmann at oracle.com Thu Feb 1 13:35:34 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Feb 2018 14:35:34 +0100 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' Message-ID: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8195695 http://cr.openjdk.java.net/~thartmann/8195695/webrev.00/ The test fails with -Xcomp because the native library is not unloaded. The problem is that -Xcomp significantly slows down execution (especially on SPARC) and as a result 100 ms is not enough for unloading the native library. I've increased the wait time to 1s and added 10 attempts. Verified on the machine that reproduced the problem. Thanks, Tobias From stewartd.qdt at qualcommdatacenter.com Thu Feb 1 14:54:48 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 1 Feb 2018 14:54:48 +0000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> Message-ID: <0ee09169513f46509ab002337d9e41df@NASANEXM01E.na.qualcomm.com> David, Thanks for the review. I'll change the split() to look for '\R' instead. I was unaware of the problems with line.separator, and was actually trying to avoid cross-platform issues by using it. But things are always more complicated than they seem! As far as the original intent of the test and how inspect operates, I will defer to the original author.
I was just trying to get the same information, but had to change the original search approach because the original approach assumed the addresses came _after_ the string that was being searched. In searching for the name of the actual class instead of just "waiting to lock", the address comes before the string. I assume the classes I am searching for are correct, as these are the classes that actually get found in the original approach and the classes that seem to be looked for by the subsequent OutputAnalyzer. Daniel -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Thursday, February 1, 2018 2:51 AM To: stewartd.qdt ; Jini George Cc: serviceability-dev ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java Hi Daniel, On 1/02/2018 2:45 AM, stewartd.qdt wrote: > Hi Jini, David, > > Please have a look at the revised webrev: > http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ > > In this webrev I have changed the approach to finding the addresses. This was necessary because in the case of matching for the locks the addresses are before what is matched and in the case of Method the address is after it. The existing code only looked for the addresses after the matched string. I've also tried to align what tokens are being looked for in the lock case. I've taken an approach of breaking the jstack output into lines and then searching each line for it containing what we want. Once found, the line is broken into pieces to find the actual address we want. > > Please let me know if this is an unacceptable approach or any changes you would like to see. I'm not clear on the overall approach as I'm unclear exactly how inspect operates or exactly what the test is trying to verify.
One comment on breaking things into lines though: 73 String newline = System.getProperty("line.separator"); 74 String[] lines = jstackOutput.split(newline); As split() takes a regex, I suggest using \R to cover all potential line-breaks, rather than the platform specific line-separator. We've been recently bitten by the distinction between output that comes from reading a process's stdout/stderr (and for which a newline \n is translated into the platform line-separator), and output that comes across a socket connection (for which \n is not translated). This could result in failing to parse things correctly on Windows. It's safer/simpler to expect any kind of line-separator. Thanks, David > Thanks, > Daniel > > > -----Original Message----- > From: Jini George [mailto:jini.george at oracle.com] > Sent: Tuesday, January 30, 2018 6:58 AM > To: David Holmes ; stewartd.qdt > > Cc: serviceability-dev ; > hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in > serviceability/sa/ClhsdbInspect.java > > Hi Daniel, David, > > Thanks, Daniel, for bringing this up. The intent of the test is to get > the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, > which can typically be obtained from the stack traces of the Common-Cleaner or the Finalizer threads.
The stack traces which I had been noticing were typically of the form: > > > "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in > Object.wait() [0x00007f09a18d2000] > java.lang.Thread.State: TIMED_WAITING (on object monitor) > JavaThread state: _thread_blocked > - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, > Method*=0x00007f09acc43d60 (Interpreted frame) > - waiting on <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) > - waiting to re-lock in wait() <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > ... > > I chose 'waiting to re-lock in wait' since that was what I had been observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. > But I see how with a timing difference, one could get 'waiting to lock' > as in your case. So, a good way to fix might be to check for the line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop address from that line (should be the address appearing immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to the 'inspect' command. > > Thanks much, > Jini. > > On 1/30/2018 3:35 AM, David Holmes wrote: >> Hi Daniel, >> >> Serviceability issues should go to >> serviceability-dev at openjdk.java.net >> - now cc'd. >> >> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>> system (not necessarily exclusive to this system, but it was the >>> system under test). The bug report [2] provides further details. >>> Essentially the line "waiting to re-lock in wait" never actually >>> occurs. Instead I have the line "waiting to lock" which occurs for >>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. 
>>> Unfortunately the test is written such that only the first "waiting to lock" >>> occurrence is seen (for java/lang/Class), which is already accounted >>> for in the test. >> >> I can't tell exactly what the test expects, or why, but it would be >> extremely hard to arrange for "waiting to re-lock in wait" to be seen >> for the ReferenceQueue lock! That requires acquiring the lock >> yourself, issuing a notify() to unblock the wait(), and then issuing >> the jstack command while still holding the lock! >> >> David >> ----- >> >>> I'm not overly happy with this approach as it actually removes a >>> test line. However, the test line does not actually appear in the >>> output (at least on my system) and the test is not currently written >>> to look for the second occurrence of the line "waiting to lock". >>> Perhaps the original author could chime in and provide further >>> guidance as to the intention of the test. >>> >>> I am happy to modify the patch as necessary. >>> >>> Regards, >>> Daniel Stewart >>> >>> >>> [1] -? http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>> From Derek.White at cavium.com Thu Feb 1 15:33:35 2018 From: Derek.White at cavium.com (White, Derek) Date: Thu, 1 Feb 2018 15:33:35 +0000 Subject: Constant dynamic pushed to the hs repo In-Reply-To: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> References: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> Message-ID: Hi Andrew, Dmitry Samersoff has been tracking this for aarch64 as well. He might be able to add some insights - although I'm not sure if has had time to look at the last batch. 
https://bugs.openjdk.java.net/browse/JDK-8190428 - Derek > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Andrew Haley > Sent: Thursday, February 01, 2018 5:09 AM > To: Paul Sandoz ; hotspot-dev dev at openjdk.java.net>; amber-dev > Subject: Re: Constant dynamic pushed to the hs repo > > On 31/01/18 22:43, Paul Sandoz wrote: > > I just pushed the constant dynamic change sets to hs [*]. It took a little > longer than I anticipated to work through some of the review process given > the holiday break. > > > > We should now be able to follow up, in the hs repo until the merge in some > cases, with dependent issues such as the changes to support AArch64, > SPARC, AoT/Graal, additional tests, and some bug/performance fixes. > > OK. Can you please send a list of those changesets? I guess they're just > everything pushed by you on Jan 31, but I wanted to check. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From stewartd.qdt at qualcommdatacenter.com Thu Feb 1 15:50:56 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 1 Feb 2018 15:50:56 +0000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> Message-ID: <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> Please have a look at the newest changes at: http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ The only difference between this and the last changeset is the use of "\\R" instead of whatever is the platform line.separator. 
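[Editor's illustration] The one-line difference can be seen in isolation: `\R` is a regex escape (Java 8+) matching any Unicode linebreak sequence, so the split no longer depends on the platform's line.separator. A minimal sketch with hypothetical sample input:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // jstack output may arrive with \n, \r\n, or \r line endings
        // depending on how it was captured; \R matches any of them.
        String jstackOutput = "Thread-0\r\n  waiting to lock <0x42>\nThread-1";
        String[] lines = jstackOutput.split("\\R");
        System.out.println(lines.length);  // 3, regardless of platform
    }
}
```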
Thank you, Daniel -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Thursday, February 1, 2018 2:51 AM To: stewartd.qdt ; Jini George Cc: serviceability-dev ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java Hi Daniel, On 1/02/2018 2:45 AM, stewartd.qdt wrote: > Hi Jini, David, > > Please have a look at the revised webrev: > http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ > > In this webrev I have changed the approach to finding the addresses. This was necessary because in the case of matching for the locks the addresses are before what is matched and in the case of Method the address is after it. The existing code only looked for the addresses after the matched string. I've also tried to align what tokens are being looked for in the lock case. I've taken an approach of breaking the jstack output into lines and then searching each line for it containing what we want. Once found, the line is broken into pieces to find the actual address we want. > > Please let me know if this is an unacceptable approach or any changes you would like to see. I'm not clear on the overall approach as I'm unclear exactly how inspect operates or exactly what the test is trying to verify. One comment on breaking things into lines though: 73 String newline = System.getProperty("line.separator"); 74 String[] lines = jstackOutput.split(newline); As split() takes a regex, I suggest using \R to cover all potential line-breaks, rather than the platform specific line-separator. We've been recently bitten by the distinction between output that comes from reading a process's stdout/stderr (and for which a newline \n is translated into the platform line-separator), and output that comes across a socket connection (for which \n is not translated). This could result in failing to parse things correctly on Windows. It's safer/simpler to expect any kind of line-separator.
Thanks, David > Thanks, > Daniel > > > -----Original Message----- > From: Jini George [mailto:jini.george at oracle.com] > Sent: Tuesday, January 30, 2018 6:58 AM > To: David Holmes ; stewartd.qdt > > Cc: serviceability-dev ; > hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in > serviceability/sa/ClhsdbInspect.java > > Hi Daniel, David, > > Thanks, Daniel, for bringing this up. The intent of the test is to get > the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, > which can typically be obtained from the stack traces of the Common-Cleaner or the Finalizer threads. The stack traces which I had been noticing were typically of the form: > > > "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in > Object.wait() [0x00007f09a18d2000] > java.lang.Thread.State: TIMED_WAITING (on object monitor) > JavaThread state: _thread_blocked > - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, > Method*=0x00007f09acc43d60 (Interpreted frame) > - waiting on <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) > - waiting to re-lock in wait() <0x000000072e61f6e0> (a > java.lang.ref.ReferenceQueue$Lock) > ... > > I chose 'waiting to re-lock in wait' since that was what I had been observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. > But I see how with a timing difference, one could get 'waiting to lock' > as in your case. So, a good way to fix might be to check for the line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop address from that line (should be the address appearing immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to the 'inspect' command. > > Thanks much, > Jini. 
> > On 1/30/2018 3:35 AM, David Holmes wrote: >> Hi Daniel, >> >> Serviceability issues should go to >> serviceability-dev at openjdk.java.net >> - now cc'd. >> >> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>> system (not necessarily exclusive to this system, but it was the >>> system under test). The bug report [2] provides further details. >>> Essentially the line "waiting to re-lock in wait" never actually >>> occurs. Instead I have the line "waiting to lock" which occurs for >>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. >>> Unfortunately the test is written such that only the first "waiting to lock" >>> occurrence is seen (for java/lang/Class), which is already accounted >>> for in the test. >> >> I can't tell exactly what the test expects, or why, but it would be >> extremely hard to arrange for "waiting to re-lock in wait" to be seen >> for the ReferenceQueue lock! That requires acquiring the lock >> yourself, issuing a notify() to unblock the wait(), and then issuing >> the jstack command while still holding the lock! >> >> David >> ----- >> >>> I'm not overly happy with this approach as it actually removes a >>> test line. However, the test line does not actually appear in the >>> output (at least on my system) and the test is not currently written >>> to look for the second occurrence of the line "waiting to lock". >>> Perhaps the original author could chime in and provide further >>> guidance as to the intention of the test. >>> >>> I am happy to modify the patch as necessary. >>> >>> Regards, >>> Daniel Stewart >>> >>> >>> [1] -? 
http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>> From matthias.baesken at sap.com Thu Feb 1 16:16:59 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 1 Feb 2018 16:16:59 +0000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 Message-ID: Hello , I enhanced the errno - to - error-text mappings in os.cpp for a few errnos we find on AIX 7.1 . Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 ). Could you please check and review ? ( btw. there is good cross platform info about the errnos at http://www.ioplex.com/~miallen/errcmp.html ) Bug : https://bugs.openjdk.java.net/browse/JDK-8196578 Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ Best regards, Matthias From bob.vandette at oracle.com Thu Feb 1 16:52:43 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 1 Feb 2018 11:52:43 -0500 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> Message-ID: Looks good to me. Bob. > On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz wrote: > > Hi Matthias, > > thanks for enabling this test. Looks good. > I would appreciate if you would add a line > "Summary: also fix cgroup subsystem recognition" > to the bug description. Else this might be mistaken > for a mere testbug. > > Best regards, > Goetz. > > >> -----Original Message----- >> From: Baesken, Matthias >> Sent: Mittwoch, 31. 
Januar 2018 15:15 >> To: mikhailo ; Bob Vandette >> >> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >> ; Langer, Christoph >> ; Doerr, Martin ; >> Dmitry Samersoff >> Subject: RE: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Hello , I created a second webrev : >> >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr >> ev/ >> >> - changed DockerTestUtils.buildJdkDockerImage in the suggested way (this >> should be extendable to linux s390x soon) >> >>>>>> Can you add "return;" in each test for subsystem not found messages >> >> - added returns in the tests for the subsystems in osContainer_linux.cpp >> >> - moved some checks at the beginning of subsystem_file_contents >> (suggested by Dmitry) >> >> >> Best regards, Matthias >> >> >> >>> -----Original Message----- >>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>> Sent: Donnerstag, 25. Januar 2018 18:43 >>> To: Baesken, Matthias ; Bob Vandette >>> >>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>> ; Langer, Christoph >>> ; Doerr, Martin >>> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >>> ppc64le >>> >>> Hi Matthias, >>> >>> >>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>>> that does the following or similar: >>>>> 1. Construct a name for platform-specific docker file: >>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>> Platform.getOsArch(); >>>>> (Platform is jdk.test.lib.Platform) >>>>> >>>> Hello, the doc says : >>>> >>>> * Build a docker image that contains JDK under test. >>>> * The jdk will be placed under the "/jdk/" folder inside the docker file >>> system. >>>> ..... >>>> param dockerfile name of the dockerfile residing in the test source >>>> ..... 
>>>> public static void buildJdkDockerImage(String imageName, String >>> dockerfile, String buildDirName) >>>> >>>> >>>> >>>> It does not say anything about doing hidden insertions of some platform >>> names into the dockerfile name. >>>> So should the jtreg API doc be changed ? >>>> If so who needs to approve this ? >>> Thank you for your concerns about the clarity of API and corresponding >>> documentation. This is a test library API, so no need to file CCC or CSR. >>> >>> This API can be changed via a regular RFR/webrev review process, as soon >>> as no one objects. I am a VM SQE engineer covering the docker and Linux >>> container area, I am OK with this change. >>> And I agree with you, we should update the javadoc header on this >> method >>> to reflect this implicit part of API contract. >>> >>> >>> Thank you, >>> Misha >>> >>> >>> >>>> (as far as I see so far only the test at >>> hotspot/jtreg/runtime/containers/docker/ uses this so it should not be a >> big >>> deal to change the interface?) >>>> >>>> Best regards, Matthias >>>> >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>>> To: Bob Vandette ; Baesken, Matthias >>>>> >>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>> ; Langer, Christoph >>>>> ; Doerr, Martin >>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for >> linux >>>>> ppc64le >>>>> >>>>> Hi Matthias, >>>>> >>>>> Please see my comments about the test changes inline. >>>>> >>>>> >>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>>> osContainer_linux.cpp: >>>>>> >>>>>> Can you add "return;" in each test for subsystem not found messages >>> and >>>>>> remove these 3 lines OR move your tests for NULL & messages inside. >>> The >>>>> compiler can >>>>>> probably optimize this but I'd prefer more compact code.
>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == >>> NULL) >>>>> { >>>>>> 342 return; >>>>>> 343 } >>>>>> >>>>>> >>>>>> The other changes in osContainer_linux.cpp look ok. >>>>>> >>>>>> I forwarded your test changes to Misha, who wrote these. >>>>>> >>>>>> Since it's likely that other platforms, such as aarch64, are going to run >>> into >>>>> the same problem, >>>>>> It would have been better to enable the tests based on the existence >> of >>> an >>>>> arch specific >>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch's in >>>>> VPProps.java. >>>>>> This approach would reduce the number of changes significantly and >>> allow >>>>> support to >>>>>> be added with 1 new file. >>>>>> >>>>>> You wouldn't need "String dockerFileName = >>>>> Common.getDockerFileName();" >>>>>> in every test. Just make DockerTestUtils automatically add arch. >>>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>>> >>>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>>> that does the following or similar: >>>>> 1. Construct a name for platform-specific docker file: >>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>> Platform.getOsArch(); >>>>> (Platform is jdk.test.lib.Platform) >>>>> >>>>> 2. Check if platformSpecificDockerfile file exists in the test >>>>> source directory >>>>> File.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile)) >>>>> If it does, then use it. Otherwise continue using the >>>>> default/original dockerfile name. >>>>> >>>>> I think this will considerably simplify your change, as well as make it >>>>> easy to extend support to other platforms/configurations >>>>> in the future. Let us know what you think of this approach? >>>>> >>>>> >>>>> Once your change gets (R)eviewed and approved, I can sponsor the >> push. >>>>> >>>>> >>>>> Thank you, >>>>> Misha >>>>> >>>>> >>>>> >>>>>> Bob.
>>>>>> >>>>>> >>>>>> >>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>>> wrote: >>>>>>> Hello, could you please review the following change : 8196062 : >> Enable >>>>> docker container related tests for linux ppc64le . >>>>>>> It adds docker container testing for linux ppc64 le (little endian) . >>>>>>> >>>>>>> A number of things had to be done : >>>>>>> - Add a separate docker file >>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- >>> ppc64le >>>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used for >>>>> x86_64 seems not to be available for ppc64le ) >>>>>>> - Fix parsing /proc/self/mountinfo and /proc/self/cgroup >> in >>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle the >>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker >>>>> container) >>>>>>> - Add a bit more logging >>>>>>> >>>>>>> >>>>>>> Webrev : >>>>>>> >>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>>> >>>>>>> >>>>>>> Bug : >>>>>>> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>> >>>>>>> >>>>>>> After these adjustments I could run the runtime/containers/docker >>> - >>>>> jtreg tests successfully . >>>>>>> >>>>>>> Best regards, Matthias > From thomas.stuefe at gmail.com Thu Feb 1 17:37:34 2018 From: thomas.stuefe at gmail.com (Thomas Stüfe) Date: Thu, 1 Feb 2018 18:37:34 +0100 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: Message-ID: Hi Matthias, This would probably be better discussed in hotspot-runtime, no? The old error codes and their descriptions were Posix ( http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). I do not really like spamming a shared file with AIX specific errno codes. Can we move platform specific error codes to platform files?
Eg by having a platform specific version pd_errno_to_string(), which has a first shot at translating errno values, and only if that one returns no result reverting back to the shared version? Small nit: - DEFINE_ENTRY(ESTALE, "Reserved") + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") I like the glibc text better, just "Stale file handle". NFS seems too specific, can handles for other remote file systems not get stale? Kind Regards, Thomas On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias wrote: > Hello , I enhanced the errno - to - error-text mappings in os.cpp > for a few errnos we find on AIX 7.1 . > Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 > ). > > Could you please check and review ? > > ( btw. there is good cross platform info about the errnos at > http://www.ioplex.com/~miallen/errcmp.html ) > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8196578 > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ > > > > Best regards, Matthias > From paul.sandoz at oracle.com Thu Feb 1 17:41:59 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 1 Feb 2018 09:41:59 -0800 Subject: Constant dynamic pushed to the hs repo In-Reply-To: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> References: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> Message-ID: > On Feb 1, 2018, at 2:09 AM, Andrew Haley wrote: > > On 31/01/18 22:43, Paul Sandoz wrote: >> I just pushed the constant dynamic change sets to hs [*]. It took a little longer than I anticipated to work through some of the review process given the holiday break. >> >> We should now be able to follow up, in the hs repo until the merge in some cases, with dependent issues such as the changes to support AArch64, SPARC, AoT/Graal, additional tests, and some bug/performance fixes. > > OK. Can you please send a list of those changesets? I guess they're > just everything pushed by you on Jan 31, but I wanted to check. 
Yes, here are the main change sets: 8186209: Tool support for ConstantDynamic 8186046: Minimal ConstantDynamic support 8190972: Ensure that AOT/Graal filters out class files containing CONSTANT_Dynamic ahead of full AOT support Reviewed-by: acorn, coleenp, kvn Contributed-by: lois.foltan at oracle.com, john.r.rose at oracle.com, paul.sandoz at oracle.com http://hg.openjdk.java.net/jdk/hs/rev/c4d9d1b08e2e 8187742: Minimal set of bootstrap methods for constant dynamic Reviewed-by: jrose, forax Contributed-by: brian.goetz at oracle.com, paul.sandoz at oracle.com http://hg.openjdk.java.net/jdk/hs/rev/8772acd913e5 And here is the review thread for AArch64 changes: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029435.html Paul. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From mandy.chung at oracle.com Thu Feb 1 17:56:59 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 1 Feb 2018 09:56:59 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> Message-ID: <7245603d-5fc1-9cfa-99b6-ab94253c6a65@oracle.com> On 2/1/18 5:35 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8195695 > http://cr.openjdk.java.net/~thartmann/8195695/webrev.00/ > > The test fails with -Xcomp because the native library is not unloaded. The problem is that -Xcomp significantly slows > down execution (especially on SPARC) and as a result 100 ms is not enough for unloading the native library. I've > increased the wait time to 1s and added 10 attempts. > > Verified on the machine that reproduced the problem. This change looks okay. Just curious, is it also reproduced with product build? or just fastdebug build only?
Mandy From amaembo at gmail.com Thu Feb 1 02:48:12 2018 From: amaembo at gmail.com (Tagir Valeev) Date: Thu, 1 Feb 2018 09:48:12 +0700 Subject: Constant dynamic pushed to the hs repo In-Reply-To: <5fb64511-213d-3133-c531-c6e64372fd1b@oracle.com> References: <5fb64511-213d-3133-c531-c6e64372fd1b@oracle.com> Message-ID: Hello! Do I understand correctly that no javac changes were pushed (e.g. compiling lambdas using condy)? With best regards, Tagir Valeev. On Thu, Feb 1, 2018 at 9:39 AM, Brian Goetz wrote: > Yay! > > > On 1/31/2018 5:43 PM, Paul Sandoz wrote: >> >> Hi, >> >> I just pushed the constant dynamic change sets to hs [*]. It took a little >> longer than I anticipated to work through some of the review process given >> the holiday break. >> >> We should now be able to follow up, in the hs repo until the merge in some >> cases, with dependent issues such as the changes to support AArch64, SPARC, >> AoT/Graal, additional tests, and some bug/performance fixes. >> >> Thanks, >> Paul. >> >> [*] I'll delay marking the JEP as integrated until a merge with the jdk >> master repo. After that we can then garbage collect the condy branch in the >> amber repo. > > From paul.sandoz at oracle.com Thu Feb 1 18:44:10 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 1 Feb 2018 10:44:10 -0800 Subject: [11] 8196583 Update jib and test jtreg version to 4.2 b12 Message-ID: Hi, The constant dynamic test I recently updated and pushed to hs [*] requires a later version of jtreg to execute correctly (it requires an updated version of asmtools bundled with jtreg).
Here are the changes: diff -r 11920d5d14a8 make/conf/jib-profiles.js --- a/make/conf/jib-profiles.js Wed Jan 31 17:43:46 2018 -0800 +++ b/make/conf/jib-profiles.js Thu Feb 01 10:39:38 2018 -0800 @@ -829,7 +829,7 @@ jtreg: { server: "javare", revision: "4.2", - build_number: "b11", + build_number: "b12", checksum_file: "MD5_VALUES", file: "jtreg_bin-4.2.zip", environment_name: "JT_HOME", diff -r 11920d5d14a8 test/hotspot/jtreg/TEST.ROOT --- a/test/hotspot/jtreg/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 +++ b/test/hotspot/jtreg/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 @@ -58,7 +58,7 @@ docker.support # Minimum jtreg version -requiredVersion=4.2 b11 +requiredVersion=4.2 b12 # Path to libraries in the topmost test directory. This is needed so @library # does not need ../../../ notation to reach them diff -r 11920d5d14a8 test/jaxp/TEST.ROOT --- a/test/jaxp/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 +++ b/test/jaxp/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 @@ -23,7 +23,7 @@ groups=TEST.groups # Minimum jtreg version -requiredVersion=4.2 b11 +requiredVersion=4.2 b12 # Path to libraries in the topmost test directory. This is needed so @library # does not need ../../ notation to reach them diff -r 11920d5d14a8 test/jdk/TEST.ROOT --- a/test/jdk/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 +++ b/test/jdk/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 @@ -40,7 +40,7 @@ vm.cds # Minimum jtreg version -requiredVersion=4.2 b11 +requiredVersion=4.2 b12 # Path to libraries in the topmost test directory. 
This is needed so @library # does not need ../../ notation to reach them diff -r 11920d5d14a8 test/langtools/TEST.ROOT --- a/test/langtools/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 +++ b/test/langtools/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 @@ -15,7 +15,7 @@ groups=TEST.groups # Minimum jtreg version -requiredVersion=4.2 b11 +requiredVersion=4.2 b12 # Use new module options useNewOptions=true diff -r 11920d5d14a8 test/nashorn/TEST.ROOT --- a/test/nashorn/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 +++ b/test/nashorn/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 @@ -8,7 +8,7 @@ groups=TEST.groups # Minimum jtreg version -requiredVersion=4.2 b11 +requiredVersion=4.2 b12 # Use new module options useNewOptions=true Paul. [*] http://hg.openjdk.java.net/jdk/hs/rev/11920d5d14a8 From mandy.chung at oracle.com Thu Feb 1 18:48:22 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 1 Feb 2018 10:48:22 -0800 Subject: [11] 8196583 Update jib and test jtreg version to 4.2 b12 In-Reply-To: References: Message-ID: +1 Mandy On 2/1/18 10:44 AM, Paul Sandoz wrote: > Hi, > > The constant dynamic test i recently updated and pushed to hs [*] requires a later version of jtreg to execute correctly (it requires an updated version of asmtools bundled with jtreg). 
> > Here are the changes: > > diff -r 11920d5d14a8 make/conf/jib-profiles.js > --- a/make/conf/jib-profiles.js Wed Jan 31 17:43:46 2018 -0800 > +++ b/make/conf/jib-profiles.js Thu Feb 01 10:39:38 2018 -0800 > @@ -829,7 +829,7 @@ > jtreg: { > server: "javare", > revision: "4.2", > - build_number: "b11", > + build_number: "b12", > checksum_file: "MD5_VALUES", > file: "jtreg_bin-4.2.zip", > environment_name: "JT_HOME", > diff -r 11920d5d14a8 test/hotspot/jtreg/TEST.ROOT > --- a/test/hotspot/jtreg/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/hotspot/jtreg/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -58,7 +58,7 @@ > docker.support > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. This is needed so @library > # does not need ../../../ notation to reach them > diff -r 11920d5d14a8 test/jaxp/TEST.ROOT > --- a/test/jaxp/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/jaxp/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -23,7 +23,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. This is needed so @library > # does not need ../../ notation to reach them > diff -r 11920d5d14a8 test/jdk/TEST.ROOT > --- a/test/jdk/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/jdk/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -40,7 +40,7 @@ > vm.cds > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. 
This is needed so @library > # does not need ../../ notation to reach them > diff -r 11920d5d14a8 test/langtools/TEST.ROOT > --- a/test/langtools/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/langtools/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -15,7 +15,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Use new module options > useNewOptions=true > diff -r 11920d5d14a8 test/nashorn/TEST.ROOT > --- a/test/nashorn/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/nashorn/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -8,7 +8,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Use new module options > useNewOptions=true > > Paul. > > [*] http://hg.openjdk.java.net/jdk/hs/rev/11920d5d14a8 From christoph.langer at sap.com Thu Feb 1 18:55:14 2018 From: christoph.langer at sap.com (Langer, Christoph) Date: Thu, 1 Feb 2018 18:55:14 +0000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: Message-ID: <25e023bc47914bc3b1c5009abc5b03d8@sap.com> Hi Matthias, the change looks good to me. You'll need a sponsor, though. Best regards Christoph From: Baesken, Matthias Sent: Donnerstag, 1. Februar 2018 17:17 To: 'hotspot-dev at openjdk.java.net' ; ppc-aix-port-dev at openjdk.java.net Cc: Langer, Christoph Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 Hello , I enhanced the errno - to - error-text mappings in os.cpp for a few errnos we find on AIX 7.1 . Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 ). Could you please check and review ? ( btw. 
there is good cross platform info about the errnos at http://www.ioplex.com/~miallen/errcmp.html ) Bug : https://bugs.openjdk.java.net/browse/JDK-8196578 Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ Best regards, Matthias From lois.foltan at oracle.com Thu Feb 1 19:06:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 1 Feb 2018 14:06:30 -0500 Subject: [11] 8196583 Update jib and test jtreg version to 4.2 b12 In-Reply-To: References: Message-ID: <23cb8743-f403-9f6b-bbdb-3c2346301435@oracle.com> Looks good. Lois On 2/1/2018 1:44 PM, Paul Sandoz wrote: > Hi, > > The constant dynamic test i recently updated and pushed to hs [*] requires a later version of jtreg to execute correctly (it requires an updated version of asmtools bundled with jtreg). > > Here are the changes: > > diff -r 11920d5d14a8 make/conf/jib-profiles.js > --- a/make/conf/jib-profiles.js Wed Jan 31 17:43:46 2018 -0800 > +++ b/make/conf/jib-profiles.js Thu Feb 01 10:39:38 2018 -0800 > @@ -829,7 +829,7 @@ > jtreg: { > server: "javare", > revision: "4.2", > - build_number: "b11", > + build_number: "b12", > checksum_file: "MD5_VALUES", > file: "jtreg_bin-4.2.zip", > environment_name: "JT_HOME", > diff -r 11920d5d14a8 test/hotspot/jtreg/TEST.ROOT > --- a/test/hotspot/jtreg/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/hotspot/jtreg/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -58,7 +58,7 @@ > docker.support > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. This is needed so @library > # does not need ../../../ notation to reach them > diff -r 11920d5d14a8 test/jaxp/TEST.ROOT > --- a/test/jaxp/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/jaxp/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -23,7 +23,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. 
This is needed so @library > # does not need ../../ notation to reach them > diff -r 11920d5d14a8 test/jdk/TEST.ROOT > --- a/test/jdk/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/jdk/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -40,7 +40,7 @@ > vm.cds > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Path to libraries in the topmost test directory. This is needed so @library > # does not need ../../ notation to reach them > diff -r 11920d5d14a8 test/langtools/TEST.ROOT > --- a/test/langtools/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/langtools/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -15,7 +15,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Use new module options > useNewOptions=true > diff -r 11920d5d14a8 test/nashorn/TEST.ROOT > --- a/test/nashorn/TEST.ROOT Wed Jan 31 17:43:46 2018 -0800 > +++ b/test/nashorn/TEST.ROOT Thu Feb 01 10:39:38 2018 -0800 > @@ -8,7 +8,7 @@ > groups=TEST.groups > > # Minimum jtreg version > -requiredVersion=4.2 b11 > +requiredVersion=4.2 b12 > > # Use new module options > useNewOptions=true > > Paul. > > [*] http://hg.openjdk.java.net/jdk/hs/rev/11920d5d14a8 From vladimir.kozlov at oracle.com Thu Feb 1 19:55:41 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Feb 2018 11:55:41 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> Message-ID: <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> I thought we should not use System.exit() and throw some Error or RuntimeException instead. I remember Igor I. did some changes but I forgot which tests. 
Thanks, Vladimir On 2/1/18 2:53 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8195731 > http://cr.openjdk.java.net/~thartmann/8195731/webrev.00/ > > The TransformSuperSubTwoPckgs test fails with Graal because JVMCI initialization (or Graal compilation) is triggered in > SimpleTransformer::transform() which then triggers class loading of the to-be-transformed class, resulting in a > ClassCircularityError (see JDK-8164165 [1]) which is ignored. Most transformations fail silently, including the expected > transformation of the test classes. > > I've added code that catches and reports exceptions in transform() similar to what we do in > runtime/RedefineTests/RedefineAnnotations.java to avoid silently failing transformations. The problem of Graal > interfering with class transformations will be gone once we move to Substrate VM so we should not execute these tests > with Graal for now. > > Thanks, > Tobias > > [1] https://bugs.openjdk.java.net/browse/JDK-8164165 > From vladimir.kozlov at oracle.com Thu Feb 1 20:01:07 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Feb 2018 12:01:07 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> Message-ID: <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> Increase wait time will not always work. May be better to add -Xmixed flag to test run command to overwrite passed -Xcomp flag. I think it is fine for this test to not run with -Xcomp - it has a purpose other than testing JIT. Thanks, Vladimir On 2/1/18 5:35 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8195695 > http://cr.openjdk.java.net/~thartmann/8195695/webrev.00/ > > The test fails with -Xcomp because the native library is not unloaded.
The problem is that -Xcomp significantly slows > down execution (especially on SPARC) and as a result 100 ms is not enough for unloading the native library. I've > increased the wait time to 1s and added 10 attempts. > > Verified on the machine that reproduced the problem. > > Thanks, > Tobias > From serguei.spitsyn at oracle.com Thu Feb 1 20:08:16 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Thu, 1 Feb 2018 12:08:16 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> Message-ID: Hi Tobias, +1 to Vladimir's comment. Otherwise, looks good. Thanks, Serguei On 2/1/18 11:55, Vladimir Kozlov wrote: > I thought we should not use System.exit() and throw some Error or > RuntimeException instead. I remember Igor I. did some changes but I > forgot which tests. > > Thanks, > Vladimir > > On 2/1/18 2:53 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8195731 >> http://cr.openjdk.java.net/~thartmann/8195731/webrev.00/ >> >> The TransformSuperSubTwoPckgs test fails with Graal because JVMCI >> initialization (or Graal compilation) is triggered in >> SimpleTransformer::transform() which then triggers class loading of >> the to-be-transformed class, resulting in a >> ClassCircularityError (see JDK-8164165 [1]) which is ignored. Most >> transformations fail silently, including the expected >> transformation of the test classes. >> >> I've added code that catches and reports exceptions in transform() >> similar to what we do in >> runtime/RedefineTests/RedefineAnnotations.java to avoid silently >> failing transformations. 
The problem of Graal >> interfering with class transformations will be gone once we move to >> Substrate VM so we should not execute these test >> with Graal for now. >> >> Thanks, >> Tobias >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8164165 >> From david.holmes at oracle.com Thu Feb 1 22:51:45 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Feb 2018 08:51:45 +1000 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> Message-ID: <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: > Increase wait time will not always work. > May be better to add -Xmixed flag to test run command to overwrite > passed -Xcomp flag. > I think it is fine for this tests to not run with -Xcomp - it has other > purpose than testing JIT. Wouldn't you just skip in -Xcomp as it would already be tested in mixed mode as part of another run? @ requires vm.mode == "null" ?? David ----- > Thanks, > Vladimir > > On 2/1/18 5:35 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8195695 >> http://cr.openjdk.java.net/~thartmann/8195695/webrev.00/ >> >> The test fails with -Xcomp because the native library is not unloaded. >> The problem is that -Xcomp significantly slows >> down execution (especially on SPARC) and as a result 100 ms is not >> enough for unloading the native library. I've >> increased the wait time to 1s and added 10 attempts. >> >> Verified on the machine that reproduced the problem. 
>> >> Thanks, >> Tobias >> From vladimir.kozlov at oracle.com Thu Feb 1 23:07:00 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Feb 2018 15:07:00 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> Message-ID: <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> On 2/1/18 2:51 PM, David Holmes wrote: > On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: >> Increase wait time will not always work. >> May be better to add -Xmixed flag to test run command to overwrite passed -Xcomp flag. >> I think it is fine for this tests to not run with -Xcomp - it has other purpose than testing JIT. > > Wouldn't you just skip in -Xcomp as it would already be tested in mixed mode as part of another run? > > @ requires vm.mode == "null" ?? > > David > ----- Yes, skipping it for -Xcomp is also acceptable: @requires vm.compMode != "Xcomp" Vladimir > >> Thanks, >> Vladimir >> >> On 2/1/18 5:35 AM, Tobias Hartmann wrote: >>> Hi, >>> >>> please review the following patch: >>> https://bugs.openjdk.java.net/browse/JDK-8195695 >>> http://cr.openjdk.java.net/~thartmann/8195695/webrev.00/ >>> >>> The test fails with -Xcomp because the native library is not unloaded. The problem is that -Xcomp >>> significantly slows >>> down execution (especially on SPARC) and as a result 100 ms is not enough for unloading the >>> native library. I've >>> increased the wait time to 1s and added 10 attempts. >>> >>> Verified on the machine that reproduced the problem. 
>>> >>> Thanks, >>> Tobias >>> From david.holmes at oracle.com Thu Feb 1 23:11:08 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Feb 2018 09:11:08 +1000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: Message-ID: <0d2e9880-cb76-c24f-5185-d877cea8d0dc@oracle.com> +1 on moving to platform specific code. Thanks, David On 2/02/2018 3:37 AM, Thomas Stüfe wrote: > Hi Matthias, > > This would probably better discussed in hotspot-runtime, no? > > The old error codes and their descriptions were Posix ( > http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). I do > not really like spamming a shared file with AIX specific errno codes. Can > we move platform specific error codes to platform files? Eg by having a > platform specific version pd_errno_to_string(), which has a first shot at > translating errno values, and only if that one returns no result reverting > back to the shared version? > > Small nit: > > - DEFINE_ENTRY(ESTALE, "Reserved") > + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") > > I like the glibc text better, just "Stale file handle". NFS seems too > specific, can handles for other remote file systems not get stale? > > Kind Regards, Thomas > > On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias > wrote: > >> Hello , I enhanced the errno - to - error-text mappings in os.cpp >> for a few errnos we find on AIX 7.1 . >> Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 >> ). >> >> Could you please check and review ? >> >> ( btw.
there is good cross platform info about the errnos at >> http://www.ioplex.com/~miallen/errcmp.html ) >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8196578 >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ >> >> >> >> Best regards, Matthias >> From mandy.chung at oracle.com Thu Feb 1 23:17:25 2018 From: mandy.chung at oracle.com (mandy chung) Date: Thu, 1 Feb 2018 15:17:25 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> Message-ID: On 2/1/18 3:07 PM, Vladimir Kozlov wrote: > On 2/1/18 2:51 PM, David Holmes wrote: >> On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: >>> Increase wait time will not always work. >>> May be better to add -Xmixed flag to test run command to overwrite >>> passed -Xcomp flag. >>> I think it is fine for this tests to not run with -Xcomp - it has >>> other purpose than testing JIT. >> >> Wouldn't you just skip in -Xcomp as it would already be tested in >> mixed mode as part of another run? >> >> @ requires vm.mode == "null" ?? >> >> David >> ----- > > Yes, skipping it for -Xcomp is also acceptable: > > @requires vm.compMode != "Xcomp" This is even better. This filtering option is useful (thanks I didn't know about this mechanism).
Mandy From david.holmes at oracle.com Fri Feb 2 02:01:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Feb 2018 12:01:36 +1000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> Message-ID: On 2/02/2018 1:50 AM, stewartd.qdt wrote: > Please have a look at the newest changes at: http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ > > The only difference between this and the last changeset is the use of "\\R" instead of whatever is the platform line.separator. Thanks for that. The overall changes seem reasonable but I'll defer to Jini for final approval. If Jini approves then consider this Reviewed. Thanks, David > Thank you, > Daniel > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Thursday, February 1, 2018 2:51 AM > To: stewartd.qdt ; Jini George > Cc: serviceability-dev ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java > > Hi Daniel, > > On 1/02/2018 2:45 AM, stewartd.qdt wrote: >> Hi Jini, David, >> >> Please have a look at the revised webrev: >> http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ >> >> In this webrev I have changed the approach to finding the addresses. This was necessary because in the case of matching for the locks the addresses are before what is matched and in the case of Method the address is after it. The existing code only looked for the addresses after the matched string. I've also tried to align what tokens are being looked for in the lock case. 
I've taken an approach of breaking the jstack output into lines and then searching each line for it containing what we want. Once found, the line is broken into pieces to find the actual address we want. >> >> Please let me know if this is an unacceptable approach or any changes you would like to see. > > I'm not clear on the overall approach as I'm unclear exactly how inspect operates or exactly what the test is trying to verify. One comment on breaking things into lines though: > > 73 String newline = System.getProperty("line.separator"); > 74 String[] lines = jstackOutput.split(newline); > > As split() takes a regex, I suggest using \R to cover all potential line-breaks, rather than the platform specific line-separator. We've been recently bitten by the distinction between output that comes from reading a process's stdout/stderr (and for which a newline \n is translated into the platform line-separator), and output that comes across a socket connection (for which \n is not translated). This could result in failing to parse things correctly on Windows. It's safer/simpler to expect any kind of line-separator. > > Thanks, > David > >> Thanks, >> Daniel >> >> >> -----Original Message----- >> From: Jini George [mailto:jini.george at oracle.com] >> Sent: Tuesday, January 30, 2018 6:58 AM >> To: David Holmes ; stewartd.qdt >> >> Cc: serviceability-dev ; >> hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8196361: JTReg failure in >> serviceability/sa/ClhsdbInspect.java >> >> Hi Daniel, David, >> >> Thanks, Daniel, for bringing this up. The intent of the test is to get >> the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, >> which can typically be obtained from the stack traces of the Common-Cleaner or the Finalizer threads.
The stack traces which I had been noticing were typically of the form: >> >> >> "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in >> Object.wait() [0x00007f09a18d2000] >> java.lang.Thread.State: TIMED_WAITING (on object monitor) >> JavaThread state: _thread_blocked >> - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, >> Method*=0x00007f09acc43d60 (Interpreted frame) >> - waiting on <0x000000072e61f6e0> (a >> java.lang.ref.ReferenceQueue$Lock) >> - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) >> - waiting to re-lock in wait() <0x000000072e61f6e0> (a >> java.lang.ref.ReferenceQueue$Lock) >> ... >> >> I chose 'waiting to re-lock in wait' since that was what I had been observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. >> But I see how with a timing difference, one could get 'waiting to lock' >> as in your case. So, a good way to fix might be to check for the line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop address from that line (should be the address appearing immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to the 'inspect' command. >> >> Thanks much, >> Jini. >> >> On 1/30/2018 3:35 AM, David Holmes wrote: >>> Hi Daniel, >>> >>> Serviceability issues should go to >>> serviceability-dev at openjdk.java.net >>> - now cc'd. >>> >>> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>>> Please review this webrev [1] which attempts to fix a test error in >>>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>>> system (not necessarily exclusive to this system, but it was the >>>> system under test). The bug report [2] provides further details. >>>> Essentially the line "waiting to re-lock in wait" never actually >>>> occurs. Instead I have the line "waiting to lock" which occurs for >>>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. 
>>>> Unfortunately the test is written such that only the first "waiting to lock" >>>> occurrence is seen (for java/lang/Class), which is already accounted >>>> for in the test. >>> >>> I can't tell exactly what the test expects, or why, but it would be >>> extremely hard to arrange for "waiting to re-lock in wait" to be seen >>> for the ReferenceQueue lock! That requires acquiring the lock >>> yourself, issuing a notify() to unblock the wait(), and then issuing >>> the jstack command while still holding the lock! >>> >>> David >>> ----- >>> >>>> I'm not overly happy with this approach as it actually removes a >>>> test line. However, the test line does not actually appear in the >>>> output (at least on my system) and the test is not currently written >>>> to look for the second occurrence of the line "waiting to lock". >>>> Perhaps the original author could chime in and provide further >>>> guidance as to the intention of the test. >>>> >>>> I am happy to modify the patch as necessary. >>>> >>>> Regards, >>>> Daniel Stewart >>>> >>>> >>>> [1] - http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>>> From jini.george at oracle.com Fri Feb 2 06:19:27 2018 From: jini.george at oracle.com (Jini George) Date: Fri, 2 Feb 2018 11:49:27 +0530 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> Message-ID: <1b753efa-6abc-4947-e4f3-f6a29020c082@oracle.com> Hi Daniel, Your changes look good to me overall. Just some nits: * Please do add 2018 to the copyright year.
* Since the rest of the file follows 4 spaces for indentation, please keep the indentation to 4 spaces. * Line 81: It would be great if the opening brace is at line 80, so that it would be consistent with the rest of the file. * Line 65: The declaration could be a part of line 79. * Line 51: Please add the 'oop address of a java.lang.Class' to the comment. Thanks! Jini. On 2/2/2018 7:31 AM, David Holmes wrote: > On 2/02/2018 1:50 AM, stewartd.qdt wrote: >> Please have a look at the newest changes at: >> http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ >> >> The only difference between this and the last changeset is the use of >> "\\R" instead of whatever is the platform line.separator. > > Thanks for that. > > The overall changes seem reasonable but I'll defer to Jini for final > approval. If Jini approves then consider this Reviewed. > > Thanks, > David > >> Thank you, >> Daniel >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Thursday, February 1, 2018 2:51 AM >> To: stewartd.qdt ; Jini George >> >> Cc: serviceability-dev ; >> hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8196361: JTReg failure in >> serviceability/sa/ClhsdbInspect.java >> >> Hi Daniel, >> >> On 1/02/2018 2:45 AM, stewartd.qdt wrote: >>> Hi Jini, David, >>> >>> Please have a look at the revised webrev: >>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ >>> >>> In this webrev I have changed the approach to finding the addresses. >>> This was necessary because in the case of matching for the locks the >>> addresses are before what is matched and in the case of Method the >>> address is after it. The existing code only looked for the addresses >>> after the matched string. I've also tried to align what tokens are >>> being looked for in the lock case. I've taken an approach of breaking >>> the jstack output into lines and then searching each line for it >>> containing what we want. 
Once found, the line is broken into pieces >>> to find the actual address we want. >>> >>> Please let me know if this is an unacceptable approach or any changes >>> you would like to see. >> >> I'm not clear on the overall approach as I'm unclear exactly how >> inspect operates or exactly what the test is trying to verify. One >> comment on breaking things into lines though: >> >> 73 String newline = System.getProperty("line.separator"); >> 74 String[] lines = jstackOutput.split(newline); >> >> As split() takes a regex, I suggest using \R to cover all potential >> line-breaks, rather than the platform specific line-separator. We've >> been recently bitten by the distinction between output that comes from >> reading a process's stdout/stderr (and for which a newline \n is >> translated into the platform line-separator), and output that comes >> across a socket connection (for which \n is not translated). This >> could result in failing to parse things correctly on Windows. It's >> safer/simpler to expect any kind of line-separator. >> >> Thanks, >> David >> >>> Thanks, >>> Daniel >>> >>> >>> -----Original Message----- >>> From: Jini George [mailto:jini.george at oracle.com] >>> Sent: Tuesday, January 30, 2018 6:58 AM >>> To: David Holmes ; stewartd.qdt >>> >>> Cc: serviceability-dev ; >>> hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8196361: JTReg failure in >>> serviceability/sa/ClhsdbInspect.java >>> >>> Hi Daniel, David, >>> >>> Thanks, Daniel, for bringing this up. The intent of the test is to get >>> the oop address corresponding to a java.lang.ref.ReferenceQueue$Lock, >>> which can typically be obtained from the stack traces of the >>> Common-Cleaner or the Finalizer threads. The stack traces which I had >>> been noticing were typically of the form: >>> >>> >>> "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e in >>> Object.wait() [0x00007f09a18d2000] >>>
java.lang.Thread.State: TIMED_WAITING (on object monitor) >>> JavaThread state: _thread_blocked >>> - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, >>> Method*=0x00007f09acc43d60 (Interpreted frame) >>> - waiting on <0x000000072e61f6e0> (a >>> java.lang.ref.ReferenceQueue$Lock) >>> - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, >>> pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted frame) >>> - waiting to re-lock in wait() <0x000000072e61f6e0> (a >>> java.lang.ref.ReferenceQueue$Lock) >>> ... >>> >>> I chose 'waiting to re-lock in wait' since that was what I had been >>> observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. >>> But I see how with a timing difference, one could get 'waiting to lock' >>> as in your case. So, a good way to fix might be to check for the line >>> containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the oop >>> address from that line (should be the address appearing immediately >>> before '(a java.lang.ref.ReferenceQueue$Lock)') and passing that to >>> the 'inspect' command. >>> >>> Thanks much, >>> Jini. >>> >>> On 1/30/2018 3:35 AM, David Holmes wrote: >>>> Hi Daniel, >>>> >>>> Serviceability issues should go to >>>> serviceability-dev at openjdk.java.net >>>> - now cc'd. >>>> >>>> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>>>> Please review this webrev [1] which attempts to fix a test error in >>>>> serviceability/sa/ClhsdbInspect.java when it is run under an AArch64 >>>>> system (not necessarily exclusive to this system, but it was the >>>>> system under test). The bug report [2] provides further details. >>>>> Essentially the line "waiting to re-lock in wait" never actually >>>>> occurs. Instead I have the line "waiting to lock" which occurs for >>>>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. 
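[Editor's note: a rough standalone sketch of the fix suggested above (split the jstack output on \R, find the line containing '(a java.lang.ref.ReferenceQueue$Lock)', and take the address that appears immediately before it). The class and method names here are made up; this is not the actual test code.]

```java
class LockAddressExtractor {

    // Returns the address token that appears immediately before the given
    // marker, or null if no line of the jstack output contains the marker.
    static String findLockAddress(String jstackOutput, String marker) {
        // \R matches any line terminator, so this works whether the output
        // uses \n or the platform line separator (David's point above).
        for (String line : jstackOutput.split("\\R")) {
            int idx = line.indexOf(marker);
            if (idx < 0) {
                continue;
            }
            // Take the last whitespace-separated token before the marker,
            // e.g. "<0x000000072e61f6e0>", and strip the angle brackets.
            String[] tokens = line.substring(0, idx).trim().split("\\s+");
            return tokens[tokens.length - 1].replaceAll("[<>]", "");
        }
        return null;
    }

    public static void main(String[] args) {
        String sample =
            "\t- waiting on <0x000000072e61f6e0> " +
            "(a java.lang.ref.ReferenceQueue$Lock)";
        System.out.println(
            findLockAddress(sample, "(a java.lang.ref.ReferenceQueue$Lock)"));
        // prints 0x000000072e61f6e0
    }
}
```

In the real test the extracted address would then be passed to the clhsdb 'inspect' command.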
>>>>> Unfortunately the test is written such that only the first "waiting >>>>> to lock" >>>>> occurrence is seen (for java/lang/Class), which is already accounted >>>>> for in the test. >>>> >>>> I can't tell exactly what the test expects, or why, but it would be >>>> extremely hard to arrange for "waiting to re-lock in wait" to be seen >>>> for the ReferenceQueue lock! That requires acquiring the lock >>>> yourself, issuing a notify() to unblock the wait(), and then issuing >>>> the jstack command while still holding the lock! >>>> >>>> David >>>> ----- >>>> >>>>> I'm not overly happy with this approach as it actually removes a >>>>> test line. However, the test line does not actually appear in the >>>>> output (at least on my system) and the test is not currently written >>>>> to look for the second occurrence of the line "waiting to lock". >>>>> Perhaps the original author could chime in and provide further >>>>> guidance as to the intention of the test. >>>>> >>>>> I am happy to modify the patch as necessary. >>>>> >>>>> Regards, >>>>> Daniel Stewart >>>>> >>>>> >>>>> [1] -? http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>>>> From matthias.baesken at sap.com Fri Feb 2 08:02:42 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 2 Feb 2018 08:02:42 +0000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: Message-ID: <93fc660476f1490da815cdfba98ff623@sap.com> * I do not really like spamming a shared file with AIX specific errno codes. Hi, I wrote ?for a few errnos ***we find*** on AIX 7.1? , not that they are AIX ***specific***. 
Checked the first few added ones : 1522 // some more errno numbers from AIX 7.1 (some are also supported on Linux) 1523 #ifdef ENOTBLK 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") 1525 #endif 1526 #ifdef ECHRNG 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") 1528 #endif 1529 #ifdef ELNRNG 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") 1531 #endif According to http://www.ioplex.com/~miallen/errcmp.html ENOTBLK - found on AIX, Solaris, Linux, ... ECHRNG - found on AIX, Solaris, Linux ELNRNG - found on AIX, Solaris, Linux I would suggest to keep the multi-platform errnos in os.cpp just where they are . * Can we move platform specific error codes to platform files? Eg by having a platform specific version pd_errno_to_string(), * which has a first shot at translating errno values, and only if that one returns no result reverting back to the shared version? * Can go through the list of added errnos and check if there are really a few in that exist only on AIX. If there are a significant number we might do what you suggest , but for only a small number I wouldn't do it. >Small nit: > >- DEFINE_ENTRY(ESTALE, "Reserved") >+ DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") > >I like the glibc text better, just "Stale file handle". NFS seems too specific, can handles for other remote file systems not get stale? That's fine with me, I can change this to what you suggest. Best regards, Matthias From: Thomas Stüfe [mailto:thomas.stuefe at gmail.com] Sent: Thursday, February 1, 2018 18:38 To: Baesken, Matthias Cc: hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net Subject: Re: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 Hi Matthias, This would probably be better discussed in hotspot-runtime, no? The old error codes and their descriptions were Posix (http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). 
I do not really like spamming a shared file with AIX specific errno codes. Can we move platform specific error codes to platform files? Eg by having a platform specific version pd_errno_to_string(), which has a first shot at translating errno values, and only if that one returns no result reverting back to the shared version? Small nit: - DEFINE_ENTRY(ESTALE, "Reserved") + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") I like the glibc text better, just "Stale file handle". NFS seems too specific, can handles for other remote file systems not get stale? Kind Regards, Thomas On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias > wrote: Hello , I enhanced the errno - to - error-text mappings in os.cpp for a few errnos we find on AIX 7.1 . Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 ). Could you please check and review ? ( btw. there is good cross platform info about the errnos at http://www.ioplex.com/~miallen/errcmp.html ) Bug : https://bugs.openjdk.java.net/browse/JDK-8196578 Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ Best regards, Matthias From tobias.hartmann at oracle.com Fri Feb 2 08:11:03 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Feb 2018 09:11:03 +0100 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> Message-ID: Hi Vladimir, >>> On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: >>>> Increase wait time will not always work. I've decided to go with increasing the wait time because there are other flags that might slow down execution (for example, -XX:+DeoptimizeALot, -XX:+AggressiveOpts, ...) but excluding -Xcomp should be fine for now. 
>> Yes, skipping it for -Xcomp is also acceptable: >> >> @requires vm.compMode != "Xcomp" Okay, here's the new webrev: http://cr.openjdk.java.net/~thartmann/8195695/webrev.01/ Thanks, Tobias From tobias.hartmann at oracle.com Fri Feb 2 08:13:00 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Feb 2018 09:13:00 +0100 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <7245603d-5fc1-9cfa-99b6-ab94253c6a65@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7245603d-5fc1-9cfa-99b6-ab94253c6a65@oracle.com> Message-ID: <9f6b20bf-cb24-5511-0a25-b9bfea0de03d@oracle.com> Hi Mandy On 01.02.2018 18:56, mandy chung wrote: > This change looks okay. Just curious, is it also reproduced with product build? or just fastdebug build only? Thanks for looking at this. I've only seen it with a fastdebug build but I think it could also show up with a product build. Best regards, Tobias From matthias.baesken at sap.com Fri Feb 2 08:39:36 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 2 Feb 2018 08:39:36 +0000 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> Message-ID: <81c11685dd1b4edd9419e4897e96292a@sap.com> Thanks for the reviews . I added info about the fix for /proc/self/cgroup and /proc/self/mountinfo parsing to the bug : https://bugs.openjdk.java.net/browse/JDK-8196062 Guess I need a sponsor now to get it pushed ? Best regards, Matthias > -----Original Message----- > From: Bob Vandette [mailto:bob.vandette at oracle.com] > Sent: Donnerstag, 1. 
Februar 2018 17:53 > To: Lindenmaier, Goetz > Cc: Baesken, Matthias ; mikhailo > ; hotspot-dev at openjdk.java.net; Langer, > Christoph ; Doerr, Martin > ; Dmitry Samersoff sw.com> > Subject: Re: RFR : 8196062 : Enable docker container related tests for linux > ppc64le > > Looks good to me. > > Bob. > > > On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz > wrote: > > > > Hi Matthias, > > > > thanks for enabling this test. Looks good. > > I would appreciate if you would add a line > > "Summary: also fix cgroup subsystem recognition" > > to the bug description. Else this might be mistaken > > for a mere testbug. > > > > Best regards, > > Goetz. > > > > > >> -----Original Message----- > >> From: Baesken, Matthias > >> Sent: Mittwoch, 31. Januar 2018 15:15 > >> To: mikhailo ; Bob Vandette > >> > >> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >> ; Langer, Christoph > >> ; Doerr, Martin ; > >> Dmitry Samersoff > >> Subject: RE: RFR : 8196062 : Enable docker container related tests for linux > >> ppc64le > >> > >> Hello , I created a second webrev : > >> > >> > >> > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr > >> ev/ > >> > >> - changed DockerTestUtils.buildJdkDockerImage in the suggested way > (this > >> should be extendable to linux s390x soon) > >> > >>>>>> Can you add "return;" in each test for subsystem not found > messages > >> > >> - added returns in the tests for the subsystems in osContainer_linux.cpp > >> > >> - moved some checks at the beginning of subsystem_file_contents > >> (suggested by Dmitry) > >> > >> > >> Best regards, Matthias > >> > >> > >> > >>> -----Original Message----- > >>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] > >>> Sent: Donnerstag, 25. 
Januar 2018 18:43 > >>> To: Baesken, Matthias ; Bob Vandette > >>> > >>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >>> ; Langer, Christoph > >>> ; Doerr, Martin > >>> Subject: Re: RFR : 8196062 : Enable docker container related tests for > linux > >>> ppc64le > >>> > >>> Hi Matthias, > >>> > >>> > >>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: > >>>>> Perhaps, you could add code to > DockerTestUtils.buildJdkDockerImage() > >>>>> that does the following or similar: > >>>>> 1. Construct a name for platform-specific docker file: > >>>>> String platformSpecificDockerfile = dockerfile + "-" + > >>>>> Platform.getOsArch(); > >>>>> (Platform is jdk.test.lib.Platform) > >>>>> > >>>> Hello, the doc says : > >>>> > >>>> * Build a docker image that contains JDK under test. > >>>> * The jdk will be placed under the "/jdk/" folder inside the docker > file > >>> system. > >>>> ..... > >>>> param dockerfile name of the dockerfile residing in the test source > >>>> ..... > >>>> public static void buildJdkDockerImage(String imageName, String > >>> dockerfile, String buildDirName) > >>>> > >>>> > >>>> > >>>> It does not say anything about doing hidden insertions of some > platform > >>> names into the dockerfile name. > >>>> So should the jtreg API doc be changed ? > >>>> If so who needs to approve this ? > >>> Thank you for your concerns about the clarity of API and corresponding > >>> documentation. This is a test library API, so no need to file CCC or CSR. > >>> > >>> This API can be changed via a regular RFR/webrev review process, as > soon > >>> as on one objects. I am a VM SQE engineer covering the docker and > Linux > >>> container area, I am OK with this change. > >>> And I agree with you, we should update the javadoc header on this > >> method > >>> to reflect this implicit part of API contract. 
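[Editor's note: Misha's two-step lookup quoted above could be sketched as follows. The plain testSrc and osArch parameters stand in for the jtreg test-library calls Utils.TEST_SRC and Platform.getOsArch(), and the class name is made up for illustration.]

```java
import java.nio.file.Files;
import java.nio.file.Paths;

class DockerfileSelector {

    // Prefer a platform-specific Dockerfile (e.g. "Dockerfile-BasicTest-ppc64le")
    // if one exists in the test source directory; otherwise fall back to the
    // default name, so existing x86_64 tests keep working unchanged.
    static String selectDockerfile(String dockerfile, String testSrc, String osArch) {
        String platformSpecific = dockerfile + "-" + osArch;
        if (Files.exists(Paths.get(testSrc, platformSpecific))) {
            return platformSpecific;
        }
        return dockerfile;
    }

    public static void main(String[] args) {
        // With no platform-specific file present, the default name is returned.
        System.out.println(selectDockerfile("Dockerfile-BasicTest",
                                            "/nonexistent-dir", "ppc64le"));
        // prints Dockerfile-BasicTest
    }
}
```

Adding support for a new architecture then only requires dropping one new Dockerfile into the test source directory.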
> >>> > >>> > >>> Thank you, > >>> Misha > >>> > >>> > >>> > >>>> (as far as I see so far only the test at > >>> hotspot/jtreg/runtime/containers/docker/ use this so it should not be > a > >> big > >>> deal to change the interface?) > >>>> > >>>> Best regards, Matthias > >>>> > >>>> > >>>> > >>>> > >>>>> -----Original Message----- > >>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] > >>>>> Sent: Mittwoch, 24. Januar 2018 20:09 > >>>>> To: Bob Vandette ; Baesken, Matthias > >>>>> > >>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >>>>> ; Langer, Christoph > >>>>> ; Doerr, Martin > > >>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for > >> linux > >>>>> ppc64le > >>>>> > >>>>> Hi Matthias, > >>>>> > >>>>> Please see my comments about the test changes inline. > >>>>> > >>>>> > >>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: > >>>>>> osContainer_linux.cpp: > >>>>>> > >>>>>> Can you add "return;" in each test for subsystem not found > messages > >>> and > >>>>>> remove these 3 lines OR move your tests for NULL & messages > inside. > >>> The > >>>>> compiler can > >>>>>> probably optimize this but I?d prefer more compact code. > >>>>>> > >>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == > >>> NULL) > >>>>> { > >>>>>> 342 return; > >>>>>> 343 } > >>>>>> > >>>>>> > >>>>>> The other changes in osContainer_linux.cpp look ok. > >>>>>> > >>>>>> I forwarded your test changes to Misha, who wrote these. > >>>>>> > >>>>>> Since it?s likely that other platforms, such as aarch64, are going to run > >>> into > >>>>> the same problem, > >>>>>> It would have been better to enable the tests based on the > existence > >> of > >>> an > >>>>> arch specific > >>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch?s in > >>>>> VPProps.java. > >>>>>> This approach would reduce the number of changes significantly and > >>> allow > >>>>> support to > >>>>>> be added with 1 new file. 
> >>>>>> You wouldn't need "String dockerFileName = > >>>>> Common.getDockerFileName();" > >>>>>> in every test. Just make DockerTestUtils automatically add arch. > >>>>> I like Bob's idea on handling platform-specific Dockerfiles. > >>>>> > >>>>> Perhaps, you could add code to > DockerTestUtils.buildJdkDockerImage() > >>>>> that does the following or similar: > >>>>> 1. Construct a name for platform-specific docker file: > >>>>> String platformSpecificDockerfile = dockerfile + "-" + > >>>>> Platform.getOsArch(); > >>>>> (Platform is jdk.test.lib.Platform) > >>>>> > >>>>> 2. Check if platformSpecificDockerfile file exists in the test > >>>>> source directory > >>>>> File.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile) > >>>>> If it does, then use it. Otherwise continue using the > >>>>> default/original dockerfile name. > >>>>> > >>>>> I think this will considerably simplify your change, as well as make it > >>>>> easy to extend support to other platforms/configurations > >>>>> in the future. Let us know what you think of this approach ? > >>>>> > >>>>> > >>>>> Once your change gets (R)eviewed and approved, I can sponsor the > >> push. > >>>>> > >>>>> > >>>>> Thank you, > >>>>> Misha > >>>>> > >>>>> > >>>>> > >>>>>> Bob. > >>>>>> > >>>>>> > >>>>>> > >>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias > >>>>> wrote: > >>>>>>> Hello, could you please review the following change : 8196062 : > >> Enable > >>>>> docker container related tests for linux ppc64le . > >>>>>>> It adds docker container testing for linux ppc64 le (little endian) . > >>>>>>> > >>>>>>> A number of things had to be done : > >>>>>>> - Add a separate docker file > >>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- > >>> ppc64le > >>>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used for > >>>>> x86_64 seems not to be available for ppc64le ) > >>>>>>> -
Fix parsing /proc/self/mountinfo and /proc/self/cgroup > >> in > >>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle > the > >>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker > >>>>> container) > >>>>>>> - Add a bit more logging > >>>>>>> > >>>>>>> > >>>>>>> Webrev : > >>>>>>> > >>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ > >>>>>>> > >>>>>>> > >>>>>>> Bug : > >>>>>>> > >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 > >>>>>>> > >>>>>>> > >>>>>>> After these adjustments I could run the > runtime/containers/docker > >>> - > >>>>> jtreg tests successfully . > >>>>>>> > >>>>>>> Best regards, Matthias > > From thomas.stuefe at gmail.com Fri Feb 2 08:40:36 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 2 Feb 2018 09:40:36 +0100 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: <93fc660476f1490da815cdfba98ff623@sap.com> References: <93fc660476f1490da815cdfba98ff623@sap.com> Message-ID: On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias wrote: > > - I do not really like spamming a shared file with AIX specific errno > codes. > > > > Hi, I wrote "for a few errnos ***we find*** on AIX 7.1" , not that > they are AIX ***specific***. > > Checked the first few added ones : > > > > 1522 // some more errno numbers from AIX 7.1 (some are also supported > on Linux) > > 1523 #ifdef ENOTBLK > > 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") > > 1525 #endif > > 1526 #ifdef ECHRNG > > 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") > > 1528 #endif > > 1529 #ifdef ELNRNG > > 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") > > 1531 #endif > > > > According to > > > > http://www.ioplex.com/~miallen/errcmp.html > > > > ENOTBLK - found on AIX, Solaris, Linux, ... 
> > ECHRNG - found on AIX, Solaris, Linux > > ELNRNG - found on AIX, Solaris, Linux > > > The argument can easily made in the other direction. Checking the last n errno codes I see: AIX, MAC + #ifdef EPROCLIM AIX only + #ifdef ECORRUPT AIX only + #ifdef ESYSERROR AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs relocation") AIX, MAC + #ifdef ENOATTR AIX only + DEFINE_ENTRY(ESAD, "Security authentication denied") AIX only + #ifdef ENOTRUST ... > I would suggest to keep the multi-platform errnos in os.cpp just where > they are . > > > I am still not convinced and like my original suggestion better. Lets wait for others to chime in and see what the consensus is. Best Regards, Thomas > - Can we move platform specific error codes to platform files? Eg by > having a platform specific version pd_errno_to_string(), > - which has a first shot at translating errno values, and only if that > one returns no result reverting back to the shared version? > - > > > > Can go through the list of added errnos and check if there are really a > few in that exist only on AIX. > > If there are a significant number we might do what you suggest , but for > only a small number I wouldn?t do it. > > > > > > >Small nit: > > > > > >- DEFINE_ENTRY(ESTALE, "Reserved") > > >+ DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") > > > > > >I like the glibc text better, just "Stale file handle". NFS seems too > specific, can handles for other remote file systems not get stale? > > > > That?s fine with me, I can change this to what you suggest. > > > > Best regards, Matthias > > > > > > *From:* Thomas St?fe [mailto:thomas.stuefe at gmail.com] > *Sent:* Donnerstag, 1. 
Februar 2018 18:38 > *To:* Baesken, Matthias > *Cc:* hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net > *Subject:* Re: RFR : 8196578 : enhance errno_to_string function in os.cpp > with some additional errno texts from AIX 7.1 > > > > Hi Matthias, > > > > This would probably better discussed in hotspot-runtime, no? > > > > The old error codes and their descriptions were Posix ( > http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). I > do not really like spamming a shared file with AIX specific errno codes. > Can we move platform specific error codes to platform files? Eg by having a > platform specific version pd_errno_to_string(), which has a first shot at > translating errno values, and only if that one returns no result reverting > back to the shared version? > > > > Small nit: > > > > - DEFINE_ENTRY(ESTALE, "Reserved") > > + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") > > > > I like the glibc text better, just "Stale file handle". NFS seems too > specific, can handles for other remote file systems not get stale? > > Kind Regards, Thomas > > > > On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < > matthias.baesken at sap.com> wrote: > > Hello , I enhanced the errno - to - error-text mappings in os.cpp > for a few errnos we find on AIX 7.1 . > Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 > ). > > Could you please check and review ? > > ( btw. 
there is good cross platform info about the errnos at > http://www.ioplex.com/~miallen/errcmp.html ) > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8196578 > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ > > > > Best regards, Matthias > > > From tobias.hartmann at oracle.com Fri Feb 2 08:41:51 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Feb 2018 09:41:51 +0100 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> Message-ID: <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> Hi Vladimir and Serguei, thanks for looking at this! > On 2/1/18 11:55, Vladimir Kozlov wrote: >> I thought we should not use System.exit() and throw some Error or RuntimeException instead. I remember Igor I. did >> some changes but I forgot which tests. I think Igor just removed System.exit in the case where the test *passes*. I think it should be fine to bail out with System.exit(1) in the *failing* case. @Igor: What's the reason to avoid System.exit? The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning null." [1] As a result, the test fails without any information. I've basically copied this code from runtime/RedefineTests/RedefineAnnotations.java [2] where we use System.exit as well. 
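[Editor's note: a minimal sketch of the fail-fast pattern under discussion. Because an exception escaping transform() is silently swallowed by the JVM, the transformer reports the error itself and terminates with a non-zero status so the harness sees the failure. The class name and the instrument() helper are made up for illustration.]

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

class FailFastTransformer implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        try {
            return instrument(classfileBuffer);
        } catch (Throwable t) {
            // An exception thrown out of transform() has the same effect
            // as returning null, so surface the failure explicitly.
            t.printStackTrace();
            System.exit(1);
            return null; // not reached
        }
    }

    // Placeholder for the real bytecode rewriting; returning null tells
    // the JVM to keep the class file unchanged.
    private byte[] instrument(byte[] classfileBuffer) {
        return null;
    }
}
```

Note that, as discussed later in the thread, a test that calls System.exit generally has to run in othervm mode.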
If there's a reason to avoid System.exit here, we can also just print an error and fail later with the generic exception: "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr " http://cr.openjdk.java.net/~thartmann/8195731/webrev.01/ Thanks, Tobias [1] https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/ClassFileTransformer.html [2] http://hg.openjdk.java.net/jdk/hs/file/e50e326a2bfc/test/hotspot/jtreg/runtime/RedefineTests/RedefineAnnotations.java#l145 From david.holmes at oracle.com Fri Feb 2 08:57:41 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Feb 2018 18:57:41 +1000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: <93fc660476f1490da815cdfba98ff623@sap.com> Message-ID: While I did not do an exhaustive check of the existing codes even the ones under // The following enums are not defined on all platforms. are at least defined by POSIX (even if just listed as "Reserved"). So I am still reluctant to introduce OS specific codes into a shared file. Plus there's the problem of different OS having different meanings for the same error code - suggesting per-OS specialization might be useful (but tricky to implement). That said I have to re-question whether we should be maintaining this explicit string mapping table anyway? strerror() is not thread-safe but strerror_l() seems to be, or at worst we need buffer management with strerror_r(). I know this topic has arisen before ... Cheers, David On 2/02/2018 6:40 PM, Thomas St?fe wrote: > On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias > wrote: > >> >> - I do not really like spamming a shared file with AIX specific errno >> codes. >> >> >> >> Hi, I wrote ?for a few errnos ***we find*** on AIX 7.1? , not that >> they are AIX ***specific***. 
>> >> Checked the first few added ones : >> >> >> >> 1522 // some more errno numbers from AIX 7.1 (some are also supported >> on Linux) >> >> 1523 #ifdef ENOTBLK >> >> 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") >> >> 1525 #endif >> >> 1526 #ifdef ECHRNG >> >> 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") >> >> 1528 #endif >> >> 1529 #ifdef ELNRNG >> >> 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") >> >> 1531 #endif >> >> >> >> According to >> >> >> >> http://www.ioplex.com/~miallen/errcmp.html >> >> >> >> ENOTBLK ? found on AIX, Solaris, Linux, ? >> >> ECHRNG - found on AIX, Solaris, Linux >> >> ELNRNG - found on AIX, Solaris, Linux >> >> >> > > The argument can easily made in the other direction. Checking the last n > errno codes I see: > > AIX, MAC + #ifdef EPROCLIM > AIX only + #ifdef ECORRUPT > AIX only + #ifdef ESYSERROR > AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs relocation") > AIX, MAC + #ifdef ENOATTR > AIX only + DEFINE_ENTRY(ESAD, "Security authentication denied") > AIX only + #ifdef ENOTRUST > ... > > >> I would suggest to keep the multi-platform errnos in os.cpp just where >> they are . >> >> >> > > I am still not convinced and like my original suggestion better. Lets wait > for others to chime in and see what the consensus is. > > Best Regards, Thomas > > > > >> - Can we move platform specific error codes to platform files? Eg by >> having a platform specific version pd_errno_to_string(), >> - which has a first shot at translating errno values, and only if that >> one returns no result reverting back to the shared version? >> - >> >> >> >> Can go through the list of added errnos and check if there are really a >> few in that exist only on AIX. >> >> If there are a significant number we might do what you suggest , but for >> only a small number I wouldn?t do it. 
>> >> >> >> >> >>> Small nit: >> >>> >> >>> - DEFINE_ENTRY(ESTALE, "Reserved") >> >>> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >> >>> >> >>> I like the glibc text better, just "Stale file handle". NFS seems too >> specific, can handles for other remote file systems not get stale? >> >> >> >> That?s fine with me, I can change this to what you suggest. >> >> >> >> Best regards, Matthias >> >> >> >> >> >> *From:* Thomas St?fe [mailto:thomas.stuefe at gmail.com] >> *Sent:* Donnerstag, 1. Februar 2018 18:38 >> *To:* Baesken, Matthias >> *Cc:* hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net >> *Subject:* Re: RFR : 8196578 : enhance errno_to_string function in os.cpp >> with some additional errno texts from AIX 7.1 >> >> >> >> Hi Matthias, >> >> >> >> This would probably better discussed in hotspot-runtime, no? >> >> >> >> The old error codes and their descriptions were Posix ( >> http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). I >> do not really like spamming a shared file with AIX specific errno codes. >> Can we move platform specific error codes to platform files? Eg by having a >> platform specific version pd_errno_to_string(), which has a first shot at >> translating errno values, and only if that one returns no result reverting >> back to the shared version? >> >> >> >> Small nit: >> >> >> >> - DEFINE_ENTRY(ESTALE, "Reserved") >> >> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >> >> >> >> I like the glibc text better, just "Stale file handle". NFS seems too >> specific, can handles for other remote file systems not get stale? >> >> Kind Regards, Thomas >> >> >> >> On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < >> matthias.baesken at sap.com> wrote: >> >> Hello , I enhanced the errno - to - error-text mappings in os.cpp >> for a few errnos we find on AIX 7.1 . >> Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 >> ). 
>> >> Could you please check and review ? >> >> ( btw. there is good cross platform info about the errnos at >> http://www.ioplex.com/~miallen/errcmp.html ) >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8196578 >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ >> >> >> >> Best regards, Matthias >> >> >> From david.holmes at oracle.com Fri Feb 2 09:15:03 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Feb 2018 19:15:03 +1000 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> Message-ID: <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> On 2/02/2018 6:41 PM, Tobias Hartmann wrote: > Hi Vladimir and Serguei, > > thanks for looking at this! > >> On 2/1/18 11:55, Vladimir Kozlov wrote: >>> I thought we should not use System.exit() and throw some Error or RuntimeException instead. I remember Igor I. did >>> some changes but I forgot which tests. > > I think Igor just removed System.exit in the case where the test *passes*. I think it should be fine to bail out with > System.exit(1) in the *failing* case. > > @Igor: What's the reason to avoid System.exit? http://openjdk.java.net/jtreg/faq.html#question2.6 2.6. Should a test call the System.exit method? No. Depending on how you run the tests, you may get a security exception from the harness. --- Plus if you call System.exit you have to run in othervm mode. So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry point is, depending on which framework it uses - like testng). 
There are exceptions of course (pardon the pun) and a lot of legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. > The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: > "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the > load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning > null." [1] > > As a result, the test fails without any information. I've basically copied this code from > runtime/RedefineTests/RedefineAnnotations.java [2] where we use System.exit as well. If there's a reason to avoid > System.exit here, we can also just print an error and fail later with the generic exception: > "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" This does sound like a case where you need System.exit to force immediate termination. David ----- > http://cr.openjdk.java.net/~thartmann/8195731/webrev.01/ > > Thanks, > Tobias > > [1] https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/ClassFileTransformer.html > [2] > http://hg.openjdk.java.net/jdk/hs/file/e50e326a2bfc/test/hotspot/jtreg/runtime/RedefineTests/RedefineAnnotations.java#l145 > From thomas.stuefe at gmail.com Fri Feb 2 09:20:12 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 2 Feb 2018 10:20:12 +0100 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: <93fc660476f1490da815cdfba98ff623@sap.com> Message-ID: On Fri, Feb 2, 2018 at 9:57 AM, David Holmes wrote: > While I did not do an exhaustive check of the existing codes even the ones > under > > // The following enums are not defined on all platforms. > > are at least defined by POSIX (even if just listed as "Reserved").
> > So I am still reluctant to introduce OS specific codes into a shared file. > Plus there's the problem of different OS having different meanings for the > same error code - suggesting per-OS specialization might be useful (but > tricky to implement). > > That said I have to re-question whether we should be maintaining this > explicit string mapping table anyway? strerror() is not thread-safe but > strerror_l() seems to be, or at worst we need buffer management with > strerror_r(). I know this topic has arisen before ... > > How about we build the string table dynamically at process start by iterating the first n errnos and calling strerror() :) Just kidding. Yes, I admit this table starts to feel weird. Original discussions were here: https://bugs.openjdk.java.net/browse/JDK-8148425 I originally just wanted a static translation of errno numbers to literalized errno constants (e.g. ETOOMANYREFS => "ETOOMANYREFS"), because in 99% of cases where we call os::strerror() we do this to print log output for developers, and as a developer I find "ETOOMANYREFS" far more succinct than whatever strerror() returns. This would also bypass any localization issues. If I see "ETOOMANYREFS" in a log file I immediately know this is an error code from the libc, and can look it up in the man page or google it. But when I read "Too many references: can't splice" - potentially in Portuguese :) - I would have to dig a bit until I find out what is actually happening. Of course, there are cases where we want the human readable, localized text, but those cases are rarer and could be rewritten to use strerror_r. Just my 5 cent. ..Thomas Cheers, > David > > On 2/02/2018 6:40 PM, Thomas Stüfe wrote: > >> On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias < >> matthias.baesken at sap.com> >> wrote: >> >> >>> - I do not really like spamming a shared file with AIX specific errno >>> >>> codes. >>> >>> >>> >>> Hi, I wrote "for a few errnos ***we find*** on AIX 7.1"
, not that >>> they are AIX ***specific***. >>> >>> Checked the first few added ones : >>> >>> >>> >>> 1522 // some more errno numbers from AIX 7.1 (some are also supported >>> on Linux) >>> >>> 1523 #ifdef ENOTBLK >>> >>> 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") >>> >>> 1525 #endif >>> >>> 1526 #ifdef ECHRNG >>> >>> 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") >>> >>> 1528 #endif >>> >>> 1529 #ifdef ELNRNG >>> >>> 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") >>> >>> 1531 #endif >>> >>> >>> >>> According to >>> >>> >>> >>> http://www.ioplex.com/~miallen/errcmp.html >>> >>> >>> >>> ENOTBLK - found on AIX, Solaris, Linux, ... >>> >>> ECHRNG - found on AIX, Solaris, Linux >>> >>> ELNRNG - found on AIX, Solaris, Linux >>> >>> >>> >>> >> The argument can easily be made in the other direction. Checking the last n >> errno codes I see: >> >> AIX, MAC + #ifdef EPROCLIM >> AIX only + #ifdef ECORRUPT >> AIX only + #ifdef ESYSERROR >> AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs relocation") >> AIX, MAC + #ifdef ENOATTR >> AIX only + DEFINE_ENTRY(ESAD, "Security authentication denied") >> AIX only + #ifdef ENOTRUST >> ... >> >> >> I would suggest to keep the multi-platform errnos in os.cpp just where >>> they are . >>> >>> >>> >>> >> I am still not convinced and like my original suggestion better. Let's wait >> for others to chime in and see what the consensus is. >> >> Best Regards, Thomas >> >> >> >> >> - Can we move platform specific error codes to platform files? Eg by >>> having a platform specific version pd_errno_to_string(), >>> - which has a first shot at translating errno values, and only if >>> that >>> one returns no result reverting back to the shared version? >>> - >>> >>> >>> >>> Can go through the list of added errnos and check if there are really a >>> few in that exist only on AIX. >>> >>> If there are a significant number we might do what you suggest , but for >>> only a small number I wouldn't do it.
>>> >>> >>> >>> >>> Small nit: >>>> >>> >>> >>>> >>> - DEFINE_ENTRY(ESTALE, "Reserved") >>>> >>> >>> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >>>> >>> >>> >>>> >>> I like the glibc text better, just "Stale file handle". NFS seems too >>>> >>> specific, can handles for other remote file systems not get stale? >>> >>> >>> >>> That's fine with me, I can change this to what you suggest. >>> >>> >>> >>> Best regards, Matthias >>> >>> >>> >>> >>> >>> *From:* Thomas Stüfe [mailto:thomas.stuefe at gmail.com] >>> *Sent:* Donnerstag, 1. Februar 2018 18:38 >>> *To:* Baesken, Matthias >>> *Cc:* hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net >>> *Subject:* Re: RFR : 8196578 : enhance errno_to_string function in os.cpp >>> >>> with some additional errno texts from AIX 7.1 >>> >>> >>> >>> Hi Matthias, >>> >>> >>> >>> This would probably be better discussed in hotspot-runtime, no? >>> >>> >>> >>> The old error codes and their descriptions were Posix ( >>> http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). I >>> do not really like spamming a shared file with AIX specific errno codes. >>> Can we move platform specific error codes to platform files? Eg by >>> having a >>> platform specific version pd_errno_to_string(), which has a first shot at >>> translating errno values, and only if that one returns no result >>> reverting >>> back to the shared version? >>> >>> >>> >>> Small nit: >>> >>> >>> >>> - DEFINE_ENTRY(ESTALE, "Reserved") >>> >>> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >>> >>> >>> >>> I like the glibc text better, just "Stale file handle". NFS seems too >>> specific, can handles for other remote file systems not get stale? >>> >>> Kind Regards, Thomas >>> >>> >>> >>> On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < >>> matthias.baesken at sap.com> wrote: >>> >>> Hello , I enhanced the errno - to - error-text mappings in os.cpp >>> for a few errnos we find on AIX 7.1 .
>>> Some of these added errnos are found as well on Linux (e.g. SLES 11 / 12 >>> ). >>> >>> Could you please check and review ? >>> >>> ( btw. there is good cross platform info about the errnos at >>> http://www.ioplex.com/~miallen/errcmp.html ) >>> >>> Bug : >>> >>> https://bugs.openjdk.java.net/browse/JDK-8196578 >>> >>> Webrev : >>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ >>> >>> >>> >>> Best regards, Matthias >>> >>> >>> >>> From aph at redhat.com Fri Feb 2 09:27:26 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 2 Feb 2018 09:27:26 +0000 Subject: Constant dynamic pushed to the hs repo In-Reply-To: References: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> Message-ID: On 01/02/18 17:41, Paul Sandoz wrote: > And here is the review thread for AArch64 changes: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029435.html Mmm. So it was reviewed and approved with minor changes, but not checked in anywhere? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From tobias.hartmann at oracle.com Fri Feb 2 09:36:47 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Feb 2018 10:36:47 +0100 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> Message-ID: <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> Hi David, On 02.02.2018 10:15, David Holmes wrote: > http://openjdk.java.net/jtreg/faq.html#question2.6 > > 2.6. Should a test call the System.exit method? > > No. Depending on how you run the tests, you may get a security exception from the harness. 
> > --- > > Plus if you call System.exit you have to run in othervm mode. > > So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry point > is, depending on which framework it uses - like testng). There are exceptions of course (pardon the pun) and a lot of > legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. Thanks for the pointer, that makes sense to me. >> The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: >> "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the >> load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning >> null." [1] >> >> As a result, the test fails without any information. I've basically copied this code from >> runtime/RedefineTests/RedefineAnnotations.java [2] were we use System.exit as well. If there's a reason to avoid >> System.exit here, we can also just print an error and fail later with the generic exception: >> "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" > > This does sounds like a case where you need System.exit to force immediate termination. Yes, I think so too. Thanks, Tobias From dmitry.samersoff at bell-sw.com Fri Feb 2 09:49:29 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersov) Date: Fri, 2 Feb 2018 12:49:29 +0300 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <64a3268575d14ddcad90f7d46bab64dd@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> Message-ID: Matthias, The fix looks good to me. 
PS: I would prefer to check that the file path fits within MAXPATHLEN before doing any copying and save a bit of computer power or, even better, use snprintf here but these changes are clearly out of scope of your fix. -Dmitry On 31.01.2018 17:15, Baesken, Matthias wrote: > Hello , I created a second webrev : > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webrev/ > > - changed DockerTestUtils.buildJdkDockerImage in the suggested way (this should be extendable to linux s390x soon) > >>>>> Can you add "return;" in each test for subsystem not found messages > > - added returns in the tests for the subsystems in osContainer_linux.cpp > > - moved some checks at the beginning of subsystem_file_contents (suggested by Dmitry) > > > Best regards, Matthias > > > >> -----Original Message----- >> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >> Sent: Donnerstag, 25. Januar 2018 18:43 >> To: Baesken, Matthias ; Bob Vandette >> >> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >> ; Langer, Christoph >> ; Doerr, Martin >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Hi Matthias, >> >> >> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>> that does the following or similar: >>>> 1. Construct a name for platform-specific docker file: >>>> String platformSpecificDockerfile = dockerfile + "-" + >>>> Platform.getOsArch(); >>>> (Platform is jdk.test.lib.Platform) >>>> >>> Hello, the doc says : >>> >>> * Build a docker image that contains JDK under test. >>> * The jdk will be placed under the "/jdk/" folder inside the docker file >> system. >>> ..... >>> param dockerfile name of the dockerfile residing in the test source >>> .....
>>> public static void buildJdkDockerImage(String imageName, String >> dockerfile, String buildDirName) >>> >>> >>> >>> It does not say anything about doing hidden insertions of some platform >> names into the dockerfile name. >>> So should the jtreg API doc be changed ? >>> If so who needs to approve this ? >> Thank you for your concerns about the clarity of API and corresponding >> documentation. This is a test library API, so no need to file CCC or CSR. >> >> This API can be changed via a regular RFR/webrev review process, as soon >> as no one objects. I am a VM SQE engineer covering the docker and Linux >> container area, I am OK with this change. >> And I agree with you, we should update the javadoc header on this method >> to reflect this implicit part of API contract. >> >> >> Thank you, >> Misha >> >> >> >>> (as far as I see so far only the test at >> hotspot/jtreg/runtime/containers/docker/ use this so it should not be a big >> deal to change the interface?) >>> >>> Best regards, Matthias >>> >>> >>> >>> >>>> -----Original Message----- >>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>> To: Bob Vandette ; Baesken, Matthias >>>> >>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>> ; Langer, Christoph >>>> ; Doerr, Martin >>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >>>> ppc64le >>>> >>>> Hi Matthias, >>>> >>>> Please see my comments about the test changes inline. >>>> >>>> >>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>> osContainer_linux.cpp: >>>>> >>>>> Can you add "return;" in each test for subsystem not found messages >> and >>>>> remove these 3 lines OR move your tests for NULL & messages inside. >> The >>>> compiler can >>>>> probably optimize this but I'd prefer more compact code.
>>>>> >>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == >> NULL) >>>> { >>>>> 342 return; >>>>> 343 } >>>>> >>>>> >>>>> The other changes in osContainer_linux.cpp look ok. >>>>> >>>>> I forwarded your test changes to Misha, who wrote these. >>>>> >>>>> Since it's likely that other platforms, such as aarch64, are going to run >> into >>>> the same problem, >>>>> It would have been better to enable the tests based on the existence of >> an >>>> arch specific >>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch's in >>>> VPProps.java. >>>>> This approach would reduce the number of changes significantly and >> allow >>>> support to >>>>> be added with 1 new file. >>>>> >>>>> You wouldn't need "String dockerFileName = >>>> Common.getDockerFileName();" >>>>> in every test. Just make DockerTestUtils automatically add arch. >>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>> >>>> Perhaps, you could add code to DockerTestUtils.buildJdkDockerImage() >>>> that does the following or similar: >>>> 1. Construct a name for the platform-specific docker file: >>>> String platformSpecificDockerfile = dockerfile + "-" + >>>> Platform.getOsArch(); >>>> (Platform is jdk.test.lib.Platform) >>>> >>>> 2. Check if the platformSpecificDockerfile file exists in the test >>>> source directory >>>> File.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile)) >>>> If it does, then use it. Otherwise continue using the >>>> default/original dockerfile name. >>>> >>>> I think this will considerably simplify your change, as well as make it >>>> easy to extend support to other platforms/configurations >>>> in the future. Let us know what you think of this approach? >>>> >>>> >>>> Once your change gets (R)eviewed and approved, I can sponsor the push. >>>> >>>> >>>> Thank you, >>>> Misha >>>> >>>> >>>> >>>>> Bob.
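[Editorial sketch] Misha's suggested Dockerfile-selection logic can be sketched as a small standalone method. This is an illustration only, not the actual DockerTestUtils code: the method name `selectDockerfile` and the explicit `srcDir`/`osArch` parameters are hypothetical stand-ins for what the real test library would obtain from `Utils.TEST_SRC` and `Platform.getOsArch()`.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DockerfileSelector {

    /**
     * Prefer a platform-specific "<dockerfile>-<osArch>" file when one
     * exists in the test source directory; otherwise fall back to the
     * default dockerfile name. Sketch of the approach discussed above.
     */
    static String selectDockerfile(Path srcDir, String dockerfile, String osArch) {
        String platformSpecific = dockerfile + "-" + osArch;
        if (Files.exists(srcDir.resolve(platformSpecific))) {
            return platformSpecific;
        }
        return dockerfile;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dockersel");
        // No arch-specific file present: fall back to the default name.
        System.out.println(selectDockerfile(dir, "Dockerfile-BasicTest", "ppc64le"));
        // After adding one next to the test source, it is picked up automatically.
        Files.createFile(dir.resolve("Dockerfile-BasicTest-ppc64le"));
        System.out.println(selectDockerfile(dir, "Dockerfile-BasicTest", "ppc64le"));
    }
}
```

With this shape, adding support for a new platform (s390x, aarch64, ...) only requires dropping one new `Dockerfile-BasicTest-<arch>` file into the test source directory, which is exactly the simplification argued for in the thread.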
>>>>> >>>>> >>>>> >>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>> wrote: >>>>>> Hello, could you please review the following change : 8196062 : Enable >>>> docker container related tests for linux ppc64le . >>>>>> It adds docker container testing for linux ppc64 le (little endian) . >>>>>> >>>>>> A number of things had to be done : >>>>>> - Add a separate docker file >>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- >> ppc64le >>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used for >>>> x86_64 seems not to be available for ppc64le ) >>>>>> - Fix parsing /proc/self/mountinfo and /proc/self/cgroup in >>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle the >>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker >>>> container) >>>>>> - Add a bit more logging >>>>>> >>>>>> >>>>>> Webrev : >>>>>> >>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>> >>>>>> >>>>>> Bug : >>>>>> >>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>> >>>>>> >>>>>> After these adjustments I could run the runtime/containers/docker - >>>> jtreg tests successfully . >>>>>> >>>>>> Best regards, Matthias > From dmitry.samersoff at bell-sw.com Fri Feb 2 09:58:09 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersov) Date: Fri, 2 Feb 2018 12:58:09 +0300 Subject: Constant dynamic pushed to the hs repo In-Reply-To: References: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> Message-ID: Andrew, On 02.02.2018 12:27, Andrew Haley wrote: > On 01/02/18 17:41, Paul Sandoz wrote: >> And here is the review thread for AArch64 changes: >> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029435.html > > Mmm. So it was reviewed and approved with minor changes, but not > checked in anywhere? > Correct! I waited for the final patch from Paul to do some additional testing. I plan to re-raise the review request and then push the changes shortly.
-- -Dmitry From stewartd.qdt at qualcommdatacenter.com Fri Feb 2 13:58:19 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 2 Feb 2018 13:58:19 +0000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <1b753efa-6abc-4947-e4f3-f6a29020c082@oracle.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> <1b753efa-6abc-4947-e4f3-f6a29020c082@oracle.com> Message-ID: <4dc3cedf0b1e40b7bceff6672cb3fa1a@NASANEXM01E.na.qualcomm.com> Hi Jini, Thank you for the review. I have made the requested changes and posted them at http://cr.openjdk.java.net/~dstewart/8196361/webrev.03/ Please have a look and review the changes. Thanks, Daniel -----Original Message----- From: Jini George [mailto:jini.george at oracle.com] Sent: Friday, February 2, 2018 1:19 AM To: David Holmes ; stewartd.qdt Cc: serviceability-dev ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java Hi Daniel, Your changes look good to me overall. Just some nits: * Please do add 2018 to the copyright year. * Since the rest of the file follows 4 spaces for indentation, please keep the indentation to 4 spaces. * Line 81: It would be great if the opening brace is at line 80, so that it would be consistent with the rest of the file. * Line 65: The declaration could be a part of line 79. * Line 51: Please add the 'oop address of a java.lang.Class' to the comment. Thanks! Jini. On 2/2/2018 7:31 AM, David Holmes wrote: > On 2/02/2018 1:50 AM, stewartd.qdt wrote: >> Please have
a look at the newest changes at: >> http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ >> >> The only difference between this and the last changeset is the use of >> "\\R" instead of whatever is the platform line.separator. > Thanks for that. > > The overall changes seem reasonable but I'll defer to Jini for final > approval. If Jini approves then consider this Reviewed. > > Thanks, > David > >> Thank you, >> Daniel >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Thursday, February 1, 2018 2:51 AM >> To: stewartd.qdt ; Jini George >> >> Cc: serviceability-dev ; >> hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8196361: JTReg failure in >> serviceability/sa/ClhsdbInspect.java >> >> Hi Daniel, >> >> On 1/02/2018 2:45 AM, stewartd.qdt wrote: >>> Hi Jini, David, >>> >>> Please have a look at the revised webrev: >>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ >>> >>> In this webrev I have changed the approach to finding the addresses. >>> This was necessary because in the case of matching for the locks the >>> addresses are before what is matched and in the case of Method the >>> address is after it. The existing code only looked for the >>> addresses after the matched string. I've also tried to align what >>> tokens are being looked for in the lock case. I've taken an >>> approach of breaking the jstack output into lines and then searching >>> each line for it containing what we want. Once found, the line is >>> broken into pieces to find the actual address we want. >>> >>> Please let me know if this is an unacceptable approach or any >>> changes you would like to see. >> >> I'm not clear on the overall approach as I'm unclear exactly how >> inspect operates or exactly what the test is trying to verify. One >> comment on breaking things into lines though: >> >> 73 String newline = >> System.getProperty("line.separator"); >> 74
String[] lines = jstackOutput.split(newline); >> >> As split() takes a regex, I suggest using \R to cover all potential >> line-breaks, rather than the platform specific line-separator. We've >> been recently bitten by the distinction between output that comes >> from reading a process's stdout/stderr (and for which a newline \n is >> translated into the platform line-separator), and output that comes >> across a socket connection (for which \n is not translated). This >> could result in failing to parse things correctly on Windows. It's >> safer/simpler to expect any kind of line-separator. >> >> Thanks, >> David >> >>> Thanks, >>> Daniel >>> >>> >>> -----Original Message----- >>> From: Jini George [mailto:jini.george at oracle.com] >>> Sent: Tuesday, January 30, 2018 6:58 AM >>> To: David Holmes ; stewartd.qdt >>> >>> Cc: serviceability-dev ; >>> hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8196361: JTReg failure in >>> serviceability/sa/ClhsdbInspect.java >>> >>> Hi Daniel, David, >>> >>> Thanks, Daniel, for bringing this up. The intent of the test is to >>> get the oop address corresponding to a >>> java.lang.ref.ReferenceQueue$Lock, >>> which can typically be obtained from the stack traces of the >>> Common-Cleaner or the Finalizer threads. The stack traces which I >>> had been noticing were typically of the form: >>> >>> >>> "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e >>> in >>> Object.wait() [0x00007f09a18d2000] >>> java.lang.Thread.State: TIMED_WAITING (on object monitor) >>> JavaThread state: _thread_blocked >>> - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, >>> Method*=0x00007f09acc43d60 (Interpreted frame) >>> - waiting on <0x000000072e61f6e0> (a >>> java.lang.ref.ReferenceQueue$Lock) >>> - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, >>> pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted >>> frame) >>>
- waiting to re-lock in wait() <0x000000072e61f6e0> (a >>> java.lang.ref.ReferenceQueue$Lock) >>> ... >>> >>> I chose 'waiting to re-lock in wait' since that was what I had been >>> observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. >>> But I see how with a timing difference, one could get 'waiting to lock' >>> as in your case. So, a good way to fix might be to check for the >>> line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the >>> oop address from that line (should be the address appearing >>> immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and >>> passing that to the 'inspect' command. >>> >>> Thanks much, >>> Jini. >>> >>> On 1/30/2018 3:35 AM, David Holmes wrote: >>>> Hi Daniel, >>>> >>>> Serviceability issues should go to >>>> serviceability-dev at openjdk.java.net >>>> - now cc'd. >>>> >>>> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>>>> Please review this webrev [1] which attempts to fix a test error >>>>> in serviceability/sa/ClhsdbInspect.java when it is run under an >>>>> AArch64 system (not necessarily exclusive to this system, but it >>>>> was the system under test). The bug report [2] provides further details. >>>>> Essentially the line "waiting to re-lock in wait" never actually >>>>> occurs. Instead I have the line "waiting to lock" which occurs for >>>>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. >>>>> Unfortunately the test is written such that only the first >>>>> "waiting to lock" >>>>> occurrence is seen (for java/lang/Class), which is already >>>>> accounted for in the test. >>>> >>>> I can't tell exactly what the test expects, or why, but it would be >>>> extremely hard to arrange for "waiting to re-lock in wait" to be >>>> seen for the ReferenceQueue lock! That requires acquiring the lock >>>> yourself, issuing a notify() to unblock the wait(), and then >>>> issuing the jstack command while still holding the lock! 
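[Editorial sketch] David's `\R` suggestion above is easy to demonstrate in isolation. The snippet below is an illustration only; the input strings are made-up stand-ins, not real jstack output:

```java
public class SplitLines {

    // Split captured output on any Unicode line break (\n, \r\n, \r, ...)
    // instead of the platform line.separator, so that output read from a
    // process (platform separators) and output read over a socket
    // (plain \n) both parse the same way on every OS.
    static String[] toLines(String output) {
        return output.split("\\R");
    }

    public static void main(String[] args) {
        String fromSocket  = "line one\nline two";    // plain \n
        String fromProcess = "line one\r\nline two";  // Windows-style \r\n
        System.out.println(toLines(fromSocket).length);   // 2
        System.out.println(toLines(fromProcess).length);  // 2: \R consumes \r\n as a single break
    }
}
```

Note that `\R` (available since Java 8) matches `\r\n` as one unit, so a Windows-style break does not produce a spurious empty line the way splitting on `\n` alone would leave a trailing `\r`.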
>>>> David >>>> ----- >>>> >>>>> I'm not overly happy with this approach as it actually removes a >>>>> test line. However, the test line does not actually appear in the >>>>> output (at least on my system) and the test is not currently >>>>> written to look for the second occurrence of the line "waiting to lock". >>>>> Perhaps the original author could chime in and provide further >>>>> guidance as to the intention of the test. >>>>> >>>>> I am happy to modify the patch as necessary. >>>>> >>>>> Regards, >>>>> Daniel Stewart >>>>> >>>>> >>>>> [1] - http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>>>> From mandy.chung at oracle.com Fri Feb 2 16:22:33 2018 From: mandy.chung at oracle.com (mandy chung) Date: Fri, 2 Feb 2018 08:22:33 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> Message-ID: <9d5e3eb8-9838-aa78-8597-f30ec2391837@oracle.com> On 2/2/18 12:11 AM, Tobias Hartmann wrote: > Okay, here's the new webrev: > http://cr.openjdk.java.net/~thartmann/8195695/webrev.01/ > +1 Mandy From tobias.hartmann at oracle.com Fri Feb 2 16:34:32 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Feb 2018 17:34:32 +0100 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <9d5e3eb8-9838-aa78-8597-f30ec2391837@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> <9d5e3eb8-9838-aa78-8597-f30ec2391837@oracle.com> Message-ID: <49227c0e-000c-7275-56ba-a139519da995@oracle.com> Thanks Mandy!
Best regards, Tobias On 02.02.2018 17:22, mandy chung wrote: > > > On 2/2/18 12:11 AM, Tobias Hartmann wrote: >> Okay, here's the new webrev: >> http://cr.openjdk.java.net/~thartmann/8195695/webrev.01/ >> > +1 > > Mandy From vladimir.kozlov at oracle.com Fri Feb 2 17:55:46 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 2 Feb 2018 09:55:46 -0800 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> Message-ID: <61785a90-35a9-7e4f-ac2b-b6923a0f8581@oracle.com> Good. Thanks, Vladimir On 2/2/18 12:11 AM, Tobias Hartmann wrote: > Hi Vladimir, > >>>> On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: >>>>> Increase wait time will not always work. > > I've decided to go with increasing the wait time because there are other flags that might slow down execution (for > example, -XX:+DeoptimizeALot, -XX:+AggressiveOpts, ...) but excluding -Xcomp should be fine for now. 
> >>> Yes, skipping it for -Xcomp is also acceptable: >>> >>> @requires vm.compMode != "Xcomp" > > Okay, here's the new webrev: > http://cr.openjdk.java.net/~thartmann/8195695/webrev.01/ > > Thanks, > Tobias > From vladimir.kozlov at oracle.com Fri Feb 2 18:26:43 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 2 Feb 2018 10:26:43 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> Message-ID: Thank you, Tobias and David. With this information I agree to use System.exit(). May be just add your new log("Transformation failed!"); to webrev.00 Thanks, Vladimir On 2/2/18 1:36 AM, Tobias Hartmann wrote: > Hi David, > > On 02.02.2018 10:15, David Holmes wrote: >> http://openjdk.java.net/jtreg/faq.html#question2.6 >> >> 2.6. Should a test call the System.exit method? >> >> No. Depending on how you run the tests, you may get a security exception from the harness. >> >> --- >> >> Plus if you call System.exit you have to run in othervm mode. >> >> So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry point >> is, depending on which framework it uses - like testng). There are exceptions of course (pardon the pun) and a lot of >> legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. > > Thanks for the pointer, that makes sense to me. 
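[Editorial note] The `@requires` exclusion discussed above would appear in a jtreg test header roughly as in the following sketch. This is a hypothetical skeleton for illustration only, not the actual NativeLibraryTest; the helper method merely mirrors the predicate that jtreg evaluates from the tag.

```java
/*
 * @test
 * @summary Hypothetical skeleton showing how a timing-sensitive test can
 *          be excluded from -Xcomp runs, as discussed in this thread.
 * @requires vm.compMode != "Xcomp"
 * @run main XcompSkeleton
 */
public class XcompSkeleton {

    // Mirrors the @requires predicate above: the test is selected only
    // when the VM compilation mode is not Xcomp.
    static boolean shouldRun(String compMode) {
        return !"Xcomp".equals(compMode);
    }

    public static void main(String[] args) {
        // jtreg evaluates @requires before launching the test, so by the
        // time main() runs, the -Xcomp configuration has been filtered out.
        if (shouldRun("mixed")) {
            System.out.println("running test body");
        }
    }
}
```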
> >>> The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: >>> "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the >>> load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning >>> null." [1] >>> >>> As a result, the test fails without any information. I've basically copied this code from >>> runtime/RedefineTests/RedefineAnnotations.java [2] where we use System.exit as well. If there's a reason to avoid >>> System.exit here, we can also just print an error and fail later with the generic exception: >>> "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" >> >> This does sound like a case where you need System.exit to force immediate termination. > > Yes, I think so too. > > Thanks, > Tobias > From serguei.spitsyn at oracle.com Fri Feb 2 20:56:32 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 2 Feb 2018 12:56:32 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> Message-ID: <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> Thanks, guys! I'm Okay with this fix too. Interesting that I've just investigated a similar situation in Transformer and was puzzled why the exception was not propagated. Thanks, Serguei On 2/2/18 10:26, Vladimir Kozlov wrote: > Thank you, Tobias and David. > > With this information I agree to use System.exit().
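[Editorial note] The point being made in this thread is that exceptions thrown from transform() are swallowed by the JVM (treated like returning null), so a test transformer has to report failure some other way. A minimal sketch, with hypothetical class and helper names that are not the actual test code:

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

// Hypothetical transformer illustrating why the tests in this thread
// call System.exit: an exception thrown from transform() is silently
// ignored by the JVM, so the error must be reported explicitly.
public class FailFastTransformer implements ClassFileTransformer {
    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        try {
            return doTransform(classfileBuffer);
        } catch (Throwable t) {
            // Throwing here would have the same effect as returning null,
            // so log and terminate the test VM immediately instead.
            System.err.println("Transformation failed: " + t);
            System.exit(1);
            return null; // unreachable
        }
    }

    // Placeholder for the real bytecode-rewriting logic (assumption);
    // here it simply returns the class file unchanged.
    static byte[] doTransform(byte[] classfile) {
        return classfile;
    }
}
```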
> Maybe just add your new log("Transformation failed!"); to webrev.00 > > Thanks, > Vladimir > > On 2/2/18 1:36 AM, Tobias Hartmann wrote: >> Hi David, >> >> On 02.02.2018 10:15, David Holmes wrote: >>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>> >>> 2.6. Should a test call the System.exit method? >>> >>> No. Depending on how you run the tests, you may get a security >>> exception from the harness. >>> >>> --- >>> >>> Plus if you call System.exit you have to run in othervm mode. >>> >>> So generally we avoid System.exit and just fail by throwing an >>> exception from "main" (or whatever the test entry point >>> is, depending on which framework it uses - like testng). There are >>> exceptions of course (pardon the pun) and a lot of >>> legacy tests use System.exit(97) or System.exit(95) to indicate >>> success or failure. >> >> Thanks for the pointer, that makes sense to me. >> >>>> The problem is that throwing an exception in >>>> ClassFileTransformer::transform() is silently ignored: >>>> "If the transformer throws an exception (which it doesn't catch), >>>> subsequent transformers will still be called and the >>>> load, redefine or retransform will still be attempted. Thus, >>>> throwing an exception has the same effect as returning >>>> null." [1] >>>> >>>> As a result, the test fails without any information. I've basically >>>> copied this code from >>>> runtime/RedefineTests/RedefineAnnotations.java [2] where we use >>>> System.exit as well. If there's a reason to avoid >>>> System.exit here, we can also just print an error and fail later >>>> with the generic exception: >>>> "java.lang.RuntimeException: 'parent-transform-check: >>>> this-has-been--transformed' missing from stdout/stderr" >>> >>> This does sound like a case where you need System.exit to force >>> immediate termination. >> >> Yes, I think so too.
>> >> Thanks, >> Tobias >> From mikhailo.seledtsov at oracle.com Fri Feb 2 23:00:46 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Fri, 02 Feb 2018 15:00:46 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <81c11685dd1b4edd9419e4897e96292a@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> Message-ID: <5A74ED9E.8060503@oracle.com> Hi Matthias, I can sponsor your change if you'd like. Once you addressed all the feedback from code review, please sync to the tip, build and test. Then export the changeset and send it to me (see: http://openjdk.java.net/sponsor/) I will import your change set, run all required testing and push the change. Thank you, Misha On 2/2/18, 12:39 AM, Baesken, Matthias wrote: > Thanks for the reviews . > > I added info about the fix for /proc/self/cgroup and /proc/self/mountinfo parsing to the bug : > > https://bugs.openjdk.java.net/browse/JDK-8196062 > > Guess I need a sponsor now to get it pushed ? > > > Best regards, Matthias > > > >> -----Original Message----- >> From: Bob Vandette [mailto:bob.vandette at oracle.com] >> Sent: Donnerstag, 1. Februar 2018 17:53 >> To: Lindenmaier, Goetz >> Cc: Baesken, Matthias; mikhailo >> ; hotspot-dev at openjdk.java.net; Langer, >> Christoph; Doerr, Martin >> ; Dmitry Samersoff> sw.com> >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Looks good to me. >> >> Bob. >> >>> On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz >> wrote: >>> Hi Matthias, >>> >>> thanks for enabling this test. Looks good. >>> I would appreciate if you would add a line >>> "Summary: also fix cgroup subsystem recognition" >>> to the bug description. 
Else this might be mistaken >>> for a mere testbug. >>> >>> Best regards, >>> Goetz. >>> >>> >>>> -----Original Message----- >>>> From: Baesken, Matthias >>>> Sent: Mittwoch, 31. Januar 2018 15:15 >>>> To: mikhailo; Bob Vandette >>>> >>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>> ; Langer, Christoph >>>> ; Doerr, Martin; >>>> Dmitry Samersoff >>>> Subject: RE: RFR : 8196062 : Enable docker container related tests for linux >>>> ppc64le >>>> >>>> Hello , I created a second webrev : >>>> >>>> >>>> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr >>>> ev/ >>>> >>>> - changed DockerTestUtils.buildJdkDockerImage in the suggested way >> (this >>>> should be extendable to linux s390x soon) >>>> >>>>>>>> Can you add "return;" in each test for subsystem not found >> messages >>>> - added returns in the tests for the subsystems in osContainer_linux.cpp >>>> >>>> - moved some checks at the beginning of subsystem_file_contents >>>> (suggested by Dmitry) >>>> >>>> >>>> Best regards, Matthias >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>> Sent: Donnerstag, 25. Januar 2018 18:43 >>>>> To: Baesken, Matthias; Bob Vandette >>>>> >>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>> ; Langer, Christoph >>>>> ; Doerr, Martin >>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for >> linux >>>>> ppc64le >>>>> >>>>> Hi Matthias, >>>>> >>>>> >>>>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>>>>> Perhaps, you could add code to >> DockerTestUtils.buildJdkDockerImage() >>>>>>> that does the following or similar: >>>>>>> 1. Construct a name for platform-specific docker file: >>>>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>>>> Platform.getOsArch(); >>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>> >>>>>> Hello, the doc says : >>>>>> >>>>>> * Build a docker image that contains JDK under test. 
>>>>>> * The jdk will be placed under the "/jdk/" folder inside the docker >> file >>>>> system. >>>>>> ..... >>>>>> param dockerfile name of the dockerfile residing in the test source >>>>>> ..... >>>>>> public static void buildJdkDockerImage(String imageName, String >>>>> dockerfile, String buildDirName) >>>>>> >>>>>> >>>>>> It does not say anything about doing hidden insertions of some >> platform >>>>> names into the dockerfile name. >>>>>> So should the jtreg API doc be changed ? >>>>>> If so who needs to approve this ? >>>>> Thank you for your concerns about the clarity of API and corresponding >>>>> documentation. This is a test library API, so no need to file CCC or CSR. >>>>> >>>>> This API can be changed via a regular RFR/webrev review process, as >> soon >>>>> as no one objects. I am a VM SQE engineer covering the docker and >> Linux >>>>> container area, I am OK with this change. >>>>> And I agree with you, we should update the javadoc header on this >>>> method >>>>> to reflect this implicit part of API contract. >>>>> >>>>> >>>>> Thank you, >>>>> Misha >>>>> >>>>> >>>>> >>>>>> (as far as I see so far only the tests at >>>>> hotspot/jtreg/runtime/containers/docker/ use this so it should not be >> a >>>> big >>>>> deal to change the interface?) >>>>>> Best regards, Matthias >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>>>>> To: Bob Vandette; Baesken, Matthias >>>>>>> >>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>> ; Langer, Christoph >>>>>>> ; Doerr, Martin >> >>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for >>>> linux >>>>>>> ppc64le >>>>>>> >>>>>>> Hi Matthias, >>>>>>> >>>>>>> Please see my comments about the test changes inline.
>>>>>>> >>>>>>> >>>>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>>>>> osContainer_linux.cpp: >>>>>>>> >>>>>>>> Can you add "return;" in each test for subsystem not found >> messages >>>>> and >>>>>>>> remove these 3 lines OR move your tests for NULL & messages >> inside. >>>>> The >>>>>>> compiler can >>>>>>>> probably optimize this but I'd prefer more compact code. >>>>>>>> >>>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct == >>>>> NULL) >>>>>>> { >>>>>>>> 342 return; >>>>>>>> 343 } >>>>>>>> >>>>>>>> >>>>>>>> The other changes in osContainer_linux.cpp look ok. >>>>>>>> >>>>>>>> I forwarded your test changes to Misha, who wrote these. >>>>>>>> >>>>>>>> Since it's likely that other platforms, such as aarch64, are going to run >>>>> into >>>>>>> the same problem, >>>>>>>> It would have been better to enable the tests based on the >> existence >>>> of >>>>> an >>>>>>> arch specific >>>>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch's in >>>>>>> VPProps.java. >>>>>>>> This approach would reduce the number of changes significantly and >>>>> allow >>>>>>> support to >>>>>>>> be added with 1 new file. >>>>>>>> >>>>>>>> You wouldn't need "String dockerFileName = >>>>>>> Common.getDockerFileName();" >>>>>>>> in every test. Just make DockerTestUtils automatically add arch. >>>>>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>>>>> >>>>>>> Perhaps, you could add code to >> DockerTestUtils.buildJdkDockerImage() >>>>>>> that does the following or similar: >>>>>>> 1. Construct a name for platform-specific docker file: >>>>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>>>> Platform.getOsArch(); >>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>> >>>>>>> 2. Check if platformSpecificDockerfile file exists in the test >>>>>>> source directory >>>>>>> File.exists(Paths.get(Utils.TEST_SRC, platformSpecificDockerFile) >>>>>>> If it does, then use it.
Otherwise continue using the >>>>>>> default/original dockerfile name. >>>>>>> >>>>>>> I think this will considerably simplify your change, as well as make it >>>>>>> easy to extend support to other platforms/configurations >>>>>>> in the future. Let us know what you think of this approach ? >>>>>>> >>>>>>> >>>>>>> Once your change gets (R)eviewed and approved, I can sponsor the >>>> push. >>>>>>> >>>>>>> Thank you, >>>>>>> Misha >>>>>>> >>>>>>> >>>>>>> >>>>>>>> Bob. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>>>>> wrote: >>>>>>>>> Hello, could you please review the following change : 8196062 : >>>> Enable >>>>>>> docker container related tests for linux ppc64le . >>>>>>>>> It adds docker container testing for linux ppc64 le (little endian) . >>>>>>>>> >>>>>>>>> A number of things had to be done : >>>>>>>>> ? Add a separate docker file >>>>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest- >>>>> ppc64le >>>>>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used >> for >>>>>>> x86_64 seems not to be available for ppc64le ) >>>>>>>>> ? Fix parsing /proc/self/mountinfo and /proc/self/cgroup >>>> in >>>>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle >> the >>>>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu (Docker >>>>>>> container) >>>>>>>>> ? Add a bit more logging >>>>>>>>> >>>>>>>>> >>>>>>>>> Webrev : >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>>>>> >>>>>>>>> >>>>>>>>> Bug : >>>>>>>>> >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>>>> >>>>>>>>> >>>>>>>>> After these adjustments I could run the >> runtime/containers/docker >>>>> - >>>>>>> jtreg tests successfully . 
>>>>>>>>> Best regards, Matthias From kim.barrett at oracle.com Sat Feb 3 00:35:59 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 2 Feb 2018 19:35:59 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release Message-ID: Please review this change to the OopStorage::release operations to eliminate their use of locks. Rather than directly performing the _allocate_list updates when the block containing the entries being released undergoes a state transition (full to not-full, not-full to empty), we instead record the occurrence of the transition. This recording is performed via a lock-free push of the block onto a list of such deferred updates, if the block is not already present in the list. Update requests are processed by later allocate and delete_empty_block operations. Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. Those are no longer required to avoid nested lock rank ordering errors. CR: https://bugs.openjdk.java.net/browse/JDK-8196083 Webrev: http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ Testing: Reproducer from JDK-8195979. Mach5 {hs,jdk}-tier{1,2,3} From tobias.hartmann at oracle.com Sat Feb 3 09:13:39 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Sat, 3 Feb 2018 10:13:39 +0100 Subject: [11] RFR(S): 8195695: NativeLibraryTest.java fails w/ 'Expected unloaded=1 but got=0' In-Reply-To: <61785a90-35a9-7e4f-ac2b-b6923a0f8581@oracle.com> References: <840057aa-b4f2-4028-9369-24004b64452a@oracle.com> <7ff0cadc-190e-c4d6-ffe0-aab4e4a17622@oracle.com> <7a6c1d9f-b080-9694-f4a0-01c37fcd2622@oracle.com> <4edb8b33-30d2-f3ba-6ba6-72e0eb178a46@oracle.com> <61785a90-35a9-7e4f-ac2b-b6923a0f8581@oracle.com> Message-ID: Thanks Vladimir! Best regards, Tobias On 02.02.2018 18:55, Vladimir Kozlov wrote: > Good. > > Thanks, > Vladimir > > On 2/2/18 12:11 AM, Tobias Hartmann wrote: >> Hi Vladimir, >> >>>>> On 2/02/2018 6:01 AM, Vladimir Kozlov wrote: >>>>>> Increase wait time will not always work.
>> >> I've decided to go with increasing the wait time because there are other flags that might slow down execution (for >> example, -XX:+DeoptimizeALot, -XX:+AggressiveOpts, ...) but excluding -Xcomp should be fine for now. >> >>>> Yes, skipping it for -Xcomp is also acceptable: >>>> >>>> @requires vm.compMode != "Xcomp" >> >> Okay, here's the new webrev: >> http://cr.openjdk.java.net/~thartmann/8195695/webrev.01/ >> >> Thanks, >> Tobias >> From erik.osterlund at oracle.com Sun Feb 4 12:38:06 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Sun, 4 Feb 2018 13:38:06 +0100 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: Hi Kim, Looks complicated but good. It would be great in the future if the deadlock detection system could be improved to not trigger such false positives that make us implement tricky lock-free code to dodge the obviously false positive deadlock assert. But I suppose that is out of scope for this. Thanks, /Erik > On 3 Feb 2018, at 01:35, Kim Barrett wrote: > > Please review this change to the OopStorage::release operations to > eliminate their use of locks. Rather than directly performing the > _allocate_list updates when the block containing the entries being > released undergoes a state transition (full to not-full, not-full to > empty), we instead record the occurrence of the transition. This > recording is performed via a lock-free push of the block onto a list > of such deferred updates, if the block is not already present in the > list. Update requests are processed by later allocate and > delete_empty_block operations. > > Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. > Those are no longer required to nested lock rank ordering errors. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8196083 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ > > Testing: > Reproducer from JDK-8195979. 
> Mach5 {hs,jdk}-tier{1,2,3} > From david.holmes at oracle.com Sun Feb 4 12:55:32 2018 From: david.holmes at oracle.com (David Holmes) Date: Sun, 4 Feb 2018 22:55:32 +1000 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: On 4/02/2018 10:38 PM, Erik Osterlund wrote: > Hi Kim, > > Looks complicated but good. > > It would be great in the future if the deadlock detection system could be improved to not trigger such false positives that make us implement tricky lock-free code to dodge the obviously false positive deadlock assert. But I suppose that is out of scope for this. It isn't a deadlock-detection system it is a deadlock prevention system. If you honour the lock rankings then you can't get deadlocks. If you don't honour the lock rankings then you may get deadlocks. There isn't sufficient information in the ranking alone to know for sure whether you will or not. If the deadlock possibility is so obviously not actually possible then that could be captured somehow for the specific locks involved. But I'm not aware of any tools we have that actually help us track what locks may concurrently be acquired - if we did then we would not need rank-based deadlock prevention checks. David ----- > Thanks, > /Erik > >> On 3 Feb 2018, at 01:35, Kim Barrett wrote: >> >> Please review this change to the OopStorage::release operations to >> eliminate their use of locks. Rather than directly performing the >> _allocate_list updates when the block containing the entries being >> released undergoes a state transition (full to not-full, not-full to >> empty), we instead record the occurrence of the transition. This >> recording is performed via a lock-free push of the block onto a list >> of such deferred updates, if the block is not already present in the >> list. Update requests are processed by later allocate and >> delete_empty_block operations. >> >> Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. 
>> Those are no longer required to avoid nested lock rank ordering errors. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8196083 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ >> >> Testing: >> Reproducer from JDK-8195979. >> Mach5 {hs,jdk}-tier{1,2,3} >> > From erik.osterlund at oracle.com Sun Feb 4 14:15:38 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Sun, 4 Feb 2018 15:15:38 +0100 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: Hi David, This is starting to go a bit off-topic for this thread. But here goes... On 2018-02-04 13:55, David Holmes wrote: > On 4/02/2018 10:38 PM, Erik Osterlund wrote: >> Hi Kim, >> >> Looks complicated but good. >> >> It would be great in the future if the deadlock detection system >> could be improved to not trigger such false positives that make us >> implement tricky lock-free code to dodge the obviously false positive >> deadlock assert. But I suppose that is out of scope for this. > > It isn't a deadlock-detection system, it is a deadlock prevention > system. If you honour the lock rankings then you can't get deadlocks. > If you don't honour the lock rankings then you may get deadlocks. > There isn't sufficient information in the ranking alone to know for > sure whether you will or not. Okay. I guess I should have called it a potential deadlock situation detection system. But it does not prevent deadlocks - that is up to us. And since the checking is dynamic, we are never guaranteed not to get deadlocks. > If the deadlock possibility is so obviously not actually possible then > that could be captured somehow for the specific locks involved. But > I'm not aware of any tools we have that actually help us track what > locks may concurrently be acquired - if we did then we would not need > rank-based deadlock prevention checks.
What I had in mind is something along the lines of dynamically constructing a global partial ordering of the locks as they are acquired, and to verify the global partial ordering is consistent and not violated. That would be like a more precise version of the manually constructed ordering we have today, and save us the trouble of doing this manual picking of a number X, giving it a silly name nobody understands like "leaf + 3", where leaf is not actually a leaf at all - that's for "special", oh wait no there are more special lock ranks than special. And then as testing is run, either manually shuffling the ranks around to reflect the actual partial ordering the code adheres to, or rewriting the code to be lock-free in fear of getting intermittent false positive asserts triggered in testing after moving ranks around (despite every failing test run actually strictly conforming to a global partial ordering that was just not reflected accurately by the numbers we picked). With such an automatic solution, we could also get a better picture of the interactions between the locks when adding a new lock by printing the actual partial ordering of the locks that was found at runtime, instead of trying to figure out which other relevant locks the "+ 3" in "leaf + 3" referred to. Of course, this is just an idea for the bright future where a magical system can do lock ordering consistency checks automagically without us resorting to complicated lock-free solutions for code that never violates global lock ordering consistency (which is what the system was designed to detect), because it is found easier to write a lock-free solution to the problem at hand than to figure out how best to shuffle the ranks around to capture the actual partial ordering the locks consistently conform to. But for now, since we do not have such a system, and its future existence is merely hypothetical, I am okay with the proposed lock-free solution instead. 
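[Editorial note] The idea Erik describes above, dynamically building the observed acquisition order between locks and verifying it stays a consistent partial order, can be prototyped along these lines. This is a toy sketch (Java for brevity; real HotSpot code would be C++ and would itself need to be lock-free), with illustrative names only.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Record "acquired-while-holding" edges between named locks and reject
// any edge that would make the directed graph cyclic, i.e. any acquisition
// that violates the partial order observed so far.
public class LockOrderGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    // Record that 'next' was acquired while 'held' was held.
    // Returns false if this acquisition creates a cycle (ordering violation).
    public boolean recordAcquire(String held, String next) {
        edges.computeIfAbsent(held, k -> new HashSet<>()).add(next);
        if (reachable(next, held)) {       // cycle: next -> ... -> held -> next
            edges.get(held).remove(next);  // roll back the offending edge
            return false;
        }
        return true;
    }

    // Depth-first search: is 'to' reachable from 'from' via recorded edges?
    private boolean reachable(String from, String to) {
        if (from.equals(to)) return true;
        Deque<String> stack = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        stack.push(from);
        while (!stack.isEmpty()) {
            String n = stack.pop();
            if (!seen.add(n)) continue;
            if (n.equals(to)) return true;
            stack.addAll(edges.getOrDefault(n, Set.of()));
        }
        return false;
    }
}
```

Printing the recorded edges at VM exit would give the actual observed ordering between locks, instead of the manually maintained numeric ranks.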
Thanks, /Erik > David > ----- > >> Thanks, >> /Erik >> >>> On 3 Feb 2018, at 01:35, Kim Barrett wrote: >>> >>> Please review this change to the OopStorage::release operations to >>> eliminate their use of locks. Rather than directly performing the >>> _allocate_list updates when the block containing the entries being >>> released undergoes a state transition (full to not-full, not-full to >>> empty), we instead record the occurrence of the transition. This >>> recording is performed via a lock-free push of the block onto a list >>> of such deferred updates, if the block is not already present in the >>> list. Update requests are processed by later allocate and >>> delete_empty_block operations. >>> >>> Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. >>> Those are no longer required to avoid nested lock rank ordering errors. >>> >>> CR: >>> https://bugs.openjdk.java.net/browse/JDK-8196083 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ >>> >>> Testing: >>> Reproducer from JDK-8195979. >>> Mach5 {hs,jdk}-tier{1,2,3} >>> >> From jini.george at oracle.com Mon Feb 5 03:55:30 2018 From: jini.george at oracle.com (Jini George) Date: Mon, 5 Feb 2018 09:25:30 +0530 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: <4dc3cedf0b1e40b7bceff6672cb3fa1a@NASANEXM01E.na.qualcomm.com> References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> <1b753efa-6abc-4947-e4f3-f6a29020c082@oracle.com> <4dc3cedf0b1e40b7bceff6672cb3fa1a@NASANEXM01E.na.qualcomm.com> Message-ID: Your changes look good, Daniel. I can sponsor the changes. Thank you, Jini.
On 2/2/2018 7:28 PM, stewartd.qdt wrote: > Hi Jini, > > Thank you for the review. I have made the requested changes and posted them at http://cr.openjdk.java.net/~dstewart/8196361/webrev.03/ > > Please have a look and review the changes. > > Thanks, > Daniel > > > -----Original Message----- > From: Jini George [mailto:jini.george at oracle.com] > Sent: Friday, February 2, 2018 1:19 AM > To: David Holmes ; stewartd.qdt > Cc: serviceability-dev ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java > > Hi Daniel, > > Your changes look good to me overall. Just some nits: > > * Please do add 2018 to the copyright year. > * Since the rest of the file follows 4 spaces for indentation, please keep the indentation to 4 spaces. > * Line 81: It would be great if the opening brace is at line 80, so that it would be consistent with the rest of the file. > * Line 65: The declaration could be a part of line 79. > * Line 51: Please add the 'oop address of a java.lang.Class' to the comment. > > Thanks! > Jini. > > > On 2/2/2018 7:31 AM, David Holmes wrote: >> On 2/02/2018 1:50 AM, stewartd.qdt wrote: >>> Please have? a look at the newest changes at: >>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ >>> >>> The only difference between this and the last changeset is the use of >>> "\\R" instead of whatever is the platform line.separator. >> >> Thanks for that. >> >> The overall changes seem reasonable but I'll defer to Jini for final >> approval. If Jini approves then consider this Reviewed. 
>> >> Thanks, >> David >> >>> Thank you, >>> Daniel >>> >>> -----Original Message----- >>> From: David Holmes [mailto:david.holmes at oracle.com] >>> Sent: Thursday, February 1, 2018 2:51 AM >>> To: stewartd.qdt ; Jini George >>> >>> Cc: serviceability-dev ; >>> hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8196361: JTReg failure in >>> serviceability/sa/ClhsdbInspect.java >>> >>> Hi Daniel, >>> >>> On 1/02/2018 2:45 AM, stewartd.qdt wrote: >>>> Hi Jini, David, >>>> >>>> Please have a look at the revised webrev: >>>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ >>>> >>>> In this webrev I have changed the approach to finding the addresses. >>>> This was necessary because in the case of matching for the locks the >>>> addresses are before what is matched and in the case of Method the >>>> address is after it. The existing code only looked for the >>>> addresses after the matched string. I've also tried to align what >>>> tokens are being looked for in the lock case. I've taken an >>>> approach of breaking the jstack output into lines and then searching >>>> each line for it containing what we want. Once found, the line is >>>> broken into pieces to find the actual address we want. >>>> >>>> Please let me know if this is an unacceptable approach or any >>>> changes you would like to see. >>> >>> I'm not clear on the overall approach as I'm unclear exactly how >>> inspect operates or exactly what the test is trying to verify. One >>> comment on breaking things into lines though: >>> >>> 73 String newline = >>> System.getProperty("line.separator"); >>> 74 String[] lines = jstackOutput.split(newline); >>> >>> As split() takes a regex, I suggest using \R to cover all potential >>> line-breaks, rather than the platform-specific line-separator.
We've >>> been recently bitten by the distinction between output that comes >>> from reading a process's stdout/stderr (and for which a newline \n is >>> translated into the platform line-separator), and output that comes >>> across a socket connection (for which \n is not translated). This >>> could result in failing to parse things correctly on Windows. It's >>> safer/simpler to expect any kind of line-separator. >>> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Daniel >>>> >>>> >>>> -----Original Message----- >>>> From: Jini George [mailto:jini.george at oracle.com] >>>> Sent: Tuesday, January 30, 2018 6:58 AM >>>> To: David Holmes ; stewartd.qdt >>>> >>>> Cc: serviceability-dev ; >>>> hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR: 8196361: JTReg failure in >>>> serviceability/sa/ClhsdbInspect.java >>>> >>>> Hi Daniel, David, >>>> >>>> Thanks, Daniel, for bringing this up. The intent of the test is to >>>> get the oop address corresponding to a >>>> java.lang.ref.ReferenceQueue$Lock, >>>> which can typically be obtained from the stack traces of the >>>> Common-Cleaner or the Finalizer threads. The stack traces which I >>>> had been noticing were typically of the form: >>>> >>>> >>>> "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e >>>> in >>>> Object.wait() [0x00007f09a18d2000] >>>> java.lang.Thread.State: TIMED_WAITING (on object monitor) >>>> JavaThread state: _thread_blocked >>>> - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, >>>> Method*=0x00007f09acc43d60 (Interpreted frame) >>>> - waiting on <0x000000072e61f6e0> (a >>>> java.lang.ref.ReferenceQueue$Lock) >>>> - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, >>>> pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted >>>> frame) >>>> - waiting to re-lock in wait() <0x000000072e61f6e0> (a >>>> java.lang.ref.ReferenceQueue$Lock) >>>> ...
>>>> >>>> I chose 'waiting to re-lock in wait' since that was what I had been >>>> observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. >>>> But I see how with a timing difference, one could get 'waiting to lock' >>>> as in your case. So, a good way to fix might be to check for the >>>> line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting the >>>> oop address from that line (should be the address appearing >>>> immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and >>>> passing that to the 'inspect' command. >>>> >>>> Thanks much, >>>> Jini. >>>> >>>> On 1/30/2018 3:35 AM, David Holmes wrote: >>>>> Hi Daniel, >>>>> >>>>> Serviceability issues should go to >>>>> serviceability-dev at openjdk.java.net >>>>> - now cc'd. >>>>> >>>>> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>>>>> Please review this webrev [1] which attempts to fix a test error >>>>>> in serviceability/sa/ClhsdbInspect.java when it is run under an >>>>>> AArch64 system (not necessarily exclusive to this system, but it >>>>>> was the system under test). The bug report [2] provides further details. >>>>>> Essentially the line "waiting to re-lock in wait" never actually >>>>>> occurs. Instead I have the line "waiting to lock" which occurs for >>>>>> the referenced item of /java/lang/ref/ReferenceQueue$Lock. >>>>>> Unfortunately the test is written such that only the first >>>>>> "waiting to lock" >>>>>> occurrence is seen (for java/lang/Class), which is already >>>>>> accounted for in the test. >>>>> >>>>> I can't tell exactly what the test expects, or why, but it would be >>>>> extremely hard to arrange for "waiting to re-lock in wait" to be >>>>> seen for the ReferenceQueue lock! That requires acquiring the lock >>>>> yourself, issuing a notify() to unblock the wait(), and then >>>>> issuing the jstack command while still holding the lock! >>>>> >>>>> David >>>>> ----- >>>>> >>>>>> I'm not overly happy with this approach as it actually removes a >>>>>> test line. 
However, the test line does not actually appear in the >>>>>> output (at least on my system) and the test is not currently >>>>>> written to look for the second occurrence of the line "waiting to lock". >>>>>> Perhaps the original author could chime in and provide further >>>>>> guidance as to the intention of the test. >>>>>> >>>>>> I am happy to modify the patch as necessary. >>>>>> >>>>>> Regards, >>>>>> Daniel Stewart >>>>>> >>>>>> >>>>>> [1] -? http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>>>>> From manasthakur17 at gmail.com Mon Feb 5 05:17:02 2018 From: manasthakur17 at gmail.com (Manas Thakur) Date: Mon, 5 Feb 2018 10:47:02 +0530 Subject: Way to count run-time numbers Message-ID: Dear all, Is there a way to count the run-time numbers of the following: 1. Number of locks acquired 2. Number of null-checks inserted I need to compare some statistics and was wondering whether there is a builtin option or some well-known profiling tool that does this. Warm regards, Manas From david.holmes at oracle.com Mon Feb 5 06:07:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 5 Feb 2018 16:07:36 +1000 Subject: Way to count run-time numbers In-Reply-To: References: Message-ID: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> Hi Manas, On 5/02/2018 3:17 PM, Manas Thakur wrote: > Dear all, > > Is there a way to count the run-time numbers of the following: No but you'd need to clarify what you mean anyway > 1. Number of locks acquired Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do you mean number of distinct lock instances or the number of times a "lock" has succeeded? > 2. Number of null-checks inserted Inserted by what? The Java source compiler may add some explicit null checks, but most are implicit in the semantics of the bytecodes. Then the JIT does what it can to elide unnecessary null-checks. 
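[Editor's note: for question (1), a source-level stand-in for the counter being asked about might look like the sketch below. A real count of monitorenter executions would need instrumentation in the interpreter and in JIT-generated code (or bytecode rewriting); this only illustrates what is being counted, and all names are hypothetical.]

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonitorEnterCountSketch {
    private static final Object LOCK = new Object();
    // Stand-in for the per-VM counter being discussed; in real code this
    // bump would have to be emitted by the interpreter/JIT, not the source.
    static final AtomicLong ENTERS = new AtomicLong();

    static void criticalSection() {
        ENTERS.incrementAndGet(); // manual count of the monitorenter below
        synchronized (LOCK) {     // compiles to monitorenter/monitorexit
            // work
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            criticalSection();
        }
        System.out.println(ENTERS.get()); // prints 1000
    }
}
```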
Cheers, David > I need to compare some statistics and was wondering whether there is a > builtin option or some well-known profiling tool that does this. > > Warm regards, > Manas > From manasthakur17 at gmail.com Mon Feb 5 06:18:23 2018 From: manasthakur17 at gmail.com (Manas Thakur) Date: Mon, 5 Feb 2018 11:48:23 +0530 Subject: Way to count run-time numbers In-Reply-To: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> Message-ID: <243A166E-E5D9-4F6B-A139-2F9C3A516347@gmail.com> Hi David, Sorry for the confusion. Let me try to clarify: >> 1. Number of locks acquired > > Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do you mean number of distinct lock instances or the number of times a "lock" has succeeded? I mean the Java level monitors. Essentially, the number of times a 'monitorenter' instruction is executed by all threads in the program during execution. Anything is fine: number of times it succeeds or including spinning etc. >> 2. Number of null-checks inserted > > Inserted by what? The Java source compiler may add some explicit null checks, but most are implicit in the semantics of the bytecodes. Then the JIT does what it can to elide unnecessary null-checks. Sorry; 'inserted' should be replaced with 'executed'. I could find the place (in the OpenJDK source code) where the JIT compiler (C2) removes unnecessary null-checks (explicit as well as implicit in the Bytecode). I would like to count the number of times the remaining ones are executed during execution.
Regards, Manas From david.holmes at oracle.com Mon Feb 5 06:27:58 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 5 Feb 2018 16:27:58 +1000 Subject: Way to count run-time numbers In-Reply-To: <243A166E-E5D9-4F6B-A139-2F9C3A516347@gmail.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> <243A166E-E5D9-4F6B-A139-2F9C3A516347@gmail.com> Message-ID: <73ba1af8-828d-7b24-260e-873db159b1f9@oracle.com> On 5/02/2018 4:18 PM, Manas Thakur wrote: > Hi David, > > Sorry for the confusion. Let me try to clarify: > >>> 1. Number of locks acquired >> >> Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do you mean number of distinct lock instances or the number of times a "lock" has succeeded? > > I mean the Java level monitors. Essentially, the number of times a 'monitorenter' instruction is executed by all threads in the program during execution. Anything is fine: number of times it succeeds or including spinning etc. No, there's nothing that would give that information. >>> 2. Number of null-checks inserted >> >> Inserted by what? The Java source compiler may add some explicit null checks, but most are implicit in the semantics of the bytecodes. Then the JIT does what it can to elide unnecessary null-checks. > > Sorry; 'inserted' should be replaced with 'executed'. I could find the place (in the OpenJDK source code) where the JIT compiler (C2) removes unnecessary null-checks (explicit as well as implicit in the Bytecode). I would like to count the number of times the remaining ones are executed during execution. Again no. Most compiled "null checks" are not actual tests "if (ptr == NULL)" but rather the code assumes it is not null and then if we hit a SEGV doing the access we determine that it was actually null and so throw NullPointerException.
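[Editor's note: the mechanism David describes can be pictured at the source level as follows — the fast path contains no compare at all, and the null case is recovered on the exception path. HotSpot implements this below the bytecode level with a SEGV handler; this Java sketch is only an analogy, and the names are made up.]

```java
public class ImplicitCheckSketch {
    static int lengthOrMinusOne(String s) {
        try {
            return s.length();   // no explicit "s == null" compare here
        } catch (NullPointerException e) {
            return -1;           // slow path, analogous to SEGV recovery
        }
    }

    public static void main(String[] args) {
        System.out.println(lengthOrMinusOne("abc")); // prints 3
        System.out.println(lengthOrMinusOne(null));  // prints -1
    }
}
```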
David > Regards, > Manas > From manasthakur17 at gmail.com Mon Feb 5 06:39:34 2018 From: manasthakur17 at gmail.com (Manas Thakur) Date: Mon, 5 Feb 2018 12:09:34 +0530 Subject: Way to count run-time numbers In-Reply-To: <73ba1af8-828d-7b24-260e-873db159b1f9@oracle.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> <243A166E-E5D9-4F6B-A139-2F9C3A516347@gmail.com> <73ba1af8-828d-7b24-260e-873db159b1f9@oracle.com> Message-ID: <51F7A5CA-7EB1-47F7-B062-24EFCC810498@gmail.com> Hi David, >>>> 1. Number of locks acquired >>> >>> Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do you mean number of distinct lock instances or the number of times a "lock" has succeeded? >> I mean the Java level monitors. Essentially, the number of times a 'monitorenter' instruction is executed by all threads in the program during execution. Anything is fine: number of times it succeeds or including spinning etc. > > No, there's nothing that would give that information. Oh I see. I don't know how difficult that would be, but does it sound doable to insert an explicit counter while generating the code, maybe just for a specific JIT compiler, say in the Ideal IR of C2? Are you aware of any profiler that could help me? >>>> 2. Number of null-checks inserted >>> >>> Inserted by what? The Java source compiler may add some explicit null checks, but most are implicit in the semantics of the bytecodes. Then the JIT does what it can to elide unnecessary null-checks. >> Sorry; 'inserted' should be replaced with 'executed'. I could find the place (in the OpenJDK source code) where the JIT compiler (C2) removes unnecessary null-checks (explicit as well as implicit in the Bytecode). I would like to count the number of times the remaining ones are executed during execution. > > Again no.
Most compiled "null checks" are not actual tests "if (ptr == NULL)" but rather the code assumes it is not null and then if we hit a SEGV doing the access we determine that it was actually null and so throw NullPointerException. Okay; I recall this is how the JVM treats implicit null-checks (optimistically assuming and then reconstructing the stack-trace when there is a SEGV). Anyway, thanks for the reply. Regards, Manas From tobias.hartmann at oracle.com Mon Feb 5 07:06:35 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 5 Feb 2018 08:06:35 +0100 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> Message-ID: Thanks Vladimir and Serguei! For the record, I'm going to push this version: http://cr.openjdk.java.net/~thartmann/8195731/webrev.02/ Best regards, Tobias On 02.02.2018 21:56, serguei.spitsyn at oracle.com wrote: > Thanks, guys! > I'm Okay with this fix too. > Interesting that I've just investigated similar situation in Transformer > and was puzzled why the exception was not propagated. > > Thanks, > Serguei > > > On 2/2/18 10:26, Vladimir Kozlov wrote: >> Thank you, Tobias and David. >> >> With this information I agree to use System.exit(). >> May be just add your new log("Transformation failed!"); to webrev.00 >> >> Thanks, >> Vladimir >> >> On 2/2/18 1:36 AM, Tobias Hartmann wrote: >>> Hi David, >>> >>> On 02.02.2018 10:15, David Holmes wrote: >>>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>>> >>>> 2.6. Should a test call the System.exit method? 
>>>> >>>>> No. Depending on how you run the tests, you may get a security exception from the harness. >>>>> >>>>> --- >>>>> >>>>> Plus if you call System.exit you have to run in othervm mode. >>>>> >>>>> So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry point >>>>> is, depending on which framework it uses - like testng). There are exceptions of course (pardon the pun) and a lot of >>>>> legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. >>>> >>>> Thanks for the pointer, that makes sense to me. >>>> >>>>>> The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: >>>>>> "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the >>>>>> load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning >>>>>> null." [1] >>>>>> >>>>>> As a result, the test fails without any information. I've basically copied this code from >>>>>> runtime/RedefineTests/RedefineAnnotations.java [2] where we use System.exit as well. If there's a reason to avoid >>>>>> System.exit here, we can also just print an error and fail later with the generic exception: >>>>>> "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" >>>>> >>>>> This does sound like a case where you need System.exit to force immediate termination. >>>> >>>> Yes, I think so too.
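[Editor's note: the idiom being agreed on — catching everything inside transform() and terminating explicitly, since an exception thrown there is swallowed by the JVM — might look roughly like this. It is a sketch of the pattern, not the actual test's transformer; the class and method names are invented.]

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class FailFastTransformer implements ClassFileTransformer {
    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        try {
            return doTransform(className, classfileBuffer);
        } catch (Throwable t) {
            // An exception propagated from here would be silently ignored
            // (same effect as returning null), so log and terminate the
            // test VM immediately with a distinct exit code.
            System.err.println("Transformation failed: " + t);
            System.exit(1);
            return null; // unreachable
        }
    }

    static byte[] doTransform(String className, byte[] buffer) {
        // Real bytecode rewriting would go here; this sketch passes
        // the class file bytes through unchanged.
        return buffer;
    }
}
```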
>>> >>> Thanks, >>> Tobias >>> > From david.holmes at oracle.com Mon Feb 5 07:18:44 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 5 Feb 2018 17:18:44 +1000 Subject: Way to count run-time numbers In-Reply-To: <51F7A5CA-7EB1-47F7-B062-24EFCC810498@gmail.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> <243A166E-E5D9-4F6B-A139-2F9C3A516347@gmail.com> <73ba1af8-828d-7b24-260e-873db159b1f9@oracle.com> <51F7A5CA-7EB1-47F7-B062-24EFCC810498@gmail.com> Message-ID: <36a8b49b-46cd-f594-250c-ffedd438733b@oracle.com> On 5/02/2018 4:39 PM, Manas Thakur wrote: > Hi David, > >>>>> 1. Number of locks acquired >>>> >>>> Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do you mean number of distinct lock instances or the number of times a "lock" has succeeded? >>> I mean the Java level monitors. Essentially, the number of times a 'monitorenter' instruction is executed by all threads in the program during execution. Anything is fine: number of times it succeeds or including spinning etc. >> >> No, there's nothing that would give that information. > > > Oh I see. I don't know how difficult that would be, but does it sound doable to insert an explicit counter while generating the code, maybe just for a specific JIT compiler, say in the Ideal IR of C2? Are you aware of any profiler that could help me? The JIT folk would have to answer that in detail, but you'd need instrumentation in the interpreter as well. I don't know if JFR may be useful here. Cheers, David ----- > >>>>> 2. Number of null-checks inserted >>>> >>>> Inserted by what? The Java source compiler may add some explicit null checks, but most are implicit in the semantics of the bytecodes. Then the JIT does what it can to elide unnecessary null-checks. >>> Sorry; 'inserted' should be replaced with 'executed'. I could find the place (in the OpenJDK source code) where the JIT compiler (C2) removes unnecessary null-checks (explicit as well as implicit in the Bytecode).
I would like to count the number of times the remaining ones are executed during execution. >> >> Again no. Most compiled "null checks" are not actual tests "if (ptr == NULL)" but rather the code assumes it is not null and then if we hit a SEGV doing the access we determine that it was actually null and so throw NullPointerException. > > > Okay; I recall this is how the JVM treats implicit null-checks (optimistically assuming and then reconstructing the stack-trace when there is a SEGV). > > Anyway, thanks for the reply. > > Regards, > Manas > From cnewland at chrisnewland.com Mon Feb 5 08:40:04 2018 From: cnewland at chrisnewland.com (Chris Newland) Date: Mon, 5 Feb 2018 08:40:04 -0000 Subject: Way to count run-time numbers In-Reply-To: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> Message-ID: Hi Manas, Would the information output in LogCompilation regarding null check uncommon traps be of any use to you? E.g. when the JIT has speculated not null but inserted a deoptimisation trap in case it is: Cheers, Chris On Mon, February 5, 2018 06:07, David Holmes wrote: > Hi Manas, > > > On 5/02/2018 3:17 PM, Manas Thakur wrote: > >> Dear all, >> >> >> Is there a way to count the run-time numbers of the following: >> > > No but you'd need to clarify what you mean anyway > > >> 1. Number of locks acquired >> > > Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do > you mean number of distinct lock instances or the number of times a "lock" > has succeeded? > >> 2. Number of null-checks inserted >> > > Inserted by what? The Java source compiler may add some explicit null > checks, but most are implicit in the semantics of the bytecodes. Then the > JIT does what it can to elide unnecessary null-checks. > > > Cheers, > David > > >> I need to compare some statistics and was wondering whether there is a >> builtin option or some well-known profiling tool that does this. 
>> >> Warm regards, >> Manas >> >> > From manasthakur17 at gmail.com Mon Feb 5 11:31:20 2018 From: manasthakur17 at gmail.com (Manas Thakur) Date: Mon, 5 Feb 2018 17:01:20 +0530 Subject: Way to count run-time numbers In-Reply-To: References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> Message-ID: <9B45D201-0FE7-4D7F-B2A7-C5B56F3B33D9@gmail.com> Hi Chris, Thanks for the reply. This will be useful, but not sufficient: It would tell me the number of cases the JVM had guessed wrong and deoptimized because of the assumption failure. However, I was also interested in the number of times the JVM would have to execute the compare instruction, which it seems couldn't be counted due to the checks actually not being implicit. Regards, Manas > On 05-Feb-2018, at 2:10 PM, Chris Newland wrote: > > Hi Manas, > > Would the information output in LogCompilation regarding null check > uncommon traps be of any use to you? > > E.g. when the JIT has speculated not null but inserted a deoptimisation > trap in case it is: > > debug_id='0'/> > > Cheers, > > Chris > > On Mon, February 5, 2018 06:07, David Holmes wrote: >> Hi Manas, >> >> >> On 5/02/2018 3:17 PM, Manas Thakur wrote: >> >>> Dear all, >>> >>> >>> Is there a way to count the run-time numbers of the following: >>> >> >> No but you'd need to clarify what you mean anyway >> >> >>> 1. Number of locks acquired >>> >> >> Do you mean Java level monitors or internal VM Mutexes (or Monitors). Do >> you mean number of distinct lock instances or the number of times a "lock" >> has succeeded? >> >>> 2. Number of null-checks inserted >>> >> >> Inserted by what? The Java source compiler may add some explicit null >> checks, but most are implicit in the semantics of the bytecodes. Then the >> JIT does what it can to elide unnecessary null-checks. >> >> >> Cheers, >> David >> >> >>> I need to compare some statistics and was wondering whether there is a >>> builtin option or some well-known profiling tool that does this.
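[Editor's note: scraping -XX:+LogCompilation output for null-check uncommon traps, along the lines Chris suggests, might look like the sketch below. The element shape (an uncommon_trap element with reason='null_check') follows the usual LogCompilation format, but treat the exact attributes as an assumption; and as Manas notes, this counts deoptimizations taken, not checks executed.]

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TrapCountSketch {
    // Self-closing uncommon_trap elements whose reason is null_check.
    private static final Pattern NULL_CHECK_TRAP =
        Pattern.compile("<uncommon_trap[^>]*reason='null_check'[^>]*/>");

    static long countNullCheckTraps(String logCompilationXml) {
        Matcher m = NULL_CHECK_TRAP.matcher(logCompilationXml);
        long n = 0;
        while (m.find()) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String sample =
            "<uncommon_trap bci='11' reason='null_check' action='maybe_recompile' debug_id='0'/>\n" +
            "<uncommon_trap bci='4' reason='unstable_if' action='reinterpret' debug_id='0'/>\n";
        System.out.println(countNullCheckTraps(sample)); // prints 1
    }
}
```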
>>> >>> Warm regards, >>> Manas >>> >>> >> > > From manasthakur17 at gmail.com Mon Feb 5 11:39:48 2018 From: manasthakur17 at gmail.com (Manas Thakur) Date: Mon, 5 Feb 2018 17:09:48 +0530 Subject: Way to count run-time numbers In-Reply-To: <9B45D201-0FE7-4D7F-B2A7-C5B56F3B33D9@gmail.com> References: <93cf0a4b-30db-b516-a88a-49ab81a3f214@oracle.com> <9B45D201-0FE7-4D7F-B2A7-C5B56F3B33D9@gmail.com> Message-ID: <0AB70580-89CF-4B41-83CB-6ADA6B00FCD5@gmail.com> In the last sentence, I meant 'due to the checks actually being implicit'. Will you be able to suggest something regarding counting the run-time counts of the explicit checks (i.e., those not removed by the JIT compilers) though? Regards, Manas > On 05-Feb-2018, at 5:01 PM, Manas Thakur wrote: > > Hi Chris, > > Thanks for the reply. > > This will be useful, but not sufficient: It would tell me the number of cases the JVM had guessed wrong and deoptimized because of the assumption failure. However, I was also interested in the number of times the JVM would have to execute the compare instruction, which it seems couldn't be counted due to the checks actually not being implicit. > > Regards, > Manas > >> On 05-Feb-2018, at 2:10 PM, Chris Newland wrote: >> >> Hi Manas, >> >> Would the information output in LogCompilation regarding null check >> uncommon traps be of any use to you? >> >> E.g. when the JIT has speculated not null but inserted a deoptimisation >> trap in case it is: >> >> > debug_id='0'/> >> >> Cheers, >> >> Chris >> >> On Mon, February 5, 2018 06:07, David Holmes wrote: >>> Hi Manas, >>> >>> >>> On 5/02/2018 3:17 PM, Manas Thakur wrote: >>> >>>> Dear all, >>>> >>>> >>>> Is there a way to count the run-time numbers of the following: >>>> >>> >>> No but you'd need to clarify what you mean anyway >>> >>> >>>> 1. Number of locks acquired >>>> >>> >>> Do you mean Java level monitors or internal VM Mutexes (or Monitors).
Do >>> you mean number of distinct lock instances or the number of times a "lock" >>> has succeeded? >>> >>>> 2. Number of null-checks inserted >>>> >>> >>> Inserted by what? The Java source compiler may add some explicit null >>> checks, but most are implicit in the semantics of the bytecodes. Then the >>> JIT does what it can to elide unnecessary null-checks. >>> >>> >>> Cheers, >>> David >>> >>> >>>> I need to compare some statistics and was wondering whether there is a >>>> builtin option or some well-known profiling tool that does this. >>>> >>>> Warm regards, >>>> Manas >>>> >>>> >>> >> >> > From stewartd.qdt at qualcommdatacenter.com Mon Feb 5 13:59:57 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Mon, 5 Feb 2018 13:59:57 +0000 Subject: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java In-Reply-To: References: <6e3441f3c28f4f7387d2174f52283fa7@NASANEXM01E.na.qualcomm.com> <44a71e46-3da2-53c6-7e6b-82658183ae8c@oracle.com> <3318aed7-f04a-3b92-2660-39cdf13c2d24@oracle.com> <24c556061ffb4fde9e87a8806c04c8f7@NASANEXM01E.na.qualcomm.com> <0eca7345-fb57-43e4-6169-c4b3531250e4@oracle.com> <4f46f527c17f4d988e4b46e14f93cd4d@NASANEXM01E.na.qualcomm.com> <1b753efa-6abc-4947-e4f3-f6a29020c082@oracle.com> <4dc3cedf0b1e40b7bceff6672cb3fa1a@NASANEXM01E.na.qualcomm.com> Message-ID: <16636f3867fb42f6ae26b1b8c2602591@NASANEXM01E.na.qualcomm.com> Thanks Jini. Regards, Daniel -----Original Message----- From: Jini George [mailto:jini.george at oracle.com] Sent: Sunday, February 4, 2018 10:56 PM To: stewartd.qdt ; David Holmes Cc: serviceability-dev ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8196361: JTReg failure in serviceability/sa/ClhsdbInspect.java Your changes look good, Daniel. I can sponsor the changes. Thank you, Jini. On 2/2/2018 7:28 PM, stewartd.qdt wrote: > Hi Jini, > > Thank you for the review. 
I have made the requested changes and posted > them at http://cr.openjdk.java.net/~dstewart/8196361/webrev.03/ > > Please have a look and review the changes. > > Thanks, > Daniel > > > -----Original Message----- > From: Jini George [mailto:jini.george at oracle.com] > Sent: Friday, February 2, 2018 1:19 AM > To: David Holmes ; stewartd.qdt > > Cc: serviceability-dev ; > hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8196361: JTReg failure in > serviceability/sa/ClhsdbInspect.java > > Hi Daniel, > > Your changes look good to me overall. Just some nits: > > * Please do add 2018 to the copyright year. > * Since the rest of the file follows 4 spaces for indentation, please keep the indentation to 4 spaces. > * Line 81: It would be great if the opening brace is at line 80, so that it would be consistent with the rest of the file. > * Line 65: The declaration could be a part of line 79. > * Line 51: Please add the 'oop address of a java.lang.Class' to the comment. > > Thanks! > Jini. > > > On 2/2/2018 7:31 AM, David Holmes wrote: >> On 2/02/2018 1:50 AM, stewartd.qdt wrote: >>> Please have a look at the newest changes at: >>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.02/ >>> >>> The only difference between this and the last changeset is the use >>> of "\\R" instead of whatever is the platform line.separator. >> >> Thanks for that. >> >> The overall changes seem reasonable but I'll defer to Jini for final >> approval. If Jini approves then consider this Reviewed.
>> >> Thanks, >> David >> >>> Thank you, >>> Daniel >>> >>> -----Original Message----- >>> From: David Holmes [mailto:david.holmes at oracle.com] >>> Sent: Thursday, February 1, 2018 2:51 AM >>> To: stewartd.qdt ; Jini George >>> >>> Cc: serviceability-dev ; >>> hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8196361: JTReg failure in >>> serviceability/sa/ClhsdbInspect.java >>> >>> Hi Daniel, >>> >>> On 1/02/2018 2:45 AM, stewartd.qdt wrote: >>>> Hi Jini, David, >>>> >>>> Please have a look at the revised webrev: >>>> http://cr.openjdk.java.net/~dstewart/8196361/webrev.01/ >>>> >>>> In this webrev I have changed the approach to finding the addresses. >>>> This was necessary because in the case of matching for the locks >>>> the addresses are before what is matched and in the case of Method >>>> the address is after it. The existing code only looked for the >>>> addresses after the matched string. I've also tried to align what >>>> tokens are being looked for in the lock case. I've taken an >>>> approach of breaking the jstack output into lines and then >>>> searching each line for it containing what we want. Once found, the >>>> line is broken into pieces to find the actual address we want. >>>> >>>> Please let me know if this is an unacceptable approach or any >>>> changes you would like to see. >>> >>> I'm not clear on the overall approach as I'm unclear exactly how >>> inspect operates or exactly what the test is trying to verify. One >>> comment on breaking things into lines though: >>> >>> 73 String newline = >>> System.getProperty("line.separator"); >>> 74 String[] lines = jstackOutput.split(newline); >>> >>> As split() takes a regex, I suggest using \R to cover all potential >>> line-breaks, rather than the platform specific line-separator.
We've >>> been recently bitten by the distinction between output that comes >>> from reading a process's stdout/stderr (and for which a newline \n >>> is translated into the platform line-separator), and output that >>> comes across a socket connection (for which \n is not translated). >>> This could result in failing to parse things correctly on Windows. >>> It's safer/simpler to expect any kind of line-separator. >>> >>> Thanks, >>> David >>> >>>> Thanks, >>>> Daniel >>>> >>>> >>>> -----Original Message----- >>>> From: Jini George [mailto:jini.george at oracle.com] >>>> Sent: Tuesday, January 30, 2018 6:58 AM >>>> To: David Holmes ; stewartd.qdt >>>> >>>> Cc: serviceability-dev ; >>>> hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR: 8196361: JTReg failure in >>>> serviceability/sa/ClhsdbInspect.java >>>> >>>> Hi Daniel, David, >>>> >>>> Thanks, Daniel, for bringing this up. The intent of the test is to >>>> get the oop address corresponding to a >>>> java.lang.ref.ReferenceQueue$Lock, >>>> which can typically be obtained from the stack traces of the >>>> Common-Cleaner or the Finalizer threads. The stack traces which I >>>> had been noticing were typically of the form: >>>> >>>> >>>> "Common-Cleaner" #8 daemon prio=8 tid=0x00007f09c82ac000 nid=0xf6e >>>> in >>>> Object.wait() [0x00007f09a18d2000] >>>> java.lang.Thread.State: TIMED_WAITING (on object monitor) >>>> JavaThread state: _thread_blocked >>>> - java.lang.Object.wait(long) @bci=0, pc=0x00007f09b7d6480b, >>>> Method*=0x00007f09acc43d60 (Interpreted frame) >>>> - waiting on <0x000000072e61f6e0> (a >>>> java.lang.ref.ReferenceQueue$Lock) >>>> - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=151, >>>> pc=0x00007f09b7d55243, Method*=0x00007f09acdab9b0 (Interpreted >>>> frame) >>>> - waiting to re-lock in wait() <0x000000072e61f6e0> (a >>>> java.lang.ref.ReferenceQueue$Lock) >>>> ...
>>>> >>>> I chose 'waiting to re-lock in wait' since that was what I had been >>>> observing next to the oop address of java.lang.ref.ReferenceQueue$Lock. >>>> But I see how with a timing difference, one could get 'waiting to lock' >>>> as in your case. So, a good way to fix might be to check for the >>>> line containing '(a java.lang.ref.ReferenceQueue$Lock)', getting >>>> the oop address from that line (should be the address appearing >>>> immediately before '(a java.lang.ref.ReferenceQueue$Lock)') and >>>> passing that to the 'inspect' command. >>>> >>>> Thanks much, >>>> Jini. >>>> >>>> On 1/30/2018 3:35 AM, David Holmes wrote: >>>>> Hi Daniel, >>>>> >>>>> Serviceability issues should go to >>>>> serviceability-dev at openjdk.java.net >>>>> - now cc'd. >>>>> >>>>> On 30/01/2018 7:53 AM, stewartd.qdt wrote: >>>>>> Please review this webrev [1] which attempts to fix a test error >>>>>> in serviceability/sa/ClhsdbInspect.java when it is run under an >>>>>> AArch64 system (not necessarily exclusive to this system, but it >>>>>> was the system under test). The bug report [2] provides further details. >>>>>> Essentially the line "waiting to re-lock in wait" never actually >>>>>> occurs. Instead I have the line "waiting to lock" which occurs >>>>>> for the referenced item of /java/lang/ref/ReferenceQueue$Lock. >>>>>> Unfortunately the test is written such that only the first >>>>>> "waiting to lock" >>>>>> occurrence is seen (for java/lang/Class), which is already >>>>>> accounted for in the test. >>>>> >>>>> I can't tell exactly what the test expects, or why, but it would >>>>> be extremely hard to arrange for "waiting to re-lock in wait" to >>>>> be seen for the ReferenceQueue lock! That requires acquiring the >>>>> lock yourself, issuing a notify() to unblock the wait(), and then >>>>> issuing the jstack command while still holding the lock! >>>>> >>>>> David >>>>> ----- >>>>> >>>>>> I'm not overly happy with this approach as it actually removes a >>>>>> test line. 
However, the test line does not actually appear in the >>>>>> output (at least on my system) and the test is not currently >>>>>> written to look for the second occurrence of the line "waiting to lock". >>>>>> Perhaps the original author could chime in and provide further >>>>>> guidance as to the intention of the test. >>>>>> >>>>>> I am happy to modify the patch as necessary. >>>>>> >>>>>> Regards, >>>>>> Daniel Stewart >>>>>> >>>>>> >>>>>> [1] -? http://cr.openjdk.java.net/~dstewart/8196361/webrev.00/ >>>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8196361 >>>>>> From vladimir.kozlov at oracle.com Mon Feb 5 18:42:26 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 5 Feb 2018 10:42:26 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> Message-ID: <5ff8485c-699d-7c9d-7f22-1a398fa9b007@oracle.com> Looks good. Thanks, Vladimir On 2/4/18 11:06 PM, Tobias Hartmann wrote: > Thanks Vladimir and Serguei! > > For the record, I'm going to push this version: > http://cr.openjdk.java.net/~thartmann/8195731/webrev.02/ > > Best regards, > Tobias > > On 02.02.2018 21:56, serguei.spitsyn at oracle.com wrote: >> Thanks, guys! >> I'm Okay with this fix too. >> Interesting that I've just investigated similar situation in Transformer >> and was puzzled why the exception was not propagated. >> >> Thanks, >> Serguei >> >> >> On 2/2/18 10:26, Vladimir Kozlov wrote: >>> Thank you, Tobias and David. >>> >>> With this information I agree to use System.exit(). 
>>> May be just add your new log("Transformation failed!"); to webrev.00 >>> >>> Thanks, >>> Vladimir >>> >>> On 2/2/18 1:36 AM, Tobias Hartmann wrote: >>>> Hi David, >>>> >>>> On 02.02.2018 10:15, David Holmes wrote: >>>>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>>>> >>>>> 2.6. Should a test call the System.exit method? >>>>> >>>>> No. Depending on how you run the tests, you may get a security exception from the harness. >>>>> >>>>> --- >>>>> >>>>> Plus if you call System.exit you have to run in othervm mode. >>>>> >>>>> So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry point >>>>> is, depending on which framework it uses - like testng). There are exceptions of course (pardon the pun) and a lot of >>>>> legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. >>>> >>>> Thanks for the pointer, that makes sense to me. >>>> >>>>>> The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: >>>>>> "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the >>>>>> load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning >>>>>> null." [1] >>>>>> >>>>>> As a result, the test fails without any information. I've basically copied this code from >>>>>> runtime/RedefineTests/RedefineAnnotations.java [2] were we use System.exit as well. If there's a reason to avoid >>>>>> System.exit here, we can also just print an error and fail later with the generic exception: >>>>>> "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" >>>>> >>>>> This does sounds like a case where you need System.exit to force immediate termination. >>>> >>>> Yes, I think so too. 
>>>> >>>> Thanks, >>>> Tobias >>>> >> From serguei.spitsyn at oracle.com Mon Feb 5 18:47:20 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Mon, 5 Feb 2018 10:47:20 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <5ff8485c-699d-7c9d-7f22-1a398fa9b007@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> <5ff8485c-699d-7c9d-7f22-1a398fa9b007@oracle.com> Message-ID: +1 Thanks, Serguei On 2/5/18 10:42, Vladimir Kozlov wrote: > Looks good. > > Thanks, > Vladimir > > On 2/4/18 11:06 PM, Tobias Hartmann wrote: >> Thanks Vladimir and Serguei! >> >> For the record, I'm going to push this version: >> http://cr.openjdk.java.net/~thartmann/8195731/webrev.02/ >> >> Best regards, >> Tobias >> >> On 02.02.2018 21:56, serguei.spitsyn at oracle.com wrote: >>> Thanks, guys! >>> I'm Okay with this fix too. >>> Interesting that I've just investigated similar situation in >>> Transformer >>> and was puzzled why the exception was not propagated. >>> >>> Thanks, >>> Serguei >>> >>> >>> On 2/2/18 10:26, Vladimir Kozlov wrote: >>>> Thank you, Tobias and David. >>>> >>>> With this information I agree to use System.exit(). >>>> May be just add your new log("Transformation failed!"); to webrev.00 >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 2/2/18 1:36 AM, Tobias Hartmann wrote: >>>>> Hi David, >>>>> >>>>> On 02.02.2018 10:15, David Holmes wrote: >>>>>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>>>>> >>>>>> 2.6. Should a test call the System.exit method? >>>>>> >>>>>> No. 
Depending on how you run the tests, you may get a security >>>>>> exception from the harness. >>>>>> >>>>>> --- >>>>>> >>>>>> Plus if you call System.exit you have to run in othervm mode. >>>>>> >>>>>> So generally we avoid System.exit and just fail by throwing an >>>>>> exception from "main" (or whatever the test entry point >>>>>> is, depending on which framework it uses - like testng). There >>>>>> are exceptions of course (pardon the pun) and a lot of >>>>>> legacy tests use System.exit(97) or System.exit(95) to indicate >>>>>> success or failure. >>>>> >>>>> Thanks for the pointer, that makes sense to me. >>>>> >>>>>>> The problem is that throwing an exception in >>>>>>> ClassFileTransformer::transform() is silently ignored: >>>>>>> "If the transformer throws an exception (which it doesn't >>>>>>> catch), subsequent transformers will still be called and the >>>>>>> load, redefine or retransform will still be attempted. Thus, >>>>>>> throwing an exception has the same effect as returning >>>>>>> null." [1] >>>>>>> >>>>>>> As a result, the test fails without any information. I've >>>>>>> basically copied this code from >>>>>>> runtime/RedefineTests/RedefineAnnotations.java [2] were we use >>>>>>> System.exit as well. If there's a reason to avoid >>>>>>> System.exit here, we can also just print an error and fail later >>>>>>> with the generic exception: >>>>>>> "java.lang.RuntimeException: 'parent-transform-check: >>>>>>> this-has-been--transformed' missing from stdout/stderr" >>>>>> >>>>>> This does sounds like a case where you need System.exit to force >>>>>> immediate termination. >>>>> >>>>> Yes, I think so too. 
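[Editorial aside: the swallowing behavior discussed above is easy to reproduce outside the harness. The sketch below models how the retransform machinery treats an exception thrown from ClassFileTransformer.transform() exactly like a null return; the class and helper names are invented for illustration and this is not the test under review.]

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class TransformerExceptionDemo {

    // A transformer that fails the way the test's transformer might.
    static final ClassFileTransformer FAILING = new ClassFileTransformer() {
        @Override
        public byte[] transform(ClassLoader loader, String name, Class<?> cls,
                                ProtectionDomain pd, byte[] classfile) {
            throw new RuntimeException("transform failed for " + name);
        }
    };

    // Simplified stand-in for the JVM's transformer invocation: any
    // throwable is swallowed and treated as "no transformation", so the
    // load/redefine/retransform proceeds and the failure is invisible.
    static byte[] simulateRetransform(ClassFileTransformer t, byte[] classfile) {
        try {
            return t.transform(null, "Foo", null, null, classfile);
        } catch (Throwable ignored) {
            return null;   // same effect as the transformer returning null
        }
    }

    public static void main(String[] args) {
        byte[] result = simulateRetransform(FAILING, new byte[] {(byte) 0xCA});
        // prints "result == null: true" -- the exception never surfaces,
        // which is why the test's transformer must log and fail fast
        // (e.g. via System.exit) instead of relying on the throw.
        System.out.println("result == null: " + (result == null));
    }
}
```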
>>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>> From kim.barrett at oracle.com Mon Feb 5 19:06:31 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Feb 2018 14:06:31 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: <47B076CC-F95F-4EA1-A18B-7C6E34C76C48@oracle.com> > On Feb 4, 2018, at 7:38 AM, Erik Osterlund wrote: > > Hi Kim, > > Looks complicated but good. Thanks for looking at it. > It would be great in the future if the deadlock detection system could be improved to not trigger such false positives that make us implement tricky lock-free code to dodge the obviously false positive deadlock assert. But I suppose that is out of scope for this. While I agree that the global lock ranking mechanism has usage problems, so does every other mechanism I've seen tried for this issue. It's possible we would have eventually made this change anyway. Release was already "mostly" lock-free, to avoid contention when doing parallel cleanup of data structures containing storage entries. Whether that would have been good enough is unknown, since we haven't implemented and measured any such parallel cleanups yet. > Thanks, > /Erik > >> On 3 Feb 2018, at 01:35, Kim Barrett wrote: >> >> Please review this change to the OopStorage::release operations to >> eliminate their use of locks. Rather than directly performing the >> _allocate_list updates when the block containing the entries being >> released undergoes a state transition (full to not-full, not-full to >> empty), we instead record the occurrence of the transition. This >> recording is performed via a lock-free push of the block onto a list >> of such deferred updates, if the block is not already present in the >> list. Update requests are processed by later allocate and >> delete_empty_block operations. >> >> Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. >> Those are no longer required to avoid nested lock rank ordering errors. 
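[Editorial aside: the deferred-update scheme described above — a lock-free push of a block onto a list, suppressed when the block is already present — can be modeled with a claiming CAS on the block's own next pointer. Everything below is invented for illustration and is far simpler than the real OopStorage code: no memory-order tuning, no ABA defenses.]

```java
import java.util.concurrent.atomic.AtomicReference;

public class DeferredUpdateDemo {
    static final class Block {
        final AtomicReference<Block> next = new AtomicReference<>();
    }

    // Sentinel terminator, so "next == null" unambiguously means
    // "not currently on the deferred-update list".
    static final Block END = new Block();

    final AtomicReference<Block> head = new AtomicReference<>();

    // Record a state transition: push iff the block is not already queued.
    // The CAS on the block's next field is the is-already-present test.
    boolean pushIfAbsent(Block b) {
        if (!b.next.compareAndSet(null, END)) {
            return false;                 // another thread already queued it
        }
        Block h;
        do {
            h = head.get();
            b.next.set(h == null ? END : h);
        } while (!head.compareAndSet(h, b));
        return true;
    }

    // Later processing (the analogue of allocate/delete_empty_block) pops
    // one entry and clears its next field so a future transition can
    // requeue the same block.
    Block pop() {
        while (true) {
            Block h = head.get();
            if (h == null) return null;
            Block n = h.next.get();
            if (head.compareAndSet(h, n == END ? null : n)) {
                h.next.set(null);         // re-arm for future transitions
                return h;
            }
        }
    }

    public static void main(String[] args) {
        DeferredUpdateDemo list = new DeferredUpdateDemo();
        Block b = new Block();
        System.out.println(list.pushIfAbsent(b));  // true: first transition enqueues
        System.out.println(list.pushIfAbsent(b));  // false: duplicate suppressed
        System.out.println(list.pop() == b);       // true: processed later
    }
}
```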
>> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8196083 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ >> >> Testing: >> Reproducer from JDK-8195979. >> Mach5 {hs,jdk}-tier{1,2,3} From mikhailo.seledtsov at oracle.com Mon Feb 5 19:14:43 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Mon, 05 Feb 2018 11:14:43 -0800 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> <5ff8485c-699d-7c9d-7f22-1a398fa9b007@oracle.com> Message-ID: <5A78AD23.3000907@oracle.com> +1, Thank you, Misha On 2/5/18, 10:47 AM, serguei.spitsyn at oracle.com wrote: > +1 > > Thanks, > Serguei > > On 2/5/18 10:42, Vladimir Kozlov wrote: >> Looks good. >> >> Thanks, >> Vladimir >> >> On 2/4/18 11:06 PM, Tobias Hartmann wrote: >>> Thanks Vladimir and Serguei! >>> >>> For the record, I'm going to push this version: >>> http://cr.openjdk.java.net/~thartmann/8195731/webrev.02/ >>> >>> Best regards, >>> Tobias >>> >>> On 02.02.2018 21:56, serguei.spitsyn at oracle.com wrote: >>>> Thanks, guys! >>>> I'm Okay with this fix too. >>>> Interesting that I've just investigated similar situation in >>>> Transformer >>>> and was puzzled why the exception was not propagated. >>>> >>>> Thanks, >>>> Serguei >>>> >>>> >>>> On 2/2/18 10:26, Vladimir Kozlov wrote: >>>>> Thank you, Tobias and David. >>>>> >>>>> With this information I agree to use System.exit(). 
>>>>> May be just add your new log("Transformation failed!"); to webrev.00 >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 2/2/18 1:36 AM, Tobias Hartmann wrote: >>>>>> Hi David, >>>>>> >>>>>> On 02.02.2018 10:15, David Holmes wrote: >>>>>>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>>>>>> >>>>>>> 2.6. Should a test call the System.exit method? >>>>>>> >>>>>>> No. Depending on how you run the tests, you may get a security >>>>>>> exception from the harness. >>>>>>> >>>>>>> --- >>>>>>> >>>>>>> Plus if you call System.exit you have to run in othervm mode. >>>>>>> >>>>>>> So generally we avoid System.exit and just fail by throwing an >>>>>>> exception from "main" (or whatever the test entry point >>>>>>> is, depending on which framework it uses - like testng). There >>>>>>> are exceptions of course (pardon the pun) and a lot of >>>>>>> legacy tests use System.exit(97) or System.exit(95) to indicate >>>>>>> success or failure. >>>>>> >>>>>> Thanks for the pointer, that makes sense to me. >>>>>> >>>>>>>> The problem is that throwing an exception in >>>>>>>> ClassFileTransformer::transform() is silently ignored: >>>>>>>> "If the transformer throws an exception (which it doesn't >>>>>>>> catch), subsequent transformers will still be called and the >>>>>>>> load, redefine or retransform will still be attempted. Thus, >>>>>>>> throwing an exception has the same effect as returning >>>>>>>> null." [1] >>>>>>>> >>>>>>>> As a result, the test fails without any information. I've >>>>>>>> basically copied this code from >>>>>>>> runtime/RedefineTests/RedefineAnnotations.java [2] were we use >>>>>>>> System.exit as well. 
If there's a reason to avoid >>>>>>>> System.exit here, we can also just print an error and fail >>>>>>>> later with the generic exception: >>>>>>>> "java.lang.RuntimeException: 'parent-transform-check: >>>>>>>> this-has-been--transformed' missing from stdout/stderr" >>>>>>> >>>>>>> This does sound like a case where you need System.exit to force >>>>>>> immediate termination. >>>>>> >>>>>> Yes, I think so too. >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>> > From lois.foltan at oracle.com Mon Feb 5 19:51:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 5 Feb 2018 14:51:30 -0500 Subject: (11) RFR (S) JDK-8196601: IllegalAccessError: cannot access class jdk.jfr.internal.handlers.EventHandler Message-ID: Please review this partial back out of a recent change that caused incorrect class accessibility checks to occur in the case of calls to java/lang/Class::getDeclaredFields as well as within JVM signature stream processing. Testing exposed the issue where a call to getDeclaredFields resulted incorrectly in an IllegalAccessError. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196601/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8196601 Testing in progress (hs-tier1-5,jdk-tier1,2,3) Thanks, Lois From paul.sandoz at oracle.com Mon Feb 5 20:04:02 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Mon, 5 Feb 2018 12:04:02 -0800 Subject: (11) RFR (S) JDK-8196601: IllegalAccessError: cannot access class jdk.jfr.internal.handlers.EventHandler In-Reply-To: References: Message-ID: Hi Lois, Looks good. I eyeballed your changes to ensure they reverted those recent changes. Paul. > On Feb 5, 2018, at 11:51 AM, Lois Foltan wrote: > > Please review this partial back out of a recent change that caused incorrect class accessibility checks to occur in the case of calls to java/lang/Class::getDeclaredFields as well as within JVM signature stream processing. 
Testing exposed the issue where a call to getDeclaredFields resulted incorrectly in an IllegalAccessError. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196601/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196601 > > Testing in progress (hs-tier1-5,jdk-tier1,2,3) > > Thanks, > Lois From karen.kinnear at oracle.com Mon Feb 5 20:43:07 2018 From: karen.kinnear at oracle.com (Karen Kinnear) Date: Mon, 5 Feb 2018 15:43:07 -0500 Subject: (11) RFR (S) JDK-8196601: IllegalAccessError: cannot access class jdk.jfr.internal.handlers.EventHandler In-Reply-To: References: Message-ID: <04258F83-8AC2-49ED-8387-8E7A1CC636FA@oracle.com> Lois, Looks good. Thank you for doing this. With reflection and internal signature handling, the backward compatibility model is that we do not perform the access check when returning a reflection field or reflection method or performing internal signature processing. The access checking is done when actually performing the access of the reflected field or method etc. So I believe this should match the backward compatibility requirements. thank you so much, Karen > On Feb 5, 2018, at 3:04 PM, Paul Sandoz wrote: > > Hi Lois, > > Looks good. I eyeballed your changes to ensure they reverted those recent changes. > > Paul. > >> On Feb 5, 2018, at 11:51 AM, Lois Foltan wrote: >> >> Please review this partial back out of a recent change that caused incorrect class accessibility checks to occur in the case of calls to java/lang/Class::getDeclaredFields as well as within JVM signature stream processing. Testing exposed the issue where a call to getDeclaredFields resulted incorrectly in an IllegalAccessError. 
>> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196601/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196601 >> >> Testing in progress (hs-tier1-5,jdk-tier1,2,3) >> >> Thanks, >> Lois > From lois.foltan at oracle.com Mon Feb 5 20:49:00 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 5 Feb 2018 15:49:00 -0500 Subject: (11) RFR (S) JDK-8196601: IllegalAccessError: cannot access class jdk.jfr.internal.handlers.EventHandler In-Reply-To: References: Message-ID: <53b1c932-9fa7-052d-e1e8-12a737b5bf6c@oracle.com> Thanks Paul! Lois On 2/5/2018 3:04 PM, Paul Sandoz wrote: > Hi Lois, > > Looks good. I eyeballed your changes to ensure they reverted those recent changes. > > Paul. > >> On Feb 5, 2018, at 11:51 AM, Lois Foltan wrote: >> >> Please review this partial back out of a recent change that caused incorrect class accessibility checks to occur in the case of calls to java/lang/Class::getDeclaredFields as well as within JVM signature stream processing. Testing exposed the issue where a call to getDeclaredFields resulted incorrectly in an IllegalAccessError. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196601/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196601 >> >> Testing in progress (hs-tier1-5,jdk-tier1,2,3) >> >> Thanks, >> Lois From lois.foltan at oracle.com Mon Feb 5 20:49:54 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 5 Feb 2018 15:49:54 -0500 Subject: (11) RFR (S) JDK-8196601: IllegalAccessError: cannot access class jdk.jfr.internal.handlers.EventHandler In-Reply-To: <04258F83-8AC2-49ED-8387-8E7A1CC636FA@oracle.com> References: <04258F83-8AC2-49ED-8387-8E7A1CC636FA@oracle.com> Message-ID: <779570d6-d219-305e-bd45-cca910a3c6dc@oracle.com> Thank you Karen! Lois On 2/5/2018 3:43 PM, Karen Kinnear wrote: > Lois, > > Looks good. Thank you for doing this. 
With reflection and internal signature handling, the backward compatibility > model is that we do not perform the access check when returning a reflection field or reflection method or performing > internal signature processing. The access checking is done when actually performing the access of > the reflected field or method etc. So I believe this should match the backward compatibility requirements. > > thank you so much, > Karen > >> On Feb 5, 2018, at 3:04 PM, Paul Sandoz wrote: >> >> Hi Lois, >> >> Looks good. I eyeballed your changes to ensure they reverted those recent changes. >> >> Paul. >> >>> On Feb 5, 2018, at 11:51 AM, Lois Foltan wrote: >>> >>> Please review this partial back out of a recent change that caused incorrect class accessibility checks to occur in the case of calls to java/lang/Class::getDeclaredFields as well as within JVM signature stream processing. Testing exposed the issue where a call to getDeclaredFields resulted incorrectly in an IllegalAccessError. >>> >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196601/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8196601 >>> >>> Testing in progress (hs-tier1-5,jdk-tier1,2,3) >>> >>> Thanks, >>> Lois From shafi.s.ahmad at oracle.com Tue Feb 6 05:56:48 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Mon, 5 Feb 2018 21:56:48 -0800 (PST) Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print if we have seen any OutOfMemoryErrors or StackOverflowErrors In-Reply-To: <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> References: <84e4010f-e1ed-4940-ad24-5e7fc1667899@default> <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> Message-ID: <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> Hi, Could someone please review it. 
Regards, Shafi > -----Original Message----- > From: Shafi Ahmad > Sent: Monday, January 29, 2018 10:16 AM > To: hotspot-dev at openjdk.java.net > Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print > if we have seen any OutOfMemoryErrors or StackOverflowErrors > > 2nd try... > > Regards, > Shafi > > > -----Original Message----- > > From: Shafi Ahmad > > Sent: Wednesday, January 24, 2018 3:16 PM > > To: hotspot-dev at openjdk.java.net > > Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: > > Print if we have seen any OutOfMemoryErrors or StackOverflowErrors > > > > Hi, > > > > Please review the backport of bug: " JDK-8026331: hs_err improvement: > > Print if we have seen any OutOfMemoryErrors or StackOverflowErrors" to > > jdk8u- dev. > > > > Please note that this is not a clean backport as I got below conflicts > > - > > > > hotspot$ find ./ -name "*.rej" -exec cat {} \; > > --- metaspace.cpp > > +++ metaspace.cpp > > @@ -3132,10 +3132,21 @@ > > initialize_class_space(metaspace_rs); > > > > if (PrintCompressedOopsMode || (PrintMiscellaneous && Verbose)) { > > - gclog_or_tty->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow > > klass shift: %d", > > - p2i(Universe::narrow_klass_base()), > > Universe::narrow_klass_shift()); > > - gclog_or_tty->print_cr("Compressed class space size: " SIZE_FORMAT " > > Address: " PTR_FORMAT " Req Addr: " PTR_FORMAT, > > - compressed_class_space_size(), p2i(metaspace_rs.base()), > > p2i(requested_addr)); > > + print_compressed_class_space(gclog_or_tty, requested_addr); > > + } > > +} > > + > > +void Metaspace::print_compressed_class_space(outputStream* st, const > > char* requested_addr) { > > + st->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow klass shift: > %d", > > + p2i(Universe::narrow_klass_base()), > > Universe::narrow_klass_shift()); > > + if (_class_space_list != NULL) { > > + address base = > > + (address)_class_space_list->current_virtual_space()- > > >bottom(); > > + 
st->print("Compressed class space size: " SIZE_FORMAT " Address: " > > PTR_FORMAT, > > + compressed_class_space_size(), p2i(base)); > > + if (requested_addr != 0) { > > + st->print(" Req Addr: " PTR_FORMAT, p2i(requested_addr)); > > + } > > + st->cr(); > > } > > } > > > > --- universe.cpp > > +++ universe.cpp > > @@ -781,27 +781,24 @@ > > return JNI_OK; > > } > > > > -void Universe::print_compressed_oops_mode() { > > - tty->cr(); > > - tty->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " > > MB", > > +void Universe::print_compressed_oops_mode(outputStream* st) { > > + st->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " MB", > > p2i(Universe::heap()->base()), Universe::heap()- > > >reserved_region().byte_size()/M); > > > > - tty->print(", Compressed Oops mode: %s", > > narrow_oop_mode_to_string(narrow_oop_mode())); > > + st->print(", Compressed Oops mode: %s", > > narrow_oop_mode_to_string(narrow_oop_mode())); > > > > if (Universe::narrow_oop_base() != 0) { > > - tty->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > > + st->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > > } > > > > if (Universe::narrow_oop_shift() != 0) { > > - tty->print(", Oop shift amount: %d", Universe::narrow_oop_shift()); > > + st->print(", Oop shift amount: %d", > > + Universe::narrow_oop_shift()); > > } > > > > if (!Universe::narrow_oop_use_implicit_null_checks()) { > > - tty->print(", no protected page in front of the heap"); > > + st->print(", no protected page in front of the heap"); > > } > > - > > - tty->cr(); > > - tty->cr(); > > + st->cr(); > > } > > > > Webrev: http://cr.openjdk.java.net/~shshahma/8026331/webrev.00/ > > Jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8026331 > > Original patch pushed to jdk9: > > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cf5a0377f578 > > > > Test: Run jprt -testset hotspot and jtreg - hotspot/test > > > > Regards, > > Shafi From matthias.baesken at sap.com Tue Feb 6 07:50:28 2018 From: 
matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 6 Feb 2018 07:50:28 +0000 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <5A74ED9E.8060503@oracle.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> Message-ID: <930717626b314defb436c1947ecd5ef6@sap.com> I only had to correct some whitespace changes found by hg jcheck , updated : http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.2/ The fix has been reviewed by goetz, dsamersoff, bobv . Feedback was added, Summary updated too (suggested by Goetz). Tested with docker on SLES 12.1 / Ubuntu based container . Best regards, Matthias > -----Original Message----- > From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] > Sent: Samstag, 3. Februar 2018 00:01 > To: Baesken, Matthias > Cc: Bob Vandette ; Lindenmaier, Goetz > ; hotspot-dev at openjdk.java.net; Langer, > Christoph ; Doerr, Martin > ; Dmitry Samersoff sw.com> > Subject: Re: RFR : 8196062 : Enable docker container related tests for linux > ppc64le > > Hi Matthias, > > I can sponsor your change if you'd like. > Once you addressed all the feedback from code review, please sync to the > tip, build and test. > Then export the changeset and send it to me (see: > http://openjdk.java.net/sponsor/) > > I will import your change set, run all required testing and push the change. > > > Thank you, > Misha > > On 2/2/18, 12:39 AM, Baesken, Matthias wrote: > > Thanks for the reviews . > > > > I added info about the fix for /proc/self/cgroup and /proc/self/mountinfo > parsing to the bug : > > > > https://bugs.openjdk.java.net/browse/JDK-8196062 > > > > Guess I need a sponsor now to get it pushed ? 
> > > > > > Best regards, Matthias > > > > > > > >> -----Original Message----- > >> From: Bob Vandette [mailto:bob.vandette at oracle.com] > >> Sent: Donnerstag, 1. Februar 2018 17:53 > >> To: Lindenmaier, Goetz > >> Cc: Baesken, Matthias; mikhailo > >> ; hotspot-dev at openjdk.java.net; > Langer, > >> Christoph; Doerr, Martin > >> ; Dmitry Samersoff >> sw.com> > >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux > >> ppc64le > >> > >> Looks good to me. > >> > >> Bob. > >> > >>> On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz > >> wrote: > >>> Hi Matthias, > >>> > >>> thanks for enabling this test. Looks good. > >>> I would appreciate if you would add a line > >>> "Summary: also fix cgroup subsystem recognition" > >>> to the bug description. Else this might be mistaken > >>> for a mere testbug. > >>> > >>> Best regards, > >>> Goetz. > >>> > >>> > >>>> -----Original Message----- > >>>> From: Baesken, Matthias > >>>> Sent: Mittwoch, 31. Januar 2018 15:15 > >>>> To: mikhailo; Bob Vandette > >>>> > >>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >>>> ; Langer, Christoph > >>>> ; Doerr, Martin; > >>>> Dmitry Samersoff > >>>> Subject: RE: RFR : 8196062 : Enable docker container related tests for > linux > >>>> ppc64le > >>>> > >>>> Hello , I created a second webrev : > >>>> > >>>> > >>>> > >> > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr > >>>> ev/ > >>>> > >>>> - changed DockerTestUtils.buildJdkDockerImage in the suggested way > >> (this > >>>> should be extendable to linux s390x soon) > >>>> > >>>>>>>> Can you add "return;" in each test for subsystem not found > >> messages > >>>> - added returns in the tests for the subsystems in > osContainer_linux.cpp > >>>> > >>>> - moved some checks at the beginning of subsystem_file_contents > >>>> (suggested by Dmitry) > >>>> > >>>> > >>>> Best regards, Matthias > >>>> > >>>> > >>>> > >>>>> -----Original Message----- > >>>>> From: mikhailo 
[mailto:mikhailo.seledtsov at oracle.com] > >>>>> Sent: Donnerstag, 25. Januar 2018 18:43 > >>>>> To: Baesken, Matthias; Bob Vandette > >>>>> > >>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >>>>> ; Langer, Christoph > >>>>> ; Doerr, Martin > >>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for > >> linux > >>>>> ppc64le > >>>>> > >>>>> Hi Matthias, > >>>>> > >>>>> > >>>>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: > >>>>>>> Perhaps, you could add code to > >> DockerTestUtils.buildJdkDockerImage() > >>>>>>> that does the following or similar: > >>>>>>> 1. Construct a name for platform-specific docker file: > >>>>>>> String platformSpecificDockerfile = dockerfile + "-" + > >>>>>>> Platform.getOsArch(); > >>>>>>> (Platform is jdk.test.lib.Platform) > >>>>>>> > >>>>>> Hello, the doc says : > >>>>>> > >>>>>> * Build a docker image that contains JDK under test. > >>>>>> * The jdk will be placed under the "/jdk/" folder inside the docker > >> file > >>>>> system. > >>>>>> ..... > >>>>>> param dockerfile name of the dockerfile residing in the test > source > >>>>>> ..... > >>>>>> public static void buildJdkDockerImage(String imageName, String > >>>>> dockerfile, String buildDirName) > >>>>>> > >>>>>> > >>>>>> It does not say anything about doing hidden insertions of some > >> platform > >>>>> names into the dockerfile name. > >>>>>> So should the jtreg API doc be changed ? > >>>>>> If so who needs to approve this ? > >>>>> Thank you for your concerns about the clarity of API and > corresponding > >>>>> documentation. This is a test library API, so no need to file CCC or CSR. > >>>>> > >>>>> This API can be changed via a regular RFR/webrev review process, as > >> soon > >>>>> as on one objects. I am a VM SQE engineer covering the docker and > >> Linux > >>>>> container area, I am OK with this change. 
> >>>>> And I agree with you, we should update the javadoc header on this > >>>> method > >>>>> to reflect this implicit part of API contract. > >>>>> > >>>>> > >>>>> Thank you, > >>>>> Misha > >>>>> > >>>>> > >>>>> > >>>>>> (as far as I see so far only the test at > >>>>> hotspot/jtreg/runtime/containers/docker/ use this so it should not > be > >> a > >>>> big > >>>>> deal to change the interface?) > >>>>>> Best regards, Matthias > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>>> -----Original Message----- > >>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] > >>>>>>> Sent: Mittwoch, 24. Januar 2018 20:09 > >>>>>>> To: Bob Vandette; Baesken, Matthias > >>>>>>> > >>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz > >>>>>>> ; Langer, Christoph > >>>>>>> ; Doerr, Martin > >> > >>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests > for > >>>> linux > >>>>>>> ppc64le > >>>>>>> > >>>>>>> Hi Matthias, > >>>>>>> > >>>>>>> Please see my comments about the test changes inline. > >>>>>>> > >>>>>>> > >>>>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: > >>>>>>>> osContainer_linux.cpp: > >>>>>>>> > >>>>>>>> Can you add "return;" in each test for subsystem not found > >> messages > >>>>> and > >>>>>>>> remove these 3 lines OR move your tests for NULL& messages > >> inside. > >>>>> The > >>>>>>> compiler can > >>>>>>>> probably optimize this but I?d prefer more compact code. > >>>>>>>> > >>>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct > == > >>>>> NULL) > >>>>>>> { > >>>>>>>> 342 return; > >>>>>>>> 343 } > >>>>>>>> > >>>>>>>> > >>>>>>>> The other changes in osContainer_linux.cpp look ok. > >>>>>>>> > >>>>>>>> I forwarded your test changes to Misha, who wrote these. 
> >>>>>>>> > >>>>>>>> Since it?s likely that other platforms, such as aarch64, are going to > run > >>>>> into > >>>>>>> the same problem, > >>>>>>>> It would have been better to enable the tests based on the > >> existence > >>>> of > >>>>> an > >>>>>>> arch specific > >>>>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch?s > in > >>>>>>> VPProps.java. > >>>>>>>> This approach would reduce the number of changes significantly > and > >>>>> allow > >>>>>>> support to > >>>>>>>> be added with 1 new file. > >>>>>>>> > >>>>>>>> You wouldn?t need "String dockerFileName = > >>>>>>> Common.getDockerFileName();? > >>>>>>>> in every test. Just make DockerTestUtils automatically add arch. > >>>>>>> I like Bob's idea on handling platform-specific Dockerfiles. > >>>>>>> > >>>>>>> Perhaps, you could add code to > >> DockerTestUtils.buildJdkDockerImage() > >>>>>>> that does the following or similar: > >>>>>>> 1. Construct a name for platform-specific docker file: > >>>>>>> String platformSpecificDockerfile = dockerfile + "-" + > >>>>>>> Platform.getOsArch(); > >>>>>>> (Platform is jdk.test.lib.Platform) > >>>>>>> > >>>>>>> 2. Check if platformSpecificDockerfile file exists in the test > >>>>>>> source directory > >>>>>>> File.exists(Paths.get(Utils.TEST_SRC, > platformSpecificDockerFile) > >>>>>>> If it does, then use it. Otherwise continue using the > >>>>>>> default/original dockerfile name. > >>>>>>> > >>>>>>> I think this will considerably simplify your change, as well as make it > >>>>>>> easy to extend support to other platforms/configurations > >>>>>>> in the future. Let us know what you think of this approach ? > >>>>>>> > >>>>>>> > >>>>>>> Once your change gets (R)eviewed and approved, I can sponsor > the > >>>> push. > >>>>>>> > >>>>>>> Thank you, > >>>>>>> Misha > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>>> Bob. 
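[Editorial aside: Misha's two-step suggestion — derive a platform-specific Dockerfile name, use it if it exists, otherwise fall back — can be prototyped as below. The existence check is injected as a predicate so the naming rule is visible on its own; the real change would test the file under Utils.TEST_SRC, and all names here are illustrative rather than the final DockerTestUtils API.]

```java
import java.util.function.Predicate;

public class DockerfileSelect {

    // Step 1: derive the platform-specific name, e.g.
    // "Dockerfile-BasicTest" + "-" + "ppc64le".
    static String platformSpecificName(String dockerfile, String osArch) {
        return dockerfile + "-" + osArch;
    }

    // Step 2: prefer the platform-specific file when it exists in the
    // test source directory, otherwise keep the default/original name.
    static String select(String dockerfile, String osArch,
                         Predicate<String> existsInTestSrc) {
        String specific = platformSpecificName(dockerfile, osArch);
        return existsInTestSrc.test(specific) ? specific : dockerfile;
    }

    public static void main(String[] args) {
        // Pretend only the ppc64le variant is present in the test source.
        Predicate<String> testSrc = n -> n.equals("Dockerfile-BasicTest-ppc64le");
        System.out.println(select("Dockerfile-BasicTest", "ppc64le", testSrc));
        System.out.println(select("Dockerfile-BasicTest", "s390x", testSrc));
        // prints "Dockerfile-BasicTest-ppc64le" then "Dockerfile-BasicTest"
    }
}
```

With this shape, adding s390x support later is just a matter of dropping a `Dockerfile-BasicTest-s390x` into the test source directory; no test code changes are needed.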
> >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias > >>>>>>> wrote: > >>>>>>>>> Hello, could you please review the following change : 8196062 : > >>>> Enable > >>>>>>> docker container related tests for linux ppc64le . > >>>>>>>>> It adds docker container testing for linux ppc64 le (little > endian) . > >>>>>>>>> > >>>>>>>>> A number of things had to be done : > >>>>>>>>> ? Add a separate docker file > >>>>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile- > BasicTest- > >>>>> ppc64le > >>>>>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used > >> for > >>>>>>> x86_64 seems not to be available for ppc64le ) > >>>>>>>>> ? Fix parsing /proc/self/mountinfo and /proc/self/cgroup > >>>> in > >>>>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle > >> the > >>>>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu > (Docker > >>>>>>> container) > >>>>>>>>> ? Add a bit more logging > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> Webrev : > >>>>>>>>> > >>>>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> Bug : > >>>>>>>>> > >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> After these adjustments I could run the > >> runtime/containers/docker > >>>>> - > >>>>>>> jtreg tests successfully . 
> >>>>>>>>> Best regards, Matthias From tobias.hartmann at oracle.com Tue Feb 6 09:06:54 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 6 Feb 2018 10:06:54 +0100 Subject: RFR(S): 8195731: [Graal] runtime/SharedArchiveFile/serviceability/transformRelatedClasses/TransformSuperSubTwoPckgs.java intermittently fails with Graal JIT In-Reply-To: <5A78AD23.3000907@oracle.com> References: <85fd0060-5d97-c918-aef3-1eadf4b2b79b@oracle.com> <8f0f7118-bbb3-10dd-cb69-21911955e53b@oracle.com> <70ab054a-479f-6fe4-624a-bfd285d5f0ab@oracle.com> <6cf7fb88-4bbd-1bc3-237c-794568de944a@oracle.com> <133abf1f-b081-7c85-f8f0-e2a9c457ab83@oracle.com> <08e65093-a47b-6cf5-bb71-df73c57cae39@oracle.com> <5ff8485c-699d-7c9d-7f22-1a398fa9b007@oracle.com> <5A78AD23.3000907@oracle.com> Message-ID: Thanks everyone! Best regards, Tobias On 05.02.2018 20:14, Mikhailo Seledtsov wrote: > +1, > > Thank you, > Misha > > On 2/5/18, 10:47 AM, serguei.spitsyn at oracle.com wrote: >> +1 >> >> Thanks, >> Serguei >> >> On 2/5/18 10:42, Vladimir Kozlov wrote: >>> Looks good. >>> >>> Thanks, >>> Vladimir >>> >>> On 2/4/18 11:06 PM, Tobias Hartmann wrote: >>>> Thanks Vladimir and Serguei! >>>> >>>> For the record, I'm going to push this version: >>>> http://cr.openjdk.java.net/~thartmann/8195731/webrev.02/ >>>> >>>> Best regards, >>>> Tobias >>>> >>>> On 02.02.2018 21:56, serguei.spitsyn at oracle.com wrote: >>>>> Thanks, guys! >>>>> I'm Okay with this fix too. >>>>> Interesting that I've just investigated similar situation in Transformer >>>>> and was puzzled why the exception was not propagated. >>>>> >>>>> Thanks, >>>>> Serguei >>>>> >>>>> >>>>> On 2/2/18 10:26, Vladimir Kozlov wrote: >>>>>> Thank you, Tobias and David. >>>>>> >>>>>> With this information I agree to use System.exit(). 
>>>>>> Maybe just add your new log("Transformation failed!"); to webrev.00 >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> On 2/2/18 1:36 AM, Tobias Hartmann wrote: >>>>>>> Hi David, >>>>>>> >>>>>>> On 02.02.2018 10:15, David Holmes wrote: >>>>>>>> http://openjdk.java.net/jtreg/faq.html#question2.6 >>>>>>>> >>>>>>>> 2.6. Should a test call the System.exit method? >>>>>>>> >>>>>>>> No. Depending on how you run the tests, you may get a security exception from the harness. >>>>>>>> >>>>>>>> --- >>>>>>>> >>>>>>>> Plus if you call System.exit you have to run in othervm mode. >>>>>>>> >>>>>>>> So generally we avoid System.exit and just fail by throwing an exception from "main" (or whatever the test entry >>>>>>>> point >>>>>>>> is, depending on which framework it uses - like testng). There are exceptions of course (pardon the pun) and a >>>>>>>> lot of >>>>>>>> legacy tests use System.exit(97) or System.exit(95) to indicate success or failure. >>>>>>> >>>>>>> Thanks for the pointer, that makes sense to me. >>>>>>> >>>>>>>>> The problem is that throwing an exception in ClassFileTransformer::transform() is silently ignored: >>>>>>>>> "If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called >>>>>>>>> and the >>>>>>>>> load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as >>>>>>>>> returning >>>>>>>>> null." [1] >>>>>>>>> >>>>>>>>> As a result, the test fails without any information. I've basically copied this code from >>>>>>>>> runtime/RedefineTests/RedefineAnnotations.java [2] where we use System.exit as well. If there's a reason to avoid >>>>>>>>> System.exit here, we can also just print an error and fail later with the generic exception: >>>>>>>>> "java.lang.RuntimeException: 'parent-transform-check: this-has-been--transformed' missing from stdout/stderr" >>>>>>>> >>>>>>>> This does sound like a case where you need System.exit to force immediate termination.
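[Editor's note] The pattern discussed above can be sketched as follows. This is an illustrative example only, not the actual test code: the class and member names (FailureRecordingTransformer, firstError, checkFailed) are invented for this sketch. It shows how a transformer can record a failure instead of throwing (since, per the java.lang.instrument spec, an exception thrown from transform() is silently dropped), and how the test can then fail later by throwing from its entry point rather than calling System.exit().

```java
// Sketch only: names are hypothetical, not from the test under review.
// Exceptions thrown from ClassFileTransformer.transform() are swallowed
// by the JVM, so we record the first failure and rethrow it later.
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class FailureRecordingTransformer implements ClassFileTransformer {
    // Read by the test's main() after class loading; volatile because
    // transform() may run on a different thread.
    static volatile Throwable firstError;

    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        try {
            // ... real transformation work would go here ...
            if (className == null) {
                throw new IllegalArgumentException("no class name");
            }
            return null; // null means "no transformation applied"
        } catch (Throwable t) {
            if (firstError == null) {
                firstError = t; // would be silently dropped if propagated
            }
            return null;
        }
    }

    // Called from the test's main() after the classes of interest loaded;
    // fails the jtreg test by throwing instead of calling System.exit().
    static void checkFailed() {
        if (firstError != null) {
            throw new RuntimeException("Transformation failed!", firstError);
        }
    }
}
```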
>>>>>>> >>>>>>> Yes, I think so too. >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>> >> From glaubitz at physik.fu-berlin.de Tue Feb 6 10:03:39 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 11:03:39 +0100 Subject: Something external breaks Zero Message-ID: Hi! I got home from FOSDEM yesterday, ran my usual "hg pull && hg update --clean" and tried rebuilding Zero on my new shiny AMD Epyc machine and ran into this: glaubitz at epyc:/srv/glaubitz/openjdk/hs/build/linux-x86_64-normal-zero-release/jdk/bin$ ./java # # A fatal error has been detected by the Java Runtime Environment: # # Internal Error (os_linux_zero.cpp:271), pid=43611, tid=43612 # fatal error: caught unhandled signal 11 # # JRE version: (10.0) (build ) # Java VM: OpenJDK 64-Bit Zero VM (10-internal+0-adhoc.glaubitz.hs, interpreted mode, serial gc, linux-amd64) # Core dump will be written. Default location: /srv/glaubitz/openjdk/hs/build/linux-x86_64-normal-zero-release/jdk/bin/core # # An error report file with more information is saved as: # /srv/glaubitz/openjdk/hs/build/linux-x86_64-normal-zero-release/jdk/bin/hs_err_pid43611.log # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # Aborted (core dumped) glaubitz at epyc:/srv/glaubitz/openjdk/hs/build/linux-x86_64-normal-zero-release/jdk/bin$ Enabling core dumps and passing the core file to gdb yields: glaubitz at epyc:/srv/glaubitz/openjdk/hs/build/linux-x86_64-normal-zero-release/jdk/bin$ gdb ./java --core=core GNU gdb (Debian 7.12-6) 7.12.0.20161007-git Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. 
For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from ./java...(no debugging symbols found)...done. [New LWP 43588] [New LWP 43587] [New LWP 43589] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `./java -Xlog:all=debug -XX:-UseContainerSupport'. Program terminated with signal SIGABRT, Aborted. #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. [Current thread is 1 (Thread 0x7feca1453700 (LWP 43588))] (gdb) bt #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007feca06b3cf7 in __GI_abort () at abort.c:90 #2 0x00007feca0184c29 in os::abort (dump_core=, siginfo=, context=) at /srv/glaubitz/openjdk/hs/src/hotspot/os/linux/os_linux.cpp:1416 #3 0x00007feca02b5052 in VMError::report_and_die (id=id at entry=-536870912, message=message at entry=0x7feca03044a4 "fatal error", detail_fmt=detail_fmt at entry=0x7feca1396040 "caught unhandled signal 11", detail_args=detail_args at entry=0x7feca1395f38, thread=, pc=pc at entry=0x0, siginfo=0x0, context=0x0, filename=0x7feca0331910 "/srv/glaubitz/openjdk/hs/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp", lineno=271, size=0) at /srv/glaubitz/openjdk/hs/src/hotspot/share/utilities/vmError.cpp:1494 #4 0x00007feca02b595f in VMError::report_and_die (thread=, filename=filename at entry=0x7feca0331910 "/srv/glaubitz/openjdk/hs/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp", lineno=lineno at entry=271, message=message at entry=0x7feca03044a4 "fatal error", detail_fmt=detail_fmt at entry=0x7feca1396040 "caught unhandled signal 11", detail_args=detail_args at entry=0x7feca1395f38) at /srv/glaubitz/openjdk/hs/src/hotspot/share/utilities/vmError.cpp:1240 #5 
0x00007fec9fed5cb8 in report_fatal (file=file at entry=0x7feca0331910 "/srv/glaubitz/openjdk/hs/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp", line=line at entry=271, detail_fmt=detail_fmt at entry=0x7feca1396040 "caught unhandled signal 11") at /srv/glaubitz/openjdk/hs/src/hotspot/share/utilities/debug.cpp:228 #6 0x00007feca018e2d5 in JVM_handle_linux_signal (sig=sig at entry=11, info=info at entry=0x7feca1396230, ucVoid=ucVoid at entry=0x7feca1396100, abort_if_unrecognized=abort_if_unrecognized at entry=1) at /srv/glaubitz/openjdk/hs/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp:271 #7 0x00007feca01834d8 in signalHandler (sig=11, info=0x7feca1396230, uc=0x7feca1396100) at /srv/glaubitz/openjdk/hs/src/hotspot/os/linux/os_linux.cpp:4401 #8 #9 0x00007fec9fdbb3cc in itableMethodEntry::method (this=) at /srv/glaubitz/openjdk/hs/src/hotspot/share/oops/klassVtable.hpp:265 #10 BytecodeInterpreter::run (istate=0x7feca1452250) at /srv/glaubitz/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2605 #11 0x00007fec9fed318a in CppInterpreter::main_loop (recurse=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/cppInterpreter_zero.cpp:133 #12 0x00007fec9fed3847 in CppInterpreter::normal_entry (method=0x7fec9c3f4dc0, UNUSED=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/cppInterpreter_zero.cpp:76 #13 0x00007fec9fed24be in ZeroEntry::invoke (__the_thread__=0x7fec9800f540, method=method at entry=0x7fec9800f540, this=this at entry=0x7fec980023f8) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/entry_zero.hpp:59 #14 CppInterpreter::invoke_method (method=method at entry=0x7fec9c3f4dc0, entry_point=entry_point at entry=0x7fec9cd2c160 " 5\355\237\354\177", __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/interpreter/cppInterpreter.cpp:66 #15 0x00007feca0242729 in StubGenerator::call_stub (call_wrapper=0x7feca1396c10, result=0x7feca1396e38, 
result_type=T_LONG, method=0x7fec9c3f4dc0, entry_point=0x7fec9cd2c160 " 5\355\237\354\177", parameters=, parameter_words=2, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/stubGenerator_zero.cpp:98 #16 0x00007feca00113aa in JavaCalls::call_helper (result=0x7feca1396e30, method=..., args=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/javaCalls.cpp:408 #17 0x00007feca0012b2d in JavaCalls::call (__the_thread__=0x7fec9800f540, args=0x7feca1396d40, method=..., result=0x7feca1396e30) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/javaCalls.cpp:307 #18 JavaCalls::call_static (__the_thread__=0x7fec9800f540, args=0x7feca1396d40, signature=, name=, klass=, result=0x7feca1396e30, this=) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/javaCalls.cpp:265 #19 JavaCalls::call_static (result=result at entry=0x7feca1396e30, klass=klass at entry=0x7fec9c3f5fa8, name=, signature=, arg1=..., arg2=..., __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/javaCalls.cpp:285 #20 0x00007feca01599c0 in NativeLookup::lookup_style (method=..., pure_name=pure_name at entry=0x7fec9800fab0 "Java_java_security_AccessController_doPrivileged", long_name=long_name at entry=0x7feca03175d4 "", args_size=args_size at entry=3, os_style=os_style at entry=true, in_base_library=@0x7feca139701f: false, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/nativeLookup.cpp:182 #21 0x00007feca0159c77 in NativeLookup::lookup_entry (method=..., in_base_library=@0x7feca139701f: false, __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/nativeLookup.cpp:255 #22 0x00007feca015a2a2 in NativeLookup::lookup_base (__the_thread__=0x7fec9800f540, in_base_library=@0x7feca139701f: false, method=...) 
at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/nativeLookup.cpp:372 #23 NativeLookup::lookup (method=..., in_base_library=@0x7feca139701f: false, __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/nativeLookup.cpp:388 #24 0x00007feca0008485 in InterpreterRuntime::prepare_native_call (thread=thread at entry=0x7fec9800f540, method=method at entry=0x7fec9c507cb8) at /srv/glaubitz/openjdk/hs/src/hotspot/share/interpreter/interpreterRuntime.cpp:1414 #25 0x00007fec9fed4082 in CppInterpreter::native_entry (method=0x7fec9c507cb8, UNUSED=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/cppInterpreter_zero.cpp:292 #26 0x00007fec9fed24be in ZeroEntry::invoke (__the_thread__=0x7fec9800f540, method=, this=) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/entry_zero.hpp:59 #27 CppInterpreter::invoke_method (method=, entry_point=, __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/interpreter/cppInterpreter.cpp:66 #28 0x00007fec9fed31af in CppInterpreter::main_loop (recurse=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/cppInterpreter_zero.cpp:147 #29 0x00007fec9fed3847 in CppInterpreter::normal_entry (method=0x7fec9c4324f0, UNUSED=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/cppInterpreter_zero.cpp:76 #30 0x00007fec9fed24be in ZeroEntry::invoke (__the_thread__=0x7fec9800f540, method=method at entry=0x7fec9800f540, this=this at entry=0x7feca1397220) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/entry_zero.hpp:59 #31 CppInterpreter::invoke_method (method=method at entry=0x7fec9c4324f0, entry_point=entry_point at entry=0x7fec9cd2c160 " 5\355\237\354\177", __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/interpreter/cppInterpreter.cpp:66 #32 0x00007feca0242729 in StubGenerator::call_stub 
(call_wrapper=0x7feca14525c0, result=0x7feca1452688, result_type=T_INT, method=0x7fec9c4324f0, entry_point=0x7fec9cd2c160 " 5\355\237\354\177", parameters=, parameter_words=0, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/cpu/zero/stubGenerator_zero.cpp:98 #33 0x00007feca00113aa in JavaCalls::call_helper (result=0x7feca1452680, method=..., args=, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/javaCalls.cpp:408 #34 0x00007fec9fff047b in InstanceKlass::call_class_initializer (this=this at entry=0x7fec9c432578, __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/oops/instanceKlass.cpp:1104 #35 0x00007fec9fff0a9d in InstanceKlass::initialize_impl (this=0x7fec9c432578, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/oops/instanceKlass.cpp:813 #36 0x00007fec9fff0908 in InstanceKlass::initialize_impl (this=0x7fec9c43f8c8, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/oops/instanceKlass.cpp:771 #37 0x00007fec9fff0908 in InstanceKlass::initialize_impl (this=0x7fec9c43fd40, __the_thread__=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/oops/instanceKlass.cpp:771 #38 0x00007feca0269d98 in Threads::initialize_java_lang_classes (main_thread=main_thread at entry=0x7fec9800f540, __the_thread__=__the_thread__ at entry=0x7fec9800f540) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/thread.cpp:3609 #39 0x00007feca026e74c in Threads::create_vm (args=, canTryAgain=canTryAgain at entry=0x7feca1452dd7) at /srv/glaubitz/openjdk/hs/src/hotspot/share/runtime/thread.cpp:3834 #40 0x00007feca00384b2 in JNI_CreateJavaVM_inner (args=, penv=0x7feca1452e98, vm=0x7feca1452e90) at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/jni.cpp:3911 #41 JNI_CreateJavaVM (vm=0x7feca1452e90, penv=0x7feca1452e98, args=) at /srv/glaubitz/openjdk/hs/src/hotspot/share/prims/jni.cpp:4006 #42 0x00007feca0c3bd24 in 
InitializeJVM (ifn=, penv=0x7feca1452e98, pvm=0x7feca1452e90) at /srv/glaubitz/openjdk/hs/src/java.base/share/native/libjli/java.c:1478 #43 JavaMain (_args=) at /srv/glaubitz/openjdk/hs/src/java.base/share/native/libjli/java.c:411 #44 0x00007feca0e4f51a in start_thread (arg=0x7feca1453700) at pthread_create.c:465 #45 0x00007feca07733ef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 (gdb) Now the interesting part is that it seems that this is not a regression in OpenJDK as this also occurs with older trees which definitely worked fine before. And what's even more interesting is that it doesn't crash on my SPARC T5: glaubitz at deb4g:~/openjdk/hs/build/linux-sparcv9-normal-zero-release/jdk/bin$ ./java --version openjdk 10-internal OpenJDK Runtime Environment (build 10-internal+0-adhoc.glaubitz.hs) OpenJDK 64-Bit Zero VM (build 10-internal+0-adhoc.glaubitz.hs, interpreted mode) glaubitz at deb4g:~/openjdk/hs/build/linux-sparcv9-normal-zero-release/jdk/bin$ But it crashes on the Sun Fire T2000: glaubitz at stadler:/srv/openjdk/hs/build/linux-sparcv9-normal-zero-release/jdk/bin$ ./java --version # # A fatal error has been detected by the Java Runtime Environment: # # Internal Error (os_linux_zero.cpp:271), pid=10309, tid=10310 # fatal error: caught unhandled signal 11 # # JRE version: (10.0) (build ) # Java VM: OpenJDK 64-Bit Zero VM (10-internal+0-adhoc.glaubitz.hs, interpreted mode, serial gc, linux-sparc) # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again # # An error report file with more information is saved as: # /srv/openjdk/hs/build/linux-sparcv9-normal-zero-release/jdk/bin/hs_err_pid10309.log # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # Aborted glaubitz at stadler:/srv/openjdk/hs/build/linux-sparcv9-normal-zero-release/jdk/bin$ All machines run Debian unstable with the latest packages. Hrmpf. 
Will try digging now. Adrian PS: It was very nice meeting you all at FOSDEM. I wished I would have had more time :|. -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From aph at redhat.com Tue Feb 6 10:44:34 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 6 Feb 2018 10:44:34 +0000 Subject: Something external breaks Zero In-Reply-To: References: Message-ID: On 06/02/18 10:03, John Paul Adrian Glaubitz wrote: > I got home from FOSDEM yesterday, ran my usual "hg pull && hg update --clean" and tried > rebuilding Zero on my new shiny AMD Epyc machine and ran into this: Exactly what tree did you clone? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From glaubitz at physik.fu-berlin.de Tue Feb 6 10:45:25 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 11:45:25 +0100 Subject: Something external breaks Zero In-Reply-To: References: Message-ID: <8801cb63-566c-e279-d9e2-9f546489f12e@physik.fu-berlin.de> Hi Andrew! On 02/06/2018 11:44 AM, Andrew Haley wrote: > On 06/02/18 10:03, John Paul Adrian Glaubitz wrote: >> I got home from FOSDEM yesterday, ran my usual "hg pull && hg update --clean" and tried >> rebuilding Zero on my new shiny AMD Epyc machine and ran into this: > > Exactly what tree did you clone? I'm on the JDK tree all the time: http://hg.openjdk.java.net/jdk/hs/ Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From david.holmes at oracle.com Tue Feb 6 10:53:16 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 6 Feb 2018 20:53:16 +1000 Subject: Something external breaks Zero In-Reply-To: References: Message-ID: Hi Adrian, This looks familiar ... let me do some digging .. something from the CPU (security update) ... David On 6/02/2018 8:03 PM, John Paul Adrian Glaubitz wrote: > Hi! > > I got home from FOSDEM yesterday, ran my usual "hg pull && hg update --clean" and tried > rebuilding Zero on my new shiny AMD Epyc machine and ran into this: > > [rest of quoted message, including the full crash report and gdb backtrace, snipped; it repeats the original message above verbatim] From aph at redhat.com Tue Feb 6 10:59:45 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 6 Feb 2018 10:59:45 +0000 Subject: Something external breaks Zero In-Reply-To: References: Message-ID: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> On 06/02/18 10:53, David Holmes wrote: > This looks familiar ... let me do some digging .. something from the CPU > (security update) ... Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset still isn't in? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From glaubitz at physik.fu-berlin.de Tue Feb 6 11:05:04 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 12:05:04 +0100 Subject: Something external breaks Zero In-Reply-To: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> Message-ID: On 02/06/2018 11:59 AM, Andrew Haley wrote: > On 06/02/18 10:53, David Holmes wrote: >> This looks familiar ... let me do some digging .. something from the CPU >> (security update) ... > > Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset > still isn't in?
The patch applied cleanly, but breaks the C++ build: === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_bytecodeInterpreter.o:\n" * For target hotspot_variant-zero_libjvm_objs_bytecodeInterpreter.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_bytecodeInterpreter.o.log || true) | /usr/bin/head -n 12 /home/glaubitz/upstream/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp: In static member function 'static void BytecodeInterpreter::run(interpreterState)': /home/glaubitz/upstream/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2538:43: error: 'class ConstantPoolCacheEntry' has no member named 'f2_as_interface_method'; did you mean 'f2_as_vfinal_method'? Method *interface_method = cache->f2_as_interface_method(); ^~~~~~~~~~~~~~~~~~~~~~ f2_as_vfinal_method if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_bytecodeInterpreter.o.log` -gt 12; then /bin/echo " ...
(rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_bytecodeInterpreterWithChecks.o:\n" * For target hotspot_variant-zero_libjvm_objs_bytecodeInterpreterWithChecks.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_bytecodeInterpreterWithChecks.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/gensrc/jvmtifiles/bytecodeInterpreterWithChecks.cpp:3:0: /home/glaubitz/upstream/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp: In static member function 'static void BytecodeInterpreter::runWithChecks(interpreterState)': /home/glaubitz/upstream/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2538:43: error: 'class ConstantPoolCacheEntry' has no member named 'f2_as_interface_method'; did you mean 'f2_as_vfinal_method'? Method *interface_method = cache->f2_as_interface_method(); ^~~~~~~~~~~~~~~~~~~~~~ f2_as_vfinal_method if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_bytecodeInterpreterWithChecks.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "\n* All command lines available in /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs.\n" * All command lines available in /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `.
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From aph at redhat.com Tue Feb 6 11:06:38 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 6 Feb 2018 11:06:38 +0000 Subject: Something external breaks Zero In-Reply-To: References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> Message-ID: <9fe0a910-98ca-fa7f-8bc8-c2cf4dffa193@redhat.com> On 06/02/18 11:05, John Paul Adrian Glaubitz wrote: > On 02/06/2018 11:59 AM, Andrew Haley wrote: >> On 06/02/18 10:53, David Holmes wrote: >>> This looks familiar ... let me do some digging .. something from the CPU >>> (security update) ... >> Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset >> still isn't in? > The patch applied cleanly, but breaks the C++ build: Right. I think the changes for the CPU aren't in hs, either because they are in a different form or not needed. But this looks like the right place to be looking. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From david.holmes at oracle.com Tue Feb 6 11:08:25 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 6 Feb 2018 21:08:25 +1000 Subject: Something external breaks Zero In-Reply-To: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> Message-ID: On 6/02/2018 8:59 PM, Andrew Haley wrote: > On 06/02/18 10:53, David Holmes wrote: >> This looks familiar ... let me do some digging .. something from the CPU >> (security update) ... > > Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset > still isn't in? That would be it. 
:) Thanks, David From glaubitz at physik.fu-berlin.de Tue Feb 6 11:08:53 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 12:08:53 +0100 Subject: Something external breaks Zero In-Reply-To: <9fe0a910-98ca-fa7f-8bc8-c2cf4dffa193@redhat.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <9fe0a910-98ca-fa7f-8bc8-c2cf4dffa193@redhat.com> Message-ID: On 02/06/2018 12:06 PM, Andrew Haley wrote: > Right. I think the changes for the CPU aren't in hs, either because > they are in a different form or not needed. But this looks like the > right place to be looking. Looking into that now. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From aph at redhat.com Tue Feb 6 11:16:42 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 6 Feb 2018 11:16:42 +0000 Subject: Something external breaks Zero In-Reply-To: References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> Message-ID: <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> On 06/02/18 11:08, David Holmes wrote: > On 6/02/2018 8:59 PM, Andrew Haley wrote: >> On 06/02/18 10:53, David Holmes wrote: >>> This looks familiar ... let me do some digging .. something from the CPU >>> (security update) ... >> >> Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset >> still isn't in? > > That would be it. :) But the infrastructure that patch needs (f2_as_interface_method()) is not there. So how can it work? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From glaubitz at physik.fu-berlin.de Tue Feb 6 11:22:06 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 12:22:06 +0100 Subject: Something external breaks Zero In-Reply-To: <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> Message-ID: <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> On 02/06/2018 12:16 PM, Andrew Haley wrote: >>> Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset >>> still isn't in? >> >> That would be it. :) > > But the infrastructure that patch needs (f2_as_interface_method()) is not > there. So how can it work? It is. My tree was outdated. Re-testing now. -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Tue Feb 6 11:26:40 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 12:26:40 +0100 Subject: Something external breaks Zero In-Reply-To: <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> Message-ID: Hi again! On 02/06/2018 12:22 PM, John Paul Adrian Glaubitz wrote: > On 02/06/2018 12:16 PM, Andrew Haley wrote: >>>> Maybe the patch at http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset >>>> still isn't in? >>> >>> That would be it. :) >> >> But the infrastructure that patch needs (f2_as_interface_method()) is not >> there.? So how can it work? > > It is. My tree was outdated. Re-testing now. The patch applies cleanly to the JDK branch and fixes the problem for me. 
@Andrew: Can you push this fix or should I? Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From david.holmes at oracle.com Tue Feb 6 11:34:24 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 6 Feb 2018 21:34:24 +1000 Subject: Something external breaks Zero In-Reply-To: References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> Message-ID: <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> On 6/02/2018 9:26 PM, John Paul Adrian Glaubitz wrote: > Hi again! > > On 02/06/2018 12:22 PM, John Paul Adrian Glaubitz wrote: >> On 02/06/2018 12:16 PM, Andrew Haley wrote: >>>>> Maybe the patch at >>>>> http://cr.openjdk.java.net/~aph/8194739-jdk10/jdk10.changeset >>>>> still isn't in? >>>> >>>> That would be it. :) >>> >>> But the infrastructure that patch needs (f2_as_interface_method()) is >>> not >>> there. So how can it work? >> >> It is. My tree was outdated. Re-testing now. > > The patch applies cleanly to the JDK branch and fixes the problem for me. > > @Andrew: Can you push this fix or should I? The fix is traversing from jdk/jdk10 -> jdk/jdk -> jdk/hs. It can be directly exported from jdk/jdk and imported into jdk/hs if you can't wait for the next sync down (which should be soon, we've been waiting for some testing issues to settle down before muddying the waters with more changes).
David > Adrian > From glaubitz at physik.fu-berlin.de Tue Feb 6 12:07:36 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 13:07:36 +0100 Subject: Something external breaks Zero In-Reply-To: <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> Message-ID: <1ba76ed0-9d33-ba37-682d-1a3712f6fe5d@physik.fu-berlin.de> On 02/06/2018 12:34 PM, David Holmes wrote: > The fix is traversing from jdk/jdk10 -> jdk/jdk -> jdk/hs. It can be directly exported from jdk/jdk and imported into jdk/hs if you can't wait for the next sync down (which should be soon, we've been waiting for some testing issues to settle down before muddying the waters with more changes). Ah, if that happens automatically, then I'm fine with waiting :). Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From coleen.phillimore at oracle.com Tue Feb 6 14:30:34 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 6 Feb 2018 09:30:34 -0500 Subject: Something external breaks Zero In-Reply-To: <1ba76ed0-9d33-ba37-682d-1a3712f6fe5d@physik.fu-berlin.de> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> <1ba76ed0-9d33-ba37-682d-1a3712f6fe5d@physik.fu-berlin.de> Message-ID: <065fa768-a2e2-8bbb-43ad-5517ba5a803b@oracle.com> iirc f2_as_interface_method was part of the fix and is in jdk/hs, so I think you can push the zero change to jdk/hs. I reviewed it also.
thanks, Coleen On 2/6/18 7:07 AM, John Paul Adrian Glaubitz wrote: > On 02/06/2018 12:34 PM, David Holmes wrote: >> The fix is traversing from jdk/jdk10 -> jdk/jdk -> jdk/hs. It can be >> directly exported from jdk/jdk and imported into jdk/hs if you can't >> wait for the next sync down (which should be soon, we've been waiting >> for some testing issues to settle down before muddying the waters >> with more changes). > > Ah, if that happens automatically, then I'm fine with waiting :). > > Thanks, > Adrian > From daniel.daugherty at oracle.com Tue Feb 6 14:46:24 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 6 Feb 2018 09:46:24 -0500 Subject: Something external breaks Zero In-Reply-To: <065fa768-a2e2-8bbb-43ad-5517ba5a803b@oracle.com> References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> <1ba76ed0-9d33-ba37-682d-1a3712f6fe5d@physik.fu-berlin.de> <065fa768-a2e2-8bbb-43ad-5517ba5a803b@oracle.com> Message-ID: Jesper's sync-down from jdk/jdk -> jdk/hs landed about 30 minutes ago... That sync-down includes the fix for JDK-8194739... Dan On 2/6/18 9:30 AM, coleen.phillimore at oracle.com wrote: > > iirc f2_as_interface_method was part of the fix and is in jdk/hs, so I > think you can push the zero change to jdk/hs. I reviewed it also. > > thanks, > Coleen > > On 2/6/18 7:07 AM, John Paul Adrian Glaubitz wrote: >> On 02/06/2018 12:34 PM, David Holmes wrote: >>> The fix is traversing from jdk/jdk10 -> jdk/jdk -> jdk/hs. It can be >>> directly exported from jdk/jdk and imported into jdk/hs if you >>> can't wait for the next sync down (which should be soon, we've been >>> waiting for some testing issues to settle down before muddying the >>> waters with more changes). >> >> Ah, if that happens automatically, then I'm fine with waiting :).
>> >> Thanks, >> Adrian >> > > From glaubitz at physik.fu-berlin.de Tue Feb 6 17:49:29 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 6 Feb 2018 18:49:29 +0100 Subject: Something external breaks Zero In-Reply-To: References: <8c049316-3c86-5821-7fae-48f891b4d0cf@redhat.com> <13cb691b-15b1-c2b0-61e8-27c668d319fb@redhat.com> <881b4b08-e230-2621-89da-4989c563f613@physik.fu-berlin.de> <65caf152-7b77-fbe1-a115-06baf334581d@oracle.com> <1ba76ed0-9d33-ba37-682d-1a3712f6fe5d@physik.fu-berlin.de> <065fa768-a2e2-8bbb-43ad-5517ba5a803b@oracle.com> Message-ID: <8d5ee6c0-f1b4-3a89-06c4-c776efc6184f@physik.fu-berlin.de> On 02/06/2018 03:46 PM, Daniel D. Daugherty wrote: > Jesper's sync-down from jdk/jdk -> jdk/hs landed about 30 minutes ago... > That sync-down includes the fix for JDK-8194739... Indeed. After another "hg pull" and "hg update --clean" it works again :-). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From jesper.wilhelmsson at oracle.com Wed Feb 7 02:55:51 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 7 Feb 2018 03:55:51 +0100 Subject: RFR: JDK-8196924 - [BACKOUT] NMT: Report array class count in NMT summary Message-ID: Hi, Please review this backout of JDK-8193184 that has caused build failures. Bug: https://bugs.openjdk.java.net/browse/JDK-8196924 Webrev: http://cr.openjdk.java.net/~jwilhelm/8196924/webrev.00/ Thanks, /Jesper From david.holmes at oracle.com Wed Feb 7 03:09:22 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 7 Feb 2018 13:09:22 +1000 Subject: RFR: JDK-8196924 - [BACKOUT] NMT: Report array class count in NMT summary In-Reply-To: References: Message-ID: <3b23b213-3273-9356-a4d6-f27240970e38@oracle.com> Looks good. Thanks for taking care of this. 
David On 7/02/2018 12:55 PM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review this backout of JDK-8193184 that has caused build failures. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8196924 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8196924/webrev.00/ > > Thanks, > /Jesper > From jesper.wilhelmsson at oracle.com Wed Feb 7 03:09:56 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 7 Feb 2018 04:09:56 +0100 Subject: RFR: JDK-8196924 - [BACKOUT] NMT: Report array class count in NMT summary In-Reply-To: <3b23b213-3273-9356-a4d6-f27240970e38@oracle.com> References: <3b23b213-3273-9356-a4d6-f27240970e38@oracle.com> Message-ID: <6AB02E69-967D-47BD-924C-CE65A7A108DB@oracle.com> Thanks David! /Jesper > On 7 Feb 2018, at 04:09, David Holmes wrote: > > Looks good. Thanks for taking care of this. > > David > > On 7/02/2018 12:55 PM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> Please review this backout of JDK-8193184 that has caused build failures. >> Bug: https://bugs.openjdk.java.net/browse/JDK-8196924 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8196924/webrev.00/ >> Thanks, >> /Jesper From mikhailo.seledtsov at oracle.com Wed Feb 7 03:28:48 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Tue, 06 Feb 2018 19:28:48 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <930717626b314defb436c1947ecd5ef6@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> <930717626b314defb436c1947ecd5ef6@sap.com> Message-ID: <5A7A7270.1070804@oracle.com> I am running pre-integration testing; will push unless the testing finds any issues. 
Regards, Misha On 2/5/18, 11:50 PM, Baesken, Matthias wrote: > I only had to correct some whitespace changes found by hg jcheck , updated : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.2/ > > The fix has been reviewed by goetz, dsamersoff, bobv . > Feedback was added, Summary updated too (suggested by Goetz). > > Tested with docker on SLES 12.1 / Ubuntu based container . > > Best regards, Matthias > > >> -----Original Message----- >> From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] >> Sent: Samstag, 3. Februar 2018 00:01 >> To: Baesken, Matthias >> Cc: Bob Vandette; Lindenmaier, Goetz >> ; hotspot-dev at openjdk.java.net; Langer, >> Christoph; Doerr, Martin >> ; Dmitry Samersoff> sw.com> >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Hi Matthias, >> >> I can sponsor your change if you'd like. >> Once you addressed all the feedback from code review, please sync to the >> tip, build and test. >> Then export the changeset and send it to me (see: >> http://openjdk.java.net/sponsor/) >> >> I will import your change set, run all required testing and push the change. >> >> >> Thank you, >> Misha >> >> On 2/2/18, 12:39 AM, Baesken, Matthias wrote: >>> Thanks for the reviews . >>> >>> I added info about the fix for /proc/self/cgroup and /proc/self/mountinfo >> parsing to the bug : >>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>> >>> Guess I need a sponsor now to get it pushed ? >>> >>> >>> Best regards, Matthias >>> >>> >>> >>>> -----Original Message----- >>>> From: Bob Vandette [mailto:bob.vandette at oracle.com] >>>> Sent: Donnerstag, 1. Februar 2018 17:53 >>>> To: Lindenmaier, Goetz >>>> Cc: Baesken, Matthias; mikhailo >>>> ; hotspot-dev at openjdk.java.net; >> Langer, >>>> Christoph; Doerr, Martin >>>> ; Dmitry Samersoff>>> sw.com> >>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >>>> ppc64le >>>> >>>> Looks good to me. >>>> >>>> Bob. 
>>>> >>>>> On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz >>>> wrote: >>>>> Hi Matthias, >>>>> >>>>> thanks for enabling this test. Looks good. >>>>> I would appreciate if you would add a line >>>>> "Summary: also fix cgroup subsystem recognition" >>>>> to the bug description. Else this might be mistaken >>>>> for a mere testbug. >>>>> >>>>> Best regards, >>>>> Goetz. >>>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Baesken, Matthias >>>>>> Sent: Mittwoch, 31. Januar 2018 15:15 >>>>>> To: mikhailo; Bob Vandette >>>>>> >>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>> ; Langer, Christoph >>>>>> ; Doerr, Martin; >>>>>> Dmitry Samersoff >>>>>> Subject: RE: RFR : 8196062 : Enable docker container related tests for >> linux >>>>>> ppc64le >>>>>> >>>>>> Hello , I created a second webrev : >>>>>> >>>>>> >>>>>> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr >>>>>> ev/ >>>>>> >>>>>> - changed DockerTestUtils.buildJdkDockerImage in the suggested way >>>> (this >>>>>> should be extendable to linux s390x soon) >>>>>> >>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>> messages >>>>>> - added returns in the tests for the subsystems in >> osContainer_linux.cpp >>>>>> - moved some checks at the beginning of subsystem_file_contents >>>>>> (suggested by Dmitry) >>>>>> >>>>>> >>>>>> Best regards, Matthias >>>>>> >>>>>> >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>> Sent: Donnerstag, 25. 
Januar 2018 18:43 >>>>>>> To: Baesken, Matthias; Bob Vandette >>>>>>> >>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>> ; Langer, Christoph >>>>>>> ; Doerr, Martin >>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests for >>>> linux >>>>>>> ppc64le >>>>>>> >>>>>>> Hi Matthias, >>>>>>> >>>>>>> >>>>>>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>>>>>>> Perhaps, you could add code to >>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>> that does the following or similar: >>>>>>>>> 1. Construct a name for platform-specific docker file: >>>>>>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>>>>>> Platform.getOsArch(); >>>>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>>>> >>>>>>>> Hello, the doc says : >>>>>>>> >>>>>>>> * Build a docker image that contains JDK under test. >>>>>>>> * The jdk will be placed under the "/jdk/" folder inside the docker >>>> file >>>>>>> system. >>>>>>>> ..... >>>>>>>> param dockerfile name of the dockerfile residing in the test >> source >>>>>>>> ..... >>>>>>>> public static void buildJdkDockerImage(String imageName, String >>>>>>> dockerfile, String buildDirName) >>>>>>>> >>>>>>>> It does not say anything about doing hidden insertions of some >>>> platform >>>>>>> names into the dockerfile name. >>>>>>>> So should the jtreg API doc be changed ? >>>>>>>> If so who needs to approve this ? >>>>>>> Thank you for your concerns about the clarity of API and >> corresponding >>>>>>> documentation. This is a test library API, so no need to file CCC or CSR. >>>>>>> >>>>>>> This API can be changed via a regular RFR/webrev review process, as >>>> soon >>>>>>> as on one objects. I am a VM SQE engineer covering the docker and >>>> Linux >>>>>>> container area, I am OK with this change. >>>>>>> And I agree with you, we should update the javadoc header on this >>>>>> method >>>>>>> to reflect this implicit part of API contract. 
>>>>>>> >>>>>>> >>>>>>> Thank you, >>>>>>> Misha >>>>>>> >>>>>>> >>>>>>> >>>>>>>> (as far as I see so far only the test at >>>>>>> hotspot/jtreg/runtime/containers/docker/ use this so it should not >> be >>>> a >>>>>> big >>>>>>> deal to change the interface?) >>>>>>>> Best regards, Matthias >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> -----Original Message----- >>>>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>>>>>>> To: Bob Vandette; Baesken, Matthias >>>>>>>>> >>>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>>> ; Langer, Christoph >>>>>>>>> ; Doerr, Martin >>>> >>>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests >> for >>>>>> linux >>>>>>>>> ppc64le >>>>>>>>> >>>>>>>>> Hi Matthias, >>>>>>>>> >>>>>>>>> Please see my comments about the test changes inline. >>>>>>>>> >>>>>>>>> >>>>>>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>>>>>>> osContainer_linux.cpp: >>>>>>>>>> >>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>> messages >>>>>>> and >>>>>>>>>> remove these 3 lines OR move your tests for NULL& messages >>>> inside. >>>>>>> The >>>>>>>>> compiler can >>>>>>>>>> probably optimize this but I?d prefer more compact code. >>>>>>>>>> >>>>>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct >> == >>>>>>> NULL) >>>>>>>>> { >>>>>>>>>> 342 return; >>>>>>>>>> 343 } >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The other changes in osContainer_linux.cpp look ok. >>>>>>>>>> >>>>>>>>>> I forwarded your test changes to Misha, who wrote these. >>>>>>>>>> >>>>>>>>>> Since it?s likely that other platforms, such as aarch64, are going to >> run >>>>>>> into >>>>>>>>> the same problem, >>>>>>>>>> It would have been better to enable the tests based on the >>>> existence >>>>>> of >>>>>>> an >>>>>>>>> arch specific >>>>>>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific arch?s >> in >>>>>>>>> VPProps.java. 
>>>>>>>>>> This approach would reduce the number of changes significantly >> and >>>>>>> allow >>>>>>>>> support to >>>>>>>>>> be added with 1 new file. >>>>>>>>>> >>>>>>>>>> You wouldn?t need "String dockerFileName = >>>>>>>>> Common.getDockerFileName();? >>>>>>>>>> in every test. Just make DockerTestUtils automatically add arch. >>>>>>>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>>>>>>> >>>>>>>>> Perhaps, you could add code to >>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>> that does the following or similar: >>>>>>>>> 1. Construct a name for platform-specific docker file: >>>>>>>>> String platformSpecificDockerfile = dockerfile + "-" + >>>>>>>>> Platform.getOsArch(); >>>>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>>>> >>>>>>>>> 2. Check if platformSpecificDockerfile file exists in the test >>>>>>>>> source directory >>>>>>>>> File.exists(Paths.get(Utils.TEST_SRC, >> platformSpecificDockerFile) >>>>>>>>> If it does, then use it. Otherwise continue using the >>>>>>>>> default/original dockerfile name. >>>>>>>>> >>>>>>>>> I think this will considerably simplify your change, as well as make it >>>>>>>>> easy to extend support to other platforms/configurations >>>>>>>>> in the future. Let us know what you think of this approach ? >>>>>>>>> >>>>>>>>> >>>>>>>>> Once your change gets (R)eviewed and approved, I can sponsor >> the >>>>>> push. >>>>>>>>> Thank you, >>>>>>>>> Misha >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> Bob. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>>>>>>> wrote: >>>>>>>>>>> Hello, could you please review the following change : 8196062 : >>>>>> Enable >>>>>>>>> docker container related tests for linux ppc64le . >>>>>>>>>>> It adds docker container testing for linux ppc64 le (little >> endian) . >>>>>>>>>>> A number of things had to be done : >>>>>>>>>>> ? 
Add a separate docker file >>>>>>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile- >> BasicTest- >>>>>>> ppc64le >>>>>>>>> for linux ppc64 le which uses Ubuntu ( the Oracle Linux 7.2 used >>>> for >>>>>>>>> x86_64 seems not to be available for ppc64le ) >>>>>>>>>>> ? Fix parsing /proc/self/mountinfo and /proc/self/cgroup >>>>>> in >>>>>>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could not handle >>>> the >>>>>>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu >> (Docker >>>>>>>>> container) >>>>>>>>>>> ? Add a bit more logging >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Webrev : >>>>>>>>>>> >>>>>>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Bug : >>>>>>>>>>> >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> After these adjustments I could run the >>>> runtime/containers/docker >>>>>>> - >>>>>>>>> jtreg tests successfully . >>>>>>>>>>> Best regards, Matthias From mikhailo.seledtsov at oracle.com Wed Feb 7 05:09:11 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Tue, 06 Feb 2018 21:09:11 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <5A7A7270.1070804@oracle.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> <930717626b314defb436c1947ecd5ef6@sap.com> <5A7A7270.1070804@oracle.com> Message-ID: <5A7A89F7.3060509@oracle.com> Hi Matthias, Unfortunately one test failed during the pre-integration testing. The following test failed during the pre-integration testing: open/test/hotspot/jtreg/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java Reproducible: 100%, Linux and MAC Failure: TEST RESULT: Failed. 
Execution failed: `main' threw exception: java.lang.RuntimeException: All Platform's methods with signature '():Z' should be tested. Missing: isPPC64le: expected true, was false See my comments in the RFE for details, and a suggested fix. Thank you, Misha On 2/6/18, 7:28 PM, Mikhailo Seledtsov wrote: > I am running pre-integration testing; will push unless the testing > finds any issues. > > Regards, > Misha > > On 2/5/18, 11:50 PM, Baesken, Matthias wrote: >> I only had to correct some whitespace changes found by hg jcheck , >> updated : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.2/ >> >> The fix has been reviewed by goetz, dsamersoff, bobv . >> Feedback was added, Summary updated too (suggested by Goetz). >> >> Tested with docker on SLES 12.1 / Ubuntu based container . >> >> Best regards, Matthias >> >> >>> -----Original Message----- >>> From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] >>> Sent: Samstag, 3. Februar 2018 00:01 >>> To: Baesken, Matthias >>> Cc: Bob Vandette; Lindenmaier, Goetz >>> ; hotspot-dev at openjdk.java.net; Langer, >>> Christoph; Doerr, Martin >>> ; Dmitry Samersoff>> sw.com> >>> Subject: Re: RFR : 8196062 : Enable docker container related tests >>> for linux >>> ppc64le >>> >>> Hi Matthias, >>> >>> I can sponsor your change if you'd like. >>> Once you addressed all the feedback from code review, please sync to >>> the >>> tip, build and test. >>> Then export the changeset and send it to me (see: >>> http://openjdk.java.net/sponsor/) >>> >>> I will import your change set, run all required testing and push the >>> change. >>> >>> >>> Thank you, >>> Misha >>> >>> On 2/2/18, 12:39 AM, Baesken, Matthias wrote: >>>> Thanks for the reviews . >>>> >>>> I added info about the fix for /proc/self/cgroup and >>>> /proc/self/mountinfo >>> parsing to the bug : >>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>> >>>> Guess I need a sponsor now to get it pushed ? 
>>>> >>>> >>>> Best regards, Matthias >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: Bob Vandette [mailto:bob.vandette at oracle.com] >>>>> Sent: Donnerstag, 1. Februar 2018 17:53 >>>>> To: Lindenmaier, Goetz >>>>> Cc: Baesken, Matthias; mikhailo >>>>> ; hotspot-dev at openjdk.java.net; >>> Langer, >>>>> Christoph; Doerr, Martin >>>>> ; Dmitry Samersoff>>>> sw.com> >>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests >>>>> for linux >>>>> ppc64le >>>>> >>>>> Looks good to me. >>>>> >>>>> Bob. >>>>> >>>>>> On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz >>>>> wrote: >>>>>> Hi Matthias, >>>>>> >>>>>> thanks for enabling this test. Looks good. >>>>>> I would appreciate if you would add a line >>>>>> "Summary: also fix cgroup subsystem recognition" >>>>>> to the bug description. Else this might be mistaken >>>>>> for a mere testbug. >>>>>> >>>>>> Best regards, >>>>>> Goetz. >>>>>> >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: Baesken, Matthias >>>>>>> Sent: Mittwoch, 31. 
Januar 2018 15:15 >>>>>>> To: mikhailo; Bob Vandette >>>>>>> >>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>> ; Langer, Christoph >>>>>>> ; Doerr, Martin; >>>>>>> Dmitry Samersoff >>>>>>> Subject: RE: RFR : 8196062 : Enable docker container related >>>>>>> tests for >>> linux >>>>>>> ppc64le >>>>>>> >>>>>>> Hello , I created a second webrev : >>>>>>> >>>>>>> >>>>>>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr >>>>>>> ev/ >>>>>>> >>>>>>> - changed DockerTestUtils.buildJdkDockerImage in the suggested >>>>>>> way >>>>> (this >>>>>>> should be extendable to linux s390x soon) >>>>>>> >>>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>>> messages >>>>>>> - added returns in the tests for the subsystems in >>> osContainer_linux.cpp >>>>>>> - moved some checks at the beginning of subsystem_file_contents >>>>>>> (suggested by Dmitry) >>>>>>> >>>>>>> >>>>>>> Best regards, Matthias >>>>>>> >>>>>>> >>>>>>> >>>>>>>> -----Original Message----- >>>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>>> Sent: Donnerstag, 25. Januar 2018 18:43 >>>>>>>> To: Baesken, Matthias; Bob Vandette >>>>>>>> >>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>> ; Langer, Christoph >>>>>>>> ; Doerr, Martin >>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related >>>>>>>> tests for >>>>> linux >>>>>>>> ppc64le >>>>>>>> >>>>>>>> Hi Matthias, >>>>>>>> >>>>>>>> >>>>>>>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>>>>>>>> Perhaps, you could add code to >>>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>>> that does the following or similar: >>>>>>>>>> 1. 
Construct a name for platform-specific docker file: >>>>>>>>>> String platformSpecificDockerfile = dockerfile >>>>>>>>>> + "-" + >>>>>>>>>> Platform.getOsArch(); >>>>>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>>>>> >>>>>>>>> Hello, the doc says : >>>>>>>>> >>>>>>>>> * Build a docker image that contains JDK under test. >>>>>>>>> * The jdk will be placed under the "/jdk/" folder >>>>>>>>> inside the docker >>>>> file >>>>>>>> system. >>>>>>>>> ..... >>>>>>>>> param dockerfile name of the dockerfile residing in >>>>>>>>> the test >>> source >>>>>>>>> ..... >>>>>>>>> public static void buildJdkDockerImage(String >>>>>>>>> imageName, String >>>>>>>> dockerfile, String buildDirName) >>>>>>>>> >>>>>>>>> It does not say anything about doing hidden insertions of some >>>>> platform >>>>>>>> names into the dockerfile name. >>>>>>>>> So should the jtreg API doc be changed ? >>>>>>>>> If so who needs to approve this ? >>>>>>>> Thank you for your concerns about the clarity of API and >>> corresponding >>>>>>>> documentation. This is a test library API, so no need to file >>>>>>>> CCC or CSR. >>>>>>>> >>>>>>>> This API can be changed via a regular RFR/webrev review >>>>>>>> process, as >>>>> soon >>>>>>>> as on one objects. I am a VM SQE engineer covering the docker and >>>>> Linux >>>>>>>> container area, I am OK with this change. >>>>>>>> And I agree with you, we should update the javadoc header on this >>>>>>> method >>>>>>>> to reflect this implicit part of API contract. >>>>>>>> >>>>>>>> >>>>>>>> Thank you, >>>>>>>> Misha >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> (as far as I see so far only the test at >>>>>>>> hotspot/jtreg/runtime/containers/docker/ use this so it >>>>>>>> should not >>> be >>>>> a >>>>>>> big >>>>>>>> deal to change the interface?) >>>>>>>>> Best regards, Matthias >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> -----Original Message----- >>>>>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>>>>> Sent: Mittwoch, 24. 
Januar 2018 20:09 >>>>>>>>>> To: Bob Vandette; Baesken, Matthias >>>>>>>>>> >>>>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>>>> ; Langer, Christoph >>>>>>>>>> ; Doerr, Martin >>>>> >>>>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related >>>>>>>>>> tests >>> for >>>>>>> linux >>>>>>>>>> ppc64le >>>>>>>>>> >>>>>>>>>> Hi Matthias, >>>>>>>>>> >>>>>>>>>> Please see my comments about the test changes inline. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>>>>>>>> osContainer_linux.cpp: >>>>>>>>>>> >>>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>>> messages >>>>>>>> and >>>>>>>>>>> remove these 3 lines OR move your tests for NULL& messages >>>>> inside. >>>>>>>> The >>>>>>>>>> compiler can >>>>>>>>>>> probably optimize this but I'd prefer more compact code. >>>>>>>>>>> >>>>>>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || cpuacct >>> == >>>>>>>> NULL) >>>>>>>>>> { >>>>>>>>>>> 342 return; >>>>>>>>>>> 343 } >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The other changes in osContainer_linux.cpp look ok. >>>>>>>>>>> >>>>>>>>>>> I forwarded your test changes to Misha, who wrote these. >>>>>>>>>>> >>>>>>>>>>> Since it's likely that other platforms, such as aarch64, are >>>>>>>>>>> going to >>> run >>>>>>>> into >>>>>>>>>> the same problem, >>>>>>>>>>> It would have been better to enable the tests based on the >>>>> existence >>>>>>> of >>>>>>>> an >>>>>>>>>> arch specific >>>>>>>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific >>>>>>>>>>> arch's >>> in >>>>>>>>>> VPProps.java. >>>>>>>>>>> This approach would reduce the number of changes significantly >>> and >>>>>>>> allow >>>>>>>>>> support to >>>>>>>>>>> be added with 1 new file. >>>>>>>>>>> >>>>>>>>>>> You wouldn't need "String dockerFileName = >>>>>>>>>> Common.getDockerFileName();" >>>>>>>>>>> in every test. Just make DockerTestUtils automatically add >>>>>>>>>>> arch. 
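Bob's suggestion quoted above — look for an architecture-specific Dockerfile first and fall back to the default — can be sketched compactly. This is a hedged sketch, not the actual DockerTestUtils code: the helper name resolveDockerfile and the plain java.nio.file calls are illustrative, while the real library would resolve against the jtreg test source directory (Utils.TEST_SRC) using Platform.getOsArch():

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class DockerfileLookup {
    // Pick "<dockerfile>-<osArch>" from srcDir when such a file exists,
    // otherwise fall back to the plain dockerfile name.
    public static String resolveDockerfile(Path srcDir, String dockerfile, String osArch) {
        String platformSpecific = dockerfile + "-" + osArch;
        return Files.exists(srcDir.resolve(platformSpecific))
                ? platformSpecific
                : dockerfile;
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempDirectory("docker-src");
        Files.createFile(src.resolve("Dockerfile-BasicTest-ppc64le"));

        // ppc64le has its own Dockerfile; amd64 falls back to the default.
        System.out.println(resolveDockerfile(src, "Dockerfile-BasicTest", "ppc64le"));
        // → Dockerfile-BasicTest-ppc64le
        System.out.println(resolveDockerfile(src, "Dockerfile-BasicTest", "amd64"));
        // → Dockerfile-BasicTest
    }
}
```

With this shape, supporting a new platform (s390x, aarch64, ...) is a matter of dropping in one Dockerfile-BasicTest-&lt;arch&gt; file, which is the simplification the rest of the thread settles on.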
>>>>>>>>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>>>>>>>> >>>>>>>>>> Perhaps, you could add code to >>>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>>> that does the following or similar: >>>>>>>>>> 1. Construct a name for platform-specific docker file: >>>>>>>>>> String platformSpecificDockerfile = dockerfile >>>>>>>>>> + "-" + >>>>>>>>>> Platform.getOsArch(); >>>>>>>>>> (Platform is jdk.test.lib.Platform) >>>>>>>>>> >>>>>>>>>> 2. Check if platformSpecificDockerfile file exists in >>>>>>>>>> the test >>>>>>>>>> source directory >>>>>>>>>> File.exists(Paths.get(Utils.TEST_SRC, >>> platformSpecificDockerFile) >>>>>>>>>> If it does, then use it. Otherwise continue >>>>>>>>>> using the >>>>>>>>>> default/original dockerfile name. >>>>>>>>>> >>>>>>>>>> I think this will considerably simplify your change, as well >>>>>>>>>> as make it >>>>>>>>>> easy to extend support to other platforms/configurations >>>>>>>>>> in the future. Let us know what you think of this approach ? >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Once your change gets (R)eviewed and approved, I can sponsor >>> the >>>>>>> push. >>>>>>>>>> Thank you, >>>>>>>>>> Misha >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Bob. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>>>>>>>> wrote: >>>>>>>>>>>> Hello, could you please review the following change : >>>>>>>>>>>> 8196062 : >>>>>>> Enable >>>>>>>>>> docker container related tests for linux ppc64le . >>>>>>>>>>>> It adds docker container testing for linux ppc64 le >>>>>>>>>>>> (little >>> endian) . >>>>>>>>>>>> A number of things had to be done : >>>>>>>>>>>> ? Add a separate docker file >>>>>>>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile- >>> BasicTest- >>>>>>>> ppc64le >>>>>>>>>> for linux ppc64 le which uses Ubuntu ( the Oracle >>>>>>>>>> Linux 7.2 used >>>>> for >>>>>>>>>> x86_64 seems not to be available for ppc64le ) >>>>>>>>>>>> ? 
Fix parsing /proc/self/mountinfo and >>>>>>>>>>>> /proc/self/cgroup >>>>>>> in >>>>>>>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could >>>>>>>>>> not handle >>>>> the >>>>>>>>>> format seen on SUSE LINUX 12.1 ppc64le (Host) and Ubuntu >>> (Docker >>>>>>>>>> container) >>>>>>>>>>>> ? Add a bit more logging >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Webrev : >>>>>>>>>>>> >>>>>>>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Bug : >>>>>>>>>>>> >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> After these adjustments I could run the >>>>> runtime/containers/docker >>>>>>>> - >>>>>>>>>> jtreg tests successfully . >>>>>>>>>>>> Best regards, Matthias From matthias.baesken at sap.com Wed Feb 7 16:12:34 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 7 Feb 2018 16:12:34 +0000 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <5A7A89F7.3060509@oracle.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> <930717626b314defb436c1947ecd5ef6@sap.com> <5A7A7270.1070804@oracle.com> <5A7A89F7.3060509@oracle.com> Message-ID: <6d4be4c88ddb4e8c93d48f39db4a9cb4@sap.com> Hi Mikhailo, sorry for causing the issue. Looks like the change to test/lib/jdk/test/lib/Platform.java had unexpected consequences . I created a new webrev without the Platform.java change : http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.3/ Best regards, Matthias > -----Original Message----- > From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] > Sent: Mittwoch, 7. 
Februar 2018 06:09 > To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR : 8196062 : Enable docker container related tests for linux > ppc64le > > Hi Matthias, > > Unfortunately one test failed during the pre-integration testing. > The following test failed during the pre-integration testing: > open/test/hotspot/jtreg/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java > > Reproducible: 100%, Linux and MAC > > Failure: TEST RESULT: Failed. Execution failed: `main' threw exception: > java.lang.RuntimeException: All Platform's methods with signature '():Z' > should be tested. Missing: isPPC64le: expected true, was false > > See my comments in the RFE for details, and a suggested fix. > > Thank you, > Misha

From mikhailo.seledtsov at oracle.com Wed Feb 7 18:24:08 2018 From: mikhailo.seledtsov at oracle.com (mikhailo) Date: Wed, 7 Feb 2018 10:24:08 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <6d4be4c88ddb4e8c93d48f39db4a9cb4@sap.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> <930717626b314defb436c1947ecd5ef6@sap.com> <5A7A7270.1070804@oracle.com> <5A7A89F7.3060509@oracle.com> <6d4be4c88ddb4e8c93d48f39db4a9cb4@sap.com> Message-ID: <389da8f2-df49-93a7-f141-8c17e58bd922@oracle.com>

Thank you for the update. I will import your latest webrev, and start the testing. Regard, Misha

On 02/07/2018 08:12 AM, Baesken, Matthias wrote: > Hi Mikhailo, sorry for causing the issue. > Looks like the change to test/lib/jdk/test/lib/Platform.java had unexpected consequences . > I created a new webrev without the Platform.java change : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.3/ > > Best regards, Matthias > >> -----Original Message----- >> From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] >> Sent: Mittwoch, 7. Februar 2018 06:09 >> To: Baesken, Matthias >> Cc: hotspot-dev at openjdk.java.net >> Subject: Re: RFR : 8196062 : Enable docker container related tests for linux >> ppc64le >> >> Hi Matthias, >> >> Unfortunately one test failed during the pre-integration testing. >> The following test failed during the pre-integration testing: >> open/test/hotspot/jtreg/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java >> >> Reproducible: 100%, Linux and MAC >> >> Failure: TEST RESULT: Failed.
Execution failed: `main' threw exception: >> java.lang.RuntimeException: All Platform's methods with signature '():Z' >> should be tested. Missing: isPPC64le: expected true, was false >> >> See my comments in the RFE for details, and a suggested fix. >> >> Thank you, >> Misha

From jschlather at hubspot.com Wed Feb 7 19:27:47 2018 From: jschlather at hubspot.com (Jacob Schlather) Date: Wed, 7 Feb 2018 14:27:47 -0500 Subject: Native Memory Leak Message-ID: I hope this is the right mailing list. I've been tracking down a memory leak that seems to be related to https://bugs.java.com/view_bug.do?bug_id=8162795. Running on Java 1.8.0_144 with native memory tracking enabled, we've been seeing the internal memory size grow until the kernel kills the JVM for using too much memory.
Using the native memory detail diff tool I've been able to find the following items that seem to be growing, one being a MemberNameTable

[0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d
[0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66
[0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84
[0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, CallInfo&)+0x28d
(malloc=106502KB +106081KB #19)

and another being some JNI allocations

[0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa
[0x00007efffad36830] JavaThread::run()+0xb0
[0x00007efffabe7338] java_start(Thread*)+0x108
(malloc=23903KB +23697KB #78452 +77776)

[0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa
[0x00007efffa94f7fb] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, JavaValue*, Thread*)+0x6b
[0x00007efffa9506c4] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x884
[0x00007efffa904001] InstanceKlass::register_finalizer(instanceOopDesc*, Thread*)+0xf1
(malloc=19770KB +19727KB #64887 +64745)

[0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa
[0x00007efffa94f7fb] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, JavaValue*, Thread*)+0x6b
[0x00007efffa9506c4] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x884
[0x00007efffa9513a1] JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x321
(malloc=29794KB +29563KB #97786 +97028)

Having found these I could use some guidance on what to look at next and what sorts of calls could potentially be causing these issues. We don't have any explicit uses of JNI that I can tell in our code bases. Let me know if there's any more information that would be helpful.
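For readers following along: per-call-site growth figures like the ones above come from Native Memory Tracking's baseline/diff facility. The usual workflow is roughly the following; the application jar and the PID are placeholders:

```shell
# Start the JVM with detailed native memory tracking enabled
# (app.jar stands in for the actual application)
java -XX:NativeMemoryTracking=detail -jar app.jar

# Once the process is warmed up, record a baseline
jcmd <pid> VM.native_memory baseline

# Later, print per-allocation-site growth relative to the baseline
jcmd <pid> VM.native_memory detail.diff
```

Note that detail mode adds some memory and CPU overhead of its own, so `-XX:NativeMemoryTracking=summary` together with `VM.native_memory summary.diff` is a cheaper first pass.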
From paul.sandoz at oracle.com Wed Feb 7 20:21:26 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 7 Feb 2018 12:21:26 -0800 Subject: RFR 8196960 Exceptions in ConstantBootstrapsTest.java on SPARC Message-ID: <229649AE-824C-4889-8677-64A363841498@oracle.com> Hi, Please review this patch to exclude ConstantBootstrapsTest.java from executing on SPARC platforms. When condy is supported on SPARC this test can be updated to remove the restriction. Opportunistically I am also fixing an error in the @since on ConstantBootstraps. Thanks, Paul.

diff -r 45b6aae769cc src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java
--- a/src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java Wed Feb 07 16:03:12 2018 +0100
+++ b/src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java Wed Feb 07 10:42:32 2018 -0800
@@ -37,7 +37,7 @@
  * unless the argument is specified to be unused or specified to accept a
  * {@code null} value.
  *
- * @since 10
+ * @since 11
  */
 public final class ConstantBootstraps {
     // implements the upcall from the JVM, MethodHandleNatives.linkDynamicConstant:
diff -r 45b6aae769cc test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java
--- a/test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java Wed Feb 07 16:03:12 2018 +0100
+++ b/test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java Wed Feb 07 10:42:32 2018 -0800
@@ -25,6 +25,7 @@
  * @test
  * @bug 8186046 8195694
  * @summary Test dynamic constant bootstraps
+ * @requires os.arch != "sparcv9"
  * @library /lib/testlibrary/bytecode /java/lang/invoke/common
  * @build jdk.experimental.bytecode.BasicClassBuilder test.java.lang.invoke.lib.InstructionHelper
  * @run testng ConstantBootstrapsTest

From lois.foltan at oracle.com Wed Feb 7 20:23:15 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 7 Feb 2018 15:23:15 -0500 Subject: RFR 8196960 Exceptions in ConstantBootstrapsTest.java on SPARC In-Reply-To: <229649AE-824C-4889-8677-64A363841498@oracle.com>
References: <229649AE-824C-4889-8677-64A363841498@oracle.com> Message-ID: Looks good. Lois On 2/7/2018 3:21 PM, Paul Sandoz wrote: > Hi, > > Please review this patch to exclude ConstantBootstrapsTest.java from executing on SPARC platforms. When condy is supported on SPARC this test can be updated to remove the restriction. > > Opportunistically i am also fixing an error in the @since on ConstantBootstraps. > > Thanks, > Paul. > > diff -r 45b6aae769cc src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java > --- a/src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java Wed Feb 07 16:03:12 2018 +0100 > +++ b/src/java.base/share/classes/java/lang/invoke/ConstantBootstraps.java Wed Feb 07 10:42:32 2018 -0800 > @@ -37,7 +37,7 @@ > * unless the argument is specified to be unused or specified to accept a > * {@code null} value. > * > - * @since 10 > + * @since 11 > */ > public final class ConstantBootstraps { > // implements the upcall from the JVM, MethodHandleNatives.linkDynamicConstant: > diff -r 45b6aae769cc test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java > --- a/test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java Wed Feb 07 16:03:12 2018 +0100 > +++ b/test/jdk/java/lang/invoke/condy/ConstantBootstrapsTest.java Wed Feb 07 10:42:32 2018 -0800 > @@ -25,6 +25,7 @@ > * @test > * @bug 8186046 8195694 > * @summary Test dynamic constant bootstraps > + * @requires os.arch != "sparcv9" > * @library /lib/testlibrary/bytecode /java/lang/invoke/common > * @build jdk.experimental.bytecode.BasicClassBuilder test.java.lang.invoke.lib.InstructionHelper > * @run testng ConstantBootstrapsTest From coleen.phillimore at oracle.com Wed Feb 7 21:19:33 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 7 Feb 2018 16:19:33 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> Hi, I've reviewed this and I don't 
see anything wrong. It looks like the necessary OrderAccess and CAS instructions are used on the deferred_updates list. I think this logging should be debug mode because I don't know if you want this with -Xlog (default is log everything with info mode, I think), except this one:

    log_info(oopstorage, ref)("%s: failed allocation", name());

And here, why do you need to unlink the blocks? Aren't you deleting them below from traversing the main list?

OopStorage::~OopStorage() {
  Block* block;
  while ((block = _deferred_updates) != NULL) {
    _deferred_updates = block->deferred_updates_next();
    block->set_deferred_updates_next(NULL);
  }
  while ((block = _allocate_list.head()) != NULL) {
    _allocate_list.unlink(*block);
  }

Can delete_block just clear all the fields in the block? thanks, Coleen On 2/2/18 7:35 PM, Kim Barrett wrote: > Please review this change to the OopStorage::release operations to > eliminate their use of locks. Rather than directly performing the > _allocate_list updates when the block containing the entries being > released undergoes a state transition (full to not-full, not-full to > empty), we instead record the occurrence of the transition. This > recording is performed via a lock-free push of the block onto a list > of such deferred updates, if the block is not already present in the > list. Update requests are processed by later allocate and > delete_empty_block operations. > > Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. > Those are no longer required to nested lock rank ordering errors. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8196083 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ > > Testing: > Reproducer from JDK-8195979.
> Mach5 {hs,jdk}-tier{1,2,3} > From coleen.phillimore at oracle.com Wed Feb 7 21:32:56 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 7 Feb 2018 16:32:56 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: Message-ID: <557fe394-bbac-7324-718d-3bbcfa7e45d0@oracle.com> On 2/4/18 9:15 AM, Erik Österlund wrote: > Hi David, > > This is starting to go a bit off-topic for this thread. But here goes... > > On 2018-02-04 13:55, David Holmes wrote: >> On 4/02/2018 10:38 PM, Erik Osterlund wrote: >>> Hi Kim, >>> >>> Looks complicated but good. >>> >>> It would be great in the future if the deadlock detection system >>> could be improved to not trigger such false positives that make us >>> implement tricky lock-free code to dodge the obviously false >>> positive deadlock assert. But I suppose that is out of scope for this. >> >> It isn't a deadlock-detection system, it is a deadlock prevention >> system. If you honour the lock rankings then you can't get deadlocks. >> If you don't honour the lock rankings then you may get deadlocks. >> There isn't sufficient information in the ranking alone to know for >> sure whether you will or not. > > Okay. I guess I should have called it a potential deadlock situation > detection system. But it does not prevent deadlocks - that is up to > us. And since the checking is dynamic, we are never guaranteed not to > get deadlocks. But the system does help with lock ordering problems that are a cause of deadlocks, so it does prevent that class of deadlocks. > >> If the deadlock possibility is so obviously not actually possible >> then that could be captured somehow for the specific locks involved. >> But I'm not aware of any tools we have that actually help us track >> what locks may concurrently be acquired - if we did then we would not >> need rank-based deadlock prevention checks.
> > What I had in mind is something along the lines of dynamically > constructing a global partial ordering of the locks as they are > acquired, and to verify the global partial ordering is consistent and > not violated. That would be like a more precise version of the > manually constructed ordering we have today, and save us the trouble > of doing this manual picking of a number X, giving it a silly name > nobody understands like "leaf + 3", where leaf is not actually a leaf > at all - that's for "special", oh wait no there are more special lock > ranks than special. And then as testing is run, either manually > shuffling the ranks around to reflect the actual partial ordering the > code adheres to, or rewriting the code to be lock-free in fear of > getting intermittent false positive asserts triggered in testing after > moving ranks around (despite every failing test run actually strictly > conforming to a global partial ordering that was just not reflected > accurately by the numbers we picked). I don't think the assert that Kim got was a false positive. It may have detected a deadlock. Finding these things through testing though is the real problem. I don't know how to do this statically though, unless there are static code analysis tools for lock ordering violations (?) > > With such an automatic solution, we could also get a better picture of > the interactions between the locks when adding a new lock by printing > the actual partial ordering of the locks that was found at runtime, > instead of trying to figure out which other relevant locks the "+ 3" > in "leaf + 3" referred to.
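The dynamically constructed partial ordering described above is essentially what Linux's lockdep does for kernel locks. A toy version — purely illustrative, not HotSpot code — can be sketched as a graph of observed "acquired-while-holding" edges plus a reachability check:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LockOrderChecker {
    // Edge A -> B means lock B was observed acquired while A was held.
    private final Map<String, Set<String>> edges = new HashMap<>();
    private final ThreadLocal<Deque<String>> held =
            ThreadLocal.withInitial(ArrayDeque::new);

    // Record an acquisition; throws if it contradicts the observed ordering.
    public synchronized void acquired(String lock) {
        for (String h : held.get()) {
            edges.computeIfAbsent(h, k -> new HashSet<>()).add(lock);
            if (reaches(lock, h)) {
                throw new IllegalStateException(
                        "inconsistent lock order: " + h + " -> " + lock + " -> ... -> " + h);
            }
        }
        held.get().push(lock);
    }

    public synchronized void released(String lock) {
        held.get().remove(lock);
    }

    // Depth-first search over recorded edges: can 'to' be reached from 'from'?
    private boolean reaches(String from, String to) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        stack.push(from);
        while (!stack.isEmpty()) {
            String cur = stack.pop();
            if (cur.equals(to)) return true;
            if (seen.add(cur)) stack.addAll(edges.getOrDefault(cur, Set.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        LockOrderChecker c = new LockOrderChecker();
        c.acquired("Heap_lock");
        c.acquired("Metaspace_lock"); // records Heap_lock -> Metaspace_lock
        c.released("Metaspace_lock");
        c.released("Heap_lock");
        boolean caught = false;
        c.acquired("Metaspace_lock");
        try {
            c.acquired("Heap_lock");  // reverses the observed order: cycle
        } catch (IllegalStateException e) {
            caught = true;
        }
        if (!caught) throw new AssertionError("cycle not detected");
        System.out.println("ok");
    }
}
```

A real implementation would also record stack traces for both edges of a reported cycle, so the report shows where each ordering was established — which is exactly the information a static rank number cannot provide.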
> > Of course, this is just an idea for the bright future where a magical > system can do lock ordering consistency checks automagically without > us resorting to complicated lock-free solutions for code that never > violates global lock ordering consistency (which is what the system > was designed to detect), because it is found easier to write a > lock-free solution to the problem at hand than to figure out how best > to shuffle the ranks around to capture the actual partial ordering the > locks consistently conform to. But for now, since we do not have such > a system, and its future existence is merely hypothetical, I am okay > with the proposed lock-free solution instead. > I agree with you that the system is a partial ordering rather than a real ordering and leaf+3, non_leaf-2 has really no meaning. I would prefer a system that is more declarative and I think we should do something about this problem in the next release. Can we collect yours and other people's thoughts in: https://bugs.openjdk.java.net/browse/JDK-8176393 Thanks, Coleen > Thanks, > /Erik > >> David >> ----- >> >>> Thanks, >>> /Erik >>> >>>> On 3 Feb 2018, at 01:35, Kim Barrett wrote: >>>> >>>> Please review this change to the OopStorage::release operations to >>>> eliminate their use of locks. Rather than directly performing the >>>> _allocate_list updates when the block containing the entries being >>>> released undergoes a state transition (full to not-full, not-full to >>>> empty), we instead record the occurrence of the transition. This >>>> recording is performed via a lock-free push of the block onto a list >>>> of such deferred updates, if the block is not already present in the >>>> list. Update requests are processed by later allocate and >>>> delete_empty_block operations. >>>> >>>> Also backed out the JDK-8195979 lock rank changes for the JNI mutexes. >>>> Those are no longer required to nested lock rank ordering errors.
>>>> >>>> CR: >>>> https://bugs.openjdk.java.net/browse/JDK-8196083 >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~kbarrett/8196083/open.00/ >>>> >>>> Testing: >>>> Reproducer from JDK-8195979. >>>> Mach5 {hs,jdk}-tier{1,2,3} >>>> >>> > From mikhailo.seledtsov at oracle.com Wed Feb 7 21:42:09 2018 From: mikhailo.seledtsov at oracle.com (mikhailo) Date: Wed, 7 Feb 2018 13:42:09 -0800 Subject: RFR : 8196062 : Enable docker container related tests for linux ppc64le In-Reply-To: <389da8f2-df49-93a7-f141-8c17e58bd922@oracle.com> References: <3374db8c-6e7b-fe0b-6874-8289e9e369bc@oracle.com> <6fa3fe84aad946eabb2b46281031e21d@sap.com> <4be36ed6-2f8e-2dd8-a87a-5f76d639550e@oracle.com> <64a3268575d14ddcad90f7d46bab64dd@sap.com> <10f2abc8dbc347f7b7f1a851e89a220a@sap.com> <81c11685dd1b4edd9419e4897e96292a@sap.com> <5A74ED9E.8060503@oracle.com> <930717626b314defb436c1947ecd5ef6@sap.com> <5A7A7270.1070804@oracle.com> <5A7A89F7.3060509@oracle.com> <6d4be4c88ddb4e8c93d48f39db4a9cb4@sap.com> <389da8f2-df49-93a7-f141-8c17e58bd922@oracle.com> Message-ID: Pre-integration tests executed; no new failures; change integrated. Best regards, Misha On 02/07/2018 10:24 AM, mikhailo wrote: > Thank you for the update. I will import your latest webrev, and start > the testing. > > > Regards, > > Misha > > > On 02/07/2018 08:12 AM, Baesken, Matthias wrote: >> Hi Mikhailo, sorry for causing the issue. >> Looks like the change to test/lib/jdk/test/lib/Platform.java had >> unexpected consequences . >> I created a new webrev without the Platform.java change : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.3/ >> >> Best regards, Matthias >> >> >> >> >>> -----Original Message----- >>> From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] >>> Sent: Mittwoch, 7.
Februar 2018 06:09 >>> To: Baesken, Matthias >>> Cc: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR : 8196062 : Enable docker container related tests >>> for linux >>> ppc64le >>> >>> Hi Matthias, >>> >>> Unfortunately one test failed during the pre-integration testing. >>> The following test failed during the pre-integration testing: >>> open/test/hotspot/jtreg/testlibrary_tests/TestMutuallyExclusivePlatformPr >>> >>> edicates.java >>> >>> >>> Reproducible: 100%, Linux and MAC >>> >>> Failure: TEST RESULT: Failed. Execution failed: `main' threw exception: >>> java.lang.RuntimeException: All Platform's methods with signature >>> '():Z' >>> should be tested. Missing: isPPC64le: expected true, was false >>> >>> See my comments in the RFE for details, and a suggested fix. >>> >>> >>> Thank you, >>> Misha >>> >>> On 2/6/18, 7:28 PM, Mikhailo Seledtsov wrote: >>>> I am running pre-integration testing; will push unless the testing >>>> finds any issues. >>>> >>>> Regards, >>>> Misha >>>> >>>> On 2/5/18, 11:50 PM, Baesken, Matthias wrote: >>>>> I only had to correct some whitespace changes? found by hg jcheck , >>>>> updated : >>>>> >>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.2/ >>>>> >>>>> The fix has been reviewed by goetz, dsamersoff, bobv? . >>>>> Feedback was added,? Summary updated too? (suggested by Goetz). >>>>> >>>>> Tested with docker on SLES 12.1? / Ubuntu based container . >>>>> >>>>> Best regards, Matthias >>>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Mikhailo Seledtsov [mailto:mikhailo.seledtsov at oracle.com] >>>>>> Sent: Samstag, 3. Februar 2018 00:01 >>>>>> To: Baesken, Matthias >>>>>> Cc: Bob Vandette; Lindenmaier, Goetz >>>>>> ; hotspot-dev at openjdk.java.net; >>> Langer, >>>>>> Christoph; Doerr, Martin >>>>>> ; Dmitry Samersoff>>>>> sw.com> >>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests >>>>>> for linux >>>>>> ppc64le >>>>>> >>>>>> Hi Matthias, >>>>>> >>>>>> ???? 
I can sponsor your change if you'd like. >>>>>> Once you addressed all the feedback from code review, please sync to >>>>>> the >>>>>> tip, build and test. >>>>>> Then export the changeset and send it to me (see: >>>>>> http://openjdk.java.net/sponsor/) >>>>>> >>>>>> I will import your change set, run all required testing and push the >>>>>> change. >>>>>> >>>>>> >>>>>> Thank you, >>>>>> Misha >>>>>> >>>>>> On 2/2/18, 12:39 AM, Baesken, Matthias wrote: >>>>>>> Thanks for the reviews . >>>>>>> >>>>>>> I added info about the fix for /proc/self/cgroup and >>>>>>> /proc/self/mountinfo >>>>>> parsing to the bug : >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>> >>>>>>> Guess I need a sponsor now to get it pushed ? >>>>>>> >>>>>>> >>>>>>> Best regards, Matthias >>>>>>> >>>>>>> >>>>>>> >>>>>>>> -----Original Message----- >>>>>>>> From: Bob Vandette [mailto:bob.vandette at oracle.com] >>>>>>>> Sent: Donnerstag, 1. Februar 2018 17:53 >>>>>>>> To: Lindenmaier, Goetz >>>>>>>> Cc: Baesken, Matthias; mikhailo >>>>>>>> ; hotspot-dev at openjdk.java.net; >>>>>> Langer, >>>>>>>> Christoph; Doerr, Martin >>>>>>>> ; Dmitry Samersoff>>>>>>> sw.com> >>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related tests >>>>>>>> for linux >>>>>>>> ppc64le >>>>>>>> >>>>>>>> Looks good to me. >>>>>>>> >>>>>>>> Bob. >>>>>>>> >>>>>>>>> On Feb 1, 2018, at 5:39 AM, Lindenmaier, Goetz >>>>>>>> ?? wrote: >>>>>>>>> Hi Matthias, >>>>>>>>> >>>>>>>>> thanks for enabling this test. Looks good. >>>>>>>>> I would appreciate if you would add a line >>>>>>>>> "Summary: also fix cgroup subsystem recognition" >>>>>>>>> to the bug description.? Else this might be mistaken >>>>>>>>> for a mere testbug. >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> ??? Goetz. >>>>>>>>> >>>>>>>>> >>>>>>>>>> -----Original Message----- >>>>>>>>>> From: Baesken, Matthias >>>>>>>>>> Sent: Mittwoch, 31. 
Januar 2018 15:15 >>>>>>>>>> To: mikhailo; Bob Vandette >>>>>>>>>> >>>>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>>>> ; Langer, Christoph >>>>>>>>>> ; Doerr, >>> Martin; >>>>>>>>>> Dmitry Samersoff >>>>>>>>>> Subject: RE: RFR : 8196062 : Enable docker container related >>>>>>>>>> tests for >>>>>> linux >>>>>>>>>> ppc64le >>>>>>>>>> >>>>>>>>>> Hello , I created a second webrev : >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062.1/8196062.1/webr >>>>>>>>>> ev/ >>>>>>>>>> >>>>>>>>>> - changed DockerTestUtils.buildJdkDockerImage in the suggested >>>>>>>>>> way >>>>>>>> (this >>>>>>>>>> should be extendable to linux s390x soon) >>>>>>>>>> >>>>>>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>>>>>> messages >>>>>>>>>> - added returns in the tests for the subsystems in >>>>>> osContainer_linux.cpp >>>>>>>>>> - moved some checks at the beginning of? subsystem_file_contents >>>>>>>>>> (suggested by Dmitry) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Best regards, Matthias >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> -----Original Message----- >>>>>>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>>>>>> Sent: Donnerstag, 25. Januar 2018 18:43 >>>>>>>>>>> To: Baesken, Matthias; Bob >>> Vandette >>>>>>>>>>> >>>>>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>>>>> ; Langer, Christoph >>>>>>>>>>> ; Doerr, >>> Martin >>>>>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related >>>>>>>>>>> tests for >>>>>>>> linux >>>>>>>>>>> ppc64le >>>>>>>>>>> >>>>>>>>>>> Hi Matthias, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 01/25/2018 12:15 AM, Baesken, Matthias wrote: >>>>>>>>>>>>> Perhaps, you could add code to >>>>>>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>>>>>> that does the following or similar: >>>>>>>>>>>>> ??????? 1. Construct a name for platform-specific docker >>>>>>>>>>>>> file: >>>>>>>>>>>>> ?????????????? 
String platformSpecificDockerfile = dockerfile >>>>>>>>>>>>> + "-" + >>>>>>>>>>>>> Platform.getOsArch(); >>>>>>>>>>>>> ?????????????? (Platform is jdk.test.lib.Platform) >>>>>>>>>>>>> >>>>>>>>>>>> Hello,? the doc? says : >>>>>>>>>>>> >>>>>>>>>>>> ??????? * Build a docker image that contains JDK under test. >>>>>>>>>>>> ??????? * The jdk will be placed under the "/jdk/" folder >>>>>>>>>>>> inside the docker >>>>>>>> file >>>>>>>>>>> system. >>>>>>>>>>>> ??????? ..... >>>>>>>>>>>> ??????? param dockerfile??? name of the dockerfile residing in >>>>>>>>>>>> the test >>>>>> source >>>>>>>>>>>> ??????? ..... >>>>>>>>>>>> ?????? public static void buildJdkDockerImage(String >>>>>>>>>>>> imageName, String >>>>>>>>>>> dockerfile, String buildDirName) >>>>>>>>>>>> It does not say anything about doing hidden insertions of some >>>>>>>> platform >>>>>>>>>>> names into? the dockerfile name. >>>>>>>>>>>> So should the jtreg API doc be changed ? >>>>>>>>>>>> If so who needs to approve this ? >>>>>>>>>>> Thank you for your concerns about the clarity of API and >>>>>> corresponding >>>>>>>>>>> documentation. This is a test library API, so no need to file >>>>>>>>>>> CCC or CSR. >>>>>>>>>>> >>>>>>>>>>> This API can be changed via a regular RFR/webrev review >>>>>>>>>>> process, as >>>>>>>> soon >>>>>>>>>>> as on one objects. I am a VM SQE engineer covering the docker >>> and >>>>>>>> Linux >>>>>>>>>>> container area, I am OK with this change. >>>>>>>>>>> And I agree with you, we should update the javadoc header on >>> this >>>>>>>>>> method >>>>>>>>>>> to reflect this implicit part of API contract. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thank you, >>>>>>>>>>> Misha >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> (as far as I see so far only the? test? at >>>>>>>>>>> hotspot/jtreg/runtime/containers/docker/?? use this? so it >>>>>>>>>>> should not >>>>>> be >>>>>>>> a >>>>>>>>>> big >>>>>>>>>>> deal to change the interface?) 
>>>>>>>>>>>> Best regards, Matthias >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> -----Original Message----- >>>>>>>>>>>>> From: mikhailo [mailto:mikhailo.seledtsov at oracle.com] >>>>>>>>>>>>> Sent: Mittwoch, 24. Januar 2018 20:09 >>>>>>>>>>>>> To: Bob Vandette; Baesken, >>> Matthias >>>>>>>>>>>>> >>>>>>>>>>>>> Cc: hotspot-dev at openjdk.java.net; Lindenmaier, Goetz >>>>>>>>>>>>> ; Langer, Christoph >>>>>>>>>>>>> ; Doerr, Martin >>>>>>>> >>>>>>>>>>>>> Subject: Re: RFR : 8196062 : Enable docker container related >>>>>>>>>>>>> tests >>>>>> for >>>>>>>>>> linux >>>>>>>>>>>>> ppc64le >>>>>>>>>>>>> >>>>>>>>>>>>> Hi Matthias, >>>>>>>>>>>>> >>>>>>>>>>>>> ?????? Please see my comments about the test changes inline. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On 01/24/2018 07:13 AM, Bob Vandette wrote: >>>>>>>>>>>>>> osContainer_linux.cpp: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you add "return;" in each test for subsystem not found >>>>>>>> messages >>>>>>>>>>> and >>>>>>>>>>>>>> remove these 3 lines OR move your tests for NULL& >>> messages >>>>>>>> inside. >>>>>>>>>>> The >>>>>>>>>>>>> compiler can >>>>>>>>>>>>>> probably optimize this but I?d prefer more compact code. >>>>>>>>>>>>>> >>>>>>>>>>>>>> if (memory == NULL || cpuset == NULL || cpu == NULL || >>> cpuacct >>>>>> == >>>>>>>>>>> NULL) >>>>>>>>>>>>> { >>>>>>>>>>>>>> ???? 342 return; >>>>>>>>>>>>>> ???? 343?? } >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> The other changes in osContainer_linux.cpp look ok. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I forwarded your test changes to Misha, who wrote these. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Since it?s likely that other platforms, such as aarch64, are >>>>>>>>>>>>>> going to >>>>>> run >>>>>>>>>>> into >>>>>>>>>>>>> the same problem, >>>>>>>>>>>>>> It would have been better to enable the tests based on the >>>>>>>> existence >>>>>>>>>> of >>>>>>>>>>> an >>>>>>>>>>>>> arch specific >>>>>>>>>>>>>> Dockerfile-BasicTest-{os.arch} rather than enabling specific >>>>>>>>>>>>>> arch?s >>>>>> in >>>>>>>>>>>>> VPProps.java. >>>>>>>>>>>>>> This approach would reduce the number of changes >>> significantly >>>>>> and >>>>>>>>>>> allow >>>>>>>>>>>>> support to >>>>>>>>>>>>>> be added with 1 new file. >>>>>>>>>>>>>> >>>>>>>>>>>>>> You wouldn?t need "String dockerFileName = >>>>>>>>>>>>> Common.getDockerFileName();? >>>>>>>>>>>>>> in every test. Just make DockerTestUtils automatically add >>>>>>>>>>>>>> arch. >>>>>>>>>>>>> I like Bob's idea on handling platform-specific Dockerfiles. >>>>>>>>>>>>> >>>>>>>>>>>>> Perhaps, you could add code to >>>>>>>> DockerTestUtils.buildJdkDockerImage() >>>>>>>>>>>>> that does the following or similar: >>>>>>>>>>>>> ??????? 1. Construct a name for platform-specific docker >>>>>>>>>>>>> file: >>>>>>>>>>>>> ?????????????? String platformSpecificDockerfile = dockerfile >>>>>>>>>>>>> + "-" + >>>>>>>>>>>>> Platform.getOsArch(); >>>>>>>>>>>>> ?????????????? (Platform is jdk.test.lib.Platform) >>>>>>>>>>>>> >>>>>>>>>>>>> ??????? 2. Check if platformSpecificDockerfile file exists in >>>>>>>>>>>>> the test >>>>>>>>>>>>> source directory >>>>>>>>>>>>> File.exists(Paths.get(Utils.TEST_SRC, >>>>>> platformSpecificDockerFile) >>>>>>>>>>>>> ????????????? If it does, then use it. Otherwise continue >>>>>>>>>>>>> using the >>>>>>>>>>>>> default/original dockerfile name. >>>>>>>>>>>>> >>>>>>>>>>>>> I think this will considerably simplify your change, as well >>>>>>>>>>>>> as make it >>>>>>>>>>>>> easy to extend support to other platforms/configurations >>>>>>>>>>>>> in the future. 
Let us know what you think of this approach ? >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Once your change gets (R)eviewed and approved, I can >>> sponsor >>>>>> the >>>>>>>>>> push. >>>>>>>>>>>>> Thank you, >>>>>>>>>>>>> Misha >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Bob. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Jan 24, 2018, at 9:24 AM, Baesken, Matthias >>>>>>>>>>>>> ?? wrote: >>>>>>>>>>>>>>> Hello,? could you please review the following change : >>>>>>>>>>>>>>> 8196062 : >>>>>>>>>> Enable >>>>>>>>>>>>> docker container related tests for linux ppc64le? . >>>>>>>>>>>>>>> It? adds? docker container testing?? for linux ppc64 le >>>>>>>>>>>>>>> (little >>>>>> endian) . >>>>>>>>>>>>>>> A number of things had to be done : >>>>>>>>>>>>>>> ???? ? Add a? separate? docker file >>>>>>>>>>>>> test/hotspot/jtreg/runtime/containers/docker/Dockerfile- >>>>>> BasicTest- >>>>>>>>>>> ppc64le >>>>>>>>>>>>> for linux ppc64 le???? which uses?? Ubuntu ( the? Oracle >>>>>>>>>>>>> Linux 7.2? used >>>>>>>> for >>>>>>>>>>>>> x86_64? seems not to be available for ppc64le ) >>>>>>>>>>>>>>> ???? ? Fix parsing??? /proc/self/mountinfo??? and >>>>>>>>>>>>>>> /proc/self/cgroup >>>>>>>>>> in >>>>>>>>>>>>> src/hotspot/os/linux/osContainer_linux.cpp , it could >>>>>>>>>>>>> not? handle >>>>>>>> the >>>>>>>>>>>>> format seen? on SUSE LINUX 12.1 ppc64le (Host)? and? Ubuntu >>>>>> (Docker >>>>>>>>>>>>> container) >>>>>>>>>>>>>>> ???? ? Add a bit? more logging >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Webrev : >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196062/ >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Bug : >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8196062 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> After these adjustments I could run the >>>>>>>> runtime/containers/docker >>>>>>>>>>> - >>>>>>>>>>>>> jtreg tests successfully . 
>>>>>>>>>>>>>>> Best regards, Matthias > From kim.barrett at oracle.com Wed Feb 7 22:00:01 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 7 Feb 2018 17:00:01 -0500 Subject: RFR: 8194691: Cleanup unnecessary casts in Atomic/OrderAccess uses Message-ID: Please review this removal of unnecessary casts in calls to Atomic and OrderAccess functions. This isn't an attempt to be complete, but eliminates some easily found and easy to fix cases. Also changed some uses of Atomic::add with a negated value to instead use Atomic::sub. I've not made any changes around JavaThreadState and Thread::_thread_state manipulation. That may require more refactoring to deal with than I wanted to mix in with this otherwise fairly straight-forward set of changes. CR: https://bugs.openjdk.java.net/browse/JDK-8194691 Webrev: http://cr.openjdk.java.net/~kbarrett/8194691/open.00/ Testing: Mach5 {hs,jdk}-tier{1,2,3} From coleen.phillimore at oracle.com Wed Feb 7 22:27:51 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 7 Feb 2018 17:27:51 -0500 Subject: RFR: 8194691: Cleanup unnecessary casts in Atomic/OrderAccess uses In-Reply-To: References: Message-ID: <8f7c818b-1c10-029b-cd46-5225af0c2925@oracle.com> Looks good to me. thanks, Coleen On 2/7/18 5:00 PM, Kim Barrett wrote: > Please review this removal of unnecessary casts in calls to Atomic and > OrderAccess functions. This isn't an attempt to be complete, but > eliminates some easily found and easy to fix cases. > > Also changed some uses of Atomic::add with a negated value to instead > use Atomic::sub. > > I've not made any changes around JavaThreadState and > Thread::_thread_state manipulation. That may require more refactoring > to deal with than I wanted to mix in with this otherwise fairly > straight-forward set of changes. 
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8194691 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8194691/open.00/ > > Testing: > Mach5 {hs,jdk}-tier{1,2,3} > > From igor.ignatyev at oracle.com Thu Feb 8 00:48:22 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 7 Feb 2018 16:48:22 -0800 Subject: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests Message-ID: <74751A85-E2EE-4713-A58D-5D5279B78060@oracle.com> http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html > 47 lines changed: 11 ins; 29 del; 7 mod; Hi all, could you please review this small fix for jtreg tests? jtreg doesn't support multiple @key tags, this fix replaces multiple @key tags by one tag w/ combined value. webrev: http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html JBS: https://bugs.openjdk.java.net/browse/JDK-8197113 Thanks, -- Igor From kim.barrett at oracle.com Thu Feb 8 02:24:34 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 7 Feb 2018 21:24:34 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> References: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> Message-ID: > On Feb 7, 2018, at 4:19 PM, coleen.phillimore at oracle.com wrote: > > > Hi, I've reviewed this and I don't see anything wrong. It looks like the necessary OrderAccess and CAS instructions are used on the deferred_updates list. Thanks. > I think this logging should be debug mode because I don't know if you want this with -Xlog (default is log everything with info mode, I think), except this one: > > log_info(oopstorage, ref)("%s: failed allocation", name()); If anything, I'm tempted to make that one a warning. Odds are pretty good that it will lead to some error or abort somewhere up the call chain. I think the oopstorage,blocks log_info's are reasonable. 
Other than the one you mention, oopstorage,ref log_info's might be a bit much, particularly as we start making more use of oopstorage. But I'd like to consider changes to the logging levels here as a separate issue. > And here, why do you need to unlink the blocks? Aren't you deleting them below from traversing the main list? > > OopStorage::~OopStorage() { > Block* block; > while ((block = _deferred_updates) != NULL) { > _deferred_updates = block->deferred_updates_next(); > block->set_deferred_updates_next(NULL); > } > while ((block = _allocate_list.head()) != NULL) { > _allocate_list.unlink(*block); > } > > Can delete_block just clear all the fields in the block? Block deletion asserts the block is no longer in use, to catch improper deletion bugs. But if we're tearing down the whole storage object, it doesn't matter what the current internal state is, we want to clean it all up. So storage deletion removes blocks from lists before deleting them. From kim.barrett at oracle.com Thu Feb 8 06:37:24 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 8 Feb 2018 01:37:24 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> Message-ID: <5F460E3D-6D21-4E42-92A3-93C3FDA70718@oracle.com> > On Feb 7, 2018, at 9:24 PM, Kim Barrett wrote: > >> On Feb 7, 2018, at 4:19 PM, coleen.phillimore at oracle.com wrote: > >> I think this logging should be debug mode because I don't know if you want this with -Xlog (default is log everything with info mode, I think), except this one: >> >> log_info(oopstorage, ref)("%s: failed allocation", name()); > > If anything, I'm tempted to make that one a warning. Odds are pretty > good that it will lead to some error or abort somewhere up the call chain. > > I think the oopstorage,blocks log_info's are reasonable. 
> Other than the one you mention, oopstorage,ref log_info's might be a > bit much, particularly as we start making more use of oopstorage. But > I'd like to consider changes to the logging levels here as a separate > issue. "java -Xlog -version" produces over 1100 lines of output. It's not obvious the existing info output from oopstorage would make a noticeable difference. From thomas.schatzl at oracle.com Thu Feb 8 11:23:27 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 08 Feb 2018 12:23:27 +0100 Subject: RFR: 8194691: Cleanup unnecessary casts in Atomic/OrderAccess uses In-Reply-To: References: Message-ID: <1518089007.2700.1.camel@oracle.com> On Wed, 2018-02-07 at 17:00 -0500, Kim Barrett wrote: > Please review this removal of unnecessary casts in calls to Atomic > and > OrderAccess functions. This isn't an attempt to be complete, but > eliminates some easily found and easy to fix cases. > > Also changed some uses of Atomic::add with a negated value to instead > use Atomic::sub. > > I've not made any changes around JavaThreadState and > Thread::_thread_state manipulation. That may require more > refactoring > to deal with than I wanted to mix in with this otherwise fairly > straight-forward set of changes. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8194691 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8194691/open.00/ > > Testing: > Mach5 {hs,jdk}-tier{1,2,3} > - the copyright line in dependencyContext.cpp should read (c) ... 2015, 2018, ..., mentioning both years like in other files. Looks good otherwise. I do not need to re-review with the above change added. Thanks, Thomas From thomas.stuefe at gmail.com Thu Feb 8 11:58:09 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Feb 2018 12:58:09 +0100 Subject: RFR: Proposal for improvements to the metaspace chunk allocator Message-ID: Hi, We would like to contribute a patch developed at SAP which has been live in our VM for some time.
It improves the metaspace chunk allocation: it reduces fragmentation and raises the chance of reusing free metaspace chunks. The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-05--2/webrev/ In short, this patch helps with a number of pathological cases where metaspace chunks are free but cannot be reused because they are of the wrong size. For example, the metaspace freelist could be full of small chunks, which would not be reusable if we need larger chunks. So, we could get metaspace OOMs even in situations where the metaspace was far from exhausted. Our patch adds the ability to split and merge metaspace chunks dynamically and thus removes the "size-lock-in" problem. Note that there have been other attempts to get a grip on this problem, see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably our patch attempts a more complete solution. In 2016 I discussed the idea for this patch with some folks off-list, among them Jon Masamitsu. He then advised me to create a JEP. So I did: [1]. However, meanwhile changes to the JEP process were discussed [2], and I am not sure anymore this patch even needs a JEP. It may be moderately complex and hence carries the risk inherent in any patch, but its effects would not be externally visible (if you discount seeing fewer metaspace OOMs). So, I'd prefer to handle this as a simple RFE. -- How this patch works: 1) When a class loader dies, its metaspace chunks are freed and returned to the freelist for reuse by the next class loader. With the patch, upon returning a chunk to the freelist, an attempt is made to merge it with its neighboring chunks - should they happen to be free too - to form a larger chunk, which is then placed in the free list. As a result, the freelist should be populated by larger chunks at the expense of smaller chunks. In other words, all free chunks should always be as "coalesced as possible".
2) When a class loader needs a new chunk and a chunk of the requested size cannot be found in the free list, before carving out a new chunk from the virtual space, we first check if there is a larger chunk in the free list. If there is, that larger chunk is chopped up into n smaller chunks. One of them is returned to the caller, the others are re-added to the freelist. (1) and (2) together have the effect of removing the size-lock-in for chunks. If fragmentation allows it, small chunks are dynamically combined to form larger chunks, and larger chunks are split on demand. -- What this patch does not: This is not a rewrite of the chunk allocator - most of the mechanisms stay intact. Specifically, chunk sizes remain unchanged, and so do chunk allocation processes (when which class loaders get handed which chunk size). Almost everything this patch does affects only the internal workings of the ChunkManager. Also note that I refrained from doing any cleanups, since I wanted reviewers to be able to gauge this patch without filtering noise. Unfortunately this patch adds some complexity. But there are many future opportunities for code cleanup and simplification, some of which we already discussed in existing RFEs ([3], [4]). All of them are out of scope for this particular patch. -- Details: Before the patch, the following rules held: - All chunk sizes are multiples of the smallest chunk size ("specialized chunks") - All chunk sizes of larger chunks are also clean multiples of the next smaller chunk size (e.g. for class space, the ratio of specialized/small/medium chunks is 1:2:32) - All chunk start addresses are aligned to the smallest chunk size (more or less accidentally, see metaspace_reserve_alignment). The patch makes the last rule explicit and more strict: - All (non-humongous) chunk start addresses are now aligned to their own chunk size. So, e.g. medium chunks are allocated at addresses which are a multiple of the medium chunk size.
This rule is not extended to humongous chunks, whose start addresses continue to be aligned to the smallest chunk size. The reason for this new alignment rule is that it makes it cheap both to find the predecessors of a chunk and to check which chunks are free. When a class loader dies and its chunk is returned to the freelist, all we have is its address. In order to merge it with its neighbors to form a larger chunk, we need to find those neighbors, including those preceding the returned chunk. Prior to this patch that was not easy - one would have to iterate chunks starting at the beginning of the VirtualSpaceNode. But due to the new alignment rule, we now know where the prospective larger chunk must start - at the next lower larger-chunk-size-aligned boundary. We also know that currently a smaller chunk must start there (*). In order to check the free-ness of chunks quickly, each VirtualSpaceNode now keeps a bitmap which describes its occupancy. One bit in this bitmap corresponds to a range the size of the smallest chunk size, starting at an address aligned to the smallest chunk size. Because of the alignment rules above, such a range belongs to one single chunk. The bit is 1 if the associated chunk is in use by a class loader, 0 if it is free. When we have calculated the address range a prospective larger chunk would span, we now need to check if all chunks in that range are free. Only then can we merge them. We do that by querying the bitmap. Note that the most common use case here is forming medium chunks from smaller chunks. With the new alignment rules, the bitmap portion covering a medium chunk now always happens to be 16 or 32 bits in size and is 16- or 32-bit aligned, so reading the bitmap in many cases becomes a simple 16- or 32-bit load. Only if the range is free do we need to iterate the chunks in that range: pull them from the freelist, combine them into one new larger chunk, and re-add that one to the freelist.
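To make the bitmap check above concrete, here is a toy sketch. All names and the granule size are invented for illustration only - the actual patch implements this in the OccupancyMap class inside VirtualSpaceNode:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy model of the occupancy bitmap check described above (invented
// names, not the code from the webrev). One bit covers one
// smallest-chunk-sized granule; bit set = in use, clear = free.
static const size_t kSmallestChunkWords = 128;  // assumed granule size

struct ToyOccupancyMap {
  uint32_t bits;  // toy capacity: 32 granules

  // The prospective larger chunk must start at the next lower
  // larger-chunk-size-aligned boundary below the returned chunk.
  static size_t merge_region_start(size_t chunk_offset_words,
                                   size_t larger_chunk_words) {
    return chunk_offset_words & ~(larger_chunk_words - 1);
  }

  // All granules covering the larger chunk's range must be free
  // before the chunks in it may be merged; with the alignment rules
  // this degenerates to a single masked load.
  bool region_is_free(size_t start_words, size_t larger_chunk_words) const {
    size_t first_bit = start_words / kSmallestChunkWords;
    size_t num_bits  = larger_chunk_words / kSmallestChunkWords;
    assert(num_bits < 32 && first_bit + num_bits <= 32);  // toy limits
    uint32_t mask = ((1u << num_bits) - 1u) << first_bit;
    return (bits & mask) == 0;
  }
};
```

With a medium chunk covering 16 or 32 granules, the mask is a whole 16- or 32-bit word, which is the point made above about cheap loads.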
(*) Humongous chunks make this a bit more complicated. Since the new alignment rule does not extend to them, a humongous chunk could still straddle the lower or upper boundary of the prospective larger chunk. So I gave the occupancy map a second layer, which is used to mark the start of chunks. An alternative approach could have been to make humongous chunk size and start address always a multiple of the largest non-humongous chunk size (medium chunks). That would have caused a bit of waste per humongous chunk (<64K) in exchange for simpler coding and a simpler occupancy map. -- The patch shows its best results in scenarios where a lot of smallish class loaders are alive simultaneously. When dying, they leave continuous expanses of metaspace covered in small chunks, which can be merged nicely. However, if class loader lifetimes vary more, we have more interleaving of dead and alive small chunks, and hence chunk merging does not work as well as it could. For an example of a pathological case like this see the example program: [5] Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 test3.Example2" the test will load 3000 small classes in separate class loaders, then throw them away and start loading large classes. The small classes will have flooded the metaspace with small chunks, which are unusable for the large classes. When executing with the rather limited CompressedClassSpaceSize=10M, we will run into an OOM after loading about 800 large classes, having used only 40% of the class space; the rest is wasted on unused small chunks. However, with our patch the example program will manage to allocate ~2900 large classes before running into an OOM, and class space will show almost no waste. To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into an OOM, statistics and an ASCII representation of the class space will be shown.
The unpatched version will show large expanses of unused small chunks, the patched variant will show almost no waste. Note that the patch could be made more effective with a different size ratio between small and medium chunks: in class space, that ratio is 1:16, so 16 small chunks must happen to be free to form one larger chunk. With a smaller ratio the chance for coalescation would be larger. So there may be room for future improvement here: since we now can merge and split chunks on demand, we could introduce more chunk sizes. Potentially arriving at a buddy-ish allocator style where we drop hard-wired chunk sizes for a dynamic model where the ratio between chunk sizes is always 1:2 and we could in theory have no limit to the chunk size. But this is just a thought and well out of the scope of this patch. -- What does this patch cost (memory): - the occupancy bitmap adds 1 byte per 4K metaspace. - MetaChunk headers get larger, since we add an enum and two bools to them. Depending on what the C++ compiler does with that, chunk headers grow by one or two MetaWords, reducing the payload size by that amount. - The new alignment rules mean we may need to create padding chunks to precede larger chunks. But since these padding chunks are added to the freelist, they should be used up before the need for new padding chunks arises. So, the maximum possible number of unused padding chunks should be limited by design to about 64K. The expectation is that the memory savings from this patch far outweigh its added memory costs. .. (performance): We did not see measurable drops in standard benchmarks rising above the normal noise. I also measured times for a program which stresses metaspace chunk coalescation, with the same result. I am open to suggestions on what else I should measure, and/or independent measurements.
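To illustrate the buddy-ish idea mused about above - purely hypothetical, and explicitly not part of this patch: with a strict 1:2 size ratio and every chunk aligned to its own size, a chunk's only possible merge partner falls out of a single XOR:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical illustration of the buddy-allocator idea (not part of
// the proposed patch). If every chunk size is twice the next smaller
// one and every chunk is aligned to its own size, the sole merge
// candidate of a free chunk - its "buddy" - is found by flipping the
// size bit of the chunk's offset.
static size_t buddy_of(size_t offset_words, size_t chunk_words) {
  return offset_words ^ chunk_words;
}

// Merging a chunk with its free buddy yields a chunk of twice the
// size, starting at the lower of the two offsets.
static size_t merged_start(size_t offset_words, size_t chunk_words) {
  return offset_words & ~chunk_words;
}
```

In such a model the 1:16 "16 free chunks must line up" problem above disappears, since merging always involves exactly two partners per level.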
-- Other details: I removed SpaceManager::get_small_chunk_and_allocate() to reduce complexity somewhat, because it was made mostly obsolete by this patch: since small chunks are combined into larger chunks upon return to the freelist, in theory we should not have that many free small chunks anymore anyway. However, there may still be cases where we could benefit from this workaround, so I am asking your opinion on this one. About tests: There were two native tests - ChunkManagerReturnTest and TestVirtualSpaceNode (the former was added by me last year) - which did not make much sense anymore, since they relied heavily on internal behavior which was made unpredictable with this patch. To make up for these lost tests, I added a new gtest which attempts to stress the many combinations of allocation patterns but does so from a layer above the old tests. It now uses Metaspace::allocate() and friends. By using that point as the entry for tests, I am less dependent on implementation internals and still cover a lot of scenarios. -- Review pointers: Good points to start are - ChunkManager::return_single_chunk() - specifically, ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks upon return to the free list - ChunkManager::free_chunks_get(): Here we now split large chunks into smaller chunks on demand - VirtualSpaceNode::take_from_committed() : chunks are allocated according to alignment rules now, padding chunks are handled - The OccupancyMap class is the helper class implementing the new occupancy bitmap The rest is mostly chaff: helper functions, added tests and verifications.
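The split path named in the review pointers can be pictured with a toy model. Names here are invented; the real logic lives in ChunkManager::free_chunks_get() in the webrev:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy sketch of split-on-demand (invented names, not the webrev code).
// One larger free chunk is chopped into n smaller chunks: one is
// handed to the caller, the rest go back on the free list.
struct ToySplit {
  size_t handed_out;                 // offset given to the caller
  std::vector<size_t> back_on_list;  // offsets returned to the freelist
};

static ToySplit split_chunk(size_t large_offset, size_t large_words,
                            size_t requested_words) {
  ToySplit r;
  r.handed_out = large_offset;  // caller gets the first piece
  // Remaining pieces, each of the requested size, are re-added to the
  // freelist; because the large chunk was aligned to its own size,
  // every piece is automatically aligned to the requested size.
  for (size_t off = large_offset + requested_words;
       off < large_offset + large_words;
       off += requested_words) {
    r.back_on_list.push_back(off);
  }
  return r;
}
```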
-- Thanks and Best Regards, Thomas [1] https://bugs.openjdk.java.net/browse/JDK-8166690 [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November/000128.html [3] https://bugs.openjdk.java.net/browse/JDK-8185034 [4] https://bugs.openjdk.java.net/browse/JDK-8176808 [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip From coleen.phillimore at oracle.com Thu Feb 8 12:50:57 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Feb 2018 07:50:57 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: <5F460E3D-6D21-4E42-92A3-93C3FDA70718@oracle.com> References: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> <5F460E3D-6D21-4E42-92A3-93C3FDA70718@oracle.com> Message-ID: On 2/8/18 1:37 AM, Kim Barrett wrote: >> On Feb 7, 2018, at 9:24 PM, Kim Barrett wrote: >> >>> On Feb 7, 2018, at 4:19 PM, coleen.phillimore at oracle.com wrote: >>> I think this logging should be debug mode because I don't know if you want this with -Xlog (default is log everything with info mode, I think), except this one: >>> >>> log_info(oopstorage, ref)("%s: failed allocation", name()); >> If anything, I'm tempted to make that one a warning. Odds are pretty >> good that it will lead to some error or abort somewhere up the call chain. >> >> I think the oopstorage,blocks log_info's are reasonable. >> Other than the one you mention, oopstorage,ref log_info's might be a >> bit much, particularly as we start making more use of oopstorage. But >> I'd like to consider changes to the logging levels here as a separate >> issue. > "java -Xlog -version" produces over 1100 lines of output. It's not obvious the > existing info output from oopstorage would make a noticeable difference. > Yes, that is why I would like this logging to not be info.
Thanks, Coleen From harold.seigel at oracle.com Thu Feb 8 14:26:26 2018 From: harold.seigel at oracle.com (harold seigel) Date: Thu, 8 Feb 2018 09:26:26 -0500 Subject: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests In-Reply-To: <74751A85-E2EE-4713-A58D-5D5279B78060@oracle.com> References: <74751A85-E2EE-4713-A58D-5D5279B78060@oracle.com> Message-ID: <22a8026c-d0a2-7668-5d16-ac7c47a7978b@oracle.com> Hi Igor, These changes look good. Harold On 2/7/2018 7:48 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html >> 47 lines changed: 11 ins; 29 del; 7 mod; > Hi all, > > could you please review this small fix for jtreg tests? jtreg doesn't support multiple @key tags, this fix replaces multiple @key tags by one tag w/ combined value. > > webrev: http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html > JBS: https://bugs.openjdk.java.net/browse/JDK-8197113 > > Thanks, > -- Igor From goetz.lindenmaier at sap.com Thu Feb 8 15:21:42 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 8 Feb 2018 15:21:42 +0000 Subject: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests In-Reply-To: <22a8026c-d0a2-7668-5d16-ac7c47a7978b@oracle.com> References: <74751A85-E2EE-4713-A58D-5D5279B78060@oracle.com> <22a8026c-d0a2-7668-5d16-ac7c47a7978b@oracle.com> Message-ID: Hi, yes, the changes look good. I have seen and fixed similar problems before. Shouldn't jtreg be changed to accept and just concatenate multiple @key tags? It will happen again, and alternatively, rejecting such tests in jtreg would break backward compatibility. Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of harold seigel > Sent: Donnerstag, 8. Februar 2018 15:26 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests > > Hi Igor, > > These changes look good.
> > Harold > > On 2/7/2018 7:48 PM, Igor Ignatyev wrote: > > http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html > >> 47 lines changed: 11 ins; 29 del; 7 mod; > > Hi all, > > > > could you please review this small fix for jtreg tests? jtreg doesn't support > multiple @key tags, this fix replaces multiple @key tags by one tag w/ > combined value. > > > > webrev: > http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html > > JBS: https://bugs.openjdk.java.net/browse/JDK-8197113 > > > > Thanks, > > -- Igor From coleen.phillimore at oracle.com Thu Feb 8 16:10:29 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Feb 2018 11:10:29 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: References: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> <5F460E3D-6D21-4E42-92A3-93C3FDA70718@oracle.com> Message-ID: <4a207e6c-c780-2f9b-c8f3-21d93dcc07b4@oracle.com> On 2/8/18 7:50 AM, coleen.phillimore at oracle.com wrote: > > > On 2/8/18 1:37 AM, Kim Barrett wrote: >>> On Feb 7, 2018, at 9:24 PM, Kim Barrett wrote: >>> >>>> On Feb 7, 2018, at 4:19 PM, coleen.phillimore at oracle.com wrote: >>>> I think this logging should be debug mode because I don't know if >>>> you want this with -Xlog (default is log everything with info mode, >>>> I think), except this one: >>>> >>>> log_info(oopstorage, ref)("%s: failed allocation", name()); >>> If anything, I'm tempted to make that one a warning. Odds are pretty >>> good that it will lead to some error or abort somewhere up the call >>> chain. >>> >>> I think the oopstorage,blocks log_info's are reasonable. >>> Other than the one you mention, oopstorage,ref log_info's might be a >>> bit much, particularly as we start making more use of oopstorage. But >>> I'd like to consider changes to the logging levels here as a separate >>> issue. Yes, we can discuss this as a separate issue.
I just happened to notice it while reading through your change. The lock free release change looks good. Thanks, Coleen >>> "java -Xlog -version" produces over 1100 lines of output. It's not >>> obvious the >>> existing info output from oopstorage would make a noticeable difference. >>> >> >> Yes, that is why I would like this logging to not be info. >> Thanks, >> Coleen From igor.ignatyev at oracle.com Thu Feb 8 16:59:15 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 8 Feb 2018 08:59:15 -0800 Subject: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests In-Reply-To: References: <74751A85-E2EE-4713-A58D-5D5279B78060@oracle.com> <22a8026c-d0a2-7668-5d16-ac7c47a7978b@oracle.com> Message-ID: <1E211B34-766B-4525-BC6B-6E5DA78DAAF9@oracle.com> Hi Goetz, Harold. Thank you for your review. I totally agree that jtreg should be fixed to handle this one way or another; the compatibility question can be solved by enabling this check based on the requiredVersion value. I have started a discussion w/ Jon in CODETOOLS-7902076 Thanks, -- Igor > On Feb 8, 2018, at 7:21 AM, Lindenmaier, Goetz wrote: > > Hi, > > yes, the changes look good. > I have seen and fixed similar problems before. > Shouldn't jtreg be changed to accept and just concatenate multiple > @key tags? It will happen again, and alternatively, rejecting such tests in jtreg > would break backward compatibility. > > Best regards, > Goetz. > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of harold seigel >> Sent: Donnerstag, 8. Februar 2018 15:26 >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR(XS) : 8197113 : combine multiple @key tags in jtreg tests >> >> Hi Igor, >> >> These changes look good.
>> >> Harold >> >> On 2/7/2018 7:48 PM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html >>>> 47 lines changed: 11 ins; 29 del; 7 mod; >>> Hi all, >>> >>> could you please review this small fix for jtreg tests? jtreg doesn't support >> multiple @key tags, this fix replaces multiple @key tags by one tag w/ >> combined value. >>> >>> webrev: >> http://cr.openjdk.java.net/~iignatyev//8197113/webrev.00/index.html >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8197113 >>> >>> Thanks, >>> -- Igor > From thomas.stuefe at gmail.com Thu Feb 8 17:19:26 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Feb 2018 18:19:26 +0100 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: <93fc660476f1490da815cdfba98ff623@sap.com> Message-ID: After discussing this off-line with Matthias and Goetz, I withdraw my opposition to this patch. Still not a big fan, but if everyone else (including David) is okay with this patch, I am too. Kind Regards, Thomas On Fri, Feb 2, 2018 at 10:20 AM, Thomas Stüfe wrote: > > > On Fri, Feb 2, 2018 at 9:57 AM, David Holmes > wrote: > >> While I did not do an exhaustive check of the existing codes even the >> ones under >> >> // The following enums are not defined on all platforms. >> >> are at least defined by POSIX (even if just listed as "Reserved"). >> >> So I am still reluctant to introduce OS specific codes into a shared >> file. Plus there's the problem of different OS having different meanings >> for the same error code - suggesting per-OS specialization might be useful >> (but tricky to implement). >> >> That said I have to re-question whether we should be maintaining this >> explicit string mapping table anyway? strerror() is not thread-safe but >> strerror_l() seems to be, or at worst we need buffer management with >> strerror_r(). I know this topic has arisen before ...
>> >> > How about we build the string table dynamically at process start by > iterating the first n errnos and calling strerror() :) Just kidding. > > Yes, I admit this table starts to feel weird. Original discussions were > here: https://bugs.openjdk.java.net/browse/JDK-8148425 > > I originally just wanted a static translation of errno numbers to > literalized errno constants (e.g. ETOOMANYREFS => "ETOOMANYREFS"), > because in 99% of cases where we call os::strerror() we do this to print > log output for developers, and as a developer I find "ETOOMANYREFS" far > more succinct than whatever strerror() returns. This would also bypass any > localization issues. If I see "ETOOMANYREFS" in a log file I immediately > know this is an error code from the libc, and can look it up in the man > page or google it. But when I read "Too many references: can't splice" - > potentially in Portuguese :) - I would have to dig a bit until I find out > what is actually happening. > > Of course, there are cases where we want the human readable, localized > text, but those cases are rarer and could be rewritten to use strerror_r. > > Just my 5 cent. > > ..Thomas > > Cheers, >> David >> >> On 2/02/2018 6:40 PM, Thomas Stüfe wrote: >> >>> On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias < >>> matthias.baesken at sap.com> >>> wrote: >>> >>> >>>> - I do not really like spamming a shared file with AIX specific >>>> errno >>>> >>>> codes. >>>> >>>> >>>> >>>> Hi, I wrote "for a few errnos ***we find*** on AIX 7.1", not that >>>> they are AIX ***specific***.
>>>> Checked the first few added ones : >>>> >>>> >>>> >>>> 1522 // some more errno numbers from AIX 7.1 (some are also >>>> supported >>>> on Linux) >>>> >>>> 1523 #ifdef ENOTBLK >>>> >>>> 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") >>>> >>>> 1525 #endif >>>> >>>> 1526 #ifdef ECHRNG >>>> >>>> 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") >>>> >>>> 1528 #endif >>>> >>>> 1529 #ifdef ELNRNG >>>> >>>> 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") >>>> >>>> 1531 #endif >>>> >>>> >>>> >>>> According to >>>> >>>> >>>> >>>> http://www.ioplex.com/~miallen/errcmp.html >>>> >>>> >>>> >>>> ENOTBLK - found on AIX, Solaris, Linux, ... >>>> >>>> ECHRNG - found on AIX, Solaris, Linux >>>> >>>> ELNRNG - found on AIX, Solaris, Linux >>>> >>>> >>>> >>>> >>> The argument can easily be made in the other direction. Checking the last n >>> errno codes I see: >>> >>> AIX, MAC + #ifdef EPROCLIM >>> AIX only + #ifdef ECORRUPT >>> AIX only + #ifdef ESYSERROR >>> AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs relocation") >>> AIX, MAC + #ifdef ENOATTR >>> AIX only + DEFINE_ENTRY(ESAD, "Security authentication denied") >>> AIX only + #ifdef ENOTRUST >>> ... >>> >>> >>> I would suggest to keep the multi-platform errnos in os.cpp just where >>>> they are . >>>> >>>> >>>> >>>> >>> I am still not convinced and like my original suggestion better. Let's >>> wait >>> for others to chime in and see what the consensus is. >>> >>> Best Regards, Thomas >>> >>> >>> >>> >>> - Can we move platform specific error codes to platform files? Eg by >>>> having a platform specific version pd_errno_to_string(), >>>> - which has a first shot at translating errno values, and only if >>>> that >>>> one returns no result reverting back to the shared version? >>>> - >>>> >>>> >>>> >>>> I can go through the list of added errnos and check if there are really a >>>> few that exist only on AIX. 
>>>> If there are a significant number we might do what you suggest , but for >>>> only a small number I wouldn't do it. >>>> >>>> >>>> >>>> >>>> >>>> Small nit: >>>>> >>>> >>>> >>>>> >>>> - DEFINE_ENTRY(ESTALE, "Reserved") >>>>> >>>> >>>> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >>>>> >>>> >>>> >>>>> >>>> I like the glibc text better, just "Stale file handle". NFS seems too >>>>> >>>> specific, can handles for other remote file systems not get stale? >>>> >>>> >>>> >>>> That's fine with me, I can change this to what you suggest. >>>> >>>> >>>> >>>> Best regards, Matthias >>>> >>>> >>>> >>>> >>>> >>>> *From:* Thomas Stüfe [mailto:thomas.stuefe at gmail.com] >>>> *Sent:* Donnerstag, 1. Februar 2018 18:38 >>>> *To:* Baesken, Matthias >>>> *Cc:* hotspot-dev at openjdk.java.net; ppc-aix-port-dev at openjdk.java.net >>>> *Subject:* Re: RFR : 8196578 : enhance errno_to_string function in >>>> os.cpp >>>> >>>> with some additional errno texts from AIX 7.1 >>>> >>>> >>>> >>>> Hi Matthias, >>>> >>>> >>>> >>>> This would probably be better discussed in hotspot-runtime, no? >>>> >>>> >>>> >>>> The old error codes and their descriptions were Posix ( >>>> http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html). >>>> I >>>> do not really like spamming a shared file with AIX specific errno codes. >>>> Can we move platform specific error codes to platform files? Eg by >>>> having a >>>> platform specific version pd_errno_to_string(), which has a first shot >>>> at >>>> translating errno values, and only if that one returns no result >>>> reverting >>>> back to the shared version? >>>> >>>> >>>> >>>> Small nit: >>>> >>>> >>>> >>>> - DEFINE_ENTRY(ESTALE, "Reserved") >>>> >>>> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file handle") >>>> >>>> >>>> >>>> I like the glibc text better, just "Stale file handle". NFS seems too >>>> specific, can handles for other remote file systems not get stale?
>>>> Kind Regards, Thomas >>>> >>>> >>>> >>>> On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < >>>> matthias.baesken at sap.com> wrote: >>>> >>>> Hello , I enhanced the errno - to - error-text mappings in os.cpp >>>> for a few errnos we find on AIX 7.1 . >>>> Some of these added errnos are found as well on Linux (e.g. SLES 11 / >>>> 12 >>>> ). >>>> >>>> Could you please check and review ? >>>> >>>> ( btw. there is good cross platform info about the errnos at >>>> http://www.ioplex.com/~miallen/errcmp.html ) >>>> >>>> Bug : >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8196578 >>>> >>>> Webrev : >>>> >>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ >>>> >>>> >>>> >>>> Best regards, Matthias >>>> >>>> >>>> >>>> > From aph at redhat.com Thu Feb 8 17:38:41 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 8 Feb 2018 17:38:41 +0000 Subject: Native Memory Leak In-Reply-To: References: Message-ID: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> On 07/02/18 19:27, Jacob Schlather wrote: > Having found these I could use some guidance on what to look at next and > what sorts of calls could potentially be causing these issues. We don't > have any explicit uses of JNI that I can tell in our code bases. Let me > know if there's any more information that would be helpful. These look pretty normal. Assuming that this is Linux, look at /proc/<pid>/maps. You might actually be leaking memory because of some native structures like FileHandles. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd.
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From kim.barrett at oracle.com Thu Feb 8 17:51:08 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 8 Feb 2018 12:51:08 -0500 Subject: RFR: 8196083: Avoid locking in OopStorage::release In-Reply-To: <4a207e6c-c780-2f9b-c8f3-21d93dcc07b4@oracle.com> References: <1ef56349-0193-4390-17b7-429da1dbbec8@oracle.com> <5F460E3D-6D21-4E42-92A3-93C3FDA70718@oracle.com> <4a207e6c-c780-2f9b-c8f3-21d93dcc07b4@oracle.com> Message-ID: > On Feb 8, 2018, at 11:10 AM, coleen.phillimore at oracle.com wrote: > > > > On 2/8/18 7:50 AM, coleen.phillimore at oracle.com wrote: >> >> >> On 2/8/18 1:37 AM, Kim Barrett wrote: >>>> On Feb 7, 2018, at 9:24 PM, Kim Barrett wrote: >>>> >>>>> On Feb 7, 2018, at 4:19 PM, coleen.phillimore at oracle.com wrote: >>>>> I think this logging should be debug mode because I don't know if you want this with -Xlog (default is log everything with info mode, I think), except this one: >>>>> >>>>> log_info(oopstorage, ref)("%s: failed allocation", name()); >>>> If anything, I'm tempted to make that one a warning. Odds are pretty >>>> good that it will lead to some error or abort somewhere up the call chain. >>>> >>>> I think the oopstorage,blocks log_info's are reasonable. >>>> Other than the one you mention, oopstorage,ref log_info's might be a >>>> bit much, particularly as we start making more use of oopstorage. But >>>> I'd like to consider changes to the logging levels here as a separate >>>> issue. > > Yes, we can discuss this as a separate issue. I just happened to notice it while reading through your change. The lock free release change looks good. Thanks. > > Thanks, > Coleen > >>> "java -Xlog -version? produces over 1100 lines of output. It?s not obvious the >>> existing info output from oopstorage would make a noticeable difference. >>> >> >> Yes, that is why I would like this logging to not be info. 
>> Thanks, >> Coleen From jschlather at hubspot.com Thu Feb 8 18:26:48 2018 From: jschlather at hubspot.com (Jacob Schlather) Date: Thu, 8 Feb 2018 13:26:48 -0500 Subject: Native Memory Leak In-Reply-To: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> Message-ID: Thanks Andrew. I've looked at file handles and they don't seem to be growing over time. The allocations I posted were after the JVM had been up for one day. After another 24 hours all of the allocations have nearly doubled; the internal memory reservation is at 435mb, with the MemberTable at 212mb and the JNI allocations around 140mb. Is this sort of growth of these normal? The characteristic of this issue that I've been seeing is that the memory of the JVM monotonically increases over time with internal memory seeming to be the culprit from native memory tracking. We're seeing this issue across multiple services, on our higher throughput boxes internal memory will grow to 3gb, while the heap is 6g. [0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d [0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66 [0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84 [0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, CallInfo&)+0x28d (malloc=212998KB +212577KB #19) On Thu, Feb 8, 2018 at 12:38 PM, Andrew Haley wrote: > On 07/02/18 19:27, Jacob Schlather wrote: > > Having found these I could use some guidance on what to look at next and > > what sorts of calls could potentially be causing these issues. We don't > > have any explicit uses of JNI that I can tell in our code bases. Let me > > know if there's anymore information that would be helpful. > > These look pretty normal. Assuming that this is Linux, look at > /proc//maps/. You might actually be leaking memory because > of some native structures like FileHandles. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 > From aph at redhat.com Thu Feb 8 18:34:14 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 8 Feb 2018 18:34:14 +0000 Subject: Native Memory Leak In-Reply-To: References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> Message-ID: <96f3db66-df26-db06-58f6-301186bbc3b1@redhat.com> On 08/02/18 18:26, Jacob Schlather wrote: > [0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d > [0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66 > [0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84 > [0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, > CallInfo&)+0x28d > (malloc=212998KB +212577KB #19) Looks to me like you're leaking MethodHandles. It should be easy enough for you to scan your code for those. If you're not generating MethodHandles yourself, put a breakpoint on MethodHandle's constructor and see who is. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From kim.barrett at oracle.com Thu Feb 8 19:01:53 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 8 Feb 2018 14:01:53 -0500 Subject: RFR: 8194691: Cleanup unnecessary casts in Atomic/OrderAccess uses In-Reply-To: <1518089007.2700.1.camel@oracle.com> References: <1518089007.2700.1.camel@oracle.com> Message-ID: > On Feb 8, 2018, at 6:23 AM, Thomas Schatzl wrote: > > On Wed, 2018-02-07 at 17:00 -0500, Kim Barrett wrote: >> Please review this removal of unnecessary casts in calls to Atomic >> and >> OrderAccess functions. This isn't an attempt to be complete, but >> eliminates some easily found and easy to fix cases. >> >> Also changed some uses of Atomic::add with a negated value to instead >> use Atomic::sub. >> >> I've not made any changes around JavaThreadState and >> Thread::_thread_state manipulation. 
That may require more >> refactoring >> to deal with than I wanted to mix in with this otherwise fairly >> straight-forward set of changes. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8194691 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8194691/open.00/ >> >> Testing: >> Mach5 {hs,jdk}-tier{1,2,3} >> > > - the copyright line in dependencyContext.cpp should read (c) ... > 2015, 2018, ..., mentioning both years like in other files. Oops. Thanks for spotting that. > Looks good otherwise. I do not need to re-review above change added. > > Thanks, > Thomas Thanks. From coleen.phillimore at oracle.com Thu Feb 8 19:07:49 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Feb 2018 14:07:49 -0500 Subject: Native Memory Leak In-Reply-To: References: Message-ID: <0cb32e39-1532-2a6c-2790-8a579a35e296@oracle.com> On 2/7/18 2:27 PM, Jacob Schlather wrote: > I hope this is the right mailing list. I've been tracking down a memory > leak that seems to be related to > https://bugs.java.com/view_bug.do?bug_id=8162795. Running on Java 1.8.0_144 > with native memory tracking enabled, we've been seeing the internal memory > size grow until the kernel kills the JVM for using too much memory. Using > the native memory detail diff tool I've been able to find the following > items that seem to be growing, one being a MemberNameTable > > [0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d > [0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66 > [0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84 > [0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, > CallInfo&)+0x28d > (malloc=106502KB +106081KB #19) > > and another being some JNI allocations The MemberNameTable in jdk8 used jweak as pointers to MemberNames so the below increases in JNI block memory would also be accountable to the MemberNameTable.?? 
We fixed this temporarily in jdk8 and 9 to leak less memory but it still leaked. The long term non-leaking fix is in jdk10, which is pointed to by the bugs you found. I would follow Andrew Haley's suggestion and see if you can find and reduce the use of MethodHandles in the application. I don't think we have any workarounds other than backporting the jdk10 fix, or adding a diagnostic option to disable the table. If you don't use RedefineClasses, this table is unnecessary. Were these the highest differences in the native memory that were observed using NMT? Thanks, Coleen > > [0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa > [0x00007efffad36830] JavaThread::run()+0xb0 > [0x00007efffabe7338] java_start(Thread*)+0x108 > (malloc=23903KB +23697KB #78452 +77776) > > [0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa > [0x00007efffa94f7fb] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, > JavaValue*, Thread*)+0x6b > [0x00007efffa9506c4] JavaCalls::call_helper(JavaValue*, methodHandle*, > JavaCallArguments*, Thread*)+0x884 > [0x00007efffa904001] InstanceKlass::register_finalizer(instanceOopDesc*, > Thread*)+0xf1 > (malloc=19770KB +19727KB #64887 +64745) > > [0x00007efffa9ce58a] JNIHandleBlock::allocate_block(Thread*)+0xaa > [0x00007efffa94f7fb] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, > JavaValue*, Thread*)+0x6b > [0x00007efffa9506c4] JavaCalls::call_helper(JavaValue*, methodHandle*, > JavaCallArguments*, Thread*)+0x884 > [0x00007efffa9513a1] JavaCalls::call_virtual(JavaValue*, KlassHandle, > Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x321 > (malloc=29794KB +29563KB #97786 +97028) > > Having found these I could use some guidance on what to look at next and > what sorts of calls could potentially be causing these issues. We don't > have any explicit uses of JNI that I can tell in our code bases. Let me > know if there's anymore information that would be helpful. 
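[Archive editor's note: for readers landing on this thread later, a minimal, hypothetical sketch of the MethodHandle-resolution churn Coleen describes. This is not the poster's code (the actual churn turned out to be inside Caffeine's cache construction); the class name and loop count are invented for illustration. On JDK 8, repeated resolutions like this can grow the native MemberNameTable, which shows up in NMT's "Internal" category; per the thread, the long-term fix landed in JDK 10.]

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Hypothetical reproducer of MethodHandle churn. Each findVirtual()
// resolution materializes a MemberName; on JDK 8 these were also
// registered in the native MemberNameTable, so a loop like this can make
// native memory grow. To observe it, run with
//   java -XX:NativeMemoryTracking=detail MemberNameChurn
// and take two snapshots with
//   jcmd <pid> VM.native_memory detail.diff
public class MemberNameChurn {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType type = MethodType.methodType(int.class);
        int resolved = 0;
        for (int i = 0; i < 10_000; i++) {
            // A fresh resolution on every iteration, as a cache that is
            // continually recreated would do.
            MethodHandle mh = lookup.findVirtual(String.class, "length", type);
            if ((int) mh.invokeExact("hello") == 5) {
                resolved++;
            }
        }
        System.out.println("resolved " + resolved + " method handles");
    }
}
```

The fix in the application's case was simply to stop recreating the lookup-heavy structure; caching the resolved MethodHandle in a static final field avoids both the churn and the per-resolution cost.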
From kirk.pepperdine at gmail.com Fri Feb 9 08:10:30 2018 From: kirk.pepperdine at gmail.com (Kirk Pepperdine) Date: Fri, 9 Feb 2018 09:10:30 +0100 Subject: Native Memory Leak In-Reply-To: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> Message-ID: <3AB5C19D-12E8-4B57-BDEF-777D5D235672@gmail.com> > On Feb 8, 2018, at 6:38 PM, Andrew Haley wrote: > > On 07/02/18 19:27, Jacob Schlather wrote: >> Having found these I could use some guidance on what to look at next and >> what sorts of calls could potentially be causing these issues. We don't >> have any explicit uses of JNI that I can tell in our code bases. Let me >> know if there's anymore information that would be helpful. > > These look pretty normal. Assuming that this is Linux, look at > /proc//maps/. You might actually be leaking memory because > of some native structures like FileHandles. These will be cleaned up by finalization. I don't think you'll find a leak unless you can perform a few diff's to get a better view. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Fri Feb 9 09:15:25 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 9 Feb 2018 09:15:25 +0000 Subject: Native Memory Leak In-Reply-To: <3AB5C19D-12E8-4B57-BDEF-777D5D235672@gmail.com> References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> <3AB5C19D-12E8-4B57-BDEF-777D5D235672@gmail.com> Message-ID: On 09/02/18 08:10, Kirk Pepperdine wrote: > These will be cleaned up by finalization. If you're lucky. As discussed at some length here, once something ends up in the old generation it can be stuck there for a very long time, and maybe never be collected. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From kirk.pepperdine at gmail.com Fri Feb 9 09:25:54 2018 From: kirk.pepperdine at gmail.com (Kirk Pepperdine) Date: Fri, 9 Feb 2018 10:25:54 +0100 Subject: Native Memory Leak In-Reply-To: References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> <3AB5C19D-12E8-4B57-BDEF-777D5D235672@gmail.com> Message-ID: <2678AE55-5742-4B42-9E7A-BEA3C66E8E6B@gmail.com> > On Feb 9, 2018, at 10:15 AM, Andrew Haley wrote: > > On 09/02/18 08:10, Kirk Pepperdine wrote: >> These will be cleaned up by finalization. > > If you're lucky. As discussed at some length here, once something ends up > in the old generation it can be stuck there for a very long time, and > maybe never be collected. True enough but on the first collection of tenured the file handles will be returned to the OS and the memory should be recovered meaning that over the long run, native usage should stabilize. If it doesn?t, two back to back calls to System.gc() should give you a hint if this is an issue. What I see more frequently is anon blocks (pmap) accumulating as they are for some reason, not being reused and not being reclaimed. Generally this happens if you churn through threads at a significant rate. Kind regards, Kirk Pepperdine From kirk.pepperdine at gmail.com Fri Feb 9 09:46:14 2018 From: kirk.pepperdine at gmail.com (Kirk Pepperdine) Date: Fri, 9 Feb 2018 10:46:14 +0100 Subject: Native Memory Leak In-Reply-To: References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> Message-ID: Hi Jacob, Out of curiosity, are you using any JDBC drivers? I believe there is a version of some drivers that do leak native memory buffers. Also, are you tracking anon block allocations with pmap? Kind regards, Kirk Pepperdine > On Feb 8, 2018, at 7:26 PM, Jacob Schlather wrote: > > Thanks Andrew. I've looked at file handles and they don't seem to be > growing over time. The allocations I posted were after the JVM had been up > for one day. 
After another 24 hours all of the allocations have nearly > doubled the internal memory reservation is at 435mb, with the MemberTable > at 212mb and the JNI allocations around 140mb. Is this sort of growth of > these normal? The characteristic of this issue that I've been seeing is > that the memory of the JVM monotically increases over time with internal > memory seeming to be the culprit from native memory tracking. We're seeing > this issue across multiple services, on our higher throughput boxes > internal memory will grow to 3gb, while the heap is 6g. > > [0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d > [0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66 > [0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84 > [0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, > CallInfo&)+0x28d > (malloc=212998KB +212577KB #19) > > > On Thu, Feb 8, 2018 at 12:38 PM, Andrew Haley wrote: > >> On 07/02/18 19:27, Jacob Schlather wrote: >>> Having found these I could use some guidance on what to look at next and >>> what sorts of calls could potentially be causing these issues. We don't >>> have any explicit uses of JNI that I can tell in our code bases. Let me >>> know if there's anymore information that would be helpful. >> >> These look pretty normal. Assuming that this is Linux, look at >> /proc//maps/. You might actually be leaking memory because >> of some native structures like FileHandles. >> >> -- >> Andrew Haley >> Java Platform Lead Engineer >> Red Hat UK Ltd. 
>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 >> From matthias.baesken at sap.com Fri Feb 9 10:29:50 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 9 Feb 2018 10:29:50 +0000 Subject: RFR : 8197412 Enable docker container related tests for linux s390x Message-ID: <655c04a77d6a41e993cc55d7e8700301@sap.com> Hello, please review : 8197412 Enable docker container related tests for linux s390x This enables the docker related tests on Linux s390x. I tested on SLES12 , docker version 17.09.1 . Some comments : TestCPUSets.java : is for now disabled on s390x because Cpus_allowed_list from /proc/self/status can give misleading values (larger than the currently available CPU number). DockerTestUtils.java : I changed the order to docker build to what is really documented . Docker help build says : Usage: docker build [OPTIONS] PATH | URL | - On older docker versions the order is important ( but on docker 17.x is seems to be ok to give the path first, still prefer to change it to what the help says ). Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ bug : https://bugs.openjdk.java.net/browse/JDK-8197412 Thanks, Matthias From goetz.lindenmaier at sap.com Fri Feb 9 11:52:02 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 9 Feb 2018 11:52:02 +0000 Subject: RFR : 8197412 Enable docker container related tests for linux s390x In-Reply-To: <655c04a77d6a41e993cc55d7e8700301@sap.com> References: <655c04a77d6a41e993cc55d7e8700301@sap.com> Message-ID: <2718e58bc40242edac2f540be5152945@sap.com> Hi Matthias, looks good! Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Baesken, Matthias > Sent: Freitag, 9. 
Februar 2018 11:30 > To: 'hotspot-dev at openjdk.java.net' > Subject: RFR : 8197412 Enable docker container related tests for linux s390x > > Hello, please review : > > 8197412 Enable docker container related tests for linux s390x > > > This enables the docker related tests on Linux s390x. I tested on SLES12 , > docker version 17.09.1 . > Some comments : > > TestCPUSets.java : > is for now disabled on s390x because Cpus_allowed_list from > /proc/self/status can give misleading values (larger than the currently > available CPU number). > > DockerTestUtils.java : > I changed the order to docker build to what is really documented . > Docker help build says : > Usage: docker build [OPTIONS] PATH | URL | - > On older docker versions the order is important ( but on docker 17.x is seems > to be ok to give the path first, still prefer to change it to what the help says ). > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > > > Thanks, Matthias From jschlather at hubspot.com Fri Feb 9 15:46:46 2018 From: jschlather at hubspot.com (Jacob Schlather) Date: Fri, 9 Feb 2018 10:46:46 -0500 Subject: Native Memory Leak In-Reply-To: References: <2e9951d9-42ea-dab9-d73d-e8c3e73dc511@redhat.com> Message-ID: Thanks Andrew & Colleen, that was really helpful. The issue ended up being inside caffeine. In a recent upgrade they changed over the way caches are created to use method handles and some of our tooling continually recreates these caches. I downgraded caffeine to before the upgrade and the issue went away. I filed an issue in caffeine https://github.com/ben-manes/caffeine/issues/222 . Thanks for the help. On Fri, Feb 9, 2018 at 4:46 AM, Kirk Pepperdine wrote: > Hi Jacob, > > Out of curiosity, are you using any JDBC drivers? I believe there is a > version of some drivers that do leak native memory buffers. Also, are you > tracking anon block allocations with pmap? 
> > Kind regards, > Kirk Pepperdine > > > On Feb 8, 2018, at 7:26 PM, Jacob Schlather > wrote: > > > > Thanks Andrew. I've looked at file handles and they don't seem to be > > growing over time. The allocations I posted were after the JVM had been > up > > for one day. After another 24 hours all of the allocations have nearly > > doubled the internal memory reservation is at 435mb, with the MemberTable > > at 212mb and the JNI allocations around 140mb. Is this sort of growth of > > these normal? The characteristic of this issue that I've been seeing is > > that the memory of the JVM monotically increases over time with internal > > memory seeming to be the culprit from native memory tracking. We're > seeing > > this issue across multiple services, on our higher throughput boxes > > internal memory will grow to 3gb, while the heap is 6g. > > > > [0x00007efffa8d29fd] GenericGrowableArray::raw_allocate(int)+0x17d > > [0x00007efffab86456] MemberNameTable::add_member_name(_jobject*)+0x66 > > [0x00007efffa8ffb24] InstanceKlass::add_member_name(Handle)+0x84 > > [0x00007efffab8777d] MethodHandles::init_method_MemberName(Handle, > > CallInfo&)+0x28d > > (malloc=212998KB +212577KB #19) > > > > > > On Thu, Feb 8, 2018 at 12:38 PM, Andrew Haley wrote: > > > >> On 07/02/18 19:27, Jacob Schlather wrote: > >>> Having found these I could use some guidance on what to look at next > and > >>> what sorts of calls could potentially be causing these issues. We don't > >>> have any explicit uses of JNI that I can tell in our code bases. Let me > >>> know if there's anymore information that would be helpful. > >> > >> These look pretty normal. Assuming that this is Linux, look at > >> /proc//maps/. You might actually be leaking memory because > >> of some native structures like FileHandles. > >> > >> -- > >> Andrew Haley > >> Java Platform Lead Engineer > >> Red Hat UK Ltd. 
> >> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 > >> > > From aph at redhat.com Fri Feb 9 16:51:02 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 9 Feb 2018 16:51:02 +0000 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 Message-ID: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> 32-bit Linux x86 HotSpot allocates an executable memory region just below the end of the stack. This is a workaround for JDK-8023956, which in turn relates to a bug in the RHEL 5 & 6 kernels on old (pre-NX) CPUs: To summarize: to emulate NX feature on X86_32 code segment is used to limit execution to the highest executable VA. There is a tiny race on SMP MM invalidation code which can cause the lazy CS update code in trap handling to think a general protection fault wasn't cause by itself. This results in sending the JVM a useless SIGSEGV with si_code:SI_KERNEL, results in JVM signal handling forcing a dump. The suggested work around (limited to 32 bit Linux): is to enable execution (PROT_EXEC) on a high address and execute some code. To be more precise: on 32-bit Linux kernels the top of the main stack is at about 3G (0xC0000000), and HotSpot creates an executable mapping of a single page just a little way below the main stack and executes an instruction in it. (It then leaves the region mapped; I do not know why. It could be that the region could be removed at this point.) Some new Linux kernels by default have a stack guard of a megabyte between the main stack and any allocated memory region. See CVE-2017-1000364. (Note that this megabyte is a default: it can be changed at boot time.) So, when the stack grows to within a megabyte of the executable region HotSpot installed, the process segfaults and is killed. This only happens when we're running on the main stack, and that only happens when using the JNI invocation interface. I have looked at several ways to fix this. 
One was to probe to find out what the stack guard size is, and to place the executable mapping an appropriate distance from the stack; this can be done, but it is complex. I believe it's also unnecessary, because the workaround for JDK-8023956 isn't needed with the newer kernels that have the larger stack guard gap. The fix I'm proposing here first bangs down the stack to the Java stack limit, then tries to map the executable memory region. On systems with a large stack guard gap this mapping attempt will fail, and we return and continue. On older systems which do not have a large stack gap it will continue and install the executable memory region. There are other possible fixes. Rather than failing, we could loop trying to install the executable mapping until we succeed. http://cr.openjdk.java.net/~aph/8197429-1/ -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From leonid.mesnik at oracle.com Sat Feb 10 00:41:19 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Fri, 9 Feb 2018 16:41:19 -0800 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> Hi Andrew Could you please update bug https://bugs.openjdk.java.net/browse/JDK-8197429 with affected and fixed version of jdk. It is unclear where you plan to push this fix. I assume that you are going to push into jdk/hs repo. (from your webrev: Compare against:http://hg.openjdk.java.net/jdk/hs) I looked on the tests only. Here are my comments: 1) There are no copyrights for new files. 2) File http://cr.openjdk.java.net/~aph/8197429-1/test/hotspot/jtreg/runtime/8197429/foo.java.html contains the same code as in T.java and doesn't seem to be used. 3) The preferable name for a test is something meaningful rather than a bug-id. 
Would it possible to replace runtime/8197429 with something like runtime/segfaults ? 4) Currently the correct way to use native libs is to compile it during build and use with -nativepath. See make examples here: http://hg.openjdk.java.net/jdk/jdk10/file/tip/make/test/JtregNativeHotspot.gmk http://hg.openjdk.java.net/jdk/jdk10/file/tip/test/hotspot/jtreg/runtime/jni/CalleeSavedRegisters So you might rewrite your test completely on java. So you could use requires tag to filter out unsupported platforms. Also logic of choosing platform will be slightly different. Leonid > On Feb 9, 2018, at 8:51 AM, Andrew Haley wrote: > > 32-bit Linux x86 HotSpot allocates an executable memory region just > below the end of the stack. This is a workaround for JDK-8023956, > which in turn relates to a bug in the RHEL 5 & 6 kernels on old > (pre-NX) CPUs: > > To summarize: to emulate NX feature on X86_32 code segment is used > to limit execution to the highest executable VA. There is a tiny > race on SMP MM invalidation code which can cause the lazy CS update > code in trap handling to think a general protection fault wasn't > cause by itself. This results in sending the JVM a useless SIGSEGV > with si_code:SI_KERNEL, results in JVM signal handling forcing a > dump. > > The suggested work around (limited to 32 bit Linux): is to enable > execution (PROT_EXEC) on a high address and execute some code. > > To be more precise: on 32-bit Linux kernels the top of the main stack > is at about 3G (0xC0000000), and HotSpot creates an executable mapping > of a single page just a little way below the main stack and executes > an instruction in it. (It then leaves the region mapped; I do not > know why. It could be that the region could be removed at this > point.) > > Some new Linux kernels by default have a stack guard of a megabyte > between the main stack and any allocated memory region. See > CVE-2017-1000364. (Note that this megabyte is a default: it can be > changed at boot time.) 
> > So, when the stack grows to within a megabyte of the executable region > HotSpot installed, the process segfaults and is killed. This only > happens when we're running on the main stack, and that only happens > when using the JNI invocation interface. > > > I have looked at several ways to fix this. One was to probe to find > out what the stack guard size is, and to place the executable mapping > an appropriate distance from the stack; this can be done, but it is > complex. I believe it's also unnecessary, because the workaround for > JDK-8023956 isn't needed with the newer kernels that have the larger > stack guard gap. > > The fix I'm proposing here first bangs down the stack to the Java > stack limit, then tries to map the executable memory region. On > systems with a large stack guard gap this mapping attempt will fail, > and we return and continue. On older systems which do not have a > large stack gap it will continue and install the executable memory > region. > > There are other possible fixes. Rather than failing, we could loop > trying to install the executable mapping until we succeed. > > http://cr.openjdk.java.net/~aph/8197429-1/ > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From dms at samersoff.net Sat Feb 10 10:58:32 2018 From: dms at samersoff.net (Dmitry Samersoff) Date: Sat, 10 Feb 2018 13:58:32 +0300 Subject: Constant dynamic pushed to the hs repo In-Reply-To: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> References: <093b9c05-4414-6341-9e39-c2e1cb5d9059@redhat.com> Message-ID: <002f5746-db5c-c964-3e6e-3e98d4d22566@samersoff.net> Everybody, AArch64 changes pushed to repository. -Dmitry On 02/01/2018 01:09 PM, Andrew Haley wrote: > On 31/01/18 22:43, Paul Sandoz wrote: >> I just pushed the constant dynamic change sets to hs [*]. It took a little longer than I anticipated to work through some of the review process given the holiday break. 
>> >> We should now be able to follow up, in the hs repo until the merge in some cases, with dependent issues such as the changes to support AArch64, SPARC, AoT/Graal, additional tests, and some bug/performance fixes. > > OK. Can you please send a list of those changesets? I guess they're > just everything pushed by you on Jan 31, but I wanted to check. > -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From dmitry.samersoff at bell-sw.com Sat Feb 10 12:10:33 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersoff) Date: Sat, 10 Feb 2018 15:10:33 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Message-ID: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> Everybody, Please review small changes, that enables docker testing on Linux/AArch64 http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ PS: Matthias - I refactored VMProps.dockerSupport() a bit to make it more readable, please check that it doesn't brake your work. -Dmitry -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From aph at redhat.com Sat Feb 10 16:28:36 2018 From: aph at redhat.com (Andrew Haley) Date: Sat, 10 Feb 2018 16:28:36 +0000 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> Message-ID: On 10/02/18 00:41, Leonid Mesnik wrote: > > Could you please update bug > https://bugs.openjdk.java.net/browse/JDK-8197429 with affected and > fixed version of jdk. It is unclear where you plan to push this > fix. I assume that you are going to push into jdk/hs repo. Sure. The bug affects all versions of HotSpot running on a modern Linux kernel, going back years. > (from your webrev: Compare against:http://hg.openjdk.java.net/jdk/hs) > > I looked on the tests only. 
Here are my comments: > > 1) There are no copyrights for new files. OK. > 2) File > http://cr.openjdk.java.net/~aph/8197429-1/test/hotspot/jtreg/runtime/8197429/foo.java.html > contains the same code as in T.java and doesn?t seems to be used. OK. > > 3) The preferable name for test is some meaningful and rather then bug-id. > Would it possible to replace runtime/8197429 with something like runtime/segfaults ? I guess so, but that doesn't help much. I'll try something like "stack guard" as a name. > 4) Currently the correct way to use native libs is to compile it during build and use with -nativepath. > See make examples here: > http://hg.openjdk.java.net/jdk/jdk10/file/tip/make/test/JtregNativeHotspot.gmk > http://hg.openjdk.java.net/jdk/jdk10/file/tip/test/hotspot/jtreg/runtime/jni/CalleeSavedRegisters > So you might rewrite your test completely on java. So you could use requires tag to filter out unsupported platforms. > Also logic of choosing platform will be slightly different. I'll do that. I wasn't at all sure whether only to test x86 for this bug: sure, only x86 is affected now, but it does not hurt to test JNI invocation with non-standard stack sizes on all platforms. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From kim.barrett at oracle.com Sun Feb 11 07:25:57 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sun, 11 Feb 2018 02:25:57 -0500 Subject: RFR: 8197454: Need Access decorator for storing oop into uninitialized location Message-ID: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> Please review this change to the Access API to support stores of oops into uninitialized locations. This change is needed to prevent such stores from, for example, having the G1 pre-barrier applied to whatever garbage happens to be in the location being stored into. There was already support for stores to uninitialized locations in the Access API, but only for array initialization. 
This change generalizes that mechanism, and renames it accordingly: ARRAYCOPY_DEST_NOT_INITIALIZED => AS_DEST_NOT_INITIALIZED. CR: https://bugs.openjdk.java.net/browse/JDK-8197454 Webrev: http://cr.openjdk.java.net/~kbarrett/8197454/open.00/ Testing: Mach5 {hs,jdk}-tier{1,2,3} From david.holmes at oracle.com Mon Feb 12 06:59:08 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 12 Feb 2018 16:59:08 +1000 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: References: <93fc660476f1490da815cdfba98ff623@sap.com> Message-ID: <6fcb1c13-8ea8-8d8d-5f99-060575c7be48@oracle.com> On 9/02/2018 3:19 AM, Thomas St?fe wrote: > After discussing this off-line with Matthias and Goetz, I withdraw my > opposition to this patch. > > Still not a big fan, but if everyone else (including David) is okay with > this patch, I am too. I'm certainly not a fan of it - there are probably BSD, OS X and Solaris specific error codes that might be listed too. I'd prefer to see a RFE to rip this out completely (or deal with platform specific extensions). But until such a time I can grudgingly accept this patch. Cheers, David > Kind Regards, Thomas > > On Fri, Feb 2, 2018 at 10:20 AM, Thomas St?fe > wrote: > > > > On Fri, Feb 2, 2018 at 9:57 AM, David Holmes > > wrote: > > While I did not do an exhaustive check of the existing codes > even the ones under > > // The following enums are not defined on all platforms. > > are at least defined by POSIX (even if just listed as "Reserved"). > > So I am still reluctant to introduce OS specific codes into a > shared file. Plus there's the problem of different OS having > different meanings for the same error code - suggesting per-OS > specialization might be useful (but tricky to implement). > > That said I have to re-question whether we should be maintaining > this explicit string mapping table anyway? 
strerror() is not > thread-safe but strerror_l() seems to be, or at worst we need > buffer management with strerror_r(). I know this topic has > arisen before ... > > > How about we build the string table dynamically at process start by > iterating the first n errnos and calling strerror() :) Just kidding. > > Yes, I admit this table starts to feel weird. Original discussions > were here: https://bugs.openjdk.java.net/browse/JDK-8148425 > > > I originally just wanted a static translation of errno numbers to > literalized errno constants (e.g. ETOOMANYREFS => "ETOOMANYREFS"), > because in 99% of cases where we call os::strerror() we do this to > print log output for developers, and as a developer I find > "ETOOMANYREFS" far more succinct than whatever strerror() returns. > This would also bypass any localization issues. If I see > "ETOOMANYREFS" in a log file I immediately know this is an error > code from the libc, and can look it up in the man page or google it. > But when I read "Too many references: can't splice" - potentially in > Portuguese :) - I would have to dig a bit until I find out what is > actually happening. > > Of course, there are cases where we want the human-readable, > localized text, but those cases are rarer and could be rewritten to > use strerror_r. > > Just my 5 cents. > > ..Thomas > > Cheers, > David > > On 2/02/2018 6:40 PM, Thomas Stüfe wrote: > > On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias > > > wrote: > > > - I do not really like spamming a shared file with > AIX specific errno > > codes. > > > > Hi, I wrote "for a few errnos ***we find*** on AIX > 7.1", not that > they are AIX ***specific***. > > Checked the first few added ones : > > > > 1522 // some more errno numbers from AIX 7.1 (some > are also supported > on Linux) > > 1523 #ifdef ENOTBLK > > 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") > > 1525 #endif > > 1526 #ifdef ECHRNG > > 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of range") > > 1528 #endif > > 1529 #ifdef ELNRNG > > 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") > > 1531 #endif > > > > According to > > > > http://www.ioplex.com/~miallen/errcmp.html > > > > > ENOTBLK - found on AIX, Solaris, Linux, ... > > ECHRNG - found on AIX, Solaris, Linux > > ELNRNG - found on AIX, Solaris, Linux > > > > > The argument can easily be made in the other direction. > Checking the last n > errno codes I see: > > AIX, MAC + #ifdef EPROCLIM > AIX only + #ifdef ECORRUPT > AIX only + #ifdef ESYSERROR > AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs > relocation") > AIX, MAC + #ifdef ENOATTR > AIX only + DEFINE_ENTRY(ESAD, "Security authentication > denied") > AIX only + #ifdef ENOTRUST > ... > > > I would suggest to keep the multi-platform errnos in > os.cpp just where > they are . > > > > > I am still not convinced and like my original suggestion > better. Let's wait > for others to chime in and see what the consensus is. > > Best Regards, Thomas > > > > > - Can we move platform specific error codes to > platform files? E.g. by > having a platform specific version > pd_errno_to_string(), > - which has a first shot at translating errno > values, and only if that > one returns no result reverting back to the shared > version? > - > > > > We can go through the list of added errnos and check if > there are really a > few that exist only on AIX. > > If there are a significant number we might do what you > suggest, but for > only a small number I wouldn't do it. > > > > > > Small nit: > > > > > - DEFINE_ENTRY(ESTALE, "Reserved") > > > + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS > file handle") > > > > > I like the glibc text better, just "Stale file > handle". NFS seems too > > specific, can handles for other remote file systems not > get stale? > > > > That's fine with me, I can change this to what you suggest.
> > > > Best regards, Matthias > > > > > > *From:* Thomas Stüfe [mailto:thomas.stuefe at gmail.com > ] > *Sent:* Donnerstag, 1. Februar 2018 18:38 > *To:* Baesken, Matthias > > *Cc:* hotspot-dev at openjdk.java.net > ; > ppc-aix-port-dev at openjdk.java.net > > *Subject:* Re: RFR : 8196578 : enhance errno_to_string > function in os.cpp > > with some additional errno texts from AIX 7.1 > > > > Hi Matthias, > > > > This would probably be better discussed in hotspot-runtime, no? > > > > The old error codes and their descriptions were POSIX ( http://pubs.opengroup.org/onlinepubs/000095399/basedefs/errno.h.html ). > I > do not really like spamming a shared file with AIX > specific errno codes. > Can we move platform specific error codes to platform > files? E.g. by having a > platform specific version pd_errno_to_string(), which > has a first shot at > translating errno values, and only if that one returns > no result reverting > back to the shared version? > > > > Small nit: > > > > - DEFINE_ENTRY(ESTALE, "Reserved") > > + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file > handle") > > > > I like the glibc text better, just "Stale file handle". NFS seems too > specific, can handles for other remote file systems not > get stale? > > Kind Regards, Thomas > > > > On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < > matthias.baesken at sap.com > > wrote: > > Hello, I enhanced the errno-to-error-text > mappings in os.cpp > for a few errnos we find on AIX 7.1 . > Some of these added errnos are found as well on Linux > (e.g. SLES 11 / 12 > ). > > Could you please check and review ? > > ( btw. there is good cross platform info about the > errnos at > http://www.ioplex.com/~miallen/errcmp.html > )
> > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8196578 > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ > > > > > Best regards, Matthias > > > > >
From david.holmes at oracle.com Mon Feb 12 07:19:41 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 12 Feb 2018 17:19:41 +1000 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: <6f61e8ad-a0f2-c6a4-dbc3-02ce8b621412@oracle.com> Hi Andrew, On 10/02/2018 2:51 AM, Andrew Haley wrote: > 32-bit Linux x86 HotSpot allocates an executable memory region just > below the end of the stack. This is a workaround for JDK-8023956, > which in turn relates to a bug in the RHEL 5 & 6 kernels on old > (pre-NX) CPUs: > > To summarize: to emulate the NX feature on X86_32 the code segment is used > to limit execution to the highest executable VA. There is a tiny > race in the SMP MM invalidation code which can cause the lazy CS update > code in trap handling to think a general protection fault wasn't > caused by itself. This results in sending the JVM a useless SIGSEGV > with si_code:SI_KERNEL, which results in JVM signal handling forcing a > dump. > > The suggested workaround (limited to 32-bit Linux) is to enable > execution (PROT_EXEC) on a high address and execute some code. > > To be more precise: on 32-bit Linux kernels the top of the main stack > is at about 3G (0xC0000000), and HotSpot creates an executable mapping > of a single page just a little way below the main stack and executes > an instruction in it. (It then leaves the region mapped; I do not > know why. It could be that the region could be removed at this > point.) > > Some new Linux kernels by default have a stack guard of a megabyte > between the main stack and any allocated memory region. See > CVE-2017-1000364. (Note that this megabyte is a default: it can be > changed at boot time.)
> > So, when the stack grows to within a megabyte of the executable region > HotSpot installed, the process segfaults and is killed. This only > happens when we're running on the main stack, and that only happens > when using the JNI invocation interface. > > > I have looked at several ways to fix this. One was to probe to find > out what the stack guard size is, and to place the executable mapping > an appropriate distance from the stack; this can be done, but it is > complex. I believe it's also unnecessary, because the workaround for > JDK-8023956 isn't needed with the newer kernels that have the larger > stack guard gap. > > The fix I'm proposing here first bangs down the stack to the Java > stack limit, then tries to map the executable memory region. On > systems with a large stack guard gap this mapping attempt will fail, > and we return and continue. On older systems which do not have a > large stack gap it will continue and install the executable memory > region. How does this interact with the use of DisablePrimordialThreadGuardPages? Thanks, David > There are other possible fixes. Rather than failing, we could loop > trying to install the executable mapping until we succeed. > > http://cr.openjdk.java.net/~aph/8197429-1/ > From matthias.baesken at sap.com Mon Feb 12 08:13:29 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 12 Feb 2018 08:13:29 +0000 Subject: RFR : 8197412 Enable docker container related tests for linux s390x In-Reply-To: <2718e58bc40242edac2f540be5152945@sap.com> References: <655c04a77d6a41e993cc55d7e8700301@sap.com> <2718e58bc40242edac2f540be5152945@sap.com> Message-ID: Thanks! Can I get a second review ? Best Regards, Matthias > -----Original Message----- > From: Lindenmaier, Goetz > Sent: Freitag, 9. Februar 2018 12:52 > To: Baesken, Matthias ; 'hotspot- > dev at openjdk.java.net' > Subject: RE: RFR : 8197412 Enable docker container related tests for linux > s390x > > Hi Matthias, > > looks good! 
> > Best regards, > Goetz. > > > -----Original Message----- > > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > > Behalf Of Baesken, Matthias > > Sent: Freitag, 9. Februar 2018 11:30 > > To: 'hotspot-dev at openjdk.java.net' > > Subject: RFR : 8197412 Enable docker container related tests for linux s390x > > > > Hello, please review : > > > > 8197412 Enable docker container related tests for linux s390x > > > > > > This enables the docker related tests on Linux s390x. I tested on SLES12, > > docker version 17.09.1 . > > Some comments : > > > > TestCPUSets.java : > > is for now disabled on s390x because Cpus_allowed_list from > > /proc/self/status can give misleading values (larger than the currently > > available CPU number). > > > > DockerTestUtils.java : > > I changed the argument order of docker build to what is really documented . > > Docker help build says : > > Usage: docker build [OPTIONS] PATH | URL | - > > On older docker versions the order is important ( but on docker 17.x it seems > > to be ok to give the path first, still I prefer to change it to what the help says > ). > > > > > > > > Webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > > > > bug : > > > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > > > > > > > Thanks, Matthias
From matthias.baesken at sap.com Mon Feb 12 08:15:25 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 12 Feb 2018 08:15:25 +0000 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> Message-ID: <7d6be29faafc44e596d03563bf45731c@sap.com> Hi Dmitry, looks good to me (not a Reviewer however).
( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > ) Best regards, Matthias > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] > Sent: Samstag, 10. Februar 2018 13:11 > To: 'hotspot-dev at openjdk.java.net' > Cc: Baesken, Matthias ; > mikhailo.seledtsov at oracle.com > Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux > AARCH64 > > Everybody, > > Please review small changes that enable docker testing on Linux/AArch64 > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > > PS: > > Matthias - I refactored VMProps.dockerSupport() a bit to make it more > readable, please check that it doesn't break your work. > > -Dmitry > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ...
From thomas.stuefe at gmail.com Mon Feb 12 10:06:00 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Feb 2018 11:06:00 +0100 Subject: RFR : 8196578 : enhance errno_to_string function in os.cpp with some additional errno texts from AIX 7.1 In-Reply-To: <6fcb1c13-8ea8-8d8d-5f99-060575c7be48@oracle.com> References: <93fc660476f1490da815cdfba98ff623@sap.com> <6fcb1c13-8ea8-8d8d-5f99-060575c7be48@oracle.com> Message-ID: On Mon, Feb 12, 2018 at 7:59 AM, David Holmes wrote: > On 9/02/2018 3:19 AM, Thomas Stüfe wrote: > >> After discussing this off-line with Matthias and Goetz, I withdraw my >> opposition to this patch. >> >> Still not a big fan, but if everyone else (including David) is okay with >> this patch, I am too. >> > > I'm certainly not a fan of it - there are probably BSD, OS X and Solaris > specific error codes that might be listed too. I'd prefer to see an RFE to > rip this out completely (or deal with platform specific extensions).
But > until such a time I can grudgingly accept this patch. > > Thank you David. I agree, lets revisit this later with an RFE. ..Thomas > Cheers, > David > > Kind Regards, Thomas >> >> On Fri, Feb 2, 2018 at 10:20 AM, Thomas St?fe > > wrote: >> >> >> >> On Fri, Feb 2, 2018 at 9:57 AM, David Holmes >> > wrote: >> >> While I did not do an exhaustive check of the existing codes >> even the ones under >> >> // The following enums are not defined on all platforms. >> >> are at least defined by POSIX (even if just listed as "Reserved"). >> >> So I am still reluctant to introduce OS specific codes into a >> shared file. Plus there's the problem of different OS having >> different meanings for the same error code - suggesting per-OS >> specialization might be useful (but tricky to implement). >> >> That said I have to re-question whether we should be maintaining >> this explicit string mapping table anyway? strerror() is not >> thread-safe but strerror_l() seems to be, or at worst we need >> buffer management with strerror_r(). I know this topic has >> arisen before ... >> >> >> How about we build the string table dynamically at process start by >> iterating the first n errnos and calling strerror() :) Just kidding. >> >> Yes, I admit this table starts to feel weird. Original discussions >> were here: https://bugs.openjdk.java.net/browse/JDK-8148425 >> >> >> I originally just wanted a static translation of errno numbers to >> literalized errno constants (e.g. ETOOMANYREFS => "ETOOMANYREFS"), >> because in 99% of cases where we call os::strerror() we do this to >> print log output for developers, and as a developer I find >> "ETOOMANYREFS" far more succinct than whatever strerror() returns. >> This would also bypass any localization issues. If I see >> "ETOOMANYREFS" in a log file I immediately know this is an error >> code from the libc, and can look it up in the man page or google it. 
>> But when I read "Too many references: can't splice" - potentially in >> Portuguese :) - I would have to dig a bit until I find out what is >> actually happening. >> >> Of course, there are cases where we want the human readable, >> localized text, but those cases are rarer and could be rewritten to >> use strerror_r. >> >> Just my 5 cent. >> >> ..Thomas >> >> Cheers, >> David >> >> On 2/02/2018 6:40 PM, Thomas St?fe wrote: >> >> On Fri, Feb 2, 2018 at 9:02 AM, Baesken, Matthias >> > >> >> wrote: >> >> >> - I do not really like spamming a shared file with >> AIX specific errno >> >> codes. >> >> >> >> Hi, I wrote ?for a few errnos ***we find*** on AIX >> 7.1? , not that >> they are AIX ***specific***. >> >> Checked the first few added ones : >> >> >> >> 1522 // some more errno numbers from AIX 7.1 (some >> are also supported >> on Linux) >> >> 1523 #ifdef ENOTBLK >> >> 1524 DEFINE_ENTRY(ENOTBLK, "Block device required") >> >> 1525 #endif >> >> 1526 #ifdef ECHRNG >> >> 1527 DEFINE_ENTRY(ECHRNG, "Channel number out of >> range") >> >> 1528 #endif >> >> 1529 #ifdef ELNRNG >> >> 1530 DEFINE_ENTRY(ELNRNG, "Link number out of range") >> >> 1531 #endif >> >> >> >> According to >> >> >> >> http://www.ioplex.com/~miallen/errcmp.html >> >> >> >> >> ENOTBLK ? found on AIX, Solaris, Linux, ? >> >> ECHRNG - found on AIX, Solaris, Linux >> >> ELNRNG - found on AIX, Solaris, Linux >> >> >> >> >> The argument can easily made in the other direction. >> Checking the last n >> errno codes I see: >> >> AIX, MAC + #ifdef EPROCLIM >> AIX only + #ifdef ECORRUPT >> AIX only + #ifdef ESYSERROR >> AIX only + DEFINE_ENTRY(ESOFT, "I/O completed, but needs >> relocation") >> AIX, MAC + #ifdef ENOATTR >> AIX only + DEFINE_ENTRY(ESAD, "Security authentication >> denied") >> AIX only + #ifdef ENOTRUST >> ... >> >> >> I would suggest to keep the multi-platform errnos in >> os.cpp just where >> they are . >> >> >> >> >> I am still not convinced and like my original suggestion >> better. 
Lets wait >> for others to chime in and see what the consensus is. >> >> Best Regards, Thomas >> >> >> >> >> - Can we move platform specific error codes to >> platform files? Eg by >> having a platform specific version >> pd_errno_to_string(), >> - which has a first shot at translating errno >> values, and only if that >> one returns no result reverting back to the shared >> version? >> - >> >> >> >> Can go through the list of added errnos and check if >> there are really a >> few in that exist only on AIX. >> >> If there are a significant number we might do what you >> suggest , but for >> only a small number I wouldn?t do it. >> >> >> >> >> >> Small nit: >> >> >> >> >> - DEFINE_ENTRY(ESTALE, "Reserved") >> >> >> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS >> file handle") >> >> >> >> >> I like the glibc text better, just "Stale file >> handle". NFS seems too >> >> specific, can handles for other remote file systems not >> get stale? >> >> >> >> That?s fine with me, I can change this to what you >> suggest. >> >> >> >> Best regards, Matthias >> >> >> >> >> >> *From:* Thomas St?fe [mailto:thomas.stuefe at gmail.com >> ] >> *Sent:* Donnerstag, 1. Februar 2018 18:38 >> *To:* Baesken, Matthias > > >> *Cc:* hotspot-dev at openjdk.java.net >> ; >> ppc-aix-port-dev at openjdk.java.net >> >> *Subject:* Re: RFR : 8196578 : enhance errno_to_string >> function in os.cpp >> >> with some additional errno texts from AIX 7.1 >> >> >> >> Hi Matthias, >> >> >> >> This would probably better discussed in hotspot-runtime, >> no? >> >> >> >> The old error codes and their descriptions were Posix ( >> http://pubs.opengroup.org/onli >> nepubs/000095399/basedefs/errno.h.html >> > inepubs/000095399/basedefs/errno.h.html>). >> I >> do not really like spamming a shared file with AIX >> specific errno codes. >> Can we move platform specific error codes to platform >> files? 
Eg by having a >> platform specific version pd_errno_to_string(), which >> has a first shot at >> translating errno values, and only if that one returns >> no result reverting >> back to the shared version? >> >> >> >> Small nit: >> >> >> >> - DEFINE_ENTRY(ESTALE, "Reserved") >> >> + DEFINE_ENTRY(ESTALE, "No filesystem / stale NFS file >> handle") >> >> >> >> I like the glibc text better, just "Stale file handle". >> NFS seems too >> specific, can handles for other remote file systems not >> get stale? >> >> Kind Regards, Thomas >> >> >> >> On Thu, Feb 1, 2018 at 5:16 PM, Baesken, Matthias < >> matthias.baesken at sap.com >> > wrote: >> >> Hello , I enhanced the errno - to - error-text >> mappings in os.cpp >> for a few errnos we find on AIX 7.1 . >> Some of these added errnos are found as well on Linux >> (e.g. SLES 11 / 12 >> ). >> >> Could you please check and review ? >> >> ( btw. there is good cross platform info about the >> errnos at >> http://www.ioplex.com/~miallen/errcmp.html >> ) >> >> Bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8196578 >> >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8196578/ >> >> >> >> >> Best regards, Matthias >> >> >> >> >> >> From dmitry.samersoff at bell-sw.com Mon Feb 12 12:25:34 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersoff) Date: Mon, 12 Feb 2018 15:25:34 +0300 Subject: RFR : 8197412 Enable docker container related tests for linux s390x In-Reply-To: <655c04a77d6a41e993cc55d7e8700301@sap.com> References: <655c04a77d6a41e993cc55d7e8700301@sap.com> Message-ID: <780cdda0-13d2-6731-8483-824b47ec7329@bell-sw.com> Matthias, Looks good to me. -Dmitry. On 09.02.2018 13:29, Baesken, Matthias wrote: > Hello, please review : > > 8197412 Enable docker container related tests for linux s390x > > > This enables the docker related tests on Linux s390x. I tested on SLES12 , docker version 17.09.1 . 
> Some comments : > > TestCPUSets.java : > is for now disabled on s390x because Cpus_allowed_list from /proc/self/status can give misleading values (larger than the currently available CPU number). > > DockerTestUtils.java : > I changed the order to docker build to what is really documented . > Docker help build says : > Usage: docker build [OPTIONS] PATH | URL | - > On older docker versions the order is important ( but on docker 17.x is seems to be ok to give the path first, still prefer to change it to what the help says ). > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > > > Thanks, Matthias > -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From dmitry.samersoff at bell-sw.com Mon Feb 12 12:27:26 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersoff) Date: Mon, 12 Feb 2018 15:27:26 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <7d6be29faafc44e596d03563bf45731c@sap.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> Message-ID: <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> Matthias, > Hi Dmitry, looks good to me (not a Reviewer however). Thank you for the review. > ( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x I'll wait until you have your changes committed, then update my one. -Dmitry On 12.02.2018 11:15, Baesken, Matthias wrote: > Hi Dmitry, looks good to me (not a Reviewer however). 
> > ( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x > >> >> Webrev : >> >> http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ >> >> >> bug : >> >> https://bugs.openjdk.java.net/browse/JDK-8197412 >> > > ) > > > Best regards, Matthias > > >> -----Original Message----- >> From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] >> Sent: Samstag, 10. Februar 2018 13:11 >> To: 'hotspot-dev at openjdk.java.net' >> Cc: Baesken, Matthias ; >> mikhailo.seledtsov at oracle.com >> Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux >> AARCH64 >> >> Everybody, >> >> Please review small changes, that enables docker testing on Linux/AArch64 >> >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >> >> PS: >> >> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >> readable, please check that it doesn't brake your work. >> >> -Dmitry >> >> -- >> Dmitry Samersoff >> http://devnull.samersoff.net >> * There will come soft rains ... -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From erik.osterlund at oracle.com Mon Feb 12 14:16:48 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 12 Feb 2018 15:16:48 +0100 Subject: RFR: 8197454: Need Access decorator for storing oop into uninitialized location In-Reply-To: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> References: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> Message-ID: <5A81A1D0.7070901@oracle.com> Hi Kim, Looks good. This decorator was renamed to have the ARRAYCOPY_ prefix after discovering it was only used by arraycopy. But if you have a different use case that needs to perform stores on uninitialized memory, then I support this change. Thanks, /Erik On 2018-02-11 08:25, Kim Barrett wrote: > Please review this change to the Access API to support stores of oops > into uninitialized locations. 
This change is needed to prevent such > stores from, for example, having the G1 pre-barrier applied to > whatever garbage happens to be in the location being stored into. > > There was already support for stores to uninitialized locations in the > Access API, but only for array initialization. This change > generalizes that mechanism, and renames it accordingly: > ARRAYCOPY_DEST_NOT_INITIALIZED => AS_DEST_NOT_INITIALIZED. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8197454 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8197454/open.00/ > > Testing: > Mach5 {hs,jdk}-tier{1,2,3} > From aph at redhat.com Mon Feb 12 14:29:07 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 12 Feb 2018 14:29:07 +0000 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> Message-ID: On 10/02/18 00:41, Leonid Mesnik wrote: > 4) Currently the correct way to use native libs is to compile it during build and use with -nativepath. > See make examples here: > http://hg.openjdk.java.net/jdk/jdk10/file/tip/make/test/JtregNativeHotspot.gmk > http://hg.openjdk.java.net/jdk/jdk10/file/tip/test/hotspot/jtreg/runtime/jni/CalleeSavedRegisters > So you might rewrite your test completely on java. So you could use requires tag to filter out unsupported platforms. > Also logic of choosing platform will be slightly different. I've done this, but I've been unable to figure out how to run the test. There are many places containing instructions, and all of them seem to be out of date. My usual technique of running jtreg from the command line doesn't work. I'd be very grateful if you could tell me the correct incantation to run a single jtreg test form the command line. Thank you. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From matthias.baesken at sap.com Mon Feb 12 14:37:30 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 12 Feb 2018 14:37:30 +0000 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> Message-ID: > > I'll wait until you have your changes committed, then update my one. > Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo . Best regards, Matthias > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] > Sent: Montag, 12. Februar 2018 13:27 > To: Baesken, Matthias ; 'hotspot- > dev at openjdk.java.net' > Cc: mikhailo.seledtsov at oracle.com > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for > linux AARCH64 > > Matthias, > > > Hi Dmitry, looks good to me (not a Reviewer however). > > Thank you for the review. > > > ( But guess it will be a merge conflict with 8197412 Enable docker > container related tests for linux s390x > > I'll wait until you have your changes committed, then update my one. > > -Dmitry > > On 12.02.2018 11:15, Baesken, Matthias wrote: > > Hi Dmitry, looks good to me (not a Reviewer however). > > > > ( But guess it will be a merge conflict with 8197412 Enable docker container > related tests for linux s390x > > > >> > >> Webrev : > >> > >> http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > >> > >> > >> bug : > >> > >> https://bugs.openjdk.java.net/browse/JDK-8197412 > >> > > > > ) > > > > > > Best regards, Matthias > > > > > >> -----Original Message----- > >> From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] > >> Sent: Samstag, 10. 
Februar 2018 13:11 > >> To: 'hotspot-dev at openjdk.java.net' > >> Cc: Baesken, Matthias ; > >> mikhailo.seledtsov at oracle.com > >> Subject: RFR(S): JDK-8196590 Enable docker container related tests for > linux > >> AARCH64 > >> > >> Everybody, > >> > >> Please review small changes, that enables docker testing on > Linux/AArch64 > >> > >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > >> > >> PS: > >> > >> Matthias - I refactored VMProps.dockerSupport() a bit to make it more > >> readable, please check that it doesn't brake your work. > >> > >> -Dmitry > >> > >> -- > >> Dmitry Samersoff > >> http://devnull.samersoff.net > >> * There will come soft rains ... > > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... From bob.vandette at oracle.com Mon Feb 12 14:59:02 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 12 Feb 2018 09:59:02 -0500 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> Message-ID: Sorry for the late review, I was out last week. I assume this was added to TestCPUSets.java due to the lack of cpuset support on the s390x OS. @requires (os.arch != "s390x") Is there any way to generalize the lack of cpusets rather than restricting this test on one arch? I assume that this limitation is OS specific and not arch specific? Bob. > On Feb 12, 2018, at 9:37 AM, Baesken, Matthias wrote: > >> >> I'll wait until you have your changes committed, then update my one. >> > > Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo . > > Best regards, Matthias > > >> -----Original Message----- >> From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] >> Sent: Montag, 12. 
Februar 2018 13:27 >> To: Baesken, Matthias ; 'hotspot- >> dev at openjdk.java.net' >> Cc: mikhailo.seledtsov at oracle.com >> Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for >> linux AARCH64 >> >> Matthias, >> >>> Hi Dmitry, looks good to me (not a Reviewer however). >> >> Thank you for the review. >> >>> ( But guess it will be a merge conflict with 8197412 Enable docker >> container related tests for linux s390x >> >> I'll wait until you have your changes committed, then update my one. >> >> -Dmitry >> >> On 12.02.2018 11:15, Baesken, Matthias wrote: >>> Hi Dmitry, looks good to me (not a Reviewer however). >>> >>> ( But guess it will be a merge conflict with 8197412 Enable docker container >> related tests for linux s390x >>> >>>> >>>> Webrev : >>>> >>>> http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ >>>> >>>> >>>> bug : >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8197412 >>>> >>> >>> ) >>> >>> >>> Best regards, Matthias >>> >>> >>>> -----Original Message----- >>>> From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] >>>> Sent: Samstag, 10. Februar 2018 13:11 >>>> To: 'hotspot-dev at openjdk.java.net' >>>> Cc: Baesken, Matthias ; >>>> mikhailo.seledtsov at oracle.com >>>> Subject: RFR(S): JDK-8196590 Enable docker container related tests for >> linux >>>> AARCH64 >>>> >>>> Everybody, >>>> >>>> Please review small changes, that enables docker testing on >> Linux/AArch64 >>>> >>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>> >>>> PS: >>>> >>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>> readable, please check that it doesn't brake your work. >>>> >>>> -Dmitry >>>> >>>> -- >>>> Dmitry Samersoff >>>> http://devnull.samersoff.net >>>> * There will come soft rains ... >> >> >> -- >> Dmitry Samersoff >> http://devnull.samersoff.net >> * There will come soft rains ... 
> From matthias.baesken at sap.com Mon Feb 12 15:43:41 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 12 Feb 2018 15:43:41 +0000 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> Message-ID: <24e47d867653474a984cac55b3b9d210@sap.com> Hi Bob, the issue on Linux s390x is that we get (at least on my test machine) from /proc/self/status Cpus_allowed_list a value that is much larger than the CPUs that are currently available. ( test/hotspot/jtreg/runtime/containers/docker/TestCPUAwareness.java is looking at the Cpus_allowed_list value ) From what I heard, the value from /proc/self/status Cpus_allowed_list on the Linux s390x test machine is more like an upper bound of potentially hot-pluggable CPUs . So far I am not sure about the details, I try to find out a better way to get the values . So it is not really a lack of cpusets , but more like a difference compared to other systems . Best regards, Matthias From: Bob Vandette [mailto:bob.vandette at oracle.com] Sent: Montag, 12. Februar 2018 15:59 To: Baesken, Matthias Cc: Dmitry Samersoff ; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Sorry for the late review, I was out last week. I assume this was added to TestCPUSets.java due to the lack of cpuset support on the s390x OS. @requires (os.arch != "s390x") Is there any way to generalize the lack of cpusets rather than restricting this test on one arch? I assume that this limitation is OS specific and not arch specific? Bob. On Feb 12, 2018, at 9:37 AM, Baesken, Matthias > wrote: I'll wait until you have your changes committed, then update my one. Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo .
Best regards, Matthias -----Original Message----- From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] Sent: Montag, 12. Februar 2018 13:27 To: Baesken, Matthias >; 'hotspot- dev at openjdk.java.net' > Cc: mikhailo.seledtsov at oracle.com Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Matthias, Hi Dmitry, looks good to me (not a Reviewer however). Thank you for the review. ( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x I'll wait until you have your changes committed, then update my one. -Dmitry On 12.02.2018 11:15, Baesken, Matthias wrote: Hi Dmitry, looks good to me (not a Reviewer however). ( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ bug : https://bugs.openjdk.java.net/browse/JDK-8197412 ) Best regards, Matthias -----Original Message----- From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] Sent: Samstag, 10. Februar 2018 13:11 To: 'hotspot-dev at openjdk.java.net' > Cc: Baesken, Matthias >; mikhailo.seledtsov at oracle.com Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Everybody, Please review small changes, that enables docker testing on Linux/AArch64 http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ PS: Matthias - I refactored VMProps.dockerSupport() a bit to make it more readable, please check that it doesn't brake your work. -Dmitry -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... 
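The Cpus_allowed_list value discussed above is a comma-separated set of ranges (e.g. "0-3" or "0-3,8-11"), and the mismatch Matthias describes comes from expanding such a range list to a CPU count. A minimal sketch of that expansion — the CpuListParser class and its countCpus method are illustrative names for this list, not code from the JDK test library:

```java
// Illustrative sketch (not JDK code): expands a /proc/self/status
// "Cpus_allowed_list" value such as "0-3,8-11" into a CPU count,
// the kind of value TestCPUAwareness.java examines.
public class CpuListParser {
    static int countCpus(String allowedList) {
        int count = 0;
        for (String part : allowedList.trim().split(",")) {
            if (part.contains("-")) {
                // A range "lo-hi" contributes hi - lo + 1 CPUs.
                String[] bounds = part.split("-");
                count += Integer.parseInt(bounds[1]) - Integer.parseInt(bounds[0]) + 1;
            } else {
                // A single CPU number contributes one CPU.
                count += 1;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // The s390x machine in this thread reports "0-255" even though
        // /proc/cpuinfo shows only 4 processors -- hence the mismatch.
        System.out.println(countCpus("0-255"));    // 256
        System.out.println(countCpus("0-3,8-11")); // 8
    }
}
```

Comparing such a count against the processor entries in /proc/cpuinfo makes the s390x discrepancy (256 vs. 4) easy to spot.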
From bob.vandette at oracle.com Mon Feb 12 16:30:05 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Mon, 12 Feb 2018 11:30:05 -0500 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <24e47d867653474a984cac55b3b9d210@sap.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> <24e47d867653474a984cac55b3b9d210@sap.com> Message-ID: <9E5699A8-4697-466B-B056-8AD2C121FD13@oracle.com> > > On Feb 12, 2018, at 10:43 AM, Baesken, Matthias wrote: > > Hi Bob, the issue on Linux s390x is that we get (at least on my test machine) from /proc/self/status Cpus_allowed_list a value that is much larger than the CPUs that are currently available. > ( test/hotspot/jtreg/runtime/containers/docker/TestCPUAwareness.java is looking at the Cpus_allowed_list value ) > > From what I heard, the value from /proc/self/status Cpus_allowed_list on the Linux s390x test machine is more like an upper bound of potentially hot-pluggable CPUs . > So far I am not sure about the details, I try to find out a better way to get the values . > > So it is not really a lack of cpusets , but more like a difference compared to other systems . > That's odd since the entry is documented as CPUs that you can be scheduled on! What does your Cpus_allowed entry contain? It looks like that entry was available before the list form. Also the /proc/<pid>/status file for each process has four added lines, displaying the process's Cpus_allowed (on which CPUs it may be scheduled) and Mems_allowed (on which memory nodes it may obtain memory), in the two formats Mask Format and List Format (see below) as shown in the following example:

Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
Cpus_allowed_list: 0-127
Mems_allowed: ffffffff,ffffffff
Mems_allowed_list: 0-63

The "allowed" fields were added in Linux 2.6.24; the "allowed_list" fields were added in Linux 2.6.26.
Bob. > > Best regards, Matthias > > > From: Bob Vandette [mailto:bob.vandette at oracle.com ] > Sent: Montag, 12. Februar 2018 15:59 > To: Baesken, Matthias > > Cc: Dmitry Samersoff >; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 > > Sorry for the late review, I was out last week. > > I assume this was added to TestCPUSets.java due to the lack of cpuset support on the s390x OS. > > @requires (os.arch != "s390x") > > Is there any way to generalize the lack of cpusets rather than restricting this test on one arch? > I assume that this limitation is OS specific and not arch specific? > > Bob. > > > On Feb 12, 2018, at 9:37 AM, Baesken, Matthias > wrote: > > > I'll wait until you have your changes committed, then update my one. > > > Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo . > > Best regards, Matthias > > > > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com ] > Sent: Montag, 12. Februar 2018 13:27 > To: Baesken, Matthias >; 'hotspot- > dev at openjdk.java.net ' > > Cc: mikhailo.seledtsov at oracle.com > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for > linux AARCH64 > > Matthias, > > > Hi Dmitry, looks good to me (not a Reviewer however). > > Thank you for the review. > > > ( But guess it will be a merge conflict with 8197412 Enable docker > container related tests for linux s390x > > I'll wait until you have your changes committed, then update my one. > > -Dmitry > > On 12.02.2018 11:15, Baesken, Matthias wrote: > > Hi Dmitry, looks good to me (not a Reviewer however). 
> > ( But guess it will be a merge conflict with 8197412 Enable docker container > related tests for linux s390x > > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > ) > > > Best regards, Matthias > > > > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com ] > Sent: Samstag, 10. Februar 2018 13:11 > To: 'hotspot-dev at openjdk.java.net ' > > Cc: Baesken, Matthias >; > mikhailo.seledtsov at oracle.com > Subject: RFR(S): JDK-8196590 Enable docker container related tests for > linux > > AARCH64 > > Everybody, > > Please review small changes, that enables docker testing on > Linux/AArch64 > > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > > PS: > > Matthias - I refactored VMProps.dockerSupport() a bit to make it more > readable, please check that it doesn't brake your work. > > -Dmitry > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... > > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... From mikhailo.seledtsov at oracle.com Mon Feb 12 16:43:46 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Mon, 12 Feb 2018 08:43:46 -0800 Subject: RFR : 8197412 Enable docker container related tests for linux s390x In-Reply-To: <655c04a77d6a41e993cc55d7e8700301@sap.com> References: <655c04a77d6a41e993cc55d7e8700301@sap.com> Message-ID: <5A81C442.9040206@oracle.com> Looks good, Misha On 2/9/18, 2:29 AM, Baesken, Matthias wrote: > > Hello, please review : > > 8197412 Enable docker container related tests for linux s390x > > This enables the docker related tests on Linux s390x. I tested on > SLES12 , docker version 17.09.1 . > > Some comments : > > TestCPUSets.java : > > is for now disabled on s390x because Cpus_allowed_list from > /proc/self/status can give misleading values (larger than the > currently available CPU number). 
> > DockerTestUtils.java : > > I changed the order to docker build to what is really documented . > > Docker help build says : > > Usage: docker build [OPTIONS] PATH | URL | - > > On older docker versions the order is important ( but on docker 17.x it seems to be ok to give the path first, still prefer to change it to what the help says ). > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > Thanks, Matthias > From kim.barrett at oracle.com Mon Feb 12 16:49:05 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 12 Feb 2018 11:49:05 -0500 Subject: RFR: 8197454: Need Access decorator for storing oop into uninitialized location In-Reply-To: <5A81A1D0.7070901@oracle.com> References: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> <5A81A1D0.7070901@oracle.com> Message-ID: <6DB2E483-D1E3-490E-9A08-C01FCFEA8E98@oracle.com> > On Feb 12, 2018, at 9:16 AM, Erik Österlund wrote: > > Hi Kim, > > Looks good. Thanks, Erik. > This decorator was renamed to have the ARRAYCOPY_ prefix after discovering it was only used by arraycopy. But if you have a different use case that needs to perform stores on uninitialized memory, then I support this change. This came up while Access-orizing JNI. The couple of places where this was needed for JNI could instead do a raw store of NULL followed by a normally barriered oop_store. It seemed clearer to make this change in order to be explicit about the semantics. > Thanks, > /Erik > > On 2018-02-11 08:25, Kim Barrett wrote: >> Please review this change to the Access API to support stores of oops >> into uninitialized locations. This change is needed to prevent such >> stores from, for example, having the G1 pre-barrier applied to >> whatever garbage happens to be in the location being stored into. >> >> There was already support for stores to uninitialized locations in the >> Access API, but only for array initialization.
This change >> generalizes that mechanism, and renames it accordingly: >> ARRAYCOPY_DEST_NOT_INITIALIZED => AS_DEST_NOT_INITIALIZED. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8197454 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8197454/open.00/ >> >> Testing: >> Mach5 {hs,jdk}-tier{1,2,3} From leonid.mesnik at oracle.com Mon Feb 12 21:04:58 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Mon, 12 Feb 2018 13:04:58 -0800 Subject: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <72B4DDAB-A875-47C4-BD1C-312AAEAED4B0@oracle.com> Message-ID: <766737D9-9F5A-4E70-A312-6A69A56B81A8@oracle.com> Could you please verify that expected libs are generated in the test image directory /build/linux-x64/images/test/hotspot/jtreg/native. This directory contains all native binaries used during hotspot testing. Also you need to add -nativepath: like -nativepath:/build/linux-x64/images/test/hotspot/jtreg/native in your jtreg command-line. So jtreg sets the correct library path for the tested jdk. Please check if it helps to run the test correctly. Leonid > On Feb 12, 2018, at 6:29 AM, Andrew Haley wrote: > > On 10/02/18 00:41, Leonid Mesnik wrote: >> 4) Currently the correct way to use native libs is to compile it during build and use with -nativepath. >> See make examples here: >> http://hg.openjdk.java.net/jdk/jdk10/file/tip/make/test/JtregNativeHotspot.gmk >> http://hg.openjdk.java.net/jdk/jdk10/file/tip/test/hotspot/jtreg/runtime/jni/CalleeSavedRegisters >> So you might rewrite your test completely on java. So you could use requires tag to filter out unsupported platforms. >> Also logic of choosing platform will be slightly different. > > I've done this, but I've been unable to figure out how to run the > test. There are many places containing instructions, and all of them > seem to be out of date. My usual technique of running jtreg from the > command line doesn't work.
I'd be very grateful if you could tell me > the correct incantation to run a single jtreg test from the command > line. Thank you. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From joe.darcy at oracle.com Mon Feb 12 22:50:12 2018 From: joe.darcy at oracle.com (joe darcy) Date: Mon, 12 Feb 2018 14:50:12 -0800 Subject: JDK 11 RFR of JDK-8197773: Problem list InlineAccessors.java until JDK-8196726 is fixed Message-ID: Hello, The test compiler/inlining/InlineAccessors.java is currently failing in JDK 11 master. It should be problem listed until the fix for JDK-8196726 propagates there. Please review the patch below to be pushed to jdk/jdk. Thanks, -Joe

--- a/test/hotspot/jtreg/ProblemList.txt  Mon Feb 12 08:19:33 2018 -0800
+++ b/test/hotspot/jtreg/ProblemList.txt  Mon Feb 12 14:49:51 2018 -0800
@@ -59,6 +59,8 @@
 applications/ctw/modules/java_desktop.java 8189604 windows-all
 applications/ctw/modules/jdk_jconsole.java 8189604 windows-all
+compiler/inlining/InlineAccessors.java 8196726 windows-all,linux-all
+
 #############################################################################
 # :hotspot_gc

From david.holmes at oracle.com Mon Feb 12 23:04:33 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 13 Feb 2018 09:04:33 +1000 Subject: JDK 11 RFR of JDK-8197773: Problem list InlineAccessors.java until JDK-8196726 is fixed In-Reply-To: References: Message-ID: <6379424e-04fa-ac9e-0364-d04ed260d7b5@oracle.com> Hi Joe, On 13/02/2018 8:50 AM, joe darcy wrote: > Hello, > > The test > > compiler/inlining/InlineAccessors.java > > is currently failing in JDK 11 master. It should be problem listed until > the fix for JDK-8196726 propagates there. How will it then be un-problem-listed? That would be a separate CR and patch after it propagates.
I think, as Jesper suggested elsewhere, the better thing to do is pull that change into jdk/jdk ahead of the full integration if there is a concern about timing. David > Please review the patch below to be pushed to jdk/jdk. > > Thanks, > > -Joe > >

> --- a/test/hotspot/jtreg/ProblemList.txt  Mon Feb 12 08:19:33 2018 -0800
> +++ b/test/hotspot/jtreg/ProblemList.txt  Mon Feb 12 14:49:51 2018 -0800
> @@ -59,6 +59,8 @@
>  applications/ctw/modules/java_desktop.java 8189604 windows-all
>  applications/ctw/modules/jdk_jconsole.java 8189604 windows-all
> +compiler/inlining/InlineAccessors.java 8196726 windows-all,linux-all
> +
>  #############################################################################
>  # :hotspot_gc

From joe.darcy at oracle.com Mon Feb 12 23:57:27 2018 From: joe.darcy at oracle.com (Joseph D. Darcy) Date: Mon, 12 Feb 2018 15:57:27 -0800 Subject: JDK 11 RFR of JDK-8197773: Problem list InlineAccessors.java until JDK-8196726 is fixed In-Reply-To: <6379424e-04fa-ac9e-0364-d04ed260d7b5@oracle.com> References: <6379424e-04fa-ac9e-0364-d04ed260d7b5@oracle.com> Message-ID: <5A8229E7.2010907@oracle.com> To follow-up, David pushed the small test-only adjustment needed to jdk/jdk so the problem listing is no longer necessary. I'll close out this bug. Thanks David, -Joe On 2/12/2018 3:04 PM, David Holmes wrote: > Hi Joe, > > On 13/02/2018 8:50 AM, joe darcy wrote: >> Hello, >> >> The test >> >> compiler/inlining/InlineAccessors.java >> >> is currently failing in JDK 11 master. It should be problem listed >> until the fix for JDK-8196726 propagates there. > > How will it then be un-problem-listed? That would be a separate CR and > patch after it propagates. I think, as Jesper suggested elsewhere, the > better thing to do is pull that change into jdk/jdk ahead of the full > integration if there is a concern about timing. > > David > >> Please review the patch below to be pushed to jdk/jdk.
>> >> Thanks, >> >> -Joe >> >> --- a/test/hotspot/jtreg/ProblemList.txt Mon Feb 12 08:19:33 2018 >> -0800 >> +++ b/test/hotspot/jtreg/ProblemList.txt Mon Feb 12 14:49:51 2018 >> -0800 >> @@ -59,6 +59,8 @@ >> applications/ctw/modules/java_desktop.java 8189604 windows-all >> applications/ctw/modules/jdk_jconsole.java 8189604 windows-all >> >> +compiler/inlining/InlineAccessors.java 8196726 windows-all,linux-all >> + >> ############################################################################# >> >> >> # :hotspot_gc >> From mikhailo.seledtsov at oracle.com Tue Feb 13 01:34:03 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Mon, 12 Feb 2018 17:34:03 -0800 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> Message-ID: <5A82408B.7070001@oracle.com> Changes look good from my point of view. Misha On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: > Everybody, > > Please review small changes, that enables docker testing on Linux/AArch64 > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > > PS: > > Matthias - I refactored VMProps.dockerSupport() a bit to make it more > readable, please check that it doesn't brake your work. > > -Dmitry > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... 
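The Cpus_allowed mask format quoted from the proc documentation earlier in this thread (comma-separated 32-bit hex words) can be cross-checked against the list format by counting set bits. A minimal illustrative sketch — CpuMaskParser is a hypothetical name, not code from the JDK sources:

```java
// Illustrative sketch (not JDK code): counts schedulable CPUs from a
// /proc/<pid>/status "Cpus_allowed" mask, e.g. "ffffffff,ffffffff".
public class CpuMaskParser {
    static int countCpus(String mask) {
        int count = 0;
        for (String word : mask.trim().split(",")) {
            // Each comma-separated word is a 32-bit hex group; parse it
            // as a long (fits easily) and count the set bits.
            count += Long.bitCount(Long.parseLong(word, 16));
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countCpus("ffffffff,ffffffff")); // 64
    }
}
```

A count derived this way should agree with the expansion of the corresponding Cpus_allowed_list range, which is what makes the s390x reports in this thread look inconsistent with /proc/cpuinfo.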
From matthias.baesken at sap.com Tue Feb 13 09:04:02 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 13 Feb 2018 09:04:02 +0000 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <9E5699A8-4697-466B-B056-8AD2C121FD13@oracle.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> <24e47d867653474a984cac55b3b9d210@sap.com> <9E5699A8-4697-466B-B056-8AD2C121FD13@oracle.com> Message-ID: Hi Bob, >What does your Cpus_allowed entry contain? It looks like that entry was available >before the list form. This is what my Cpus_allowed and Cpus_allowed_list contains : Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff Cpus_allowed_list: 0-255 ( btw. /proc/cpuinfo contains processors 0-3 ) Best regards, Matthias From: Bob Vandette [mailto:bob.vandette at oracle.com] Sent: Montag, 12. Februar 2018 17:30 To: Baesken, Matthias Cc: Dmitry Samersoff ; hotspot-dev at openjdk.java.net; Schmidt, Lutz Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 On Feb 12, 2018, at 10:43 AM, Baesken, Matthias > wrote: Hi Bob, the issue on Linux s390x is that we get (at least on my test machine) from /proc/self/status Cpus_allowed_list a value that is much larger than the CPUs that are currently available. ( test/hotspot/jtreg/runtime/containers/docker/TestCPUAwareness.java is looking at the Cpus_allowed_list value ) From what I heard, the value from /proc/self/status Cpus_allowed_list on the Linux s390x test machine is more like an upper bound of potentially hot-pluggable CPUs . So far I am not sure about the details, I try to find out a better way to get the values . So it is not really a lack of cpusets , but more like a difference compared to other systems . That?s odd since the entry is documented as CPUs that you can be scheduled on! 
What does your Cpus_allowed entry contain? It looks like that entry was available before the list form. Also the /proc//status file for each process has four added lines, displaying the process's Cpus_allowed (on which CPUs it may be scheduled) and Mems_allowed (on which memory nodes it may obtain memory), in the two formats Mask Format and List Format (see below) as shown in the following example: Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff Cpus_allowed_list: 0-127 Mems_allowed: ffffffff,ffffffff Mems_allowed_list: 0-63 The "allowed" fields were added in Linux 2.6.24; the "allowed_list" fields were added in Linux 2.6.26. Bob. Best regards, Matthias From: Bob Vandette [mailto:bob.vandette at oracle.com] Sent: Montag, 12. Februar 2018 15:59 To: Baesken, Matthias > Cc: Dmitry Samersoff >; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Sorry for the late review, I was out last week. I assume this was added to TestCPUSets.java due to the lack of cpuset support on the s390x OS. @requires (os.arch != "s390x") Is there any way to generalize the lack of cpusets rather than restricting this test on one arch? I assume that this limitation is OS specific and not arch specific? Bob. On Feb 12, 2018, at 9:37 AM, Baesken, Matthias > wrote: I'll wait until you have your changes committed, then update my one. Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo . Best regards, Matthias -----Original Message----- From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] Sent: Montag, 12. Februar 2018 13:27 To: Baesken, Matthias >; 'hotspot- dev at openjdk.java.net' > Cc: mikhailo.seledtsov at oracle.com Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Matthias, Hi Dmitry, looks good to me (not a Reviewer however). Thank you for the review. 
( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x I'll wait until you have your changes committed, then update my one. -Dmitry On 12.02.2018 11:15, Baesken, Matthias wrote: Hi Dmitry, looks good to me (not a Reviewer however). ( But guess it will be a merge conflict with 8197412 Enable docker container related tests for linux s390x Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ bug : https://bugs.openjdk.java.net/browse/JDK-8197412 ) Best regards, Matthias -----Original Message----- From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com] Sent: Samstag, 10. Februar 2018 13:11 To: 'hotspot-dev at openjdk.java.net' > Cc: Baesken, Matthias >; mikhailo.seledtsov at oracle.com Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 Everybody, Please review small changes, that enables docker testing on Linux/AArch64 http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ PS: Matthias - I refactored VMProps.dockerSupport() a bit to make it more readable, please check that it doesn't brake your work. -Dmitry -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From bob.vandette at oracle.com Tue Feb 13 14:19:40 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Tue, 13 Feb 2018 09:19:40 -0500 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <7d6be29faafc44e596d03563bf45731c@sap.com> <91ca2324-6fec-89bc-fb31-4724e390f80d@bell-sw.com> <24e47d867653474a984cac55b3b9d210@sap.com> <9E5699A8-4697-466B-B056-8AD2C121FD13@oracle.com> Message-ID: > On Feb 13, 2018, at 4:04 AM, Baesken, Matthias wrote: > > Hi Bob, > > >What does your Cpus_allowed entry contain? It looks like that entry was available > >before the list form. 
> > This is what my Cpus_allowed and Cpus_allowed_list contains : > > Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff > Cpus_allowed_list: 0-255 Oh well that doesn't help us. Bob. > > > ( btw. /proc/cpuinfo contains processors 0-3 ) > > > Best regards, Matthias > > > From: Bob Vandette [mailto:bob.vandette at oracle.com ] > Sent: Montag, 12. Februar 2018 17:30 > To: Baesken, Matthias > > Cc: Dmitry Samersoff >; hotspot-dev at openjdk.java.net ; Schmidt, Lutz > > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 > > > On Feb 12, 2018, at 10:43 AM, Baesken, Matthias > wrote: > > Hi Bob, the issue on Linux s390x is that we get (at least on my test machine) from /proc/self/status Cpus_allowed_list a value that is much larger than the CPUs that are currently available. > ( test/hotspot/jtreg/runtime/containers/docker/TestCPUAwareness.java is looking at the Cpus_allowed_list value ) > > From what I heard, the value from /proc/self/status Cpus_allowed_list on the Linux s390x test machine is more like an upper bound of potentially hot-pluggable CPUs . > So far I am not sure about the details, I try to find out a better way to get the values . > > So it is not really a lack of cpusets , but more like a difference compared to other systems . > > That's odd since the entry is documented as CPUs that you can be scheduled on! > What does your Cpus_allowed entry contain? It looks like that entry was available > before the list form.
> > Also the /proc//status file for each process has four added > lines, displaying the process's Cpus_allowed (on which CPUs it may be > scheduled) and Mems_allowed (on which memory nodes it may obtain > memory), in the two formats Mask Format and List Format (see below) > as shown in the following example: > > Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff > Cpus_allowed_list: 0-127 > Mems_allowed: ffffffff,ffffffff > Mems_allowed_list: 0-63 > > The "allowed" fields were added in Linux 2.6.24; the "allowed_list" > fields were added in Linux 2.6.26. > Bob. > > > Best regards, Matthias > > > From: Bob Vandette [mailto:bob.vandette at oracle.com ] > Sent: Montag, 12. Februar 2018 15:59 > To: Baesken, Matthias > > Cc: Dmitry Samersoff >; hotspot-dev at openjdk.java.net > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 > > Sorry for the late review, I was out last week. > > I assume this was added to TestCPUSets.java due to the lack of cpuset support on the s390x OS. > > @requires (os.arch != "s390x") > > Is there any way to generalize the lack of cpusets rather than restricting this test on one arch? > I assume that this limitation is OS specific and not arch specific? > > Bob. > > > On Feb 12, 2018, at 9:37 AM, Baesken, Matthias > wrote: > > > I'll wait until you have your changes committed, then update my one. > > > Hi Dmitry , my change (8197412 Enable docker container related tests for linux s390x) is now in the jdk/hs repo . > > Best regards, Matthias > > > > > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com ] > Sent: Montag, 12. Februar 2018 13:27 > To: Baesken, Matthias >; 'hotspot- > dev at openjdk.java.net ' > > Cc: mikhailo.seledtsov at oracle.com > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for > linux AARCH64 > > Matthias, > > > > Hi Dmitry, looks good to me (not a Reviewer however). > > Thank you for the review. 
> > > > ( But guess it will be a merge conflict with 8197412 Enable docker > container related tests for linux s390x > > I'll wait until you have your changes committed, then update my one. > > -Dmitry > > On 12.02.2018 11:15, Baesken, Matthias wrote: > > > Hi Dmitry, looks good to me (not a Reviewer however). > > ( But guess it will be a merge conflict with 8197412 Enable docker container > related tests for linux s390x > > > > > > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8197412.0/ > > > bug : > > https://bugs.openjdk.java.net/browse/JDK-8197412 > > ) > > > Best regards, Matthias > > > > > -----Original Message----- > From: Dmitry Samersoff [mailto:dmitry.samersoff at bell-sw.com ] > Sent: Samstag, 10. Februar 2018 13:11 > To: 'hotspot-dev at openjdk.java.net ' > > Cc: Baesken, Matthias >; > mikhailo.seledtsov at oracle.com > Subject: RFR(S): JDK-8196590 Enable docker container related tests for > linux > > > AARCH64 > > Everybody, > > Please review small changes, that enables docker testing on > Linux/AArch64 > > > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > > PS: > > Matthias - I refactored VMProps.dockerSupport() a bit to make it more > readable, please check that it doesn't brake your work. > > -Dmitry > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... > > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... From aph at redhat.com Tue Feb 13 14:51:16 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 13 Feb 2018 14:51:16 +0000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: Webrev amended. http://cr.openjdk.java.net/~aph/8197429-2/ Copyrights fixed. Test case changed to be built with the rest of the native code. 
We now only bang down the stack to its maximum size if we're going to run Java code on the primordial stack. This only happens with the JNI invocation interface. The new logic is:

if we're on the primordial stack
  bang down the stack to its maximum size
try to map the codebuf just below the primordial stack
if that didn't work
  try to map again, but 1 megabyte lower

OK? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From ioi.lam at oracle.com Tue Feb 13 17:28:02 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Tue, 13 Feb 2018 09:28:02 -0800 Subject: RFR(XXS) 8197857 fieldDescriptor prints incorrect 32-bit representation of compressed oops Message-ID: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> https://bugs.openjdk.java.net/browse/JDK-8197857 When UseCompressedOops is enabled for 64-bit VMs, fieldDescriptor::print_on_for prints two 32-bit integers for each object field. E.g.

 - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0 8221b591)
 - final 'argL0' 'Ljava/lang/Object;' @28 a 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591 1)

However, compressed oops occupy the space of only a single 32-bit integer, so the superfluous output is confusing. The above should be printed as

 - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0)
 - final 'argL0' 'Ljava/lang/Object;' @28 a 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591)

Patch: =======================================

--- a/src/hotspot/share/runtime/fieldDescriptor.cpp  Mon Feb 12 09:12:59 2018 -0800
+++ b/src/hotspot/share/runtime/fieldDescriptor.cpp  Tue Feb 13 09:24:26 2018 -0800
@@ -201,6 +201,13 @@
   }
   // Print a hint as to the underlying integer representation. This can be wrong for
   // pointers on an LP64 machine
+
+#ifdef _LP64
+  if ((ft == T_OBJECT || ft == T_ARRAY) && UseCompressedOops) {
+    st->print(" (%x)", obj->int_field(offset()));
+  }
+  else // <- intended
+#endif
   if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| !is_java_primitive(ft)) ) {
     st->print(" (%x %x)", obj->int_field(offset()), obj->int_field(offset()+sizeof(jint)));
   } else if (as_int < 0 || as_int > 9) {

From lois.foltan at oracle.com Tue Feb 13 18:47:32 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 13 Feb 2018 13:47:32 -0500 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method Message-ID: Please review this small fix for a VS2017 compilation error. The inaccessibility of the private Atomic::IsPointerConvertible in the template arguments for the partial specialization of Atomic::StoreImpl causes a (C2027) compilation error indicating a use of an undefined type has occurred. This issue manifests itself only in certain coding scenarios, for example, when OrderAccess::release_store() is called from within an inlined method of a class where one of the types used to instantiate is the currently being defined class. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8196889 Building & Testing complete (hs-tier1-3, jdk-tier1-3) Thanks, Lois From kim.barrett at oracle.com Tue Feb 13 19:48:29 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 13 Feb 2018 14:48:29 -0500 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method In-Reply-To: References: Message-ID: <5D1E3867-3DC8-43F8-8DF0-C298E653A40E@oracle.com> > On Feb 13, 2018, at 1:47 PM, Lois Foltan wrote: > > Please review this small fix for a VS2017 compilation error. The inaccessibility of the private Atomic::IsPointerConvertible in the template arguments for the partial specialization of Atomic::StoreImpl causes a (C2027) compilation error indicating a use of an undefined type has occurred.
This issue manifests itself only in certain coding scenarios, for example, when OrderAccess::release_store() is called from within an inlined method of a class where one of the types used to instantiate is the currently being defined class. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196889 > > Building & Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois Looks good. I wish we had a better understanding of the problem, but at this point I'm ready to concede that something is going awry that probably requires debugging the compiler to figure out. I will try to make a small reproducer to turn over to Microsoft. From thomas.stuefe at gmail.com Tue Feb 13 19:55:59 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Feb 2018 20:55:59 +0100 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method In-Reply-To: References: Message-ID: Hi Lois, this looks fine. I have to ask though, what version of VS2017 are you using? When I try to build (VS 2017 community edition), I get only as far as the adlc before the first couple of: error C2956: sized deallocation function 'operator delete(void*, size_t)' would be chosen as placement deallocation function. (in arena.cpp in adlc, class Chunk). Thanks, Thomas On Tue, Feb 13, 2018 at 7:47 PM, Lois Foltan wrote: > Please review this small fix for a VS2017 compilation error. The > inaccessibility of the private Atomic::IsPointerConvertible in the template > arguments for the partial specialization of Atomic::StoreImpl causes a > (C2027) compilation error indicating a use of an undefined type has > occurred. 
This issue manifests itself only in certain coding scenarios, > for example, when OrderAccess::release_store() is called from within an > inlined method of a class where one of the types used to instantiate is the > currently being defined class. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196889 > > Building & Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois > > > From thomas.stuefe at gmail.com Tue Feb 13 19:58:16 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Feb 2018 20:58:16 +0100 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 8:55 PM, Thomas St?fe wrote: > HI Lois, > > this looks fine. > > I have to ask though, what version of VS2017 are you using? When I try to > build (VS 2017 community edition), I get only as far as the adlc before the > first couple of: > > error C2956: sized deallocation function 'operator delete(void*, size_t)' > would be chosen as placement deallocation function. > > (in arena.cpp in adlc, class Chunk). > > Thanks, Thomas > > > Oh, I just found https://bugs.openjdk.java.net/browse/JDK-8196880. So you guys are already on it. Great. Do you have a patch already for trying out? > > > > On Tue, Feb 13, 2018 at 7:47 PM, Lois Foltan > wrote: > >> Please review this small fix for a VS2017 compilation error. The >> inaccessibility of the private Atomic::IsPointerConvertible in the template >> arguments for the partial specialization of Atomic::StoreImpl causes a >> (C2027) compilation error indicating a use of an undefined type has >> occurred. 
This issue manifests itself only in certain coding scenarios, >> for example, when OrderAccess::release_store() is called from within an >> inlined method of a class where one of the types used to instantiate is the >> currently being defined class. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196889 >> >> Building & Testing complete (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois >> >> >> > From lois.foltan at oracle.com Tue Feb 13 20:01:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 13 Feb 2018 15:01:30 -0500 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method In-Reply-To: <5D1E3867-3DC8-43F8-8DF0-C298E653A40E@oracle.com> References: <5D1E3867-3DC8-43F8-8DF0-C298E653A40E@oracle.com> Message-ID: Thank you Kim for the review and for consulting with me on this issue! Lois On 2/13/2018 2:48 PM, Kim Barrett wrote: >> On Feb 13, 2018, at 1:47 PM, Lois Foltan wrote: >> >> Please review this small fix for a VS2017 compilation error. The inaccessibility of the private Atomic::IsPointerConvertible in the template arguments for the partial specialization of Atomic::StoreImpl causes a (C2027) compilation error indicating a use of an undefined type has occurred. This issue manifests itself only in certain coding scenarios, for example, when OrderAccess::release_store() is called from within an inlined method of a class where one of the types used to instantiate is the currently being defined class. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196889 >> >> Building & Testing complete (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois > Looks good. 
> > I wish we had a better understanding of the problem, but at this point I'm ready to concede that > something is going awry that probably requires debugging the compiler to figure out. I will try to > make a small reproducer to turn over to Microsoft. > From lois.foltan at oracle.com Tue Feb 13 20:29:05 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 13 Feb 2018 15:29:05 -0500 Subject: (11) RFR (XS) JDK-8196889: VS2017 Unable to Instantiate OrderAccess::release_store with an Incomplete Class Within an Inlined Method In-Reply-To: References: Message-ID: <3fe34768-b2c8-adc9-d2fa-b02a088bf17d@oracle.com> On 2/13/2018 2:58 PM, Thomas Stüfe wrote: > > > On Tue, Feb 13, 2018 at 8:55 PM, Thomas Stüfe > wrote: > > Hi Lois, > > this looks fine. > > I have to ask though, what version of VS2017 are you using? When I > try to build (VS 2017 community edition), I get only as far as the > adlc before the first couple of: > > error C2956: sized deallocation function 'operator delete(void*, > size_t)' would be chosen as placement deallocation function. > > (in arena.cpp in adlc, class Chunk). > > Thanks, Thomas > > > > Oh, I just found https://bugs.openjdk.java.net/browse/JDK-8196880. So > you guys are already on it. Great. Do you have a patch already for > trying out? Thank you Thomas for the review! No patch yet, but working on it. Lois > > > > On Tue, Feb 13, 2018 at 7:47 PM, Lois Foltan > > wrote: > > Please review this small fix for a VS2017 compilation error. > The inaccessibility of the private > Atomic::IsPointerConvertible in the template arguments for the > partial specialization of Atomic::StoreImpl causes a (C2027) > compilation error indicating a use of an undefined type has > occurred. This issue manifests itself only in certain coding > scenarios, for example, when OrderAccess::release_store() is > called from within an inlined method of a class where one of > the types used to instantiate is the currently being defined > class. 
> > open webrev at > http://cr.openjdk.java.net/~lfoltan/bug_jdk8196889/webrev/ > > bug link https://bugs.openjdk.java.net/browse/JDK-8196889 > > > Building & Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois > > > > From coleen.phillimore at oracle.com Tue Feb 13 20:30:33 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Feb 2018 15:30:33 -0500 Subject: RFR(XXS) 8197857 fieldDescriptor prints incorrect 32-bit representation of compressed oops In-Reply-To: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> References: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> Message-ID: This looks good but this is very odd output. I don't know why we print this. I wouldn't object if it were removed. Otherwise, I hate to do this to a trivial change but would this also print this better? // Print a hint as to the underlying integer representation. This can be wrong for // pointers on an LP64 machine if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| (!UseCompressedOops && !is_java_primitive(ft))) ) { st->print(" (%x %x)", obj->int_field(offset()), obj->int_field(offset()+sizeof(jint))); } else if (as_int < 0 || as_int > 9) { st->print(" (%x)", as_int); } Thanks, Coleen On 2/13/18 12:28 PM, Ioi Lam wrote: > https://bugs.openjdk.java.net/browse/JDK-8197857 > > > When UseCompressedOops is enabled for 64-bit VMs, > fieldDescriptor::print_on_for > prints two 32-bit integers for each object field. E.g. > > - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0 8221b591) > - final 'argL0' 'Ljava/lang/Object;' @28 a > 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591 1) > > However, compressed oops occupy the space of only a single 32-bit > integer, so the superfluous output is confusing. 
> > The above should be printed as > > - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0) > - final 'argL0' 'Ljava/lang/Object;' @28 a > 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591) > > Patch: > ======================================= > > --- a/src/hotspot/share/runtime/fieldDescriptor.cpp Mon Feb 12 > 09:12:59 2018 -0800 > +++ b/src/hotspot/share/runtime/fieldDescriptor.cpp Tue Feb 13 > 09:24:26 2018 -0800 > @@ -201,6 +201,13 @@ > } > // Print a hint as to the underlying integer representation. This > can be wrong for > // pointers on an LP64 machine > + > +#ifdef _LP64 > + if ((ft == T_OBJECT || ft == T_ARRAY) && UseCompressedOops) { > + st->print(" (%x)", obj->int_field(offset())); > + } > + else // <- intended > +#endif > if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| > !is_java_primitive(ft)) ) { > st->print(" (%x %x)", obj->int_field(offset()), > obj->int_field(offset()+sizeof(jint))); > } else if (as_int < 0 || as_int > 9) { > From coleen.phillimore at oracle.com Tue Feb 13 20:45:38 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Feb 2018 15:45:38 -0500 Subject: RFR: 8197454: Need Access decorator for storing oop into uninitialized location In-Reply-To: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> References: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> Message-ID: <454f3785-45c3-3112-b750-4da0e7385c00@oracle.com> Reviewed! Coleen On 2/11/18 2:25 AM, Kim Barrett wrote: > Please review this change to the Access API to support stores of oops > into uninitialized locations. This change is needed to prevent such > stores from, for example, having the G1 pre-barrier applied to > whatever garbage happens to be in the location being stored into. > > There was already support for stores to uninitialized locations in the > Access API, but only for array initialization. 
This change > generalizes that mechanism, and renames it accordingly: > ARRAYCOPY_DEST_NOT_INITIALIZED => AS_DEST_NOT_INITIALIZED. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8197454 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8197454/open.00/ > > Testing: > Mach5 {hs,jdk}-tier{1,2,3} > From kim.barrett at oracle.com Tue Feb 13 22:21:04 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 13 Feb 2018 17:21:04 -0500 Subject: RFR: 8197454: Need Access decorator for storing oop into uninitialized location In-Reply-To: <454f3785-45c3-3112-b750-4da0e7385c00@oracle.com> References: <0D2C672D-3310-40C3-9C6D-B28CA70F2AFC@oracle.com> <454f3785-45c3-3112-b750-4da0e7385c00@oracle.com> Message-ID: > On Feb 13, 2018, at 3:45 PM, coleen.phillimore at oracle.com wrote: > > > Reviewed! > Coleen Thanks! > > On 2/11/18 2:25 AM, Kim Barrett wrote: >> Please review this change to the Access API to support stores of oops >> into uninitialized locations. This change is needed to prevent such >> stores from, for example, having the G1 pre-barrier applied to >> whatever garbage happens to be in the location being stored into. >> >> There was already support for stores to uninitialized locations in the >> Access API, but only for array initialization. This change >> generalizes that mechanism, and renames it accordingly: >> ARRAYCOPY_DEST_NOT_INITIALIZED => AS_DEST_NOT_INITIALIZED. 
>> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8197454 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8197454/open.00/ >> >> Testing: >> Mach5 {hs,jdk}-tier{1,2,3} From ioi.lam at oracle.com Tue Feb 13 23:37:03 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Tue, 13 Feb 2018 15:37:03 -0800 Subject: RFR(XXS) 8197857 fieldDescriptor prints incorrect 32-bit representation of compressed oops In-Reply-To: References: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> Message-ID: <3902728e-8fb2-3c58-76a5-7d5a72465fff@oracle.com> On 2/13/18 12:30 PM, coleen.phillimore at oracle.com wrote: > > This looks good but this is very odd output. I don't know why we > print this. I wouldn't object if it were removed. > I guess it's useful for someone debugging issues related to compressed oops? > Otherwise, I hate to do this to a trivial change but would this also > print this better? > > // Print a hint as to the underlying integer representation. This > can be wrong for > // pointers on an LP64 machine > if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| (!UseCompressedOops > && !is_java_primitive(ft))) ) { > st->print(" (%x %x)", obj->int_field(offset()), > obj->int_field(offset()+sizeof(jint))); > } else if (as_int < 0 || as_int > 9) { > st->print(" (%x)", as_int); > } > This would make the code even harder to read than it already is. Also, the (as_int < 0 || as_int > 9) is useful only for 32-bit pointers and numerical values. For CompressedOops, I guess it's possible to have a value 8. This is probably not a big deal, but I don't want to have code that's theoretically incorrect. Thanks - Ioi > > Thanks, > Coleen > > On 2/13/18 12:28 PM, Ioi Lam wrote: >> https://bugs.openjdk.java.net/browse/JDK-8197857 >> >> >> When UseCompressedOops is enabled for 64-bit VMs, >> fieldDescriptor::print_on_for >> prints two 32-bit integers for each object field. E.g. 
>> >> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0 8221b591) >> - final 'argL0' 'Ljava/lang/Object;' @28 a >> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591 1) >> >> However, compressed oops occupy the space of only a single 32-bit >> integer, so the superfluous output is confusing. >> >> The above should be printed as >> >> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0) >> - final 'argL0' 'Ljava/lang/Object;' @28 a >> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591) >> >> Patch: >> ======================================= >> >> --- a/src/hotspot/share/runtime/fieldDescriptor.cpp Mon Feb 12 >> 09:12:59 2018 -0800 >> +++ b/src/hotspot/share/runtime/fieldDescriptor.cpp Tue Feb 13 >> 09:24:26 2018 -0800 >> @@ -201,6 +201,13 @@ >> } >> // Print a hint as to the underlying integer representation. This >> can be wrong for >> // pointers on an LP64 machine >> + >> +#ifdef _LP64 >> + if ((ft == T_OBJECT || ft == T_ARRAY) && UseCompressedOops) { >> + st->print(" (%x)", obj->int_field(offset())); >> + } >> + else // <- intended >> +#endif >> if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| >> !is_java_primitive(ft)) ) { >> st->print(" (%x %x)", obj->int_field(offset()), >> obj->int_field(offset()+sizeof(jint))); >> } else if (as_int < 0 || as_int > 9) { >> > From leonid.mesnik at oracle.com Tue Feb 13 23:44:03 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Tue, 13 Feb 2018 15:44:03 -0800 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: Andrew Thank you for fixing the tests. I think it would also be better to rewrite the test in Java using the process utilities in test/lib/jdk/test/lib/process/ProcessTools.java. It is possible to push a shell test, but the general direction for OpenJDK tests is to use Java and the testlibrary. 
It is now easier to develop and debug tests with new processes using the testlibrary. They could also execute faster. But if you have any reason to keep the test in shell, then I am fine. Leonid > On Feb 13, 2018, at 6:51 AM, Andrew Haley wrote: > > Webrev amended. > > http://cr.openjdk.java.net/~aph/8197429-2/ > > Copyrights fixed. > > Test case changed to be built with the rest of the native code. > > We now only bang down the stack to its maximum size if we're going to > run Java code on the primordial stack. This only happens with the > JNI invocation interface. > > The new logic is: > > if we're on the primordial stack > bang down the stack to its maximum size > > try to map the codebuf just below the primordial stack > > if that didn't work > try to map again, but 1 megabyte lower > > OK? > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Wed Feb 14 00:30:39 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Feb 2018 19:30:39 -0500 Subject: RFR(XXS) 8197857 fieldDescriptor prints incorrect 32-bit representation of compressed oops In-Reply-To: <3902728e-8fb2-3c58-76a5-7d5a72465fff@oracle.com> References: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> <3902728e-8fb2-3c58-76a5-7d5a72465fff@oracle.com> Message-ID: On 2/13/18 6:37 PM, Ioi Lam wrote: > > > On 2/13/18 12:30 PM, coleen.phillimore at oracle.com wrote: >> >> This looks good but this is very odd output. I don't know why we >> print this. I wouldn't object if it were removed. >> > I guess it's useful for someone debugging issues related to compressed > oops? I don't think so. I never used it. It might be for debugging short and ints? I think it's useless; if you want to remove it, I'll review it quickly. > >> Otherwise, I hate to do this to a trivial change but would this also >> print this better? >> >> 
// Print a hint as to the underlying integer representation. This >> can be wrong for >> // pointers on an LP64 machine >> if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| (!UseCompressedOops >> && !is_java_primitive(ft))) ) { >> st->print(" (%x %x)", obj->int_field(offset()), >> obj->int_field(offset()+sizeof(jint))); >> } else if (as_int < 0 || as_int > 9) { >> st->print(" (%x)", as_int); >> } >> > > This would make the code even harder to read than it already is. > > Also, the (as_int < 0 || as_int > 9) is useful only for 32-bit > pointers and numerical values. For CompressedOops, I guess it's > possible to have a value 8. This is probably not a big deal, but I > don't want to have code that's theoretically incorrect. I saw this line afterwards, and can't guess why it's there. Your patch is fine if you want to push it. Coleen > > Thanks > - Ioi > > > >> >> Thanks, >> Coleen >> >> On 2/13/18 12:28 PM, Ioi Lam wrote: >>> https://bugs.openjdk.java.net/browse/JDK-8197857 >>> >>> >>> When UseCompressedOops is enabled for 64-bit VMs, >>> fieldDescriptor::print_on_for >>> prints two 32-bit integers for each object field. E.g. >>> >>> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0 >>> 8221b591) >>> - final 'argL0' 'Ljava/lang/Object;' @28 a >>> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591 1) >>> >>> However, compressed oops occupy the space of only a single 32-bit >>> integer, so the superfluous output is confusing. >>> >>> The above should be printed as >>> >>> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0) >>> - final 'argL0' 'Ljava/lang/Object;' @28 a >>> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591) >>> >>> Patch: >>> ======================================= >>> >>> --- a/src/hotspot/share/runtime/fieldDescriptor.cpp Mon Feb 12 >>> 09:12:59 2018 -0800 >>> +++ b/src/hotspot/share/runtime/fieldDescriptor.cpp Tue Feb 13 >>> 09:24:26 2018 -0800 >>> @@ -201,6 +201,13 @@ >>> } >>> 
// Print a hint as to the underlying integer representation. This >>> can be wrong for >>> // pointers on an LP64 machine >>> + >>> +#ifdef _LP64 >>> + if ((ft == T_OBJECT || ft == T_ARRAY) && UseCompressedOops) { >>> + st->print(" (%x)", obj->int_field(offset())); >>> + } >>> + else // <- intended >>> +#endif >>> if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| >>> !is_java_primitive(ft)) ) { >>> st->print(" (%x %x)", obj->int_field(offset()), >>> obj->int_field(offset()+sizeof(jint))); >>> } else if (as_int < 0 || as_int > 9) { >>> >> > From david.holmes at oracle.com Wed Feb 14 02:10:39 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Feb 2018 12:10:39 +1000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> On 14/02/2018 12:51 AM, Andrew Haley wrote: > Webrev amended. > > http://cr.openjdk.java.net/~aph/8197429-2/ My question still stands: How does this interact with the use of DisablePrimordialThreadGuardPages? Thanks, David ----- > Copyrights fixed. > > Test case changed to be built with the rest of the native code. > > We now only bang down the stack to its maximum size if we're going to > run Java code on the primordial stack. This only happens with the > JNI invocation interface. > > The new logic is: > > if we're on the primordial stack > bang down the stack to its maximum size > > try to map the codebuf just below the primordial stack > > if that didn't work > try to map again, but 1 megabyte lower > > OK? 
> From aph at redhat.com Wed Feb 14 09:23:11 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 14 Feb 2018 09:23:11 +0000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: On 13/02/18 23:44, Leonid Mesnik wrote: > Thank you for fixing the tests. I think it would also be better to > rewrite the test in Java using the process utilities in > test/lib/jdk/test/lib/process/ProcessTools.java. > It is possible to push a shell test, but the general direction for > OpenJDK tests is to use Java and the testlibrary. > It is now easier to develop and debug tests with new processes > using the testlibrary. They could also execute faster. > But if you have any reason to keep the test in shell, then I am fine. I'm trying to test a very specific fault path, one that is due to the JNI launcher from C. If I wanted to test something else I would do something else, but I don't understand your motivation for wanting to write the test in some other way. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From adam.farley at uk.ibm.com Wed Feb 14 11:32:31 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Wed, 14 Feb 2018 11:32:31 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers Message-ID: Hi All, Currently, diagnostic core files generated from OpenJDK seem to lump all of the native memory usages together, making it near-impossible for someone to figure out *what* is using all that memory in the event of a memory leak. The OpenJ9 VM has a feature which allows it to track the allocation of native memory for Direct Byte Buffers (DBBs), and to supply that information into the cores when they are generated. 
This makes it a *lot* easier to find out what is using all that native memory, making memory leak resolution less like some dark art, and more like logical debugging. To use this feature, there is a native method referenced in Unsafe.java. To open up this feature so that any VM can make use of it, the java code below sets the stage for it. This change starts letting people call DBB-specific methods when allocating native memory, and getting into the habit of using it. Thoughts? Best Regards Adam Farley P.S. Code: diff --git a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template @@ -85,7 +85,7 @@ // Paranoia return; } - UNSAFE.freeMemory(address); + UNSAFE.freeDBBMemory(address); address = 0; Bits.unreserveMemory(size, capacity); } @@ -118,7 +118,7 @@ long base = 0; try { - base = UNSAFE.allocateMemory(size); + base = UNSAFE.allocateDBBMemory(size); } catch (OutOfMemoryError x) { Bits.unreserveMemory(size, cap); throw x; diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java @@ -632,6 +632,26 @@ } /** + * Allocates a new block of native memory for DirectByteBuffers, of the + * given size in bytes. The contents of the memory are uninitialized; + * they will generally be garbage. The resulting native pointer will + * never be zero, and will be aligned for all value types. Dispose of + * this memory by calling {@link #freeDBBMemory} or resize it with + * {@link #reallocateDBBMemory}. 
+ * + * @throws RuntimeException if the size is negative or too large + * for the native size_t type + * + * @throws OutOfMemoryError if the allocation is refused by the system + * + * @see #getByte(long) + * @see #putByte(long, byte) + */ + public long allocateDBBMemory(long bytes) { + return allocateMemory(bytes); + } + + /** * Resizes a new block of native memory, to the given size in bytes. The * contents of the new block past the size of the old block are * uninitialized; they will generally be garbage. The resulting native @@ -687,6 +707,27 @@ } /** + * Resizes a new block of native memory for DirectByteBuffers, to the + * given size in bytes. The contents of the new block past the size of + * the old block are uninitialized; they will generally be garbage. The + * resulting native pointer will be zero if and only if the requested size + * is zero. The resulting native pointer will be aligned for all value + * types. Dispose of this memory by calling {@link #freeDBBMemory}, or + * resize it with {@link #reallocateDBBMemory}. The address passed to + * this method may be null, in which case an allocation will be performed. + * + * @throws RuntimeException if the size is negative or too large + * for the native size_t type + * + * @throws OutOfMemoryError if the allocation is refused by the system + * + * @see #allocateDBBMemory + */ + public long reallocateDBBMemory(long address, long bytes) { + return reallocateMemory(address, bytes); + } + + /** * Sets all bytes in a given block of memory to a fixed value * (usually zero). * @@ -918,6 +959,17 @@ checkPointer(null, address); } + /** + * Disposes of a block of native memory, as obtained from {@link + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address passed + * to this method may be null, in which case no action is taken. 
+ * + * @see #allocateDBBMemory + */ + public void freeDBBMemory(long address) { + freeMemory(address); + } + /// random queries /** Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From david.holmes at oracle.com Wed Feb 14 12:43:38 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Feb 2018 22:43:38 +1000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: Message-ID: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Adding in core-libs-dev as there's nothing related to hotspot directly here. David On 14/02/2018 9:32 PM, Adam Farley8 wrote: > Hi All, > > Currently, diagnostic core files generated from OpenJDK seem to lump all > of the > native memory usages together, making it near-impossible for someone to > figure > out *what* is using all that memory in the event of a memory leak. > > The OpenJ9 VM has a feature which allows it to track the allocation of > native > memory for Direct Byte Buffers (DBBs), and to supply that information into > the > cores when they are generated. This makes it a *lot* easier to find out > what is using > all that native memory, making memory leak resolution less like some dark > art, and > more like logical debugging. > > To use this feature, there is a native method referenced in Unsafe.java. > To open > up this feature so that any VM can make use of it, the java code below > sets the > stage for it. This change starts letting people call DBB-specific methods > when > allocating native memory, and getting into the habit of using it. > > Thoughts? > > Best Regards > > Adam Farley > > P.S. 
Code:
>
> diff --git
> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
> @@ -85,7 +85,7 @@
>                   // Paranoia
>                   return;
>               }
> -            UNSAFE.freeMemory(address);
> +            UNSAFE.freeDBBMemory(address);
>               address = 0;
>               Bits.unreserveMemory(size, capacity);
>           }
> @@ -118,7 +118,7 @@
>
>           long base = 0;
>           try {
> -            base = UNSAFE.allocateMemory(size);
> +            base = UNSAFE.allocateDBBMemory(size);
>           } catch (OutOfMemoryError x) {
>               Bits.unreserveMemory(size, cap);
>               throw x;
> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
> @@ -632,6 +632,26 @@
>       }
>
>       /**
> +     * Allocates a new block of native memory for DirectByteBuffers, of the
> +     * given size in bytes.  The contents of the memory are uninitialized;
> +     * they will generally be garbage.  The resulting native pointer will
> +     * never be zero, and will be aligned for all value types.  Dispose of
> +     * this memory by calling {@link #freeDBBMemory} or resize it with
> +     * {@link #reallocateDBBMemory}.
> +     *
> +     * @throws RuntimeException if the size is negative or too large
> +     *                          for the native size_t type
> +     *
> +     * @throws OutOfMemoryError if the allocation is refused by the system
> +     *
> +     * @see #getByte(long)
> +     * @see #putByte(long, byte)
> +     */
> +    public long allocateDBBMemory(long bytes) {
> +        return allocateMemory(bytes);
> +    }
> +
> +    /**
>       * Resizes a new block of native memory, to the given size in bytes.  The
>       * contents of the new block past the size of the old block are
>       * uninitialized; they will generally be garbage.  The resulting native
> @@ -687,6 +707,27 @@
>       }
>
>       /**
> +     * Resizes a new block of native memory for DirectByteBuffers, to the
> +     * given size in bytes.  The contents of the new block past the size of
> +     * the old block are uninitialized; they will generally be garbage.  The
> +     * resulting native pointer will be zero if and only if the requested size
> +     * is zero.  The resulting native pointer will be aligned for all value
> +     * types.  Dispose of this memory by calling {@link #freeDBBMemory}, or
> +     * resize it with {@link #reallocateDBBMemory}.  The address passed to
> +     * this method may be null, in which case an allocation will be performed.
> +     *
> +     * @throws RuntimeException if the size is negative or too large
> +     *                          for the native size_t type
> +     *
> +     * @throws OutOfMemoryError if the allocation is refused by the system
> +     *
> +     * @see #allocateDBBMemory
> +     */
> +    public long reallocateDBBMemory(long address, long bytes) {
> +        return reallocateMemory(address, bytes);
> +    }
> +
> +    /**
>       * Sets all bytes in a given block of memory to a fixed value
>       * (usually zero).
>       *
> @@ -918,6 +959,17 @@
>           checkPointer(null, address);
>       }
>
> +    /**
> +     * Disposes of a block of native memory, as obtained from {@link
> +     * #allocateDBBMemory} or {@link #reallocateDBBMemory}.  The address passed
> +     * to this method may be null, in which case no action is taken.
> +     *
> +     * @see #allocateDBBMemory
> +     */
> +    public void freeDBBMemory(long address) {
> +        freeMemory(address);
> +    }
> +
>       /// random queries
>
>       /**
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>

From david.holmes at oracle.com Wed Feb 14 12:53:42 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Feb 2018 22:53:42 +1000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: On 14/02/2018 10:43 PM, David Holmes wrote: > Adding in core-libs-dev as there's nothing related to hotspot directly > here. Correction, this is of course leading to a proposed change in hotspot to implement the new Unsafe methods and perform the native memory tracking. Of course we already have NMT so the obvious question is how this will fit in with NMT? David > David > > On 14/02/2018 9:32 PM, Adam Farley8 wrote: >> Hi All, >> >> Currently, diagnostic core files generated from OpenJDK seem to lump all >> of the >> native memory usages together, making it near-impossible for someone to >> figure >> out *what* is using all that memory in the event of a memory leak. >> >> The OpenJ9 VM has a feature which allows it to track the allocation of >> native >> memory for Direct Byte Buffers (DBBs), and to supply that information >> into >> the >> cores when they are generated. This makes it a *lot* easier to find out >> what is using >> all that native memory, making memory leak resolution less like some dark >> art, and >> more like logical debugging. >> >> To use this feature, there is a native method referenced in Unsafe.java. >> To open >> up this feature so that any VM can make use of it, the java code below >> sets the >> stage for it. This change starts letting people call DBB-specific methods >> when >> allocating native memory, and getting into the habit of using it. >> >> Thoughts? >> >> Best Regards >> >> Adam Farley >> >> P.S.
Code:
>>
>> diff --git
>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
>> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
>> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template
>> @@ -85,7 +85,7 @@
>>                   // Paranoia
>>                   return;
>>               }
>> -            UNSAFE.freeMemory(address);
>> +            UNSAFE.freeDBBMemory(address);
>>               address = 0;
>>               Bits.unreserveMemory(size, capacity);
>>           }
>> @@ -118,7 +118,7 @@
>>
>>           long base = 0;
>>           try {
>> -            base = UNSAFE.allocateMemory(size);
>> +            base = UNSAFE.allocateDBBMemory(size);
>>           } catch (OutOfMemoryError x) {
>>               Bits.unreserveMemory(size, cap);
>>               throw x;
>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java
>> @@ -632,6 +632,26 @@
>>       }
>>
>>       /**
>> +     * Allocates a new block of native memory for DirectByteBuffers, of the
>> +     * given size in bytes.  The contents of the memory are uninitialized;
>> +     * they will generally be garbage.  The resulting native pointer will
>> +     * never be zero, and will be aligned for all value types.  Dispose of
>> +     * this memory by calling {@link #freeDBBMemory} or resize it with
>> +     * {@link #reallocateDBBMemory}.
>> +     *
>> +     * @throws RuntimeException if the size is negative or too large
>> +     *                          for the native size_t type
>> +     *
>> +     * @throws OutOfMemoryError if the allocation is refused by the system
>> +     *
>> +     * @see #getByte(long)
>> +     * @see #putByte(long, byte)
>> +     */
>> +    public long allocateDBBMemory(long bytes) {
>> +        return allocateMemory(bytes);
>> +    }
>> +
>> +    /**
>>       * Resizes a new block of native memory, to the given size in bytes.  The
>>       * contents of the new block past the size of the old block are
>>       * uninitialized; they will generally be garbage.  The resulting native
>> @@ -687,6 +707,27 @@
>>       }
>>
>>       /**
>> +     * Resizes a new block of native memory for DirectByteBuffers, to the
>> +     * given size in bytes.  The contents of the new block past the size of
>> +     * the old block are uninitialized; they will generally be garbage.  The
>> +     * resulting native pointer will be zero if and only if the requested size
>> +     * is zero.  The resulting native pointer will be aligned for all value
>> +     * types.  Dispose of this memory by calling {@link #freeDBBMemory}, or
>> +     * resize it with {@link #reallocateDBBMemory}.  The address passed to
>> +     * this method may be null, in which case an allocation will be performed.
>> +     *
>> +     * @throws RuntimeException if the size is negative or too large
>> +     *                          for the native size_t type
>> +     *
>> +     * @throws OutOfMemoryError if the allocation is refused by the system
>> +     *
>> +     * @see #allocateDBBMemory
>> +     */
>> +    public long reallocateDBBMemory(long address, long bytes) {
>> +        return reallocateMemory(address, bytes);
>> +    }
>> +
>> +    /**
>>       * Sets all bytes in a given block of memory to a fixed value
>>       * (usually zero).
>>       *
>> @@ -918,6 +959,17 @@
>>           checkPointer(null, address);
>>       }
>>
>> +    /**
>> +     * Disposes of a block of native memory, as obtained from {@link
>> +     * #allocateDBBMemory} or {@link #reallocateDBBMemory}.  The address passed
>> +     * to this method may be null, in which case no action is taken.
>> +     *
>> +     * @see #allocateDBBMemory
>> +     */
>> +    public void freeDBBMemory(long address) {
>> +        freeMemory(address);
>> +    }
>> +
>>       /// random queries
>>
>>       /**
>>
>> Unless stated otherwise above:
>> IBM United Kingdom Limited - Registered in England and Wales with number
>> 741598.
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
>> 3AU
>>

From aph at redhat.com Wed Feb 14 12:55:26 2018 From: aph at redhat.com (Andrew Haley) Date: Wed, 14 Feb 2018 12:55:26 +0000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> Message-ID: <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com>

On 14/02/18 02:10, David Holmes wrote:
> On 14/02/2018 12:51 AM, Andrew Haley wrote:
>> Webrev amended.
>>
>> http://cr.openjdk.java.net/~aph/8197429-2/
>
> My question still stands:

Sorry, I didn't see it.

> How does this interact with the use of DisablePrimordialThreadGuardPages?

My initial answer was "not at all", but there is a minor possible modification. If DisablePrimordialThreadGuardPages is set it is possible to use slightly more stack in Java code, so we could bang down slightly further in workaround_expand_exec_shield_cs_limit() and therefore place the codebuf slightly lower. This would allow every page of the primordial stack to be used in Java code. Like this:

  if (os::is_primordial_thread()) {
    address limit = Linux::initial_thread_stack_bottom();
    if (! DisablePrimordialThreadGuardPages) {
      limit += JavaThread::stack_red_zone_size() +
               JavaThread::stack_yellow_zone_size();
    }
    os::Linux::expand_stack_to(limit);
  }

I'm happy to make that change and add a test for DisablePrimordialThreadGuardPages if you think it's worth doing.
Alternatively, we could simply ignore the JVM's stack guard pages in the calculation and always bang down all the way to initial_thread_stack_bottom(). This would cause the codebuf to be mapped slightly lower, but I guess that's no big deal. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From adam.farley at uk.ibm.com Wed Feb 14 13:04:44 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Wed, 14 Feb 2018 13:04:44 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: > Adding in core-libs-dev as there's nothing related to hotspot directly here. > > David Thought it was best to pass this through hotspot lists first, as full completion of the native side of things will probably require hotspot changes. You're quite right though, I should have cc'd hotspot *and* core libs. - Adam > > On 14/02/2018 9:32 PM, Adam Farley8 wrote: >> Hi All, >> >> Currently, diagnostic core files generated from OpenJDK seem to lump all >> of the >> native memory usages together, making it near-impossible for someone to >> figure >> out *what* is using all that memory in the event of a memory leak. >> >> The OpenJ9 VM has a feature which allows it to track the allocation of >> native >> memory for Direct Byte Buffers (DBBs), and to supply that information into >> the >> cores when they are generated. This makes it a *lot* easier to find out >> what is using >> all that native memory, making memory leak resolution less like some dark >> art, and >> more like logical debugging. >> >> To use this feature, there is a native method referenced in Unsafe.java. >> To open >> up this feature so that any VM can make use of it, the java code below >> sets the >> stage for it. 
This change starts letting people call DBB-specific methods >> when >> allocating native memory, and getting into the habit of using it. >> >> Thoughts? >> >> Best Regards >> >> Adam Farley >> >> P.S. Code: >> >> diff --git >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> @@ -85,7 +85,7 @@ >> // Paranoia >> return; >> } >> - UNSAFE.freeMemory(address); >> + UNSAFE.freeDBBMemory(address); >> address = 0; >> Bits.unreserveMemory(size, capacity); >> } >> @@ -118,7 +118,7 @@ >> >> long base = 0; >> try { >> - base = UNSAFE.allocateMemory(size); >> + base = UNSAFE.allocateDBBMemory(size); >> } catch (OutOfMemoryError x) { >> Bits.unreserveMemory(size, cap); >> throw x; >> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> @@ -632,6 +632,26 @@ >> } >> >> /** >> + * Allocates a new block of native memory for DirectByteBuffers, of >> the >> + * given size in bytes. The contents of the memory are >> uninitialized; >> + * they will generally be garbage. The resulting native pointer will >> + * never be zero, and will be aligned for all value types. Dispose >> of >> + * this memory by calling {@link #freeDBBMemory} or resize it with >> + * {@link #reallocateDBBMemory}. 
>> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #getByte(long) >> + * @see #putByte(long, byte) >> + */ >> + public long allocateDBBMemory(long bytes) { >> + return allocateMemory(bytes); >> + } >> + >> + /** >> * Resizes a new block of native memory, to the given size in bytes. >> The >> * contents of the new block past the size of the old block are >> * uninitialized; they will generally be garbage. The resulting >> native >> @@ -687,6 +707,27 @@ >> } >> >> /** >> + * Resizes a new block of native memory for DirectByteBuffers, to the >> + * given size in bytes. The contents of the new block past the size >> of >> + * the old block are uninitialized; they will generally be garbage. >> The >> + * resulting native pointer will be zero if and only if the requested >> size >> + * is zero. The resulting native pointer will be aligned for all >> value >> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >> or >> + * resize it with {@link #reallocateDBBMemory}. The address passed >> to >> + * this method may be null, in which case an allocation will be >> performed. >> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #allocateDBBMemory >> + */ >> + public long reallocateDBBMemory(long address, long bytes) { >> + return reallocateMemory(address, bytes); >> + } >> + >> + /** >> * Sets all bytes in a given block of memory to a fixed value >> * (usually zero). >> * >> @@ -918,6 +959,17 @@ >> checkPointer(null, address); >> } >> >> + /** >> + * Disposes of a block of native memory, as obtained from {@link >> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. 
The address >> passed >> + * to this method may be null, in which case no action is taken. >> + * >> + * @see #allocateDBBMemory >> + */ >> + public void freeDBBMemory(long address) { >> + freeMemory(address); >> + } >> + >> /// random queries >> >> /** >> >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU >> > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From thomas.stuefe at gmail.com Wed Feb 14 13:16:14 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 14 Feb 2018 14:16:14 +0100 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: On Wed, Feb 14, 2018 at 1:53 PM, David Holmes wrote: > On 14/02/2018 10:43 PM, David Holmes wrote: > >> Adding in core-libs-dev as there's nothing related to hotspot directly >> here. >> > > Correction, this is of course leading to a proposed change in hotspot to > implement the new Unsafe methods and perform the native memory tracking. Of > course we already have NMT so the obvious question is how this will fit in > with NMT? > > I thought Unsafe.allocateMemory is served by hotspot os::malloc(), is it not? So, allocations should show up in NMT with "Unsafe_AllocateMemory0". ..Thomas > David > > > David >> >> On 14/02/2018 9:32 PM, Adam Farley8 wrote: >> >>> Hi All, >>> >>> Currently, diagnostic core files generated from OpenJDK seem to lump all >>> of the >>> native memory usages together, making it near-impossible for someone to >>> figure >>> out *what* is using all that memory in the event of a memory leak. 
>>> >>> The OpenJ9 VM has a feature which allows it to track the allocation of >>> native >>> memory for Direct Byte Buffers (DBBs), and to supply that information >>> into >>> the >>> cores when they are generated. This makes it a *lot* easier to find out >>> what is using >>> all that native memory, making memory leak resolution less like some dark >>> art, and >>> more like logical debugging. >>> >>> To use this feature, there is a native method referenced in Unsafe.java. >>> To open >>> up this feature so that any VM can make use of it, the java code below >>> sets the >>> stage for it. This change starts letting people call DBB-specific methods >>> when >>> allocating native memory, and getting into the habit of using it. >>> >>> Thoughts? >>> >>> Best Regards >>> >>> Adam Farley >>> >>> P.S. Code: >>> >>> diff --git >>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> @@ -85,7 +85,7 @@ >>> // Paranoia >>> return; >>> } >>> - UNSAFE.freeMemory(address); >>> + UNSAFE.freeDBBMemory(address); >>> address = 0; >>> Bits.unreserveMemory(size, capacity); >>> } >>> @@ -118,7 +118,7 @@ >>> long base = 0; >>> try { >>> - base = UNSAFE.allocateMemory(size); >>> + base = UNSAFE.allocateDBBMemory(size); >>> } catch (OutOfMemoryError x) { >>> Bits.unreserveMemory(size, cap); >>> throw x; >>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> @@ -632,6 +632,26 @@ >>> } >>> /** >>> + * Allocates a new block of native memory for DirectByteBuffers, of >>> the >>> + * given size in bytes. 
The contents of the memory are >>> uninitialized; >>> + * they will generally be garbage. The resulting native pointer >>> will >>> + * never be zero, and will be aligned for all value types. Dispose >>> of >>> + * this memory by calling {@link #freeDBBMemory} or resize it with >>> + * {@link #reallocateDBBMemory}. >>> + * >>> + * @throws RuntimeException if the size is negative or too large >>> + * for the native size_t type >>> + * >>> + * @throws OutOfMemoryError if the allocation is refused by the >>> system >>> + * >>> + * @see #getByte(long) >>> + * @see #putByte(long, byte) >>> + */ >>> + public long allocateDBBMemory(long bytes) { >>> + return allocateMemory(bytes); >>> + } >>> + >>> + /** >>> * Resizes a new block of native memory, to the given size in >>> bytes. >>> The >>> * contents of the new block past the size of the old block are >>> * uninitialized; they will generally be garbage. The resulting >>> native >>> @@ -687,6 +707,27 @@ >>> } >>> /** >>> + * Resizes a new block of native memory for DirectByteBuffers, to >>> the >>> + * given size in bytes. The contents of the new block past the size >>> of >>> + * the old block are uninitialized; they will generally be garbage. >>> The >>> + * resulting native pointer will be zero if and only if the >>> requested >>> size >>> + * is zero. The resulting native pointer will be aligned for all >>> value >>> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >>> or >>> + * resize it with {@link #reallocateDBBMemory}. The address passed >>> to >>> + * this method may be null, in which case an allocation will be >>> performed. 
>>> + * >>> + * @throws RuntimeException if the size is negative or too large >>> + * for the native size_t type >>> + * >>> + * @throws OutOfMemoryError if the allocation is refused by the >>> system >>> + * >>> + * @see #allocateDBBMemory >>> + */ >>> + public long reallocateDBBMemory(long address, long bytes) { >>> + return reallocateMemory(address, bytes); >>> + } >>> + >>> + /** >>> * Sets all bytes in a given block of memory to a fixed value >>> * (usually zero). >>> * >>> @@ -918,6 +959,17 @@ >>> checkPointer(null, address); >>> } >>> + /** >>> + * Disposes of a block of native memory, as obtained from {@link >>> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address >>> passed >>> + * to this method may be null, in which case no action is taken. >>> + * >>> + * @see #allocateDBBMemory >>> + */ >>> + public void freeDBBMemory(long address) { >>> + freeMemory(address); >>> + } >>> + >>> /// random queries >>> /** >>> >>> Unless stated otherwise above: >>> IBM United Kingdom Limited - Registered in England and Wales with number >>> 741598. >>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 >>> 3AU >>> >>> From adam.farley at uk.ibm.com Wed Feb 14 13:32:46 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Wed, 14 Feb 2018 13:32:46 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: >> Adding in core-libs-dev as there's nothing related to hotspot directly >> here. > > Correction, this is of course leading to a proposed change in hotspot to > implement the new Unsafe methods and perform the native memory tracking. Hah, I wrote the same thing in a parallel reply. Jinx. :) - Adam > Of course we already have NMT so the obvious question is how this will > fit in with NMT? 
> > David It will add granularity to Native Memory Tracking, allowing people to tell, at a glance, how much of the allocated native memory has been used for Direct Byte Buffers. This makes native memory OOM debugging easier. Think of it as an NMT upgrade. Here's an example of what the output should look like: https://developer.ibm.com/answers/questions/288697/why-does-nativememinfo-in-javacore-show-incorrect.html?sort=oldest - Adam > >> David >> >> On 14/02/2018 9:32 PM, Adam Farley8 wrote: >>> Hi All, >>> >>> Currently, diagnostic core files generated from OpenJDK seem to lump all >>> of the >>> native memory usages together, making it near-impossible for someone to >>> figure >>> out *what* is using all that memory in the event of a memory leak. >>> >>> The OpenJ9 VM has a feature which allows it to track the allocation of >>> native >>> memory for Direct Byte Buffers (DBBs), and to supply that information >>> into >>> the >>> cores when they are generated. This makes it a *lot* easier to find out >>> what is using >>> all that native memory, making memory leak resolution less like some dark >>> art, and >>> more like logical debugging. >>> >>> To use this feature, there is a native method referenced in Unsafe.java. >>> To open >>> up this feature so that any VM can make use of it, the java code below >>> sets the >>> stage for it. This change starts letting people call DBB-specific methods >>> when >>> allocating native memory, and getting into the habit of using it. >>> >>> Thoughts? >>> >>> Best Regards >>> >>> Adam Farley >>> >>> P.S. 
Code: >>> >>> diff --git >>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>> @@ -85,7 +85,7 @@ >>> // Paranoia >>> return; >>> } >>> - UNSAFE.freeMemory(address); >>> + UNSAFE.freeDBBMemory(address); >>> address = 0; >>> Bits.unreserveMemory(size, capacity); >>> } >>> @@ -118,7 +118,7 @@ >>> long base = 0; >>> try { >>> - base = UNSAFE.allocateMemory(size); >>> + base = UNSAFE.allocateDBBMemory(size); >>> } catch (OutOfMemoryError x) { >>> Bits.unreserveMemory(size, cap); >>> throw x; >>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>> @@ -632,6 +632,26 @@ >>> } >>> /** >>> + * Allocates a new block of native memory for DirectByteBuffers, of >>> the >>> + * given size in bytes. The contents of the memory are >>> uninitialized; >>> + * they will generally be garbage. The resulting native pointer >>> will >>> + * never be zero, and will be aligned for all value types. Dispose >>> of >>> + * this memory by calling {@link #freeDBBMemory} or resize it with >>> + * {@link #reallocateDBBMemory}. >>> + * >>> + * @throws RuntimeException if the size is negative or too large >>> + * for the native size_t type >>> + * >>> + * @throws OutOfMemoryError if the allocation is refused by the >>> system >>> + * >>> + * @see #getByte(long) >>> + * @see #putByte(long, byte) >>> + */ >>> + public long allocateDBBMemory(long bytes) { >>> + return allocateMemory(bytes); >>> + } >>> + >>> + /** >>> * Resizes a new block of native memory, to the given size in >>> bytes. 
>>> The >>> * contents of the new block past the size of the old block are >>> * uninitialized; they will generally be garbage. The resulting >>> native >>> @@ -687,6 +707,27 @@ >>> } >>> /** >>> + * Resizes a new block of native memory for DirectByteBuffers, to >>> the >>> + * given size in bytes. The contents of the new block past the size >>> of >>> + * the old block are uninitialized; they will generally be garbage. >>> The >>> + * resulting native pointer will be zero if and only if the >>> requested >>> size >>> + * is zero. The resulting native pointer will be aligned for all >>> value >>> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >>> or >>> + * resize it with {@link #reallocateDBBMemory}. The address passed >>> to >>> + * this method may be null, in which case an allocation will be >>> performed. >>> + * >>> + * @throws RuntimeException if the size is negative or too large >>> + * for the native size_t type >>> + * >>> + * @throws OutOfMemoryError if the allocation is refused by the >>> system >>> + * >>> + * @see #allocateDBBMemory >>> + */ >>> + public long reallocateDBBMemory(long address, long bytes) { >>> + return reallocateMemory(address, bytes); >>> + } >>> + >>> + /** >>> * Sets all bytes in a given block of memory to a fixed value >>> * (usually zero). >>> * >>> @@ -918,6 +959,17 @@ >>> checkPointer(null, address); >>> } >>> + /** >>> + * Disposes of a block of native memory, as obtained from {@link >>> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address >>> passed >>> + * to this method may be null, in which case no action is taken. >>> + * >>> + * @see #allocateDBBMemory >>> + */ >>> + public void freeDBBMemory(long address) { >>> + freeMemory(address); >>> + } >>> + >>> /// random queries >>> /** >>> >>> Unless stated otherwise above: >>> IBM United Kingdom Limited - Registered in England and Wales with number >>> 741598. 
>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 >>> 3AU >>> Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From lois.foltan at oracle.com Wed Feb 14 13:48:23 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 14 Feb 2018 08:48:23 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One Message-ID: Please review this change in VS2017 to the delete operator due to C++14 standard conformance. From https://msdn.microsoft.com/en-us/library/mt723604.aspx The function "void operator delete(void *, size_t)" was a placement delete operator corresponding to the placement new function "void * operator new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete function is now a "usual deallocation function" (global delete operator). The standard requires that if the use of a placement new looks up a corresponding delete function and finds a usual deallocation function, the program is ill-formed. Thank you to Kim Barrett for proposing the fix below. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8196880 Testing complete (hs-tier1-3, jdk-tier1-3) Thanks, Lois From zgu at redhat.com Wed Feb 14 13:53:46 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 14 Feb 2018 08:53:46 -0500 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: On 02/14/2018 08:16 AM, Thomas Stüfe wrote: > On Wed, Feb 14, 2018 at 1:53 PM, David Holmes > wrote: > >> On 14/02/2018 10:43 PM, David Holmes wrote: >> >>> Adding in core-libs-dev as there's nothing related to hotspot directly >>> here.
>>> >> >> Correction, this is of course leading to a proposed change in hotspot to >> implement the new Unsafe methods and perform the native memory tracking. Of >> course we already have NMT so the obvious question is how this will fit in >> with NMT? >> >> > I thought Unsafe.allocateMemory is served by hotspot os::malloc(), is it > not? So, allocations should show up in NMT with "Unsafe_AllocateMemory0". Could use another category, to make it easier to identify. Thanks, -Zhengyu > > ..Thomas > > > >> David >> >> >> David >>> >>> On 14/02/2018 9:32 PM, Adam Farley8 wrote: >>> >>>> Hi All, >>>> >>>> Currently, diagnostic core files generated from OpenJDK seem to lump all >>>> of the >>>> native memory usages together, making it near-impossible for someone to >>>> figure >>>> out *what* is using all that memory in the event of a memory leak. >>>> >>>> The OpenJ9 VM has a feature which allows it to track the allocation of >>>> native >>>> memory for Direct Byte Buffers (DBBs), and to supply that information >>>> into >>>> the >>>> cores when they are generated. This makes it a *lot* easier to find out >>>> what is using >>>> all that native memory, making memory leak resolution less like some dark >>>> art, and >>>> more like logical debugging. >>>> >>>> To use this feature, there is a native method referenced in Unsafe.java. >>>> To open >>>> up this feature so that any VM can make use of it, the java code below >>>> sets the >>>> stage for it. This change starts letting people call DBB-specific methods >>>> when >>>> allocating native memory, and getting into the habit of using it. >>>> >>>> Thoughts? >>>> >>>> Best Regards >>>> >>>> Adam Farley >>>> >>>> P.S.
Code: >>>> >>>> diff --git >>>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>> @@ -85,7 +85,7 @@ >>>> // Paranoia >>>> return; >>>> } >>>> - UNSAFE.freeMemory(address); >>>> + UNSAFE.freeDBBMemory(address); >>>> address = 0; >>>> Bits.unreserveMemory(size, capacity); >>>> } >>>> @@ -118,7 +118,7 @@ >>>> long base = 0; >>>> try { >>>> - base = UNSAFE.allocateMemory(size); >>>> + base = UNSAFE.allocateDBBMemory(size); >>>> } catch (OutOfMemoryError x) { >>>> Bits.unreserveMemory(size, cap); >>>> throw x; >>>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>> @@ -632,6 +632,26 @@ >>>> } >>>> /** >>>> + * Allocates a new block of native memory for DirectByteBuffers, of >>>> the >>>> + * given size in bytes. The contents of the memory are >>>> uninitialized; >>>> + * they will generally be garbage. The resulting native pointer >>>> will >>>> + * never be zero, and will be aligned for all value types. Dispose >>>> of >>>> + * this memory by calling {@link #freeDBBMemory} or resize it with >>>> + * {@link #reallocateDBBMemory}. 
>>>> + * >>>> + * @throws RuntimeException if the size is negative or too large >>>> + * for the native size_t type >>>> + * >>>> + * @throws OutOfMemoryError if the allocation is refused by the >>>> system >>>> + * >>>> + * @see #getByte(long) >>>> + * @see #putByte(long, byte) >>>> + */ >>>> + public long allocateDBBMemory(long bytes) { >>>> + return allocateMemory(bytes); >>>> + } >>>> + >>>> + /** >>>> * Resizes a new block of native memory, to the given size in >>>> bytes. >>>> The >>>> * contents of the new block past the size of the old block are >>>> * uninitialized; they will generally be garbage. The resulting >>>> native >>>> @@ -687,6 +707,27 @@ >>>> } >>>> /** >>>> + * Resizes a new block of native memory for DirectByteBuffers, to >>>> the >>>> + * given size in bytes. The contents of the new block past the size >>>> of >>>> + * the old block are uninitialized; they will generally be garbage. >>>> The >>>> + * resulting native pointer will be zero if and only if the >>>> requested >>>> size >>>> + * is zero. The resulting native pointer will be aligned for all >>>> value >>>> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >>>> or >>>> + * resize it with {@link #reallocateDBBMemory}. The address passed >>>> to >>>> + * this method may be null, in which case an allocation will be >>>> performed. >>>> + * >>>> + * @throws RuntimeException if the size is negative or too large >>>> + * for the native size_t type >>>> + * >>>> + * @throws OutOfMemoryError if the allocation is refused by the >>>> system >>>> + * >>>> + * @see #allocateDBBMemory >>>> + */ >>>> + public long reallocateDBBMemory(long address, long bytes) { >>>> + return reallocateMemory(address, bytes); >>>> + } >>>> + >>>> + /** >>>> * Sets all bytes in a given block of memory to a fixed value >>>> * (usually zero). 
>>>> * >>>> @@ -918,6 +959,17 @@ >>>> checkPointer(null, address); >>>> } >>>> + /** >>>> + * Disposes of a block of native memory, as obtained from {@link >>>> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address >>>> passed >>>> + * to this method may be null, in which case no action is taken. >>>> + * >>>> + * @see #allocateDBBMemory >>>> + */ >>>> + public void freeDBBMemory(long address) { >>>> + freeMemory(address); >>>> + } >>>> + >>>> /// random queries >>>> /** >>>> >>>> Unless stated otherwise above: >>>> IBM United Kingdom Limited - Registered in England and Wales with number >>>> 741598. >>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 >>>> 3AU >>>> >>>> From thomas.stuefe at gmail.com Wed Feb 14 14:12:18 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 14 Feb 2018 15:12:18 +0100 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: On Wed, Feb 14, 2018 at 2:32 PM, Adam Farley8 wrote: > >> Adding in core-libs-dev as there's nothing related to hotspot directly > >> here. > > > > Correction, this is of course leading to a proposed change in hotspot to > > > implement the new Unsafe methods and perform the native memory tracking. > > > Hah, I wrote the same thing in a parallel reply. Jinx. :) > > - Adam > > > Of course we already have NMT so the obvious question is how this will > > fit in with NMT? > > > > David > > It will add granularity to Native Memory Tracking, allowing people > to tell, at a glance, how much of the allocated native memory has been > used for Direct Byte Buffers. This makes native memory OOM > debugging easier. > > Think of it as an NMT upgrade. 
> > Here's an example of what the output should look like: > > https://developer.ibm.com/answers/questions/288697/why- > does-nativememinfo-in-javacore-show-incorrect.html?sort=oldest > > - Adam > > I think NMT walks the stack, so we should get allocation points grouped by call stacks. Provided we have symbols loaded for the native library using Unsafe.allocateMemory(), this should give us too a fine granularity. But I have not yet tested this in practice. Maybe Zhengyu knows more. ..Thomas > > > >> David > >> > >> On 14/02/2018 9:32 PM, Adam Farley8 wrote: > >>> Hi All, > >>> > >>> Currently, diagnostic core files generated from OpenJDK seem to lump > all > >>> of the > >>> native memory usages together, making it near-impossible for someone > to > >>> figure > >>> out *what* is using all that memory in the event of a memory leak. > >>> > >>> The OpenJ9 VM has a feature which allows it to track the allocation of > >>> native > >>> memory for Direct Byte Buffers (DBBs), and to supply that information > >>> into > >>> the > >>> cores when they are generated. This makes it a *lot* easier to find > out > >>> what is using > >>> all that native memory, making memory leak resolution less like some > dark > >>> art, and > >>> more like logical debugging. > >>> > >>> To use this feature, there is a native method referenced in > Unsafe.java. > >>> To open > >>> up this feature so that any VM can make use of it, the java code below > >>> sets the > >>> stage for it. This change starts letting people call DBB-specific > methods > >>> when > >>> allocating native memory, and getting into the habit of using it. > >>> > >>> Thoughts? > >>> > >>> Best Regards > >>> > >>> Adam Farley > >>> > >>> P.S. 
Code: > >>> > >>> diff --git > >>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >>> --- > a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >>> +++ > b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >>> @@ -85,7 +85,7 @@ > >>> // Paranoia > >>> return; > >>> } > >>> - UNSAFE.freeMemory(address); > >>> + UNSAFE.freeDBBMemory(address); > >>> address = 0; > >>> Bits.unreserveMemory(size, capacity); > >>> } > >>> @@ -118,7 +118,7 @@ > >>> long base = 0; > >>> try { > >>> - base = UNSAFE.allocateMemory(size); > >>> + base = UNSAFE.allocateDBBMemory(size); > >>> } catch (OutOfMemoryError x) { > >>> Bits.unreserveMemory(size, cap); > >>> throw x; > >>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >>> @@ -632,6 +632,26 @@ > >>> } > >>> /** > >>> + * Allocates a new block of native memory for DirectByteBuffers, > of > >>> the > >>> + * given size in bytes. The contents of the memory are > >>> uninitialized; > >>> + * they will generally be garbage. The resulting native pointer > >>> will > >>> + * never be zero, and will be aligned for all value types. > Dispose > >>> of > >>> + * this memory by calling {@link #freeDBBMemory} or resize it > with > >>> + * {@link #reallocateDBBMemory}. 
> >>> + * > >>> + * @throws RuntimeException if the size is negative or too large > >>> + * for the native size_t type > >>> + * > >>> + * @throws OutOfMemoryError if the allocation is refused by the > >>> system > >>> + * > >>> + * @see #getByte(long) > >>> + * @see #putByte(long, byte) > >>> + */ > >>> + public long allocateDBBMemory(long bytes) { > >>> + return allocateMemory(bytes); > >>> + } > >>> + > >>> + /** > >>> * Resizes a new block of native memory, to the given size in > >>> bytes. > >>> The > >>> * contents of the new block past the size of the old block are > >>> * uninitialized; they will generally be garbage. The resulting > >>> native > >>> @@ -687,6 +707,27 @@ > >>> } > >>> /** > >>> + * Resizes a new block of native memory for DirectByteBuffers, to > > >>> the > >>> + * given size in bytes. The contents of the new block past the > size > >>> of > >>> + * the old block are uninitialized; they will generally be > garbage. > >>> The > >>> + * resulting native pointer will be zero if and only if the > >>> requested > >>> size > >>> + * is zero. The resulting native pointer will be aligned for all > >>> value > >>> + * types. Dispose of this memory by calling {@link > #freeDBBMemory}, > >>> or > >>> + * resize it with {@link #reallocateDBBMemory}. The address > passed > >>> to > >>> + * this method may be null, in which case an allocation will be > >>> performed. > >>> + * > >>> + * @throws RuntimeException if the size is negative or too large > >>> + * for the native size_t type > >>> + * > >>> + * @throws OutOfMemoryError if the allocation is refused by the > >>> system > >>> + * > >>> + * @see #allocateDBBMemory > >>> + */ > >>> + public long reallocateDBBMemory(long address, long bytes) { > >>> + return reallocateMemory(address, bytes); > >>> + } > >>> + > >>> + /** > >>> * Sets all bytes in a given block of memory to a fixed value > >>> * (usually zero). 
> >>> * > >>> @@ -918,6 +959,17 @@ > >>> checkPointer(null, address); > >>> } > >>> + /** > >>> + * Disposes of a block of native memory, as obtained from {@link > >>> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The > address > >>> passed > >>> + * to this method may be null, in which case no action is taken. > >>> + * > >>> + * @see #allocateDBBMemory > >>> + */ > >>> + public void freeDBBMemory(long address) { > >>> + freeMemory(address); > >>> + } > >>> + > >>> /// random queries > >>> /** > >>> > >>> Unless stated otherwise above: > >>> IBM United Kingdom Limited - Registered in England and Wales with > number > >>> 741598. > >>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 > > >>> 3AU > >>> > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > From thomas.stuefe at gmail.com Wed Feb 14 14:26:42 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 14 Feb 2018 15:26:42 +0100 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: Message-ID: Hi Lois, thanks for fixing this! Small nit, not part of your patch, but still: Chunks get allocated via CHeapObj::operator new() but deleted in Chunk::chop() with raw ::free(). Would it not be cleaner to call CHeapObj::operator delete() instead (it does a free() too, but that would be symmetrical? That would require that we actually implement Chunk::operator delete(), I guess. Best Regards, Thomas On Wed, Feb 14, 2018 at 2:48 PM, Lois Foltan wrote: > Please review this change in VS2017 to the delete operator due to C++14 > standard conformance. 
From https://msdn.microsoft.com/en- > us/library/mt723604.aspx > > The function "void operator delete(void *, size_t)" was a placement delete > operator corresponding to the placement new function "void * operator > new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete > function is now a "usual deallocation function" (global delete operator). The > standard requires that if the use of a placement new looks up a > corresponding delete function and finds a usual deallocation function, the > program is ill-formed. > > Thank you to Kim Barrett for proposing the fix below. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196880 > > Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois > > > From zgu at redhat.com Wed Feb 14 14:51:31 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 14 Feb 2018 09:51:31 -0500 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: <012e8977-3cd6-ceef-cd67-a8ae55427264@redhat.com> >> Think of it as an NMT upgrade. >> >> Here's an example of what the output should look like: >> >> https://developer.ibm.com/answers/questions/288697/why- >> does-nativememinfo-in-javacore-show-incorrect.html?sort=oldest >> >> - Adam >> >> > I think NMT walks the stack, so we should get allocation points grouped by > call stacks. Provided we have symbols loaded for the native library using > Unsafe.allocateMemory(), this should give us too a fine granularity. But I > have not yet tested this in practice. Maybe Zhengyu knows more. Quick test shows this call site: [0x00007f8558b26243] Unsafe_AllocateMemory0+0x93 [0x00007f8537b085cb] (malloc=2KB type=Internal #1) I will take a look why there is a frame not decoded. 
Thanks, -Zhengyu > > ..Thomas > > >>> >>>> David >>>> >>>> On 14/02/2018 9:32 PM, Adam Farley8 wrote: >>>>> Hi All, >>>>> >>>>> Currently, diagnostic core files generated from OpenJDK seem to lump >> all >>>>> of the >>>>> native memory usages together, making it near-impossible for someone >> to >>>>> figure >>>>> out *what* is using all that memory in the event of a memory leak. >>>>> >>>>> The OpenJ9 VM has a feature which allows it to track the allocation of >>>>> native >>>>> memory for Direct Byte Buffers (DBBs), and to supply that information >>>>> into >>>>> the >>>>> cores when they are generated. This makes it a *lot* easier to find >> out >>>>> what is using >>>>> all that native memory, making memory leak resolution less like some >> dark >>>>> art, and >>>>> more like logical debugging. >>>>> >>>>> To use this feature, there is a native method referenced in >> Unsafe.java. >>>>> To open >>>>> up this feature so that any VM can make use of it, the java code below >>>>> sets the >>>>> stage for it. This change starts letting people call DBB-specific >> methods >>>>> when >>>>> allocating native memory, and getting into the habit of using it. >>>>> >>>>> Thoughts? >>>>> >>>>> Best Regards >>>>> >>>>> Adam Farley >>>>> >>>>> P.S. 
Code: >>>>> >>>>> diff --git >>>>> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>>> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>>> --- >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>>> +++ >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >>>>> @@ -85,7 +85,7 @@ >>>>> // Paranoia >>>>> return; >>>>> } >>>>> - UNSAFE.freeMemory(address); >>>>> + UNSAFE.freeDBBMemory(address); >>>>> address = 0; >>>>> Bits.unreserveMemory(size, capacity); >>>>> } >>>>> @@ -118,7 +118,7 @@ >>>>> long base = 0; >>>>> try { >>>>> - base = UNSAFE.allocateMemory(size); >>>>> + base = UNSAFE.allocateDBBMemory(size); >>>>> } catch (OutOfMemoryError x) { >>>>> Bits.unreserveMemory(size, cap); >>>>> throw x; >>>>> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>>> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>>> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>>> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >>>>> @@ -632,6 +632,26 @@ >>>>> } >>>>> /** >>>>> + * Allocates a new block of native memory for DirectByteBuffers, >> of >>>>> the >>>>> + * given size in bytes. The contents of the memory are >>>>> uninitialized; >>>>> + * they will generally be garbage. The resulting native pointer >>>>> will >>>>> + * never be zero, and will be aligned for all value types. >> Dispose >>>>> of >>>>> + * this memory by calling {@link #freeDBBMemory} or resize it >> with >>>>> + * {@link #reallocateDBBMemory}. 
>>>>> + * >>>>> + * @throws RuntimeException if the size is negative or too large >>>>> + * for the native size_t type >>>>> + * >>>>> + * @throws OutOfMemoryError if the allocation is refused by the >>>>> system >>>>> + * >>>>> + * @see #getByte(long) >>>>> + * @see #putByte(long, byte) >>>>> + */ >>>>> + public long allocateDBBMemory(long bytes) { >>>>> + return allocateMemory(bytes); >>>>> + } >>>>> + >>>>> + /** >>>>> * Resizes a new block of native memory, to the given size in >>>>> bytes. >>>>> The >>>>> * contents of the new block past the size of the old block are >>>>> * uninitialized; they will generally be garbage. The resulting >>>>> native >>>>> @@ -687,6 +707,27 @@ >>>>> } >>>>> /** >>>>> + * Resizes a new block of native memory for DirectByteBuffers, to >> >>>>> the >>>>> + * given size in bytes. The contents of the new block past the >> size >>>>> of >>>>> + * the old block are uninitialized; they will generally be >> garbage. >>>>> The >>>>> + * resulting native pointer will be zero if and only if the >>>>> requested >>>>> size >>>>> + * is zero. The resulting native pointer will be aligned for all >>>>> value >>>>> + * types. Dispose of this memory by calling {@link >> #freeDBBMemory}, >>>>> or >>>>> + * resize it with {@link #reallocateDBBMemory}. The address >> passed >>>>> to >>>>> + * this method may be null, in which case an allocation will be >>>>> performed. >>>>> + * >>>>> + * @throws RuntimeException if the size is negative or too large >>>>> + * for the native size_t type >>>>> + * >>>>> + * @throws OutOfMemoryError if the allocation is refused by the >>>>> system >>>>> + * >>>>> + * @see #allocateDBBMemory >>>>> + */ >>>>> + public long reallocateDBBMemory(long address, long bytes) { >>>>> + return reallocateMemory(address, bytes); >>>>> + } >>>>> + >>>>> + /** >>>>> * Sets all bytes in a given block of memory to a fixed value >>>>> * (usually zero). 
>>>>> * >>>>> @@ -918,6 +959,17 @@ >>>>> checkPointer(null, address); >>>>> } >>>>> + /** >>>>> + * Disposes of a block of native memory, as obtained from {@link >>>>> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The >> address >>>>> passed >>>>> + * to this method may be null, in which case no action is taken. >>>>> + * >>>>> + * @see #allocateDBBMemory >>>>> + */ >>>>> + public void freeDBBMemory(long address) { >>>>> + freeMemory(address); >>>>> + } >>>>> + >>>>> /// random queries >>>>> /** >>>>> >>>>> Unless stated otherwise above: >>>>> IBM United Kingdom Limited - Registered in England and Wales with >> number >>>>> 741598. >>>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 >> >>>>> 3AU >>>>> >> >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU >> From ioi.lam at oracle.com Wed Feb 14 14:56:23 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Wed, 14 Feb 2018 06:56:23 -0800 Subject: RFR(XXS) 8197857 fieldDescriptor prints incorrect 32-bit representation of compressed oops In-Reply-To: References: <5f483301-5995-58ea-15b8-53b79c610b3e@oracle.com> <3902728e-8fb2-3c58-76a5-7d5a72465fff@oracle.com> Message-ID: <37ed07d2-054a-3919-35ca-7459f74591b1@oracle.com> On 2/13/18 4:30 PM, coleen.phillimore at oracle.com wrote: > > > On 2/13/18 6:37 PM, Ioi Lam wrote: >> >> >> On 2/13/18 12:30 PM, coleen.phillimore at oracle.com wrote: >>> >>> This looks good but this is very odd output. I don't know why we >>> print this. I wouldn't object if it were removed. >>> >> I guess it's useful for someone debugging issues related to >> compressed oops? > > I don't think so. I never used it. It might be for debugging shorts > and ints? I think it's useless; if you want to remove it, I'll review > it quickly. 
>> >>> Otherwise, I hate to do this to a trivial change but would this also >>> print this better? >>> >>> // Print a hint as to the underlying integer representation. This >>> can be wrong for >>> // pointers on an LP64 machine >>> if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| >>> (!UseCompressedOops && !is_java_primitive(ft))) ) { >>> st->print(" (%x %x)", obj->int_field(offset()), >>> obj->int_field(offset()+sizeof(jint))); >>> } else if (as_int < 0 || as_int > 9) { >>> st->print(" (%x)", as_int); >>> } >>> >> >> This would make the code even harder to read than it already is. >> >> Also, the (as_int < 0 || as_int > 9) is useful only for 32-bit >> pointers and numerical values. For CompressedOops, I guess it's >> possible to have a value 8. This is probably not a big deal, but I >> don't want to have code that's theoretically incorrect. > > I saw this line afterwards, and can't guess why it's there. > > Your patch is fine if you want to push it. > Thanks Coleen, I'll push it as is. - Ioi > Coleen >> >> Thanks >> - Ioi >> >> >> >>> >>> Thanks, >>> Coleen >>> >>> On 2/13/18 12:28 PM, Ioi Lam wrote: >>>> https://bugs.openjdk.java.net/browse/JDK-8197857 >>>> >>>> >>>> When UseCompressedOops is enabled for 64-bit VMs, >>>> fieldDescriptor::print_on_for >>>> prints two 32-bit integers for each object field. E.g. >>>> >>>> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0 >>>> 8221b591) >>>> - final 'argL0' 'Ljava/lang/Object;' @28 a >>>> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591 1) >>>> >>>> However, compressed oops occupy the space of only a single 32-bit >>>> integer, so the superfluous output is confusing. 
>>>> >>>> The above should be printed as >>>> >>>> - 'asTypeCache' 'Ljava/lang/invoke/MethodHandle;' @24 NULL (0) >>>> - final 'argL0' 'Ljava/lang/Object;' @28 a >>>> 'LambHello$$Lambda$1'{0x00000004110dac88} (8221b591) >>>> >>>> Patch: >>>> ======================================= >>>> >>>> --- a/src/hotspot/share/runtime/fieldDescriptor.cpp Mon Feb 12 >>>> 09:12:59 2018 -0800 >>>> +++ b/src/hotspot/share/runtime/fieldDescriptor.cpp Tue Feb 13 >>>> 09:24:26 2018 -0800 >>>> @@ -201,6 +201,13 @@ >>>> } >>>> // Print a hint as to the underlying integer representation. >>>> This can be wrong for >>>> // pointers on an LP64 machine >>>> + >>>> +#ifdef _LP64 >>>> + if ((ft == T_OBJECT || ft == T_ARRAY) && UseCompressedOops) { >>>> + st->print(" (%x)", obj->int_field(offset())); >>>> + } >>>> + else // <- intended >>>> +#endif >>>> if (ft == T_LONG || ft == T_DOUBLE LP64_ONLY(|| >>>> !is_java_primitive(ft)) ) { >>>> st->print(" (%x %x)", obj->int_field(offset()), >>>> obj->int_field(offset()+sizeof(jint))); >>>> } else if (as_int < 0 || as_int > 9) { >>>> >>> >> > From jesper.wilhelmsson at oracle.com Wed Feb 14 16:14:41 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 14 Feb 2018 17:14:41 +0100 Subject: RFR: JDK-8197945 - Quarantine failing condy tests In-Reply-To: <1B69958B-4133-4448-9EFA-ED81B8F4170C@oracle.com> References: <1B69958B-4133-4448-9EFA-ED81B8F4170C@oracle.com> Message-ID: +hotspot_dev > On 14 Feb 2018, at 16:44, jesper.wilhelmsson at oracle.com wrote: > > Hi, > > Please review this tiny fix to quarantine two tests that are failing in HS Tier 1. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8197945 > Diff: > > diff --git a/test/jdk/ProblemList.txt b/test/jdk/ProblemList.txt > --- a/test/jdk/ProblemList.txt > +++ b/test/jdk/ProblemList.txt > @@ -282,6 +282,9 @@ > > java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all > > +java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all > +java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all > + > ############################################################################ > > # jdk_instrument > > > > Thanks, > /Jesper > From leonid.mesnik at oracle.com Wed Feb 14 17:13:21 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Wed, 14 Feb 2018 09:13:21 -0800 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> Message-ID: <2A2BF912-1ACE-4FF4-89EA-28924377CB31@oracle.com> Hi Andrew > On Feb 14, 2018, at 1:23 AM, Andrew Haley wrote: > > On 13/02/18 23:44, Leonid Mesnik wrote: > >> Thank you for fixing tests. I think that it would be also better to >> rewrite the test in java using the process utilities in >> test/lib/jdk/test/lib/process/ProcessTools.java. >> It is possible to push a shell test, but the general direction for >> OpenJDK tests is to use java and the test library. >> It is now much easier to develop and debug tests with new processes >> using the test library. Also, they can execute faster. >> But if you have any reasons to keep the test in shell, that is fine. > > I'm trying to test a very specific fault path, one that is due to the > JNI launcher from C. If I wanted to test something else I would do > something else, but I don't understand your motivation for wanting to > write the test in some other way. There are several benefits from writing tests in java: 1) Test filtering and more correct results. It is possible to filter using the requires tag: * @requires (os.family == "linux") 
So jtreg will filter this test instead of running it and "pass" on non-linux platforms. It helps to have more precise test results. 2) More effective execution. For each shell test it is required to run a separate shell process and then at least run and parse java -version in test_env.sh, while java-based tests which use @driver are compiled and executed in the same vm. So only a single new process is forked for your test. I wouldn't insist on writing this test in java right now, but eventually we are going to port all shell tests to java anyway. Leonid > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. > EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From lois.foltan at oracle.com Wed Feb 14 17:14:34 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 14 Feb 2018 12:14:34 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: Message-ID: On 2/14/2018 9:26 AM, Thomas Stüfe wrote: > Hi Lois, > > thanks for fixing this! Thanks for the review! > > Small nit, not part of your patch, but still: > > Chunks get allocated via CHeapObj::operator new() but deleted in > Chunk::chop() with raw ::free(). Would it not be cleaner to call > CHeapObj::operator delete() instead (it does a free() too, but that > would be symmetrical)? That would require that we actually implement > Chunk::operator delete(), I guess. Sure, that seems reasonable and logical. However, note it still doesn't alleviate the need to define a delete operator that takes a size parameter. At the point where placement new is used, the compiler tries to find an appropriate matching operator delete and would error (C4291) with a failure to find one if not provided. Thanks, Lois > > Best Regards, Thomas > > > On Wed, Feb 14, 2018 at 2:48 PM, Lois Foltan > wrote: > > Please review this change in VS2017 to the delete operator due to > C++14 standard conformance. 
From > https://msdn.microsoft.com/en-us/library/mt723604.aspx > > The function "void operator delete(void *, size_t)" was a placement > delete operator corresponding to the placement new function "void > * operator new(size_t, size_t)" in C++11. With C++14 sized > deallocation, this delete function is now a "usual deallocation > function" (global delete operator). The standard requires that if > the use of a placement new looks up a corresponding delete > function and finds a usual deallocation function, the program is > ill-formed. > > Thank you to Kim Barrett for proposing the fix below. > > open webrev at > http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ > > bug link https://bugs.openjdk.java.net/browse/JDK-8196880 > > > Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois > > > From kim.barrett at oracle.com Wed Feb 14 17:19:38 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 14 Feb 2018 12:19:38 -0500 Subject: RFR: JDK-8197945 - Quarantine failing condy tests In-Reply-To: References: <1B69958B-4133-4448-9EFA-ED81B8F4170C@oracle.com> Message-ID: <7A9FAFB0-0A95-4D0F-BED5-C73ACC4D7015@oracle.com> > On Feb 14, 2018, at 11:14 AM, jesper.wilhelmsson at oracle.com wrote: > > +hotspot_dev > >> On 14 Feb 2018, at 16:44, jesper.wilhelmsson at oracle.com wrote: >> >> Hi, >> >> Please review this tiny fix to quarantine two tests that are failing in HS Tier 1. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8197945 >> Diff: >> >> diff --git a/test/jdk/ProblemList.txt b/test/jdk/ProblemList.txt >> --- a/test/jdk/ProblemList.txt >> +++ b/test/jdk/ProblemList.txt >> @@ -282,6 +282,9 @@ >> >> java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all >> >> +java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all >> +java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all >> + >> ############################################################################ >> >> # jdk_instrument >> >> >> >> Thanks, >> /Jesper Looks good. From ecki at zusammenkunft.net Wed Feb 14 15:55:18 2018 From: ecki at zusammenkunft.net (Bernd Eckenfels) Date: Wed, 14 Feb 2018 16:55:18 +0100 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native MemoryUsage for Direct Byte Buffers In-Reply-To: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> References: <4d7c4629-7866-13f8-1777-2ff610784dfd@oracle.com> Message-ID: <5a845be7.01c0df0a.35f1a.f0ec@mx.google.com> Maybe instead adding a ?allocation request type? argment to allocate Memory? (and wrap it with a typed allocator in the buffer interfacessomewhere?) the ?DBB? part Looks especially cryptic. We have similiar concepts for NMT in the native Code. Besides I mentioned a while back that the JMX part of the memory Accounting could be improved as well. Those two could probably be unified. This is where I see most Troubleshooting activities struggle. Gruss Bernd -- http://bernd.eckenfels.net Von: David Holmes Gesendet: Mittwoch, 14. Februar 2018 16:44 An: Adam Farley8; hotspot-dev at openjdk.java.net; core-libs-dev Libs Betreff: Re: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native MemoryUsage for Direct Byte Buffers Adding in core-libs-dev as there's nothing related to hotspot directly here. 
David On 14/02/2018 9:32 PM, Adam Farley8 wrote: > Hi All, > > Currently, diagnostic core files generated from OpenJDK seem to lump all > of the > native memory usages together, making it near-impossible for someone to > figure > out *what* is using all that memory in the event of a memory leak. > > The OpenJ9 VM has a feature which allows it to track the allocation of > native > memory for Direct Byte Buffers (DBBs), and to supply that information into > the > cores when they are generated. This makes it a *lot* easier to find out > what is using > all that native memory, making memory leak resolution less like some dark > art, and > more like logical debugging. > > To use this feature, there is a native method referenced in Unsafe.java. > To open > up this feature so that any VM can make use of it, the java code below > sets the > stage for it. This change starts letting people call DBB-specific methods > when > allocating native memory, and getting into the habit of using it. > > Thoughts? > > Best Regards > > Adam Farley > > P.S. 
Code: > > diff --git > a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > @@ -85,7 +85,7 @@ > // Paranoia > return; > } > - UNSAFE.freeMemory(address); > + UNSAFE.freeDBBMemory(address); > address = 0; > Bits.unreserveMemory(size, capacity); > } > @@ -118,7 +118,7 @@ > > long base = 0; > try { > - base = UNSAFE.allocateMemory(size); > + base = UNSAFE.allocateDBBMemory(size); > } catch (OutOfMemoryError x) { > Bits.unreserveMemory(size, cap); > throw x; > diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > @@ -632,6 +632,26 @@ > } > > /** > + * Allocates a new block of native memory for DirectByteBuffers, of > the > + * given size in bytes. The contents of the memory are > uninitialized; > + * they will generally be garbage. The resulting native pointer will > + * never be zero, and will be aligned for all value types. Dispose > of > + * this memory by calling {@link #freeDBBMemory} or resize it with > + * {@link #reallocateDBBMemory}. > + * > + * @throws RuntimeException if the size is negative or too large > + * for the native size_t type > + * > + * @throws OutOfMemoryError if the allocation is refused by the > system > + * > + * @see #getByte(long) > + * @see #putByte(long, byte) > + */ > + public long allocateDBBMemory(long bytes) { > + return allocateMemory(bytes); > + } > + > + /** > * Resizes a new block of native memory, to the given size in bytes. > The > * contents of the new block past the size of the old block are > * uninitialized; they will generally be garbage. 
The resulting > native > @@ -687,6 +707,27 @@ > } > > /** > + * Resizes a new block of native memory for DirectByteBuffers, to the > + * given size in bytes. The contents of the new block past the size > of > + * the old block are uninitialized; they will generally be garbage. > The > + * resulting native pointer will be zero if and only if the requested > size > + * is zero. The resulting native pointer will be aligned for all > value > + * types. Dispose of this memory by calling {@link #freeDBBMemory}, > or > + * resize it with {@link #reallocateDBBMemory}. The address passed > to > + * this method may be null, in which case an allocation will be > performed. > + * > + * @throws RuntimeException if the size is negative or too large > + * for the native size_t type > + * > + * @throws OutOfMemoryError if the allocation is refused by the > system > + * > + * @see #allocateDBBMemory > + */ > + public long reallocateDBBMemory(long address, long bytes) { > + return reallocateMemory(address, bytes); > + } > + > + /** > * Sets all bytes in a given block of memory to a fixed value > * (usually zero). > * > @@ -918,6 +959,17 @@ > checkPointer(null, address); > } > > + /** > + * Disposes of a block of native memory, as obtained from {@link > + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address > passed > + * to this method may be null, in which case no action is taken. > + * > + * @see #allocateDBBMemory > + */ > + public void freeDBBMemory(long address) { > + freeMemory(address); > + } > + > /// random queries > > /** > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > From lois.foltan at oracle.com Wed Feb 14 18:07:15 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 14 Feb 2018 13:07:15 -0500 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed Message-ID: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. A corresponding RFE has been added to upgrade gtest. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8196997 gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 Thanks, Lois From thomas.stuefe at gmail.com Wed Feb 14 18:25:36 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 14 Feb 2018 19:25:36 +0100 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 6:14 PM, Lois Foltan wrote: > On 2/14/2018 9:26 AM, Thomas Stüfe wrote: > > Hi Lois, > > thanks for fixing this! > > Thanks for the review! > > > Small nit, not part of your patch, but still: > > Chunks get allocated via CHeapObj::operator new() but deleted in > Chunk::chop() with raw ::free(). Would it not be cleaner to call > CHeapObj::operator delete() instead (it does a free() too, but that would > be symmetrical)? That would require that we actually implement > Chunk::operator delete(), I guess. > > Sure, that seems reasonable and logical. However, note it still doesn't > alleviate the need to define a delete operator that takes a size > parameter.
At the point where placement new is used, the compiler tries to > find an appropriate matching operator delete and would error (C4291) with a > failure to find one if not provided. > > Makes sense. Change is reviewed from my side. Kind Regards, Thomas > Thanks, > Lois > > > Best Regards, Thomas > > > On Wed, Feb 14, 2018 at 2:48 PM, Lois Foltan > wrote: > >> Please review this change in VS2017 to the delete operator due to C++14 >> standard conformance. From https://msdn.microsoft.com/en- >> us/library/mt723604.aspx >> >> The function void operator delete(void *, size_t) was a placement delete >> operator corresponding to the placement new function "void * operator >> new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete >> function is now a usual deallocation function (global delete operator). The >> standard requires that if the use of a placement new looks up a >> corresponding delete function and finds a usual deallocation function, the >> program is ill-formed. >> >> Thank you to Kim Barrett for proposing the fix below. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196880 >> >> Testing complete (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois >> >> >> > > From kim.barrett at oracle.com Wed Feb 14 19:16:25 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 14 Feb 2018 14:16:25 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: Message-ID: <556FF1A5-BCED-415F-9E52-BFE37FA2886F@oracle.com> > On Feb 14, 2018, at 8:48 AM, Lois Foltan wrote: > > Please review this change in VS2017 to the delete operator due to C++14 standard conformance.
From https://msdn.microsoft.com/en-us/library/mt723604.aspx > > The function void operator delete(void *, size_t) was a placement delete operator corresponding to the placement new function "void * operator new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete function is now a usual deallocation function (global delete operator). The standard requires that if the use of a placement new looks up a corresponding delete function and finds a usual deallocation function, the program is ill-formed. > > Thank you to Kim Barrett for proposing the fix below. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196880 > > Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois Looks good to me, obviously. From lois.foltan at oracle.com Wed Feb 14 19:23:17 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 14 Feb 2018 14:23:17 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: <556FF1A5-BCED-415F-9E52-BFE37FA2886F@oracle.com> References: <556FF1A5-BCED-415F-9E52-BFE37FA2886F@oracle.com> Message-ID: On 2/14/2018 2:16 PM, Kim Barrett wrote: >> On Feb 14, 2018, at 8:48 AM, Lois Foltan wrote: >> >> Please review this change in VS2017 to the delete operator due to C++14 standard conformance. From https://msdn.microsoft.com/en-us/library/mt723604.aspx >> >> The function void operator delete(void *, size_t) was a placement delete operator corresponding to the placement new function "void * operator new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete function is now a usual deallocation function (global delete operator). The standard requires that if the use of a placement new looks up a corresponding delete function and finds a usual deallocation function, the program is ill-formed. >> >> Thank you to Kim Barrett for proposing the fix below.
>> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196880 >> >> Testing complete (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois > Looks good to me, obviously. Thanks Kim! Lois > From lois.foltan at oracle.com Wed Feb 14 19:24:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 14 Feb 2018 14:24:30 -0500 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: <94C14020-4DBB-4864-AED8-E80F71AFA11B@oracle.com> References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> <94C14020-4DBB-4864-AED8-E80F71AFA11B@oracle.com> Message-ID: On 2/14/2018 2:23 PM, Kim Barrett wrote: >> On Feb 14, 2018, at 1:07 PM, Lois Foltan wrote: >> >> Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. A corresponding RFE has been added to upgrade gtest. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196997 >> gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 >> >> Thanks, >> Lois > Looks good. Great, thanks Kim! Lois From kim.barrett at oracle.com Wed Feb 14 19:23:01 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 14 Feb 2018 14:23:01 -0500 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> Message-ID: <94C14020-4DBB-4864-AED8-E80F71AFA11B@oracle.com> > On Feb 14, 2018, at 1:07 PM, Lois Foltan wrote: > > Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. 
A corresponding RFE has been added to upgrade gtest. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196997 > gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 > > Thanks, > Lois Looks good. From jesper.wilhelmsson at oracle.com Wed Feb 14 19:59:39 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 14 Feb 2018 20:59:39 +0100 Subject: RFR: JDK-8197945 - Qurarantine failing condy tests In-Reply-To: <7A9FAFB0-0A95-4D0F-BED5-C73ACC4D7015@oracle.com> References: <1B69958B-4133-4448-9EFA-ED81B8F4170C@oracle.com> <7A9FAFB0-0A95-4D0F-BED5-C73ACC4D7015@oracle.com> Message-ID: <3D5D6E07-3228-453E-A754-CEAE471364BC@oracle.com> Thanks! /Jesper > On 14 Feb 2018, at 18:19, Kim Barrett wrote: > >> On Feb 14, 2018, at 11:14 AM, jesper.wilhelmsson at oracle.com wrote: >> >> +hotspot_dev >> >>> On 14 Feb 2018, at 16:44, jesper.wilhelmsson at oracle.com wrote: >>> >>> Hi, >>> >>> Please review this tiny fix to quarantine two tests that are failing in HS Tier 1. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8197945 >>> Diff: >>> >>> diff --git a/test/jdk/ProblemList.txt b/test/jdk/ProblemList.txt >>> --- a/test/jdk/ProblemList.txt >>> +++ b/test/jdk/ProblemList.txt >>> @@ -282,6 +282,9 @@ >>> >>> java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all >>> >>> +java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all >>> +java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all >>> + >>> ############################################################################ >>> >>> # jdk_instrument >>> >>> >>> >>> Thanks, >>> /Jesper > > Looks good. 
> From sangheon.kim at oracle.com Wed Feb 14 21:45:04 2018 From: sangheon.kim at oracle.com (sangheon.kim) Date: Wed, 14 Feb 2018 13:45:04 -0800 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) Message-ID: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> Hi all, Could I have some reviews for CMM removal? This is closed CR but some public codes also need small modifications. This CR is for removing stuff related to an Oracle JDK module/package. Changes are just removing CMM from lists or in a test to skip the testing logic. CR: https://bugs.openjdk.java.net/browse/JDK-8193909 Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core Thanks, Sangheon From david.holmes at oracle.com Wed Feb 14 22:40:02 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 15 Feb 2018 08:40:02 +1000 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> Message-ID: That all seems trivially fine. Thanks, David On 15/02/2018 7:45 AM, sangheon.kim wrote: > Hi all, > > Could I have some reviews for CMM removal? > This is closed CR but some public codes also need small modifications. > This CR is for removing stuff related to an Oracle JDK module/package. > Changes are just removing CMM from lists or in a test to skip the > testing logic. 
> > CR: https://bugs.openjdk.java.net/browse/JDK-8193909 > Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 > Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core > > Thanks, > Sangheon > From jesper.wilhelmsson at oracle.com Wed Feb 14 22:53:21 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 14 Feb 2018 23:53:21 +0100 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> Message-ID: <71FFFDAB-B096-48F1-BC18-765F5D6EE734@oracle.com> Looks good! /Jesper > On 14 Feb 2018, at 22:45, sangheon.kim wrote: > > Hi all, > > Could I have some reviews for CMM removal? > This is closed CR but some public codes also need small modifications. This CR is for removing stuff related to an Oracle JDK module/package. > Changes are just removing CMM from lists or in a test to skip the testing logic. > > CR: https://bugs.openjdk.java.net/browse/JDK-8193909 > Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 > Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core > > Thanks, > Sangheon > From sangheon.kim at oracle.com Wed Feb 14 23:02:38 2018 From: sangheon.kim at oracle.com (sangheon.kim) Date: Wed, 14 Feb 2018 15:02:38 -0800 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <71FFFDAB-B096-48F1-BC18-765F5D6EE734@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> <71FFFDAB-B096-48F1-BC18-765F5D6EE734@oracle.com> Message-ID: <91fd2bf6-e4ac-e4fb-7c34-8332b78a3e27@oracle.com> Hi David and Jesper, Thank you for the review. Sangheon On 02/14/2018 02:53 PM, jesper.wilhelmsson at oracle.com wrote: > Looks good! > /Jesper > >> On 14 Feb 2018, at 22:45, sangheon.kim wrote: >> >> Hi all, >> >> Could I have some reviews for CMM removal? >> This is closed CR but some public codes also need small modifications. 
This CR is for removing stuff related to an Oracle JDK module/package. >> Changes are just removing CMM from lists or in a test to skip the testing logic. >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8193909 >> Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 >> Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core >> >> Thanks, >> Sangheon >> From jcbeyler at google.com Wed Feb 14 23:08:39 2018 From: jcbeyler at google.com (JC Beyler) Date: Wed, 14 Feb 2018 15:08:39 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code Message-ID: Hi all, Here is a webrev to do the work mentioned in JDK-8194084 : http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ It has the parts for each architecture and I can't test a lot of them so I would need a review and test for each :). I think first would be an agreement to the code change itself then test it once everyone agrees on the change ? Could I please get some initial reviews on this? Basically what this webrev does is follow what the interpreter is saying: - No longer try to do a fast tlab refill - Try eden allocation if contiguous inline allocation is true - Otherwise slowpath This is true for all architectures except: - ppc, which doesn't do eden allocations, I just cleaned up the code a bit there to be consistent - s390 that does not do tlab_refill at all, I just removed the dead code there. Thanks a lot for your help, Jc From mandy.chung at oracle.com Wed Feb 14 23:45:50 2018 From: mandy.chung at oracle.com (mandy chung) Date: Wed, 14 Feb 2018 15:45:50 -0800 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> Message-ID: <07c86a73-1695-9080-57df-80ef15261221@oracle.com> +1 Mandy On 2/14/18 1:45 PM, sangheon.kim wrote: > Hi all, > > Could I have some reviews for CMM removal? 
> This is closed CR but some public codes also need small modifications. > This CR is for removing stuff related to an Oracle JDK module/package. > Changes are just removing CMM from lists or in a test to skip the > testing logic. > > CR: https://bugs.openjdk.java.net/browse/JDK-8193909 > Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 > Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core > > Thanks, > Sangheon > From sangheon.kim at oracle.com Wed Feb 14 23:58:21 2018 From: sangheon.kim at oracle.com (sangheon.kim) Date: Wed, 14 Feb 2018 15:58:21 -0800 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <07c86a73-1695-9080-57df-80ef15261221@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> <07c86a73-1695-9080-57df-80ef15261221@oracle.com> Message-ID: Hi Mandy, Thank you for the review! Sangheon On 02/14/2018 03:45 PM, mandy chung wrote: > +1 > > Mandy > > On 2/14/18 1:45 PM, sangheon.kim wrote: >> Hi all, >> >> Could I have some reviews for CMM removal? >> This is closed CR but some public codes also need small >> modifications. This CR is for removing stuff related to an Oracle JDK >> module/package. >> Changes are just removing CMM from lists or in a test to skip the >> testing logic. 
>> >> CR: https://bugs.openjdk.java.net/browse/JDK-8193909 >> Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 >> Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core >> >> Thanks, >> Sangheon >> From david.holmes at oracle.com Thu Feb 15 02:06:35 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 15 Feb 2018 12:06:35 +1000 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> Message-ID: Hi Lois, On 15/02/2018 4:07 AM, Lois Foltan wrote: > Please review this small fix to ignore the VS2017 deprecation warning > for non-Standard std::tr1 namespace and TR1-only machinery when > compiling gtest. A corresponding RFE has been added to upgrade gtest. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196997 What is the change in make/common/MakeBase.gmk ? Thanks, David > gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 > > Thanks, > Lois From david.holmes at oracle.com Thu Feb 15 02:40:27 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 15 Feb 2018 12:40:27 +1000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> Message-ID: <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> Hi Andrew, On 14/02/2018 10:55 PM, Andrew Haley wrote: > On 14/02/18 02:10, David Holmes wrote: >> On 14/02/2018 12:51 AM, Andrew Haley wrote: >>> Webrev amended. >>> >>> http://cr.openjdk.java.net/~aph/8197429-2/ >> >> My question still stands: > > Sorry, I didn't see it.
> >> How does this interact with the use of DisablePrimordialThreadGuardPages? > > My initial answer was "not at all", but there is a minor possible > modification. If DisablePrimordialThreadGuardPages is set it is > possible to use slightly more stack in Java code, so we could bang > down slightly further in workaround_expand_exec_shield_cs_limit() and > therefore place the codebuf slightly lower. This would allow every > page of the primordial stack to be used in Java code. > > Like this: > > if (os::is_primordial_thread()) { > address limit = Linux::initial_thread_stack_bottom(); > if (! DisablePrimordialThreadGuardPages) { > limit += JavaThread::stack_red_zone_size() + > JavaThread::stack_yellow_zone_size(); > } > os::Linux::expand_stack_to(limit); > } > > I'm happy to make that change and add a test for > DisablePrimordialThreadGuardPages if you think it's worth doing. > > Alternatively, we could simply ignore the JVM's stack guard pages in > the calculation and always bang down all the way to > initial_thread_stack_bottom(). This would cause the codebuf to be > mapped slightly lower, but I guess that's no big deal. My main concern was to ensure that this did not somehow cause DisablePrimordialThreadGuardPages to break. IIUC without the suggested adjustment this change would slightly reduce the amount of stack available for use - in which case I'd prefer to see the adjustment made in either form you deem best. 
Thanks, David From kim.barrett at oracle.com Thu Feb 15 06:46:05 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 15 Feb 2018 01:46:05 -0500 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> Message-ID: <3ABBCA56-E1A6-4CFC-A190-69C10BE07252@oracle.com> > On Feb 14, 2018, at 9:06 PM, David Holmes wrote: > > Hi Lois, > > On 15/02/2018 4:07 AM, Lois Foltan wrote: >> Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. A corresponding RFE has been added to upgrade gtest. >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196997 > > What is the change in make/common/MakeBase.gmk ? It increases the maximum number of arguments that can be passed to things like SetupNativeCompilation. Without it, the other change produces a build error, telling you there were too many parameters, and to update MAX_PARAMS. 
> Thanks, > David > >> gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 >> Thanks, >> Lois From david.holmes at oracle.com Thu Feb 15 07:09:13 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Feb 2018 23:09:13 -0800 (PST) Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: <3ABBCA56-E1A6-4CFC-A190-69C10BE07252@oracle.com> References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> <3ABBCA56-E1A6-4CFC-A190-69C10BE07252@oracle.com> Message-ID: <0f64ff1f-8a37-c91d-af30-c7d063aebb24@oracle.com> On 15/02/2018 4:46 PM, Kim Barrett wrote: >> On Feb 14, 2018, at 9:06 PM, David Holmes wrote: >> >> Hi Lois, >> >> On 15/02/2018 4:07 AM, Lois Foltan wrote: >>> Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. A corresponding RFE has been added to upgrade gtest. >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8196997 >> >> What is the change in make/common/MakeBase.gmk ? > > It increases the maximum number of arguments that can be passed to things like SetupNativeCompilation. > Without it, the other change produces a build error, telling you there were too many parameters, and > to update MAX_PARAMS. Thanks Kim! 
David >> Thanks, >> David >> >>> gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 >>> Thanks, >>> Lois > > From thomas.stuefe at gmail.com Thu Feb 15 07:37:46 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 15 Feb 2018 08:37:46 +0100 Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed In-Reply-To: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> Message-ID: On Wed, Feb 14, 2018 at 7:07 PM, Lois Foltan wrote: > Please review this small fix to ignore the VS2017 deprecation warning for > non-Standard std::tr1 namespace and TR1-only machinery when compiling > gtest. A corresponding RFE has been added to upgrade gtest. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196997 > gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951 > > Thanks, > Lois > This looks fine, thanks for fixing. +1 for the upgrade. But that will require us (SAP) to test all our build platforms, so a heads-up would be appreciated when you do it. Best Regards, Thomas From erik.osterlund at oracle.com Thu Feb 15 09:31:34 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Feb 2018 10:31:34 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A5F476E.6080000@oracle.com> References: <5A5F476E.6080000@oracle.com> Message-ID: <5A855376.5090203@oracle.com> Hi, Here is an updated revision of this webrev after internal feedback from StefanK who helped looking through my changes - thanks a lot for the help with that. The changes to the new revision are a bunch of minor clean up changes, e.g. 
copyright headers, indentation issues, sorting includes, adding/removing newlines, reverting an assert error message, fixing constructor initialization orders, and things like that. The problem I mentioned last time about the version number of our repo not yet being bumped to 11 and resulting awkwardness in JVMCI has been resolved by simply waiting. So now I changed the JVMCI logic to get the card values from the new location in the corresponding card tables when observing JDK version 11 or above. New full webrev (rebased onto a month fresher jdk-hs): http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ Incremental webrev (over the rebase): http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ This new version has run through hs-tier1-5 and jdk-tier1-3 without any issues. Thanks, /Erik On 2018-01-17 13:54, Erik Österlund wrote: > Hi, > > Today, both Parallel, CMS and Serial share the same code for its card > marking barrier. However, they have different requirements how to > manage its card tables by the GC. And as the card table itself is > embedded as a part of the CardTableModRefBS barrier set, this has led > to an unnecessary inheritance hierarchy for CardTableModRefBS, where > for example CardTableModRefBSForCTRS and CardTableExtension are > CardTableModRefBS subclasses that do not change anything to do with > the barriers. > > To clean up the code, there should really be a separate CardTable > hierarchy that contains the differences how to manage the card table > from the GC point of view, and simply let CardTableModRefBS have a > CardTable. This would allow removing CardTableModRefBSForCTRS and > CardTableExtension and their references from shared code (that really > have nothing to do with the barriers, despite being barrier sets), and > significantly simplify the barrier set code. > > This patch mechanically performs this refactoring.
A new CardTable > class has been created with a PSCardTable subclass for Parallel, a > CardTableRS for CMS and Serial, and a G1CardTable for G1. All > references to card tables and their values have been updated accordingly. > > This touches a lot of platform specific code, so would be fantastic if > port maintainers could have a look that I have not broken anything. > > There is a slight problem that should be pointed out. There is an > unfortunate interaction between Graal and hotspot. Graal needs to know > the values of g1 young cards and dirty cards. This is queried in > different ways in different versions of the JDK in the > GraalHotSpotVMConfig.java file. Now these values will move from > their barrier set class to their card table class. That means we have > at least three cases how to find the correct values. There is one for > JDK8, one for JDK9, and now a new one for JDK11. Except, we have not > yet bumped the version number to 11 in the repo, and therefore it has > to be from JDK10 - 11 for now and updated after incrementing the > version number. But that means that it will be temporarily > incompatible with JDK10. That is okay for our own copy of Graal, but > can not be used by upstream Graal as they are given the choice whether > to support the public JDK10 or the JDK11 that does not quite admit to > being 11 yet. I chose the solution that works in our repository. I > will notify Graal folks of this issue. In the long run, it would be > nice if we could have a more solid interface here. > > However, as an added benefit, this changeset brings about a hundred > copyright headers up to date, so others do not have to update them for > a while. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8195142 > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ > > Testing: mach5 hs-tier1-5 plus local AoT testing.
> > Thanks, > /Erik From thomas.schatzl at oracle.com Thu Feb 15 11:32:48 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 15 Feb 2018 12:32:48 +0100 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> Message-ID: <1518694368.4282.26.camel@oracle.com> Hi, On Wed, 2018-02-14 at 13:45 -0800, sangheon.kim wrote: > Hi all, > > Could I have some reviews for CMM removal? > This is closed CR but some public codes also need small > modifications. > This CR is for removing stuff related to an Oracle JDK > module/package. > Changes are just removing CMM from lists or in a test to skip the > testing logic. > > CR: https://bugs.openjdk.java.net/browse/JDK-8193909 > Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 > Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core > looks good. Thomas From aph at redhat.com Thu Feb 15 14:01:29 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 15 Feb 2018 14:01:29 +0000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> Message-ID: <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> On 15/02/18 02:40, David Holmes wrote: > My main concern was to ensure that this did not somehow cause > DisablePrimordialThreadGuardPages to break. IIUC without the suggested > adjustment this change would slightly reduce the amount of stack > available for use - in which case I'd prefer to see the adjustment made > in either form you deem best. OK. http://cr.openjdk.java.net/~aph/8197429-3/ Here it is, with adjustment and a test for DisablePrimordialThreadGuardPages. 
-- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From sangheon.kim at oracle.com Thu Feb 15 15:00:17 2018 From: sangheon.kim at oracle.com (sangheon.kim) Date: Thu, 15 Feb 2018 07:00:17 -0800 Subject: RFR(xs): 8193909: Obsolete(remove) Co-operative Memory Management (CMM) In-Reply-To: <1518694368.4282.26.camel@oracle.com> References: <1554d7d4-df1f-5254-9014-f4f1d0c2aac6@oracle.com> <1518694368.4282.26.camel@oracle.com> Message-ID: <10e5b37c-fdcc-3666-5934-c301b4d18fa0@oracle.com> Hi Thomas, On 02/15/2018 03:32 AM, Thomas Schatzl wrote: > Hi, > > On Wed, 2018-02-14 at 13:45 -0800, sangheon.kim wrote: >> Hi all, >> >> Could I have some reviews for CMM removal? >> This is closed CR but some public codes also need small >> modifications. >> This CR is for removing stuff related to an Oracle JDK >> module/package. >> Changes are just removing CMM from lists or in a test to skip the >> testing logic. >> >> CR: https://bugs.openjdk.java.net/browse/JDK-8193909 >> Webrev: http://cr.openjdk.java.net/~sangheki/8193909/webrev.0 >> Testing: hs-tier1~5, jdk1~3, open/test/jdk:jdk_core >> > looks good. Thank you for your review! Sangheon > > Thomas From bob.vandette at oracle.com Thu Feb 15 17:07:38 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Feb 2018 12:07:38 -0500 Subject: JEP [DRAFT]: Container aware Java Message-ID: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com> I'd like to re-propose the following JEP that will enhance the Java runtime to be more container aware. This will add an internal Java API that will provide container specific statistics. Some of the initial goals of the previous JEP proposal have been integrated into JDK 10 under an RFE (JDK-8146115). This JEP is now focused on providing a Java API that exports container runtime configuration and metrics. Since the scope of this JEP has changed, I'm re-submitting it for comment and endorsement.
JEP Issue:

https://bugs.openjdk.java.net/browse/JDK-8182070

Here's a text dump of the JEP contents for your convenience:

Summary
-------

Container aware Java runtime

Goals
-----

Provide an internal API that can be used to extract container-specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64, although the design should be flexible enough to allow support for other platforms and container technologies. The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker.

Non-Goals
---------

It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64.

Success Metrics
---------------

Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes.

Motivation
----------

Container technology is becoming more and more prevalent in Cloud based applications. The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces, each running in their own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools, thereby improving observability.

Description
-----------

This enhancement will be made up of the following work items:

A. Detecting if Java is running in a container.

The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose.

B. Exposing container resource limits, configuration and runtime statistics.
There are several configuration options and limits that can be imposed upon a running container. Not all of these are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process, along with the maximum amount of memory that the process has been allocated, but there are other options that we might want to base runtime decisions on.

In addition, since containers typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data.

I propose adding a new jdk.internal.Platform class that will allow access to this information.

Here are some of the types of configuration and consumption statistics that would be made available:

isContainerized
Memory Limit
Total Memory Limit
Soft Memory Limit
Max Memory Usage
Current Memory Usage
Maximum Kernel Memory
CPU Shares
CPU Period
CPU Quota
Number of CPUs
CPU Sets
CPU Set Memory Nodes
CPU Usage
CPU Usage Per CPU
Block I/O Weight
Block I/O Device Weight
Device I/O Read Rate
Device I/O Write Rate
OOM Kill Enabled
OOM Score Adjustment
Memory Swappiness
Shared Memory Size

Alternatives
------------

There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be exposed by current Java serviceability tools such as JMX and JFR alongside other JVM-specific information.

Testing
-------

Docker/container-specific tests should be added in order to validate the functionality being provided with this JEP.

Risks and Assumptions
---------------------

Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way, rendering this work unusable until it is upgraded.
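As background for the cgroups dependency above: on Linux, the statistics listed in the Description are ultimately read from cgroup pseudo-files. A minimal, hypothetical sketch of reading two of them follows — this is not the proposed jdk.internal.Platform API, and the file paths assume a cgroup v1 hierarchy as Docker typically mounts it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical illustration only -- not the proposed jdk.internal.Platform API.
// Reads two of the metrics listed above directly from cgroup v1 pseudo-files,
// using paths as typically mounted by Docker on Linux.
public class CgroupV1Probe {

    // Returns the first line of the file parsed as a long, or the fallback
    // value if the file is missing or unreadable (e.g. not containerized).
    static long readLong(Path p, long fallback) {
        try {
            return Long.parseLong(Files.readAllLines(p).get(0).trim());
        } catch (IOException | RuntimeException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        long memLimit = readLong(
            Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes"), -1);
        long quota = readLong(
            Paths.get("/sys/fs/cgroup/cpu/cpu.cfs_quota_us"), -1);
        long period = readLong(
            Paths.get("/sys/fs/cgroup/cpu/cpu.cfs_period_us"), -1);

        System.out.println("Memory Limit: " + memLimit);
        // A quota of -1 means "no limit"; otherwise ceil(quota/period)
        // approximates the number of CPUs available to the container.
        if (quota > 0 && period > 0) {
            System.out.println("Effective CPUs: " + (long) Math.ceil((double) quota / period));
        } else {
            System.out.println("CPU Quota: unlimited");
        }
    }
}
```

The risk called out above is exactly that these v1 file names and semantics are not stable — a cgroups v2 unified hierarchy exposes different files — which is why the JEP proposes exposing an API rather than the file layout.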
Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isolation.

Dependencies
------------

None at this time.

From lois.foltan at oracle.com Thu Feb 15 17:22:09 2018
From: lois.foltan at oracle.com (Lois Foltan)
Date: Thu, 15 Feb 2018 12:22:09 -0500
Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed
In-Reply-To: <3ABBCA56-E1A6-4CFC-A190-69C10BE07252@oracle.com>
References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com> <3ABBCA56-E1A6-4CFC-A190-69C10BE07252@oracle.com>
Message-ID: <1c80a46d-e93b-b806-91cf-94fd187add1f@oracle.com>

On 2/15/2018 1:46 AM, Kim Barrett wrote:
>> On Feb 14, 2018, at 9:06 PM, David Holmes wrote:
>>
>> Hi Lois,
>>
>> On 15/02/2018 4:07 AM, Lois Foltan wrote:
>>> Please review this small fix to ignore the VS2017 deprecation warning for non-Standard std::tr1 namespace and TR1-only machinery when compiling gtest. A corresponding RFE has been added to upgrade gtest.
>>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/
>>> bug link https://bugs.openjdk.java.net/browse/JDK-8196997
>> What is the change in make/common/MakeBase.gmk ?
> It increases the maximum number of arguments that can be passed to things like SetupNativeCompilation.
> Without it, the other change produces a build error, telling you there were too many parameters, and
> to update MAX_PARAMS.

For example, the build error generated when MAX_PARAMS is not increased is:

lib/CompileGtest.gmk:65: *** Internal makefile error: Too many named arguments to macro, please update MAX_PARAMS in MakeBase.gmk. Stop.
make/Main.gmk:268: recipe for target 'hotspot-server-libs' failed
make[2]: *** [hotspot-server-libs] Error 2

Thanks,
Lois

>
>> Thanks,
>> David
>>
>>> gtest upgrade RFE link https://bugs.openjdk.java.net/browse/JDK-8197951
>>> Thanks,
>>> Lois
>

From lois.foltan at oracle.com Thu Feb 15 17:29:12 2018
From: lois.foltan at oracle.com (Lois Foltan)
Date: Thu, 15 Feb 2018 12:29:12 -0500
Subject: (11) RFR (S) JDK-8196997: VS2017 The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be removed
In-Reply-To:
References: <60ec83f4-2b23-8b28-99dc-00d4abcce6b4@oracle.com>
Message-ID:

On 2/15/2018 2:37 AM, Thomas Stüfe wrote:
>
>
> On Wed, Feb 14, 2018 at 7:07 PM, Lois Foltan wrote:
>
> Please review this small fix to ignore the VS2017 deprecation
> warning for non-Standard std::tr1 namespace and TR1-only machinery
> when compiling gtest. A corresponding RFE has been added to
> upgrade gtest.
>
> open webrev at
> http://cr.openjdk.java.net/~lfoltan/bug_jdk8196997/webrev/
>
> bug link https://bugs.openjdk.java.net/browse/JDK-8196997
>
> gtest upgrade RFE link
> https://bugs.openjdk.java.net/browse/JDK-8197951
>
>
> Thanks,
> Lois
>
>
> This looks fine, thanks for fixing. +1 for the upgrade. But that will
> require us (SAP) to test all our build platforms, so a heads-up would
> be appreciated when you do it.

Thanks for the review Thomas! I have added a comment to JDK-8197951 to note your request for a heads-up when gtest is upgraded.

Lois

>
> Best Regards, Thomas
>
>
>

From volker.simonis at gmail.com Thu Feb 15 17:52:33 2018
From: volker.simonis at gmail.com (Volker Simonis)
Date: Thu, 15 Feb 2018 18:52:33 +0100
Subject: JEP [DRAFT]: Container aware Java
In-Reply-To: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com>
References: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com>
Message-ID:

Sounds cool!
Is this JEP only about providing the corresponding API, or also about using it internally (within HotSpot and the class library) to better adapt to the environment the JVM is running in?

Either way, looking forward to seeing (and testing) the first implementation bits!

Regards,
Volker

On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette wrote: > I'd like to re-propose the following JEP that will enhance the Java runtime to be more container aware. > This will add an Internal Java API that will provide container specific statistics. Some of the initial goals > of the previous JEP proposal has been integrated into JDK 10 under an RFE (JDK-8146115). > This JEP is now focused on providing a Java API that exports Container runtime configuration and metrics. > > Since the scope of this JEP have changed, I'm re-submitting it for comment and endorsement. > > > JEP Issue: > > https://bugs.openjdk.java.net/browse/JDK-8182070 > > Here's a Text dump of the JEP contents for your convenience: > > Summary > ------- > > Container aware Java runtime > > Goals > ----- > > Provide an internal API that can be used to extract container specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64 although the design should be flexible enough to allow support for other platforms and container technologies. The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker. > > Non-Goals > --------- > > It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64. > > Success Metrics > --------------- > > Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes. > > Motivation > ---------- > > Container technology is becoming more and more prevalent in Cloud based applications.
The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces each running in thier own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools thereby improving the observability. > > Description > ----------- > > This enhancement will be made up of the following work items: > > A. Detecting if Java is running in a container. > > The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose. > > B. Exposing container resource limits, configuration and runtime statistics. > > There are several configuration options and limits that can be imposed upon a running container. Not all of these > are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process along with the maximum amount of memory that the process has been allocated but there are other options that we might want to base runtime decisions on. > > In addition, since Container typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data. > > I propose adding a new jdk.internal.Platform class that will allow access to this information. 
> > Here are some of the types of configuration and consumption statistics that would be made available: > > isContainerized > Memory Limit > Total Memory Limit > Soft Memory Limit > Max Memory Usage > Current Memory Usage > Maximum Kernel Memory > CPU Shares > CPU Period > CPU Quota > Number of CPUs > CPU Sets > CPU Set Memory Nodes > CPU Usage > CPU Usage Per CPU > Block I/O Weight > Block I/O Device Weight > Device I/O Read Rate > Device I/O Write Rate > OOM Kill Enabled > OOM Score Adjustment > Memory Swappiness > Shared Memory Size > > Alternatives > ------------ > > There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be expose by current Java serviceability tools such as JMX and JFR along side other JVM specific information. > > Testing > ------- > > Docker/container specific tests should be added in order to validate the functionality being provided with this JEP. > > Risks and Assumptions > --------------------- > > Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way rendering this work unusable until it is upgraded. > > Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isloation. > > Dependencies > ----------- > > None at this time. > From bob.vandette at oracle.com Thu Feb 15 18:04:38 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Feb 2018 13:04:38 -0500 Subject: JEP [DRAFT]: Container aware Java In-Reply-To: References: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com> Message-ID: > On Feb 15, 2018, at 12:52 PM, Volker Simonis wrote: > > Sounds cool! 
> > Is this JEP only about providing the corresponding API or also about
> > using it internally (within HotSpot and class library) to better adopt
> > to environment the JVM is running in?

Thanks Volker.

I integrated JDK-8146115 into JDK 10, which allows Hotspot to adapt to the container it's running in. The configuration that is examined includes memory limits and cpu count, including cpu shares and quotas.

This JEP's main focus is to provide an internal API that can be used by JDK tools such as JFR and JMX to export container statistics and configuration data. This JEP does not include the JFR and JMX implementations. They will hopefully follow shortly after.

Bob.

> Either way, looking forward to see (and test) the first implementation bits! > > Regards, > Volker > > > On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette wrote: > >> I'd like to re-propose the following JEP that will enhance the Java runtime to be more container aware. > >> This will add an Internal Java API that will provide container specific statistics. Some of the initial goals > >> of the previous JEP proposal has been integrated into JDK 10 under an RFE (JDK-8146115). > >> This JEP is now focused on providing a Java API that exports Container runtime configuration and metrics. > >> > >> Since the scope of this JEP have changed, I'm re-submitting it for comment and endorsement. > >> > >> > >> JEP Issue: > >> > >> https://bugs.openjdk.java.net/browse/JDK-8182070 > >> > >> Here's a Text dump of the JEP contents for your convenience: > >> > >> Summary > >> ------- > >> > >> Container aware Java runtime > >> > >> Goals > >> ----- > >> > >> Provide an internal API that can be used to extract container specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64 although the design should be flexible enough to allow support for other platforms and container technologies.
The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker. >> >> Non-Goals >> --------- >> >> It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64. >> >> Success Metrics >> --------------- >> >> Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes. >> >> Motivation >> ---------- >> >> Container technology is becoming more and more prevalent in Cloud based applications. The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces each running in thier own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools thereby improving the observability. >> >> Description >> ----------- >> >> This enhancement will be made up of the following work items: >> >> A. Detecting if Java is running in a container. >> >> The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose. >> >> B. Exposing container resource limits, configuration and runtime statistics. >> >> There are several configuration options and limits that can be imposed upon a running container. Not all of these >> are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process along with the maximum amount of memory that the process has been allocated but there are other options that we might want to base runtime decisions on. 
>> >> In addition, since Container typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data. >> >> I propose adding a new jdk.internal.Platform class that will allow access to this information. >> >> Here are some of the types of configuration and consumption statistics that would be made available: >> >> isContainerized >> Memory Limit >> Total Memory Limit >> Soft Memory Limit >> Max Memory Usage >> Current Memory Usage >> Maximum Kernel Memory >> CPU Shares >> CPU Period >> CPU Quota >> Number of CPUs >> CPU Sets >> CPU Set Memory Nodes >> CPU Usage >> CPU Usage Per CPU >> Block I/O Weight >> Block I/O Device Weight >> Device I/O Read Rate >> Device I/O Write Rate >> OOM Kill Enabled >> OOM Score Adjustment >> Memory Swappiness >> Shared Memory Size >> >> Alternatives >> ------------ >> >> There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be expose by current Java serviceability tools such as JMX and JFR along side other JVM specific information. >> >> Testing >> ------- >> >> Docker/container specific tests should be added in order to validate the functionality being provided with this JEP. >> >> Risks and Assumptions >> --------------------- >> >> Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way rendering this work unusable until it is upgraded. >> >> Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isloation. >> >> Dependencies >> ----------- >> >> None at this time. 
>> From kedar.mhaswade at gmail.com Thu Feb 15 18:30:16 2018 From: kedar.mhaswade at gmail.com (kedar mhaswade) Date: Thu, 15 Feb 2018 10:30:16 -0800 Subject: JEP [DRAFT]: Container aware Java In-Reply-To: References: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com> Message-ID: This appears useful. Will Runtime.getRuntime().availableProcessors() return the processors available to the container after this integration, or will a new API be provided? I have seen thread pools being misconfigured (e.g. it returns the number of processors available on the host which is far more than that of processors allotted by cgroups to the container) because of the confusion (or because of just using the non-container-aware code). Regards, Kedar On Thu, Feb 15, 2018 at 10:04 AM, Bob Vandette wrote: > > > On Feb 15, 2018, at 12:52 PM, Volker Simonis > wrote: > > > > Sounds cool! > > > > Is this JEP only about providing the corresponding API or also about > > using it internally (within HotSpot and class library) to better adopt > > to environment the JVM is running in? > > Thanks Volker. > > I integrated JDK-8146115 into JDK 10 which allows Hotspot to adapt to the > container it?s running in. The configuration that is examined includes > memory limits, > cpu count including cpu shares and quotas. > > This JEP?s main focus is to provide an internal API that can be used by > JDK tools > such as JFR and JMX to export container statistics and configuration > data. This JEP > does not include the JFR and JMX implementations. They will hopefully > follow shortly after. > > Bob. > > > > > Either way, looking forward to see (and test) the first implementation > bits! > > > > Regards, > > Volker > > > > > > On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette > wrote: > >> I?d like to re-propose the following JEP that will enhance the Java > runtime to be more container aware. > >> This will add an Internal Java API that will provide container specific > statistics. 
Some of the initial goals > >> of the previous JEP proposal has been integrated into JDK 10 under an > RFE (JDK-8146115). > >> This JEP is now focused on providing a Java API that exports Container > runtime configuration and metrics. > >> > >> Since the scope of this JEP have changed, I?m re-submitting it for > comment and endorsement. > >> > >> > >> JEP Issue: > >> > >> https://bugs.openjdk.java.net/browse/JDK-8182070 > >> > >> Here?s a Text dump of the JEP contents for your convenience: > >> > >> Summary > >> ------- > >> > >> Container aware Java runtime > >> > >> Goals > >> ----- > >> > >> Provide an internal API that can be used to extract container specific > configuration and runtime statistics. This JEP will only support Docker on > Linux-x64 although the design should be flexible enough to allow support > for other platforms and container technologies. The initial focus will be > on Linux cgroups technology so that we will be able to easily support other > container technologies running on Linux in addition to Docker. > >> > >> Non-Goals > >> --------- > >> > >> It is not a goal of this JEP to support any platform other than Docker > container technology running on Linux x64. > >> > >> Success Metrics > >> --------------- > >> > >> Success will be measured by the improvement in information that will be > available to tools which visualize resource usage of containers that are > running Java processes. > >> > >> Motivation > >> ---------- > >> > >> Container technology is becoming more and more prevalent in Cloud based > applications. The Cloud Serverless application programming model motivates > developers to split large monolithic applications into 100s of smaller > pieces each running in thier own container. This move increases the > importance of the observability of each running container process. 
Adding > the proposed set of APIs will allow more details related to each container > process to be made available to external tools thereby improving the > observability. > >> > >> Description > >> ----------- > >> > >> This enhancement will be made up of the following work items: > >> > >> A. Detecting if Java is running in a container. > >> > >> The Java runtime, as well as any tests that we might write for this > feature, will need to be able to detect that the current Java process is > running in a container. A new API will be made available for this purpose. > >> > >> B. Exposing container resource limits, configuration and runtime > statistics. > >> > >> There are several configuration options and limits that can be imposed > upon a running container. Not all of these > >> are important to a running Java process. We clearly want to be able to > detect how many CPUs have been allocated to our process along with the > maximum amount of memory that the process has been allocated but there are > other options that we might want to base runtime decisions on. > >> > >> In addition, since Container typically impose limits on system > resources, they also provide the ability to easily access the amount of > consumption of these resources. The goal is to provide this information in > addition to the configuration data. > >> > >> I propose adding a new jdk.internal.Platform class that will allow > access to this information. 
> >> > >> Here are some of the types of configuration and consumption statistics > that would be made available: > >> > >> isContainerized > >> Memory Limit > >> Total Memory Limit > >> Soft Memory Limit > >> Max Memory Usage > >> Current Memory Usage > >> Maximum Kernel Memory > >> CPU Shares > >> CPU Period > >> CPU Quota > >> Number of CPUs > >> CPU Sets > >> CPU Set Memory Nodes > >> CPU Usage > >> CPU Usage Per CPU > >> Block I/O Weight > >> Block I/O Device Weight > >> Device I/O Read Rate > >> Device I/O Write Rate > >> OOM Kill Enabled > >> OOM Score Adjustment > >> Memory Swappiness > >> Shared Memory Size > >> > >> Alternatives > >> ------------ > >> > >> There are a few existing tools available to extract some of the same > container statistics. These tools could be used instead. The benefit of > providing a core Java internal API is that this information can be expose > by current Java serviceability tools such as JMX and JFR along side other > JVM specific information. > >> > >> Testing > >> ------- > >> > >> Docker/container specific tests should be added in order to validate > the functionality being provided with this JEP. > >> > >> Risks and Assumptions > >> --------------------- > >> > >> Docker is currently based on cgroups v1. Cgroups v2 is also available > but is incomplete and not yet supported by Docker. It's possible that v2 > could replace v1 in an incompatible way rendering this work unusable until > it is upgraded. > >> > >> Other alternative container technologies based on hypervisors are being > developed that could replace the use of cgroups for container isloation. > >> > >> Dependencies > >> ----------- > >> > >> None at this time. 
> >> > > From bob.vandette at oracle.com Thu Feb 15 18:36:44 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Feb 2018 13:36:44 -0500 Subject: JEP [DRAFT]: Container aware Java In-Reply-To: References: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com> Message-ID: <277A93E7-BDE4-4391-B97F-22D5ADB968BB@oracle.com> > On Feb 15, 2018, at 1:30 PM, kedar mhaswade wrote: > > This appears useful. > > Will Runtime.getRuntime().availableProcessors() return the processors available to the container after this integration, or will a new API be provided? I have seen thread pools being misconfigured (e.g. it returns the number of processors available on the host which is far more than that of processors allotted by cgroups to the container) because of the confusion (or because of just using the non-container-aware code). I fixed your above problem in JDK 10 (https://bugs.openjdk.java.net/browse/JDK-8146115 ). This work exports much more detail about the container to monitoring tools. Bob. > > Regards, > Kedar > > On Thu, Feb 15, 2018 at 10:04 AM, Bob Vandette > wrote: > > > On Feb 15, 2018, at 12:52 PM, Volker Simonis > wrote: > > > > Sounds cool! > > > > Is this JEP only about providing the corresponding API or also about > > using it internally (within HotSpot and class library) to better adopt > > to environment the JVM is running in? > > Thanks Volker. > > I integrated JDK-8146115 into JDK 10 which allows Hotspot to adapt to the > container it?s running in. The configuration that is examined includes memory limits, > cpu count including cpu shares and quotas. > > This JEP?s main focus is to provide an internal API that can be used by JDK tools > such as JFR and JMX to export container statistics and configuration data. This JEP > does not include the JFR and JMX implementations. They will hopefully follow shortly after. > > Bob. > > > > > Either way, looking forward to see (and test) the first implementation bits! 
> > > > Regards, > > Volker > > > > > > On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette > wrote: > >> I?d like to re-propose the following JEP that will enhance the Java runtime to be more container aware. > >> This will add an Internal Java API that will provide container specific statistics. Some of the initial goals > >> of the previous JEP proposal has been integrated into JDK 10 under an RFE (JDK-8146115). > >> This JEP is now focused on providing a Java API that exports Container runtime configuration and metrics. > >> > >> Since the scope of this JEP have changed, I?m re-submitting it for comment and endorsement. > >> > >> > >> JEP Issue: > >> > >> https://bugs.openjdk.java.net/browse/JDK-8182070 > >> > >> Here?s a Text dump of the JEP contents for your convenience: > >> > >> Summary > >> ------- > >> > >> Container aware Java runtime > >> > >> Goals > >> ----- > >> > >> Provide an internal API that can be used to extract container specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64 although the design should be flexible enough to allow support for other platforms and container technologies. The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker. > >> > >> Non-Goals > >> --------- > >> > >> It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64. > >> > >> Success Metrics > >> --------------- > >> > >> Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes. > >> > >> Motivation > >> ---------- > >> > >> Container technology is becoming more and more prevalent in Cloud based applications. 
The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces each running in thier own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools thereby improving the observability. > >> > >> Description > >> ----------- > >> > >> This enhancement will be made up of the following work items: > >> > >> A. Detecting if Java is running in a container. > >> > >> The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose. > >> > >> B. Exposing container resource limits, configuration and runtime statistics. > >> > >> There are several configuration options and limits that can be imposed upon a running container. Not all of these > >> are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process along with the maximum amount of memory that the process has been allocated but there are other options that we might want to base runtime decisions on. > >> > >> In addition, since Container typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data. > >> > >> I propose adding a new jdk.internal.Platform class that will allow access to this information. 
> >> > >> Here are some of the types of configuration and consumption statistics that would be made available: > >> > >> isContainerized > >> Memory Limit > >> Total Memory Limit > >> Soft Memory Limit > >> Max Memory Usage > >> Current Memory Usage > >> Maximum Kernel Memory > >> CPU Shares > >> CPU Period > >> CPU Quota > >> Number of CPUs > >> CPU Sets > >> CPU Set Memory Nodes > >> CPU Usage > >> CPU Usage Per CPU > >> Block I/O Weight > >> Block I/O Device Weight > >> Device I/O Read Rate > >> Device I/O Write Rate > >> OOM Kill Enabled > >> OOM Score Adjustment > >> Memory Swappiness > >> Shared Memory Size > >> > >> Alternatives > >> ------------ > >> > >> There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be expose by current Java serviceability tools such as JMX and JFR along side other JVM specific information. > >> > >> Testing > >> ------- > >> > >> Docker/container specific tests should be added in order to validate the functionality being provided with this JEP. > >> > >> Risks and Assumptions > >> --------------------- > >> > >> Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way rendering this work unusable until it is upgraded. > >> > >> Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isloation. > >> > >> Dependencies > >> ----------- > >> > >> None at this time. > >> > > From Derek.White at cavium.com Thu Feb 15 22:54:57 2018 From: Derek.White at cavium.com (White, Derek) Date: Thu, 15 Feb 2018 22:54:57 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: Message-ID: Hi JC, I reviewed the aarch64 code only. 
The code looks safe enough, but could be optimized as part of a 2nd CR: In the fast_new_instance cases, r5 and r19 are saved and restored around the eden allocation. See lines 964, 727, and 731. - After your change, r5 is no longer needed, so does not need to be saved. - Perhaps we could use another register beside r19, and not save anything? - Seems like rscratch2 might be free, but it might be bad form to use around so many macros? - If we do need to save r19, we could push the store to around line 720, and remove the restore at line 731. I am OK with checking in your patch as is, and optimizing the aarch64 code as a separate CR. - Derek White > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of JC Beyler > Sent: Wednesday, February 14, 2018 6:09 PM > To: hotspot-dev at openjdk.java.net > Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related > code > > Hi all, > > Here is a webrev to do the work mentioned in JDK-8194084 > : > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > It has the parts for each architecture and I can't test a lot of them so I would > need a review and test for each :). I think first would be an agreement to the > code change itself then test it once everyone agrees on the change ? > > Could I please get some initial reviews on this? > > Basically what this webrev does is follow what the interpreter is saying: > - No longer try to do a fast tlab refill > - Try eden allocation if contiguous inline allocation is true > - Otherwise slowpath > > This is true for all architectures except: > - ppc, which doesn't do eden allocations, I just cleaned up the code a bit > there to be consistent > - s390 that does not do tlab_refill at all, I just removed the dead code there. 
> > Thanks a lot for your help, > Jc From david.holmes at oracle.com Fri Feb 16 02:33:03 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Feb 2018 12:33:03 +1000 Subject: JEP [DRAFT]: Container aware Java In-Reply-To: References: <9B0DF35E-0019-46DE-A6C1-AE4DE2416844@oracle.com> Message-ID: <939916f0-3b99-7167-f495-abb29a61fb62@oracle.com> On 16/02/2018 4:30 AM, kedar mhaswade wrote: > This appears useful. > > Will Runtime.getRuntime().availableProcessors() return the processors > available to the container after this integration, or will a new API be > provided? I have seen thread pools being misconfigured (e.g. it returns the > number of processors available on the host which is far more than that of > processors allotted by cgroups to the container) because of the confusion > (or because of just using the non-container-aware code). Depending on what you mean by "allotted" the VM has reported the actual set of CPU's available to the VM process (as can be configured by cpu_sets) since JDK 9. In JDK 10 quotas/shares are also used to approximate a notion of "available processors". David > Regards, > Kedar > > On Thu, Feb 15, 2018 at 10:04 AM, Bob Vandette > wrote: > >> >>> On Feb 15, 2018, at 12:52 PM, Volker Simonis >> wrote: >>> >>> Sounds cool! >>> >>> Is this JEP only about providing the corresponding API or also about >>> using it internally (within HotSpot and class library) to better adopt >>> to environment the JVM is running in? >> >> Thanks Volker. >> >> I integrated JDK-8146115 into JDK 10 which allows Hotspot to adapt to the >> container it?s running in. The configuration that is examined includes >> memory limits, >> cpu count including cpu shares and quotas. >> >> This JEP?s main focus is to provide an internal API that can be used by >> JDK tools >> such as JFR and JMX to export container statistics and configuration >> data. This JEP >> does not include the JFR and JMX implementations. They will hopefully >> follow shortly after. 
>> >> Bob. >> >>> >>> Either way, looking forward to see (and test) the first implementation >> bits! >>> >>> Regards, >>> Volker >>> >>> >>> On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette >> wrote: >>>> I?d like to re-propose the following JEP that will enhance the Java >> runtime to be more container aware. >>>> This will add an Internal Java API that will provide container specific >> statistics. Some of the initial goals >>>> of the previous JEP proposal has been integrated into JDK 10 under an >> RFE (JDK-8146115). >>>> This JEP is now focused on providing a Java API that exports Container >> runtime configuration and metrics. >>>> >>>> Since the scope of this JEP have changed, I?m re-submitting it for >> comment and endorsement. >>>> >>>> >>>> JEP Issue: >>>> >>>> https://bugs.openjdk.java.net/browse/JDK-8182070 >>>> >>>> Here?s a Text dump of the JEP contents for your convenience: >>>> >>>> Summary >>>> ------- >>>> >>>> Container aware Java runtime >>>> >>>> Goals >>>> ----- >>>> >>>> Provide an internal API that can be used to extract container specific >> configuration and runtime statistics. This JEP will only support Docker on >> Linux-x64 although the design should be flexible enough to allow support >> for other platforms and container technologies. The initial focus will be >> on Linux cgroups technology so that we will be able to easily support other >> container technologies running on Linux in addition to Docker. >>>> >>>> Non-Goals >>>> --------- >>>> >>>> It is not a goal of this JEP to support any platform other than Docker >> container technology running on Linux x64. >>>> >>>> Success Metrics >>>> --------------- >>>> >>>> Success will be measured by the improvement in information that will be >> available to tools which visualize resource usage of containers that are >> running Java processes. >>>> >>>> Motivation >>>> ---------- >>>> >>>> Container technology is becoming more and more prevalent in Cloud based >> applications. 
The Cloud Serverless application programming model motivates >> developers to split large monolithic applications into 100s of smaller >> pieces each running in thier own container. This move increases the >> importance of the observability of each running container process. Adding >> the proposed set of APIs will allow more details related to each container >> process to be made available to external tools thereby improving the >> observability. >>>> >>>> Description >>>> ----------- >>>> >>>> This enhancement will be made up of the following work items: >>>> >>>> A. Detecting if Java is running in a container. >>>> >>>> The Java runtime, as well as any tests that we might write for this >> feature, will need to be able to detect that the current Java process is >> running in a container. A new API will be made available for this purpose. >>>> >>>> B. Exposing container resource limits, configuration and runtime >> statistics. >>>> >>>> There are several configuration options and limits that can be imposed >> upon a running container. Not all of these >>>> are important to a running Java process. We clearly want to be able to >> detect how many CPUs have been allocated to our process along with the >> maximum amount of memory that the process has been allocated but there are >> other options that we might want to base runtime decisions on. >>>> >>>> In addition, since Container typically impose limits on system >> resources, they also provide the ability to easily access the amount of >> consumption of these resources. The goal is to provide this information in >> addition to the configuration data. >>>> >>>> I propose adding a new jdk.internal.Platform class that will allow >> access to this information. 
>>>> >>>> Here are some of the types of configuration and consumption statistics >> that would be made available: >>>> >>>> isContainerized >>>> Memory Limit >>>> Total Memory Limit >>>> Soft Memory Limit >>>> Max Memory Usage >>>> Current Memory Usage >>>> Maximum Kernel Memory >>>> CPU Shares >>>> CPU Period >>>> CPU Quota >>>> Number of CPUs >>>> CPU Sets >>>> CPU Set Memory Nodes >>>> CPU Usage >>>> CPU Usage Per CPU >>>> Block I/O Weight >>>> Block I/O Device Weight >>>> Device I/O Read Rate >>>> Device I/O Write Rate >>>> OOM Kill Enabled >>>> OOM Score Adjustment >>>> Memory Swappiness >>>> Shared Memory Size >>>> >>>> Alternatives >>>> ------------ >>>> >>>> There are a few existing tools available to extract some of the same >> container statistics. These tools could be used instead. The benefit of >> providing a core Java internal API is that this information can be expose >> by current Java serviceability tools such as JMX and JFR along side other >> JVM specific information. >>>> >>>> Testing >>>> ------- >>>> >>>> Docker/container specific tests should be added in order to validate >> the functionality being provided with this JEP. >>>> >>>> Risks and Assumptions >>>> --------------------- >>>> >>>> Docker is currently based on cgroups v1. Cgroups v2 is also available >> but is incomplete and not yet supported by Docker. It's possible that v2 >> could replace v1 in an incompatible way rendering this work unusable until >> it is upgraded. >>>> >>>> Other alternative container technologies based on hypervisors are being >> developed that could replace the use of cgroups for container isloation. >>>> >>>> Dependencies >>>> ----------- >>>> >>>> None at this time. 
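[Editor's note: work item A above — detecting that the current process runs in a container — is commonly approximated on cgroup v1 by inspecting /proc/self/cgroup. The sketch below is hypothetical: the class and method names are invented (the proposed jdk.internal.Platform API is internal and unspecified here), and the heuristic has known false positives, e.g. systemd slices on a bare host. It only shows the shape of such a check.]

```java
import java.util.List;

// Hypothetical sketch of a cgroup v1 container check. Names and the
// detection heuristic are assumptions for illustration, not the JEP's
// actual implementation.
public class ContainerCheck {

    // Each /proc/self/cgroup line looks like "<id>:<controllers>:<path>".
    // On a bare host the path is "/"; under Docker it is typically
    // something like "/docker/<container-id>".
    static boolean looksContainerized(List<String> procSelfCgroupLines) {
        for (String line : procSelfCgroupLines) {
            String[] parts = line.split(":", 3);
            if (parts.length == 3 && !parts[2].equals("/")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksContainerized(
                List.of("11:memory:/docker/0a1b2c",
                        "10:cpu,cpuacct:/docker/0a1b2c"))); // true
        System.out.println(looksContainerized(
                List.of("11:memory:/", "10:cpu,cpuacct:/"))); // false
    }
}
```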
>>>> >> >> From david.holmes at oracle.com Fri Feb 16 04:25:36 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Feb 2018 14:25:36 +1000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> Message-ID: <6ba7b1de-dfdd-3f71-54ed-a111de46dffd@oracle.com> On 16/02/2018 12:01 AM, Andrew Haley wrote: > On 15/02/18 02:40, David Holmes wrote: >> My main concern was to ensure that this did not somehow cause >> DisablePrimordialThreadGuardPages to break. IIUC without the suggested >> adjustment this change would slightly reduce the amount of stack >> available for use - in which case I'd prefer to see the adjustment made >> in either form you deem best. > > OK. > > http://cr.openjdk.java.net/~aph/8197429-3/ > > Here it is, with adjustment and a test for > DisablePrimordialThreadGuardPages. Thanks Andrew. Just one nit with the test: 48 # Run the test for a java and native overflow 49 ${TESTNATIVEPATH}/stack-gap 50 ${TESTNATIVEPATH}/stack-gap -XX:+DisablePrimordialThreadGuardPages 51 exit $? Need to check we get zero exit code from first run before doing second. David From jcbeyler at google.com Fri Feb 16 04:40:39 2018 From: jcbeyler at google.com (JC Beyler) Date: Thu, 15 Feb 2018 20:40:39 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: Message-ID: Hi Derek, I agree with removing the r5 saving/loading; for r19, I have no idea if there is another register that could be used instead. I also have no opinion on using rscratch2 but I agree that it is probably error prone to do that with the other macros. 
I'm unfamiliar with aarch64 so if someone tells me what they want, I can wrap the refactor here too. Last thing: you say we could move the store down but I don't see how to remove the line 731 restore because allocate_eden does seem to use it so should we not always restore it? Thanks for your help, Jc On Thu, Feb 15, 2018 at 2:54 PM, White, Derek wrote: > Hi JC, > > I reviewed the aarch64 code only. > > The code looks safe enough, but could be optimized as part of a 2nd CR: > > In the fast_new_instance cases, r5 and r19 are saved and restored around > the eden allocation. See lines 964, 727, and 731. > > - After your change, r5 is no longer needed, so does not need to be saved. > - Perhaps we could use another register beside r19, and not save anything? > - Seems like rscratch2 might be free, but it might be bad form to use > around so many macros? > - If we do need to save r19, we could push the store to around line 720, > and remove the restore at line 731. > > I am OK with checking in your patch as is, and optimizing the aarch64 code > as a separate CR. > > - Derek White > > > -----Original Message----- > > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > > Behalf Of JC Beyler > > Sent: Wednesday, February 14, 2018 6:09 PM > > To: hotspot-dev at openjdk.java.net > > Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related > > code > > > > Hi all, > > > > Here is a webrev to do the work mentioned in JDK-8194084 > > : > > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > > > It has the parts for each architecture and I can't test a lot of them so > I would > > need a review and test for each :). I think first would be an agreement > to the > > code change itself then test it once everyone agrees on the change ? > > > > Could I please get some initial reviews on this? 
> > > > Basically what this webrev does is follow what the interpreter is saying: > > - No longer try to do a fast tlab refill > > - Try eden allocation if contiguous inline allocation is true > > - Otherwise slowpath > > > > This is true for all architectures except: > > - ppc, which doesn't do eden allocations, I just cleaned up the code > a bit > > there to be consistent > > - s390 that does not do tlab_refill at all, I just removed the dead > code there. > > > > Thanks a lot for your help, > > Jc > From aph at redhat.com Fri Feb 16 09:46:41 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 16 Feb 2018 09:46:41 +0000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: <6ba7b1de-dfdd-3f71-54ed-a111de46dffd@oracle.com> References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> <6ba7b1de-dfdd-3f71-54ed-a111de46dffd@oracle.com> Message-ID: On 16/02/18 04:25, David Holmes wrote: > Thanks Andrew. Just one nit with the test: > > 48 # Run the test for a java and native overflow > 49 ${TESTNATIVEPATH}/stack-gap > 50 ${TESTNATIVEPATH}/stack-gap -XX:+DisablePrimordialThreadGuardPages > 51 exit $? > > Need to check we get zero exit code from first run before doing second. Oh, poo. Thanks. :-) http://cr.openjdk.java.net/~aph/8197429-4/ The only change is to the test case. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From david.holmes at oracle.com Fri Feb 16 09:57:54 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Feb 2018 19:57:54 +1000 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> <6ba7b1de-dfdd-3f71-54ed-a111de46dffd@oracle.com> Message-ID: <9bfd4c7e-82b5-aafb-87a9-2838586ea49d@oracle.com> Looks good! Thanks, David On 16/02/2018 7:46 PM, Andrew Haley wrote: > On 16/02/18 04:25, David Holmes wrote: >> Thanks Andrew. Just one nit with the test: >> >> 48 # Run the test for a java and native overflow >> 49 ${TESTNATIVEPATH}/stack-gap >> 50 ${TESTNATIVEPATH}/stack-gap -XX:+DisablePrimordialThreadGuardPages >> 51 exit $? >> >> Need to check we get zero exit code from first run before doing second. > > Oh, poo. Thanks. :-) > > http://cr.openjdk.java.net/~aph/8197429-4/ > > The only change is to the test case. > From martin.doerr at sap.com Fri Feb 16 10:09:16 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 16 Feb 2018 10:09:16 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: Message-ID: <1fcb58c6f6e94121b229c6048eaf39b4@sap.com> Hi Jc, the PPC64 and s390 parts look good. Thanks for the cleanup. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of JC Beyler Sent: Donnerstag, 15. 
Februar 2018 00:09 To: hotspot-dev at openjdk.java.net Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code Hi all, Here is a webrev to do the work mentioned in JDK-8194084 : http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ It has the parts for each architecture and I can't test a lot of them so I would need a review and test for each :). I think first would be an agreement to the code change itself then test it once everyone agrees on the change ? Could I please get some initial reviews on this? Basically what this webrev does is follow what the interpreter is saying: - No longer try to do a fast tlab refill - Try eden allocation if contiguous inline allocation is true - Otherwise slowpath This is true for all architectures except: - ppc, which doesn't do eden allocations, I just cleaned up the code a bit there to be consistent - s390 that does not do tlab_refill at all, I just removed the dead code there. Thanks a lot for your help, Jc From matthias.baesken at sap.com Fri Feb 16 10:39:31 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 16 Feb 2018 10:39:31 +0000 Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes Message-ID: Hi, please review this AIX related fix. Recent changes in jdk/hs caused a compilation error : The AIX build is broken, error is /nb/rs6000_64/nightly/jdk-hs/src/hotspot/share/utilities/elfFile.hpp", line 33.10: 1540-0836 (S) The #include file is not found. (whitebox.o compilation failed). The fix handles AIX in the same way Windows and "APPLE" is handled . 
Bug : https://bugs.openjdk.java.net/browse/JDK-8198275 Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8198275.0/ Thanks, Matthias From martin.doerr at sap.com Fri Feb 16 10:50:45 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 16 Feb 2018 10:50:45 +0000 Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes In-Reply-To: References: Message-ID: <602e778e7b2d41778a3d1a38b73e8871@sap.com> Hi Matthias, thanks for fixing. Reviewed and pushed as it is a very trivial build fix for AIX. Best regards, Martin From: Baesken, Matthias Sent: Freitag, 16. Februar 2018 11:40 To: 'hotspot-dev at openjdk.java.net' Cc: Doerr, Martin ; Simonis, Volker Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes Hi, please review this AIX related fix. Recent changes in jdk/hs caused a compilation error : The AIX build is broken, error is /nb/rs6000_64/nightly/jdk-hs/src/hotspot/share/utilities/elfFile.hpp", line 33.10: 1540-0836 (S) The #include file is not found. (whitebox.o compilation failed). The fix handles AIX in the same way Windows and "APPLE" is handled . Bug : https://bugs.openjdk.java.net/browse/JDK-8198275 Webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8198275.0/ Thanks, Matthias From matthias.baesken at sap.com Fri Feb 16 11:55:12 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 16 Feb 2018 11:55:12 +0000 Subject: JEP [DRAFT]: Container aware Java Message-ID: <69864b2c86ea432cbf1d07022719c8d1@sap.com> Hi Bob, I'll look into how the added API will work on Linux ppc64le and s390x (where docker is available as well ). Best regards, Matthias On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette > wrote: > I?d like to re-propose the following JEP that will enhance the Java runtime to be more container aware. > This will add an Internal Java API that will provide container specific statistics. 
Some of the initial goals > of the previous JEP proposal has been integrated into JDK 10 under an RFE (JDK-8146115). > This JEP is now focused on providing a Java API that exports Container runtime configuration and metrics. > > Since the scope of this JEP have changed, I?m re-submitting it for comment and endorsement. > > > JEP Issue: > > https://bugs.openjdk.java.net/browse/JDK-8182070 > > Here?s a Text dump of the JEP contents for your convenience: > > Summary > ------- > > Container aware Java runtime > > Goals > ----- > > Provide an internal API that can be used to extract container specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64 although the design should be flexible enough to allow support for other platforms and container technologies. The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker. > > Non-Goals > --------- > > It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64. > > Success Metrics > --------------- > > Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes. > > Motivation > ---------- > > Container technology is becoming more and more prevalent in Cloud based applications. The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces each running in thier own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools thereby improving the observability. > > Description > ----------- > > This enhancement will be made up of the following work items: > > A. 
Detecting if Java is running in a container. > > The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose. > > B. Exposing container resource limits, configuration and runtime statistics. > > There are several configuration options and limits that can be imposed upon a running container. Not all of these > are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process along with the maximum amount of memory that the process has been allocated but there are other options that we might want to base runtime decisions on. > > In addition, since Container typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data. > > I propose adding a new jdk.internal.Platform class that will allow access to this information. > > Here are some of the types of configuration and consumption statistics that would be made available: > > isContainerized > Memory Limit > Total Memory Limit > Soft Memory Limit > Max Memory Usage > Current Memory Usage > Maximum Kernel Memory > CPU Shares > CPU Period > CPU Quota > Number of CPUs > CPU Sets > CPU Set Memory Nodes > CPU Usage > CPU Usage Per CPU > Block I/O Weight > Block I/O Device Weight > Device I/O Read Rate > Device I/O Write Rate > OOM Kill Enabled > OOM Score Adjustment > Memory Swappiness > Shared Memory Size > > Alternatives > ------------ > > There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be expose by current Java serviceability tools such as JMX and JFR along side other JVM specific information. 
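[Editor's note: to make the Alternatives point above concrete — once an internal platform API exists, its statistics could be surfaced to JMX clients with an ordinary MXBean. The interface name, attributes, and canned values below are invented for illustration; a real implementation would delegate to the internal API rather than return constants.]

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ContainerStatsDemo {

    // An interface whose name ends in "MXBean" is registered as an
    // MXBean; attribute names follow the usual getter conventions.
    public interface ContainerStatsMXBean {
        boolean isContainerized();
        long getMemoryLimit();
    }

    // Demo implementation with canned values standing in for the
    // proposed internal platform API.
    public static class ContainerStats implements ContainerStatsMXBean {
        public boolean isContainerized() { return true; }
        public long getMemoryLimit() { return 512L * 1024 * 1024; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=ContainerStats");
        server.registerMBean(new ContainerStats(), name);
        // A JMX client (jconsole etc.) could now read these attributes
        // remotely; here we just read one back through the server.
        System.out.println(server.getAttribute(name, "MemoryLimit"));
    }
}
```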
> > Testing > ------- > > Docker/container specific tests should be added in order to validate the functionality being provided with this JEP. > > Risks and Assumptions > --------------------- > > Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way rendering this work unusable until it is upgraded. > > Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isloation. > > Dependencies > ----------- > > None at this time. > From david.holmes at oracle.com Fri Feb 16 12:33:20 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Feb 2018 22:33:20 +1000 Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes In-Reply-To: <602e778e7b2d41778a3d1a38b73e8871@sap.com> References: <602e778e7b2d41778a3d1a38b73e8871@sap.com> Message-ID: <70642aff-8994-c7e2-fbc8-35f83b303c37@oracle.com> I have to wonder why all this Elf file stuff is in shared code in the first place. :( The earlier Elf file stuff in whitebox.cpp is in a #ifdef LINUX guard - in fact "utilities/elfFile.hpp" is now included twice in that file! So this is getting very messy. I'll file a RFE to have it cleaned up. David On 16/02/2018 8:50 PM, Doerr, Martin wrote: > Hi Matthias, > > thanks for fixing. Reviewed and pushed as it is a very trivial build fix for AIX. > > Best regards, > Martin > > > From: Baesken, Matthias > Sent: Freitag, 16. Februar 2018 11:40 > To: 'hotspot-dev at openjdk.java.net' > Cc: Doerr, Martin ; Simonis, Volker > Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes > > Hi, please review this AIX related fix. > > Recent changes in jdk/hs caused a compilation error : > > The AIX build is broken, error is /nb/rs6000_64/nightly/jdk-hs/src/hotspot/share/utilities/elfFile.hpp", line 33.10: 1540-0836 (S) The #include file is not found. 
(whitebox.o compilation failed). > > The fix handles AIX in the same way Windows and "APPLE" is handled. > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8198275 > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8198275.0/ > > > > Thanks, Matthias > From coleen.phillimore at oracle.com Fri Feb 16 13:43:06 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 08:43:06 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: Message-ID: <98f8247b-6a24-b5c9-6d27-0c8597a63f1c@oracle.com> This is odd but seems ok. If you make the dummy operator delete private, does it still compile??? If it is actually used, will it get a link-time error rather than a compile-time error? thanks, Coleen On 2/14/18 8:48 AM, Lois Foltan wrote: > Please review this change in VS2017 to the delete operator due to > C++14 standard conformance. From > https://msdn.microsoft.com/en-us/library/mt723604.aspx > > The function "void operator delete(void *, size_t)" was a placement > delete operator corresponding to the placement new function "void * > operator new(size_t, size_t)" in C++11. With C++14 sized deallocation, > this delete function is now a "usual deallocation function" (global > delete operator). The standard requires that if the use of a placement > new looks up a corresponding delete function and finds a usual > deallocation function, the program is ill-formed. > > Thank you to Kim Barrett for proposing the fix below. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196880 > > Testing complete (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois > > From coleen.phillimore at oracle.com Fri Feb 16 13:49:19 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 08:49:19 -0500 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: Message-ID: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> I agree with this change, and will sponsor it and test on sparc (and on oracle platforms). When you obsolete an option, I think you just remove it from globals.hpp and the code in arguments.cpp should tell you it's obsolete. Thanks, Coleen On 2/14/18 6:08 PM, JC Beyler wrote: > Hi all, > > Here is a webrev to do the work mentioned in JDK-8194084 > : > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > It has the parts for each architecture and I can't test a lot of them so I > would need a review and test for each :). I think first would be an > agreement to the code change itself then test it once everyone agrees on > the change ? > > Could I please get some initial reviews on this? > > Basically what this webrev does is follow what the interpreter is saying: > - No longer try to do a fast tlab refill > - Try eden allocation if contiguous inline allocation is true > - Otherwise slowpath > > This is true for all architectures except: > - ppc, which doesn't do eden allocations, I just cleaned up the code a > bit there to be consistent > - s390 that does not do tlab_refill at all, I just removed the dead code > there. 
> > Thanks a lot for your help, > Jc From martin.doerr at sap.com Fri Feb 16 14:26:55 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Fri, 16 Feb 2018 14:26:55 +0000 Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes In-Reply-To: <70642aff-8994-c7e2-fbc8-35f83b303c37@oracle.com> References: <602e778e7b2d41778a3d1a38b73e8871@sap.com> <70642aff-8994-c7e2-fbc8-35f83b303c37@oracle.com> Message-ID: <2881a76077c142e79b82730fbb373d2f@sap.com> Thank you, David. That makes sense. Best regards, Martin -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Freitag, 16. Februar 2018 13:33 To: Doerr, Martin ; Baesken, Matthias ; 'hotspot-dev at openjdk.java.net' Cc: Simonis, Volker Subject: Re: RFR : 8198275 AIX build broken after latest whitebox.cpp changes I have to wonder why all this Elf file stuff is in shared code in the first place. :( The earlier Elf file stuff in whitebox.cpp is in a #ifdef LINUX guard - in fact "utilities/elfFile.hpp" is now included twice in that file! So this is getting very messy. I'll file a RFE to have it cleaned up. David On 16/02/2018 8:50 PM, Doerr, Martin wrote: > Hi Matthias, > > thanks for fixing. Reviewed and pushed as it is a very trivial build fix for AIX. > > Best regards, > Martin > > > From: Baesken, Matthias > Sent: Freitag, 16. Februar 2018 11:40 > To: 'hotspot-dev at openjdk.java.net' > Cc: Doerr, Martin ; Simonis, Volker > Subject: RFR : 8198275 AIX build broken after latest whitebox.cpp changes > > Hi, please review this AIX related fix. > > Recent changes in jdk/hs caused a compilation error : > > The AIX build is broken, error is /nb/rs6000_64/nightly/jdk-hs/src/hotspot/share/utilities/elfFile.hpp", line 33.10: 1540-0836 (S) The #include file is not found. (whitebox.o compilation failed). > > The fix handles AIX in the same way Windows and "APPLE" is handled . 
> > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8198275 > > Webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8198275.0/ > > > > Thanks, Matthias > From bob.vandette at oracle.com Fri Feb 16 15:29:16 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Fri, 16 Feb 2018 10:29:16 -0500 Subject: JEP [DRAFT]: Container aware Java In-Reply-To: <69864b2c86ea432cbf1d07022719c8d1@sap.com> References: <69864b2c86ea432cbf1d07022719c8d1@sap.com> Message-ID: <393494EB-296E-48B1-BF15-CA783232243E@oracle.com> Thanks. It should be pure Java code so hopefully you won?t have any work to do. Bob. > On Feb 16, 2018, at 6:55 AM, Baesken, Matthias wrote: > > Hi Bob, I?ll look into how the added API will work on Linux ppc64le and s390x (where docker is available as well ). > > Best regards, Matthias > > > > On Thu, Feb 15, 2018 at 6:07 PM, Bob Vandette > wrote: > > I?d like to re-propose the following JEP that will enhance the Java runtime to be more container aware. > > This will add an Internal Java API that will provide container specific statistics. Some of the initial goals > > of the previous JEP proposal has been integrated into JDK 10 under an RFE (JDK-8146115). > > This JEP is now focused on providing a Java API that exports Container runtime configuration and metrics. > > > > Since the scope of this JEP have changed, I?m re-submitting it for comment and endorsement. > > > > > > JEP Issue: > > > > https://bugs.openjdk.java.net/browse/JDK-8182070 > > > > Here?s a Text dump of the JEP contents for your convenience: > > > > Summary > > ------- > > > > Container aware Java runtime > > > > Goals > > ----- > > > > Provide an internal API that can be used to extract container specific configuration and runtime statistics. This JEP will only support Docker on Linux-x64 although the design should be flexible enough to allow support for other platforms and container technologies. 
The initial focus will be on Linux cgroups technology so that we will be able to easily support other container technologies running on Linux in addition to Docker. > > > > Non-Goals > > --------- > > > > It is not a goal of this JEP to support any platform other than Docker container technology running on Linux x64. > > > > Success Metrics > > --------------- > > > > Success will be measured by the improvement in information that will be available to tools which visualize resource usage of containers that are running Java processes. > > > > Motivation > > ---------- > > > > Container technology is becoming more and more prevalent in Cloud based applications. The Cloud Serverless application programming model motivates developers to split large monolithic applications into 100s of smaller pieces each running in their own container. This move increases the importance of the observability of each running container process. Adding the proposed set of APIs will allow more details related to each container process to be made available to external tools thereby improving the observability. > > > > Description > > ----------- > > > > This enhancement will be made up of the following work items: > > > > A. Detecting if Java is running in a container. > > > > The Java runtime, as well as any tests that we might write for this feature, will need to be able to detect that the current Java process is running in a container. A new API will be made available for this purpose. > > > > B. Exposing container resource limits, configuration and runtime statistics. > > > > There are several configuration options and limits that can be imposed upon a running container. Not all of these > > are important to a running Java process. We clearly want to be able to detect how many CPUs have been allocated to our process along with the maximum amount of memory that the process has been allocated but there are other options that we might want to base runtime decisions on. 
> > > > In addition, since Containers typically impose limits on system resources, they also provide the ability to easily access the amount of consumption of these resources. The goal is to provide this information in addition to the configuration data. > > > > I propose adding a new jdk.internal.Platform class that will allow access to this information. > > > > Here are some of the types of configuration and consumption statistics that would be made available: > > > > isContainerized > > Memory Limit > > Total Memory Limit > > Soft Memory Limit > > Max Memory Usage > > Current Memory Usage > > Maximum Kernel Memory > > CPU Shares > > CPU Period > > CPU Quota > > Number of CPUs > > CPU Sets > > CPU Set Memory Nodes > > CPU Usage > > CPU Usage Per CPU > > Block I/O Weight > > Block I/O Device Weight > > Device I/O Read Rate > > Device I/O Write Rate > > OOM Kill Enabled > > OOM Score Adjustment > > Memory Swappiness > > Shared Memory Size > > > > Alternatives > > ------------ > > > > There are a few existing tools available to extract some of the same container statistics. These tools could be used instead. The benefit of providing a core Java internal API is that this information can be exposed by current Java serviceability tools such as JMX and JFR alongside other JVM specific information. > > > > Testing > > ------- > > > > Docker/container specific tests should be added in order to validate the functionality being provided with this JEP. > > > > Risks and Assumptions > > --------------------- > > > > Docker is currently based on cgroups v1. Cgroups v2 is also available but is incomplete and not yet supported by Docker. It's possible that v2 could replace v1 in an incompatible way rendering this work unusable until it is upgraded. > > > > Other alternative container technologies based on hypervisors are being developed that could replace the use of cgroups for container isolation. > > > > Dependencies > > ----------- > > > > None at this time. 
> > From lois.foltan at oracle.com Fri Feb 16 16:28:57 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 11:28:57 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: <98f8247b-6a24-b5c9-6d27-0c8597a63f1c@oracle.com> References: <98f8247b-6a24-b5c9-6d27-0c8597a63f1c@oracle.com> Message-ID: On 2/16/2018 8:43 AM, coleen.phillimore at oracle.com wrote: > > This is odd but seems ok. > > If you make the dummy operator delete private, does it still > compile? If it is actually used, will it get a linktime error rather > than a compile time error? Hi Coleen, Thanks for the review. Yes, making the operator delete private does work and I have updated the webrev. If it is actually used it will get a linktime error. Updated webrev at: http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880.1/webrev/ Lois > > thanks, > Coleen > > On 2/14/18 8:48 AM, Lois Foltan wrote: >> Please review this change in VS2017 to the delete operator due to >> C++14 standard conformance. From >> https://msdn.microsoft.com/en-us/library/mt723604.aspx >> >> The function "void operator delete(void *, size_t)" was a placement >> delete operator corresponding to the placement new function "void * >> operator new(size_t, size_t)" in C++11. With C++14 sized >> deallocation, this delete function is now a usual deallocation >> function (global delete operator). The standard requires that if the >> use of a placement new looks up a corresponding delete function and >> finds a usual deallocation function, the program is ill-formed. >> >> Thank you to Kim Barrett for proposing the fix below. 
>> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196880 >> >> Testing complete (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois >> >> > From lois.foltan at oracle.com Fri Feb 16 16:53:15 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 11:53:15 -0500 Subject: (11) RFR (S) JDK-8197868: VS2017 (C2065) 'timezone': Undeclared Identifier in share/runtime/os.cpp Message-ID: Please review this change to use the functional version of _get_timezone for VS2017. The global variable timezone has been deprecated. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197868/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8197868 contributed-by: Kim Barrett & Lois Foltan Testing: hs-tier(1-3), jdk-tier(1-3) complete Thanks, Lois From jcbeyler at google.com Fri Feb 16 17:06:28 2018 From: jcbeyler at google.com (JC Beyler) Date: Fri, 16 Feb 2018 09:06:28 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: Answering all in one go :) I updated the webrev to: http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ I am guessing I need a tested ok from each architecture and not only a looks good. If I am wrong, I apologize! If I am right, the current status of each architecture is: - aarch64: I removed the use of r5 but left for now r19 to be stored/loaded back, let me know what you think, Derek. - ppc64 and s390 look good from Martin, missing a tested ok - Sparc looks good from Coleen, missing a tested ok - I removed the option from globals.h, let me know if that is correct, it seems to be what is said from the comment above but I thought maybe we had to wait until the version number got bumped and then move all flags out of globals.hpp. 
- arm: missing a looks good and test - x86: missing a looks good and test :) I'm happy to do as I did with https://bugs.openjdk.java.net/browse/JDK-8190862 and create subtasks if that will make it easier on everyone. Let me know, Jc On Fri, Feb 16, 2018 at 5:49 AM, wrote: > > I agree with this change, and will sponsor it and test on sparc (and on > oracle platforms). > > When you obsolete an option, I think you just remove it from globals.hpp > and the code in arguments.cpp should tell you it's obsolete. > > Thanks, > Coleen > > On 2/14/18 6:08 PM, JC Beyler wrote: > >> Hi all, >> >> Here is a webrev to do the work mentioned in JDK-8194084 >> : >> http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ >> >> It has the parts for each architecture and I can't test a lot of them so I >> would need a review and test for each :). I think first would be an >> agreement to the code change itself then test it once everyone agrees on >> the change ? >> >> Could I please get some initial reviews on this? >> >> Basically what this webrev does is follow what the interpreter is saying: >> - No longer try to do a fast tlab refill >> - Try eden allocation if contiguous inline allocation is true >> - Otherwise slowpath >> >> This is true for all architectures except: >> - ppc, which doesn't do eden allocations, I just cleaned up the code a >> bit there to be consistent >> - s390 that does not do tlab_refill at all, I just removed the dead >> code >> there. 
>> >> Thanks a lot for your help, >> Jc >> > > From leonid.mesnik at oracle.com Fri Feb 16 17:36:29 2018 From: leonid.mesnik at oracle.com (Leonid Mesnik) Date: Fri, 16 Feb 2018 09:36:29 -0800 Subject: Take 2: RFR: 8197429: Increased stack guard causes segfaults on x86-32 In-Reply-To: References: <16c4702e-18af-84a4-0549-59b5f971d723@redhat.com> <02d49fdb-6518-3e27-ac4b-a1d71edfb313@oracle.com> <2e999e00-7e22-476f-1e66-4e2ae3221ab7@redhat.com> <1d068d2c-f47b-c9bf-21fc-602707470d3f@oracle.com> <9bd008c1-33aa-1adf-8d6a-ad66d8e1d5d5@redhat.com> <6ba7b1de-dfdd-3f71-54ed-a111de46dffd@oracle.com> Message-ID: <94D18A07-2D1C-4111-9F4B-49EEED3B3628@oracle.com> Andrew Test changes looks good. Thanks for updating native code compilation. Leonid > On Feb 16, 2018, at 1:46 AM, Andrew Haley wrote: > > On 16/02/18 04:25, David Holmes wrote: >> Thanks Andrew. Just one nit with the test: >> >> 48 # Run the test for a java and native overflow >> 49 ${TESTNATIVEPATH}/stack-gap >> 50 ${TESTNATIVEPATH}/stack-gap -XX:+DisablePrimordialThreadGuardPages >> 51 exit $? >> >> Need to check we get zero exit code from first run before doing second. > > Oh, poo. Thanks. :-) > > http://cr.openjdk.java.net/~aph/8197429-4/ > > The only change is to the test case. > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Fri Feb 16 18:30:32 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 13:30:32 -0500 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: <2111b78e-8c80-c93d-188f-7ff973d1f604@oracle.com> On 2/16/18 12:06 PM, JC Beyler wrote: > Answering all in one go :) > > I updated the webrev to: > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > > I am guessing I need a tested ok from each architecture and not only a > looks good. If I am wrong, I apologize! If I am right, the current > status of each architecture is: > > - aarch64: I removed the use of r5 but left for now r19 to be > stored/loaded back, let me know what you think, Derek. > - ppc64 and s390 look good from Martin, missing a tested ok > - Sparc looks good from Coleen, missing a tested ok > - I removed the option from globals.h, let me know if that is > correct, it seems to be what is said from the comment above but I > thought maybe we had to wait until the version number got bumped and > then move all flags out of globals.hpp. > > - arm: missing a looks good and test > - x86: missing a looks good and test :) > > I'm happy to do as I did with > https://bugs.openjdk.java.net/browse/JDK-8190862 and create subtasks > if that will make it easier on everyone. > Oh please, no. If you update your webrev with a commit message with the current reviewers (including myself). I'll import and run final tests on the platforms missing and if it passes, I'll sponsor and push. thanks, Coleen > Let me know, > Jc > > > On Fri, Feb 16, 2018 at 5:49 AM, > wrote: > > > I agree with this change, and will sponsor it and test on sparc > (and on oracle platforms). 
> > When you obsolete an option, I think you just remove it from > globals.hpp and the code in arguments.cpp should tell you it's > obsolete. > > Thanks, > Coleen > > On 2/14/18 6:08 PM, JC Beyler wrote: > > Hi all, > > Here is a webrev to do the work mentioned in JDK-8194084 > >: > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > > It has the parts for each architecture and I can't test a lot > of them so I > would need a review and test for each :). I think first would > be an > agreement to the code change itself then test it once everyone > agrees on > the change ? > > Could I please get some initial reviews on this? > > Basically what this webrev does is follow what the interpreter > is saying: > - No longer try to do a fast tlab refill > - Try eden allocation if contiguous inline allocation is true > - Otherwise slowpath > > This is true for all architectures except: > - ppc, which doesn't do eden allocations, I just cleaned > up the code a > bit there to be consistent > - s390 that does not do tlab_refill at all, I just removed > the dead code > there. > > Thanks a lot for your help, > Jc > > > From coleen.phillimore at oracle.com Fri Feb 16 18:31:46 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 13:31:46 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: References: <98f8247b-6a24-b5c9-6d27-0c8597a63f1c@oracle.com> Message-ID: <1fea637c-a147-38db-547c-7748da82a9cf@oracle.com> On 2/16/18 11:28 AM, Lois Foltan wrote: > On 2/16/2018 8:43 AM, coleen.phillimore at oracle.com wrote: > >> >> This is odd but seems ok. >> >> If you make the dummy operator delete private, does it still >> compile? If it is actually used, will it get a linktime error >> rather than a compile time error? > Hi Coleen, > Thanks for the review. 
Yes, making the operator delete private does > work and I have updated the webrev. If it is actually used it will > get a linktime error. > > Updated webrev at: > http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880.1/webrev/ I like this. See what Kim says, I guess. If someone tries to call it they should get a compile time error first. I believe... thanks! Coleen > Lois > >> >> thanks, >> Coleen >> >> On 2/14/18 8:48 AM, Lois Foltan wrote: >>> Please review this change in VS2017 to the delete operator due to >>> C++14 standard conformance. From >>> https://msdn.microsoft.com/en-us/library/mt723604.aspx >>> >>> The function "void operator delete(void *, size_t)" was a placement >>> delete operator corresponding to the placement new function "void * >>> operator new(size_t, size_t)" in C++11. With C++14 sized >>> deallocation, this delete function is now a usual deallocation >>> function (global delete operator). The standard requires that if the >>> use of a placement new looks up a corresponding delete function and >>> finds a usual deallocation function, the program is ill-formed. >>> >>> Thank you to Kim Barrett for proposing the fix below. >>> >>> open webrev at >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8196880 >>> >>> Testing complete (hs-tier1-3, jdk-tier1-3) >>> >>> Thanks, >>> Lois >>> >>> >> > From paul.sandoz at oracle.com Fri Feb 16 19:47:05 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 16 Feb 2018 11:47:05 -0800 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: Message-ID: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> Hi Adam, From reading the thread i cannot tell if this is part of a wider solution including some yet to be proposed HotSpot changes. 
As is i would be resistant to adding such standalone internal wrapper methods to Unsafe that have no apparent benefit within the OpenJDK itself since it's a maintenance burden. Can you determine if the calls to UNSAFE.freeMemory/allocateMemory come from a DBB by looking at the call stack frame above the unsafe call? Thanks, Paul. > On Feb 14, 2018, at 3:32 AM, Adam Farley8 wrote: > > Hi All, > > Currently, diagnostic core files generated from OpenJDK seem to lump all > of the > native memory usages together, making it near-impossible for someone to > figure > out *what* is using all that memory in the event of a memory leak. > > The OpenJ9 VM has a feature which allows it to track the allocation of > native > memory for Direct Byte Buffers (DBBs), and to supply that information into > the > cores when they are generated. This makes it a *lot* easier to find out > what is using > all that native memory, making memory leak resolution less like some dark > art, and > more like logical debugging. > > To use this feature, there is a native method referenced in Unsafe.java. > To open > up this feature so that any VM can make use of it, the java code below > sets the > stage for it. This change starts letting people call DBB-specific methods > when > allocating native memory, and getting into the habit of using it. > > Thoughts? > > Best Regards > > Adam Farley > > P.S. 
Code: > > diff --git > a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > @@ -85,7 +85,7 @@ > // Paranoia > return; > } > - UNSAFE.freeMemory(address); > + UNSAFE.freeDBBMemory(address); > address = 0; > Bits.unreserveMemory(size, capacity); > } > @@ -118,7 +118,7 @@ > > long base = 0; > try { > - base = UNSAFE.allocateMemory(size); > + base = UNSAFE.allocateDBBMemory(size); > } catch (OutOfMemoryError x) { > Bits.unreserveMemory(size, cap); > throw x; > diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > @@ -632,6 +632,26 @@ > } > > /** > + * Allocates a new block of native memory for DirectByteBuffers, of > the > + * given size in bytes. The contents of the memory are > uninitialized; > + * they will generally be garbage. The resulting native pointer will > + * never be zero, and will be aligned for all value types. Dispose > of > + * this memory by calling {@link #freeDBBMemory} or resize it with > + * {@link #reallocateDBBMemory}. > + * > + * @throws RuntimeException if the size is negative or too large > + * for the native size_t type > + * > + * @throws OutOfMemoryError if the allocation is refused by the > system > + * > + * @see #getByte(long) > + * @see #putByte(long, byte) > + */ > + public long allocateDBBMemory(long bytes) { > + return allocateMemory(bytes); > + } > + > + /** > * Resizes a new block of native memory, to the given size in bytes. > The > * contents of the new block past the size of the old block are > * uninitialized; they will generally be garbage. 
The resulting > native > @@ -687,6 +707,27 @@ > } > > /** > + * Resizes a new block of native memory for DirectByteBuffers, to the > + * given size in bytes. The contents of the new block past the size > of > + * the old block are uninitialized; they will generally be garbage. > The > + * resulting native pointer will be zero if and only if the requested > size > + * is zero. The resulting native pointer will be aligned for all > value > + * types. Dispose of this memory by calling {@link #freeDBBMemory}, > or > + * resize it with {@link #reallocateDBBMemory}. The address passed > to > + * this method may be null, in which case an allocation will be > performed. > + * > + * @throws RuntimeException if the size is negative or too large > + * for the native size_t type > + * > + * @throws OutOfMemoryError if the allocation is refused by the > system > + * > + * @see #allocateDBBMemory > + */ > + public long reallocateDBBMemory(long address, long bytes) { > + return reallocateMemory(address, bytes); > + } > + > + /** > * Sets all bytes in a given block of memory to a fixed value > * (usually zero). > * > @@ -918,6 +959,17 @@ > checkPointer(null, address); > } > > + /** > + * Disposes of a block of native memory, as obtained from {@link > + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address > passed > + * to this method may be null, in which case no action is taken. > + * > + * @see #allocateDBBMemory > + */ > + public void freeDBBMemory(long address) { > + freeMemory(address); > + } > + > /// random queries > > /** > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From lois.foltan at oracle.com Fri Feb 16 20:07:45 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 15:07:45 -0500 Subject: (11) RFR (S) JDK-8197956: VS2017 (C4838) Narrowing conversion required from __int64 to julong Message-ID: <7847ae70-eaca-772e-1052-b66b1db69986@oracle.com> Please review this fix to use the correct typed constant when initializing the StubRoutines::x86::_k512_W array. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197956/webrev/ bug https://bugs.openjdk.java.net/browse/JDK-8197956 contributed-by: Kim Barrett & Lois Foltan Testing complete (hs-tier1-3, jdk-tier1-3) From coleen.phillimore at oracle.com Fri Feb 16 20:11:41 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 15:11:41 -0500 Subject: (11) RFR (S) JDK-8197956: VS2017 (C4838) Narrowing conversion required from __int64 to julong In-Reply-To: <7847ae70-eaca-772e-1052-b66b1db69986@oracle.com> References: <7847ae70-eaca-772e-1052-b66b1db69986@oracle.com> Message-ID: <657e9ad8-525c-fb82-5b9e-58b75c6fbbd1@oracle.com> This looks good, and trivial enough to push. Coleen On 2/16/18 3:07 PM, Lois Foltan wrote: > Please review this fix to use the correct typed constant when > initializing the StubRoutines::x86::_k512_W array. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197956/webrev/ > bug https://bugs.openjdk.java.net/browse/JDK-8197956 > contributed-by: Kim Barrett & Lois Foltan > > Testing complete (hs-tier1-3, jdk-tier1-3) From coleen.phillimore at oracle.com Fri Feb 16 20:12:32 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 15:12:32 -0500 Subject: (11) RFR (S) JDK-8197868: VS2017 (C2065) 'timezone': Undeclared Identifier in share/runtime/os.cpp In-Reply-To: References: Message-ID: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> This seems good. 
Coleen On 2/16/18 11:53 AM, Lois Foltan wrote: > Please review this change to use the functional version of > _get_timezone for VS2017. The global variable timezone has been > deprecated. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197868/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8197868 > contributed-by: Kim Barrett & Lois Foltan > > Testing: hs-tier(1-3), jdk-tier(1-3) complete > > Thanks, > Lois From jcbeyler at google.com Fri Feb 16 20:22:06 2018 From: jcbeyler at google.com (JC Beyler) Date: Fri, 16 Feb 2018 12:22:06 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: <2111b78e-8c80-c93d-188f-7ff973d1f604@oracle.com> References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <2111b78e-8c80-c93d-188f-7ff973d1f604@oracle.com> Message-ID: Hi Coleen, Done then I think correctly: http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.02/ Due diligence would require Derek White to potentially ack my latest webrev change for aarch64 of removing r5 from the spill/fill as he had requested it. Other than that, let me know what else is needed/missing and thanks for testing/sponsoring! Jc On Fri, Feb 16, 2018 at 10:30 AM, wrote: > > > On 2/16/18 12:06 PM, JC Beyler wrote: > > Answering all in one go :) > > I updated the webrev to: > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > I am guessing I need a tested ok from each architecture and not only a > looks good. If I am wrong, I apologize! If I am right, the current status > of each architecture is: > > - aarch64: I removed the use of r5 but left for now r19 to be > stored/loaded back, let me know what you think, Derek. 
> - ppc64 and s390 look good from Martin, missing a tested ok > - Sparc looks good from Coleen, missing a tested ok > - I removed the option from globals.h, let me know if that is correct, > it seems to be what is said from the comment above but I thought maybe we > had to wait until the version number got bumped and then move all flags out > of globals.hpp. > > - arm: missing a looks good and test > - x86: missing a looks good and test :) > > I'm happy to do as I did with https://bugs.openjdk.java.net/browse/JDK-8190862 and create subtasks if that will make it > easier on everyone. > > > Oh please, no. > > If you update your webrev with a commit message with the current reviewers > (including myself). I'll import and run final tests on the platforms > missing and if it passes, I'll sponsor and push. > > thanks, > Coleen > > Let me know, > Jc > > > On Fri, Feb 16, 2018 at 5:49 AM, wrote: > >> >> I agree with this change, and will sponsor it and test on sparc (and on >> oracle platforms). >> >> When you obsolete an option, I think you just remove it from globals.hpp >> and the code in arguments.cpp should tell you it's obsolete. >> >> Thanks, >> Coleen >> >> On 2/14/18 6:08 PM, JC Beyler wrote: >> >>> Hi all, >>> >>> Here is a webrev to do the work mentioned in JDK-8194084 >>> : >>> http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ >>> >>> It has the parts for each architecture and I can't test a lot of them so >>> I >>> would need a review and test for each :). I think first would be an >>> agreement to the code change itself then test it once everyone agrees on >>> the change ? >>> >>> Could I please get some initial reviews on this? 
>>> Basically what this webrev does is follow what the interpreter is saying: >>> - No longer try to do a fast tlab refill >>> - Try eden allocation if contiguous inline allocation is true >>> - Otherwise slowpath >>> >>> This is true for all architectures except: >>> - ppc, which doesn't do eden allocations, I just cleaned up the code >>> a >>> bit there to be consistent >>> - s390 that does not do tlab_refill at all, I just removed the dead >>> code >>> there. >>> >>> Thanks a lot for your help, >>> Jc >>> >> >> > > From lois.foltan at oracle.com Fri Feb 16 20:37:58 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 15:37:58 -0500 Subject: (11) RFR (S) JDK-8196884: VS2017 Multiple Type Cast Conversion Compilation Errors Message-ID: Please review this fix for multiple type cast conversion compilation errors. This fix includes a change to correctly type the symbolic constants badAddressVal and badOopVal in globalDefinitions.hpp and adjust numerous type cast conversions for the constant 0xdeadbeef. 
open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196884/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8196884 Suggestions in the bug to remove the questionable remaining use of badOopVal as well as introduce a BAD_PTR macro definition for 0xdeadbeef have been filed as RFEs: - Removal of BadOopVal https://bugs.openjdk.java.net/browse/JDK-8198308 - Introduce BAD_PTR https://bugs.openjdk.java.net/browse/JDK-8198309 Testing in progress (hs-tier1-3, jdk-tier1-3) Thanks, Lois From lois.foltan at oracle.com Fri Feb 16 20:40:23 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 15:40:23 -0500 Subject: (11) RFR (S) JDK-8197956: VS2017 (C4838) Narrowing conversion required from __int64 to julong In-Reply-To: <657e9ad8-525c-fb82-5b9e-58b75c6fbbd1@oracle.com> References: <7847ae70-eaca-772e-1052-b66b1db69986@oracle.com> <657e9ad8-525c-fb82-5b9e-58b75c6fbbd1@oracle.com> Message-ID: <0bf360be-6bf6-d6c8-2d7a-118f557b63e2@oracle.com> Thanks Coleen! On 2/16/2018 3:11 PM, coleen.phillimore at oracle.com wrote: > > This looks good, and trivial enough to push. > Coleen > > On 2/16/18 3:07 PM, Lois Foltan wrote: >> Please review this fix to use the correct typed constant when >> initializing the StubRoutines::x86::_k512_W array. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197956/webrev/ >> bug https://bugs.openjdk.java.net/browse/JDK-8197956 >> contributed-by: Kim Barrett & Lois Foltan >> >> Testing complete (hs-tier1-3, jdk-tier1-3) > From lois.foltan at oracle.com Fri Feb 16 20:40:42 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 15:40:42 -0500 Subject: (11) RFR (S) JDK-8197868: VS2017 (C2065) 'timezone': Undeclared Identifier in share/runtime/os.cpp In-Reply-To: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> References: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> Message-ID: <15325996-2abd-0d2e-4e42-cc1b46825f3f@oracle.com> Thanks Coleen! 
On 2/16/2018 3:12 PM, coleen.phillimore at oracle.com wrote: > This seems good. > Coleen > > On 2/16/18 11:53 AM, Lois Foltan wrote: >> Please review this change to use the functional version of >> _get_timezone for VS2017. The global variable timezone has been >> deprecated. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197868/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8197868 >> contributed-by: Kim Barrett & Lois Foltan >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois > From kim.barrett at oracle.com Fri Feb 16 20:58:26 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 16 Feb 2018 15:58:26 -0500 Subject: (11) RFR (S) JDK-8196880: VS2017 Addition of Global Delete Operator with Size Parameter Conflicts with Arena's Chunk Provided One In-Reply-To: <1fea637c-a147-38db-547c-7748da82a9cf@oracle.com> References: <98f8247b-6a24-b5c9-6d27-0c8597a63f1c@oracle.com> <1fea637c-a147-38db-547c-7748da82a9cf@oracle.com> Message-ID: <0EB21140-034B-4F26-91B7-3DCBAB041D99@oracle.com> > On Feb 16, 2018, at 1:31 PM, coleen.phillimore at oracle.com wrote: > > > > On 2/16/18 11:28 AM, Lois Foltan wrote: >> On 2/16/2018 8:43 AM, coleen.phillimore at oracle.com wrote: >> >>> >>> This is odd but seems ok. >>> >>> If you make the dummy operator delete private, does it still compile? If it is actually used, will it get a linktime error rather than a compile time error? >> Hi Coleen, >> Thanks for the review. Yes, making the operator delete private does work and I have updated the webrev. If it is actually used it will get a linktime error. >> >> Updated webrev at: http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880.1/webrev/ > > I like this. See what Kim says, I guess. If someone tries to call it they should get a compile time error first. I believe... 
> Coleen > >> Lois >> >>> >>> thanks, >>> Coleen >>> >>> On 2/14/18 8:48 AM, Lois Foltan wrote: >>>> Please review this change in VS2017 to the delete operator due to C++14 standard conformance. From https://msdn.microsoft.com/en-us/library/mt723604.aspx >>>> >>>> The function|void operator delete(void *, size_t)|was a placement delete operator corresponding to the placement new function "void * operator new(size_t, size_t)" in C++11. With C++14 sized deallocation, this delete function is now a/usual deallocation function/(global delete operator). The standard requires that if the use of a placement new looks up a corresponding delete function and finds a usual deallocation function, the program is ill-formed. >>>> >>>> Thank you to Kim Barrett for proposing the fix below. >>>> >>>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196880/webrev/ >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8196880 >>>> >>>> Testing complete (hs-tier1-3, jdk-tier1-3) >>>> >>>> Thanks, >>>> Lois From coleen.phillimore at oracle.com Fri Feb 16 21:03:11 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 16:03:11 -0500 Subject: (11) RFR (S) JDK-8196884: VS2017 Multiple Type Cast Conversion Compilation Errors In-Reply-To: References: Message-ID: These changes look good.? Agree that hopefully these can be replaced with some BAD_PTR in the future. Coleen On 2/16/18 3:37 PM, Lois Foltan wrote: > Please review this fix for multiple type cast conversion compilation > errors.? This fix includes a change to correctly type the symbolic > constants badAddressVal and badOopVal in globalDefinitions.hpp and > adjust numerous type cast conversions for the constant 0xdeadbeef. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196884/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196884 > > Suggestions in the bug to remove the questionable remaining use of > badOopVal as well as introduce a BAD_PTR macro definition for > 0xdeadbeef have been filed as RFEs: > - Removal of BadOopVal https://bugs.openjdk.java.net/browse/JDK-8198308 > - Introduce BAD_PTR https://bugs.openjdk.java.net/browse/JDK-8198309 > > Testing in progress (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois From lois.foltan at oracle.com Fri Feb 16 21:18:21 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 16 Feb 2018 16:18:21 -0500 Subject: (11) RFR (S) JDK-8196884: VS2017 Multiple Type Cast Conversion Compilation Errors In-Reply-To: References: Message-ID: Thanks Coleen! Lois On 2/16/2018 4:03 PM, coleen.phillimore at oracle.com wrote: > > These changes look good. Agree that hopefully these can be replaced > with some BAD_PTR in the future. > Coleen > > On 2/16/18 3:37 PM, Lois Foltan wrote: >> Please review this fix for multiple type cast conversion compilation >> errors. This fix includes a change to correctly type the symbolic >> constants badAddressVal and badOopVal in globalDefinitions.hpp and >> adjust numerous type cast conversions for the constant 0xdeadbeef.
>> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8196884/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196884 >> >> Suggestions in the bug to remove the questionable remaining use of >> badOopVal as well as introduce a BAD_PTR macro definition for >> 0xdeadbeef have been filed as RFEs: >> - Removal of BadOopVal https://bugs.openjdk.java.net/browse/JDK-8198308 >> - Introduce BAD_PTR https://bugs.openjdk.java.net/browse/JDK-8198309 >> >> Testing in progress (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois > From coleen.phillimore at oracle.com Fri Feb 16 23:14:07 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 18:14:07 -0500 Subject: RFR (XS) 8182847: Copy class should use assert macros Message-ID: open webrev at http://cr.openjdk.java.net/~coleenp/8182847.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8182847 Tested with tier1-4 on all Oracle platforms in mach5. Also tested failure case with temporary change (see bug for details). Thanks, Coleen From coleen.phillimore at oracle.com Fri Feb 16 23:09:05 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 18:09:05 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free Message-ID: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Summary: fix two places that improperly use global malloc and free Tested by tier1 in mach5 which includes aot tests. open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8198311 Thanks, Coleen From vladimir.kozlov at oracle.com Fri Feb 16 23:36:21 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 16 Feb 2018 15:36:21 -0800 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Message-ID: Good.
Thanks, Vladimir On 2/16/18 3:09 PM, coleen.phillimore at oracle.com wrote: > Summary: fix two places that improperly use global malloc and free > > Tested by tier1 in mach5 which includes aot tests. > > open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198311 > > Thanks, > Coleen > > From coleen.phillimore at oracle.com Fri Feb 16 23:48:39 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Feb 2018 18:48:39 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Message-ID: <771f7c8f-d5c3-0946-9df9-57d4dfdce2d1@oracle.com> Thanks! Coleen On 2/16/18 6:36 PM, Vladimir Kozlov wrote: > Good. > > Thanks, > Vladimir > > On 2/16/18 3:09 PM, coleen.phillimore at oracle.com wrote: >> Summary: fix two places that improperly use global malloc and free >> >> Tested by tier1 in mach5 which includes aot tests. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198311 >> >> Thanks, >> Coleen >> >> From kim.barrett at oracle.com Fri Feb 16 23:57:33 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 16 Feb 2018 18:57:33 -0500 Subject: RFR (XS) 8182847: Copy class should use assert macros In-Reply-To: References: Message-ID: <96213A4A-E82C-47B6-97E6-FC91775180C9@oracle.com> > On Feb 16, 2018, at 6:14 PM, coleen.phillimore at oracle.com wrote: > > open webrev at http://cr.openjdk.java.net/~coleenp/8182847.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182847 > > Tested with tier1-4 on all Oracle platforms in mach5. Also tested failure case with temporary change (see bug for details). > > Thanks, > Coleen Looks good. From daniel.daugherty at oracle.com Sat Feb 17 01:44:49 2018 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Fri, 16 Feb 2018 20:44:49 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Message-ID: <515a4e48-6506-76e1-1a97-460334a759fb@oracle.com> On 2/16/18 6:09 PM, coleen.phillimore at oracle.com wrote: > Summary: fix two places that improperly use global malloc and free > > Tested by tier1 in mach5 which includes aot tests. > > open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev src/hotspot/share/aot/aotCodeHeap.cpp No comments. src/hotspot/share/runtime/objectMonitor.cpp No comments. Thumbs up. Dan > bug link https://bugs.openjdk.java.net/browse/JDK-8198311 > > Thanks, > Coleen > > From zgu at redhat.com Sat Feb 17 03:03:12 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 16 Feb 2018 22:03:12 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Message-ID: <25f1418f-5419-1f61-842a-ecd24a2ea21a@redhat.com> Looks good. -Zhengyu On 02/16/2018 06:09 PM, coleen.phillimore at oracle.com wrote: > Summary: fix two places that improperly use global malloc and free > > Tested by tier1 in mach5 which includes aot tests.
> > open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198311 > > Thanks, > Coleen > > From kim.barrett at oracle.com Fri Feb 16 23:55:12 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 16 Feb 2018 18:55:12 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> Message-ID: <7AB9F63D-31BF-49F5-A173-CB6E512D4DF1@oracle.com> > On Feb 16, 2018, at 6:09 PM, coleen.phillimore at oracle.com wrote: > > Summary: fix two places that improperly use global malloc and free > > Tested by tier1 in mach5 which includes aot tests. > > open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198311 > > Thanks, > Coleen Looks good. From thomas.schatzl at oracle.com Sat Feb 17 13:40:23 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Sat, 17 Feb 2018 14:40:23 +0100 Subject: RFR (XS) 8182847: Copy class should use assert macros In-Reply-To: References: Message-ID: <1518874823.8560.0.camel@oracle.com> Hi, On Fri, 2018-02-16 at 18:14 -0500, coleen.phillimore at oracle.com wrote: > open webrev at http://cr.openjdk.java.net/~coleenp/8182847.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8182847 > > Tested with tier1-4 on all Oracle platforms in mach5. Also tested > failure case with temporary change (see bug for details). looks good. Thanks for cleaning this up. 
Thomas From kim.barrett at oracle.com Sat Feb 17 17:29:52 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 17 Feb 2018 12:29:52 -0500 Subject: (11) RFR (S) JDK-8196884: VS2017 Multiple Type Cast Conversion Compilation Errors In-Reply-To: References: Message-ID: <71A63721-3067-4E34-9E25-C50D6021912B@oracle.com> > On Feb 16, 2018, at 3:37 PM, Lois Foltan wrote: > > Please review this fix for multiple type cast conversion compilation errors. This fix includes a change to correctly type the symbolic constants badAddressVal and badOopVal in globalDefinitions.hpp and adjust numerous type cast conversions for the constant 0xdeadbeef. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196884/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8196884 > > Suggestions in the bug to remove the questionable remaining use of badOopVal as well as introduce a BAD_PTR macro definition for 0xdeadbeef have been filed as RFEs: > - Removal of BadOopVal https://bugs.openjdk.java.net/browse/JDK-8198308 > - Introduce BAD_PTR https://bugs.openjdk.java.net/browse/JDK-8198309 > > Testing in progress (hs-tier1-3, jdk-tier1-3) > > Thanks, > Lois Looks good, given JDK-8198309 as a trailing cleanup. From coleen.phillimore at oracle.com Sun Feb 18 18:29:59 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Sun, 18 Feb 2018 13:29:59 -0500 Subject: RFR (XS) 8182847: Copy class should use assert macros In-Reply-To: <1518874823.8560.0.camel@oracle.com> References: <1518874823.8560.0.camel@oracle.com> Message-ID: Thanks Thomas and Kim. Coleen On 2/17/18 8:40 AM, Thomas Schatzl wrote: > Hi, > > On Fri, 2018-02-16 at 18:14 -0500, coleen.phillimore at oracle.com wrote: >> open webrev at http://cr.openjdk.java.net/~coleenp/8182847.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8182847 >> >> Tested with tier1-4 on all Oracle platforms in mach5. Also tested >> failure case with temporary change (see bug for details).
> looks good. Thanks for cleaning this up. > > Thomas > From dmitry.samersoff at bell-sw.com Sun Feb 18 18:31:53 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersoff) Date: Sun, 18 Feb 2018 21:31:53 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <5A82408B.7070001@oracle.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> Message-ID: Mikhailo, Here are the changes rebased to recent sources. http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ Could you sponsor the push? -Dmitry On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: > Changes look good from my point of view. > > Misha > > On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >> Everybody, >> >> Please review small changes that enable docker testing on Linux/AArch64 >> >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >> >> PS: >> >> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >> readable, please check that it doesn't break your work. >> >> -Dmitry >> >> -- >> Dmitry Samersoff >> http://devnull.samersoff.net >> * There will come soft rains ... From coleen.phillimore at oracle.com Sun Feb 18 18:36:33 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Sun, 18 Feb 2018 13:36:33 -0500 Subject: RFR (XS) 8198311: Avoid uses of global malloc and free In-Reply-To: <25f1418f-5419-1f61-842a-ecd24a2ea21a@redhat.com> References: <06a3a669-d357-1b97-4ae6-a84d3304b407@oracle.com> <25f1418f-5419-1f61-842a-ecd24a2ea21a@redhat.com> Message-ID: <1dd7f82f-0f65-b228-c3a8-c6aa4b318943@oracle.com> Thank you Vladimir, Kim, Dan and Zhengyu. Coleen On 2/16/18 10:03 PM, Zhengyu Gu wrote: > Looks good. > > -Zhengyu > > On 02/16/2018 06:09 PM, coleen.phillimore at oracle.com wrote: >> Summary: fix two places that improperly use global malloc and free >> >> Tested by tier1 in mach5 which includes aot tests.
>> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198311.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198311 >> >> Thanks, >> Coleen >> >> From shade at redhat.com Sun Feb 18 20:55:07 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Sun, 18 Feb 2018 21:55:07 +0100 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: On 02/16/2018 06:06 PM, JC Beyler wrote: > I updated the webrev to: > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > - arm: missing a looks good and test > - x86: missing a looks good and test :) Well, current jdk/hs builds do not look very good ;) https://bugs.openjdk.java.net/browse/JDK-8198341 JC, can you eyeball the patch for these kinds of failures on other platforms and follow up? Thanks, -Aleksey From jcbeyler at google.com Sun Feb 18 22:38:56 2018 From: jcbeyler at google.com (JC Beyler) Date: Sun, 18 Feb 2018 14:38:56 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: I just checked and it seems good to me for the other architectures but as I say in the bug, my track record for this fix is not so good ;-) Jc On Sun, Feb 18, 2018 at 12:55 PM, Aleksey Shipilev wrote: > On 02/16/2018 06:06 PM, JC Beyler wrote: > > I updated the webrev to: > > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ > > > > - arm: missing a looks good and test > > - x86: missing a looks good and test :) > > Well, current jdk/hs builds do not look very good ;) > https://bugs.openjdk.java.net/browse/JDK-8198341 > > JC, can you eyeball the patch for these kinds of failures on other > platforms and follow up? 
> > Thanks, > -Aleksey > > From shade at redhat.com Mon Feb 19 08:55:27 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 19 Feb 2018 09:55:27 +0100 Subject: RFR: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code Message-ID: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> https://bugs.openjdk.java.net/browse/JDK-8198341 Trivial patch: http://cr.openjdk.java.net/~shade/8198341/fixes.patch Not sure if other platforms are affected (they seem to be not). Hey, SAP folks, does current jdk/hs build for you? Also, not very sure if I need a sponsor for this. Testing: cross-compiled builds on x86_32 and aarch64 Thanks, -Aleksey From martin.doerr at sap.com Mon Feb 19 09:33:42 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 19 Feb 2018 09:33:42 +0000 Subject: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> References: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> Message-ID: Hi Aleksey, thanks for posting the fix. Our x86_32 build is broken. (We don't have aarch64.) I'll try your fix. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Aleksey Shipilev Sent: Montag, 19. Februar 2018 09:55 To: hotspot-dev at openjdk.java.net Subject: RFR: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code https://bugs.openjdk.java.net/browse/JDK-8198341 Trivial patch: http://cr.openjdk.java.net/~shade/8198341/fixes.patch Not sure if other platforms are affected (they seem to be not). Hey, SAP folks, does current jdk/hs build for you? Also, not very sure if I need a sponsor for this. 
Testing: cross-compiled builds on x86_32 and aarch64 Thanks, -Aleksey From martin.doerr at sap.com Mon Feb 19 11:32:16 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 19 Feb 2018 11:32:16 +0000 Subject: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> Message-ID: Hi Aleksey, your fix is good. Reviewed and successfully tested on Windows 32 bit (and Windows 64 bit for safety, but 64 bit is not affected). So if you have tested linux 32 bit and aarch64 it should be safe to push it. Other platforms look good. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Doerr, Martin Sent: Montag, 19. Februar 2018 10:34 To: Aleksey Shipilev ; hotspot-dev at openjdk.java.net Subject: RE: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code Hi Aleksey, thanks for posting the fix. Our x86_32 build is broken. (We don't have aarch64.) I'll try your fix. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Aleksey Shipilev Sent: Montag, 19. Februar 2018 09:55 To: hotspot-dev at openjdk.java.net Subject: RFR: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code https://bugs.openjdk.java.net/browse/JDK-8198341 Trivial patch: http://cr.openjdk.java.net/~shade/8198341/fixes.patch Not sure if other platforms are affected (they seem to be not). Hey, SAP folks, does current jdk/hs build for you? Also, not very sure if I need a sponsor for this. 
Testing: cross-compiled builds on x86_32 and aarch64 Thanks, -Aleksey From shade at redhat.com Mon Feb 19 11:33:34 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 19 Feb 2018 12:33:34 +0100 Subject: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> Message-ID: On 02/19/2018 12:32 PM, Doerr, Martin wrote: > your fix is good. Reviewed and successfully tested on Windows 32 bit (and Windows 64 bit for safety, but 64 bit is not affected). > So if you have tested linux 32 bit and aarch64 it should be safe to push it. > > Other platforms look good. Ok, thanks for testing! I am blurry on the process though: am I allowed to push this directly, without the Oracle sponsor? Thanks, -Aleksey From david.holmes at oracle.com Mon Feb 19 12:41:37 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 19 Feb 2018 22:41:37 +1000 Subject: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> Message-ID: On 19/02/2018 9:33 PM, Aleksey Shipilev wrote: > On 02/19/2018 12:32 PM, Doerr, Martin wrote: >> your fix is good. Reviewed and successfully tested on Windows 32 bit (and Windows 64 bit for safety, but 64 bit is not affected). >> So if you have tested linux 32 bit and aarch64 it should be safe to push it. >> >> Other platforms look good. > > Ok, thanks for testing! > > I am blurry on the process though: am I allowed to push this directly, without the Oracle sponsor? Given this only affects platforms we do not build or test I'd say you are fine to push it. 
David > Thanks, > -Aleksey > From adam.farley at uk.ibm.com Mon Feb 19 13:08:01 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Mon, 19 Feb 2018 13:08:01 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> Message-ID: Hi Paul, > Hi Adam, > > From reading the thread i cannot tell if this is part of a wider solution including some yet to be proposed HotSpot changes. The wider solution would need to include some Hotspot changes, yes. I'm proposing raising a bug, committing the code we have here to "set the stage", and then we can invest more time&energy later if the concept goes down well and the community agrees to pursue the full solution. As an aside, I tried submitting a big code set (including hotspot changes) months ago, and I'm *still* struggling to find someone to commit the thing, so I figured I'd try a more gradual, staged approach this time. > > As is i would be resistant to adding such standalone internal wrapper methods to Unsafe that have no apparent benefit within the OpenJDK itself since it's a maintenance burden. I'm hoping the fact that the methods are a single line (sans comments, descriptors and curly braces) will minimise this burden. > > Can you determine if the calls to UNSAFE.freeMemory/allocateMemory come from a DBB by looking at the call stack frame above the unsafe call? > > Thanks, > Paul. Yes that is possible, though I would advise against this because: A) Checking the call stack is expensive, and doing this every time we allocate native memory is an easy way to slow down a program, or rack up mips. and B) deciding which code path we're using based on the stack means the DBB class+method (and anything the parsing code mistakes for that class+method) can only ever allocate native memory for DBBs. What do you think? 
Best Regards Adam Farley > >> On Feb 14, 2018, at 3:32 AM, Adam Farley8 wrote: >> >> Hi All, >> >> Currently, diagnostic core files generated from OpenJDK seem to lump all >> of the >> native memory usages together, making it near-impossible for someone to >> figure >> out *what* is using all that memory in the event of a memory leak. >> >> The OpenJ9 VM has a feature which allows it to track the allocation of >> native >> memory for Direct Byte Buffers (DBBs), and to supply that information into >> the >> cores when they are generated. This makes it a *lot* easier to find out >> what is using >> all that native memory, making memory leak resolution less like some dark >> art, and >> more like logical debugging. >> >> To use this feature, there is a native method referenced in Unsafe.java. >> To open >> up this feature so that any VM can make use of it, the java code below >> sets the >> stage for it. This change starts letting people call DBB-specific methods >> when >> allocating native memory, and getting into the habit of using it. >> >> Thoughts? >> >> Best Regards >> >> Adam Farley >> >> P.S. 
Code: >> >> diff --git >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> @@ -85,7 +85,7 @@ >> // Paranoia >> return; >> } >> - UNSAFE.freeMemory(address); >> + UNSAFE.freeDBBMemory(address); >> address = 0; >> Bits.unreserveMemory(size, capacity); >> } >> @@ -118,7 +118,7 @@ >> >> long base = 0; >> try { >> - base = UNSAFE.allocateMemory(size); >> + base = UNSAFE.allocateDBBMemory(size); >> } catch (OutOfMemoryError x) { >> Bits.unreserveMemory(size, cap); >> throw x; >> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> @@ -632,6 +632,26 @@ >> } >> >> /** >> + * Allocates a new block of native memory for DirectByteBuffers, of >> the >> + * given size in bytes. The contents of the memory are >> uninitialized; >> + * they will generally be garbage. The resulting native pointer will >> + * never be zero, and will be aligned for all value types. Dispose >> of >> + * this memory by calling {@link #freeDBBMemory} or resize it with >> + * {@link #reallocateDBBMemory}. >> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #getByte(long) >> + * @see #putByte(long, byte) >> + */ >> + public long allocateDBBMemory(long bytes) { >> + return allocateMemory(bytes); >> + } >> + >> + /** >> * Resizes a new block of native memory, to the given size in bytes. 
>> The >> * contents of the new block past the size of the old block are >> * uninitialized; they will generally be garbage. The resulting >> native >> @@ -687,6 +707,27 @@ >> } >> >> /** >> + * Resizes a new block of native memory for DirectByteBuffers, to the >> + * given size in bytes. The contents of the new block past the size >> of >> + * the old block are uninitialized; they will generally be garbage. >> The >> + * resulting native pointer will be zero if and only if the requested >> size >> + * is zero. The resulting native pointer will be aligned for all >> value >> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >> or >> + * resize it with {@link #reallocateDBBMemory}. The address passed >> to >> + * this method may be null, in which case an allocation will be >> performed. >> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #allocateDBBMemory >> + */ >> + public long reallocateDBBMemory(long address, long bytes) { >> + return reallocateMemory(address, bytes); >> + } >> + >> + /** >> * Sets all bytes in a given block of memory to a fixed value >> * (usually zero). >> * >> @@ -918,6 +959,17 @@ >> checkPointer(null, address); >> } >> >> + /** >> + * Disposes of a block of native memory, as obtained from {@link >> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address >> passed >> + * to this method may be null, in which case no action is taken. >> + * >> + * @see #allocateDBBMemory >> + */ >> + public void freeDBBMemory(long address) { >> + freeMemory(address); >> + } >> + >> /// random queries >> >> /** >> >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. 
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From lois.foltan at oracle.com Mon Feb 19 13:46:32 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 19 Feb 2018 08:46:32 -0500 Subject: (11) RFR (S) JDK-8196884: VS2017 Multiple Type Cast Conversion Compilation Errors In-Reply-To: <71A63721-3067-4E34-9E25-C50D6021912B@oracle.com> References: <71A63721-3067-4E34-9E25-C50D6021912B@oracle.com> Message-ID: <3964dca3-c635-6e7c-659d-c95c4e50a31f@oracle.com> Thanks Kim for the review! Lois On 2/17/2018 12:29 PM, Kim Barrett wrote: >> On Feb 16, 2018, at 3:37 PM, Lois Foltan wrote: >> >> Please review this fix for multiple type cast conversion compilation errors. This fix includes a change to correctly type the symbolic constants badAddressVal and badOopVal in globalDefinitions.hpp and adjust numerous type cast conversions for the constant 0xdeadbeef. >> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8196884/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8196884 >> >> Suggestions in the bug to remove the questionable remaining use of badOopVal as well as introduce a BAD_PTR macro definition for 0xdeadbeef have been filed as RFEs: >> - Removal of BadOopVal https://bugs.openjdk.java.net/browse/JDK-8198308 >> - Introduce BAD_PTR https://bugs.openjdk.java.net/browse/JDK-8198309 >> >> Testing in progress (hs-tier1-3, jdk-tier1-3) >> >> Thanks, >> Lois > Looks good, given JDK-8196309 as a trailing cleanup. > From goetz.lindenmaier at sap.com Mon Feb 19 14:05:16 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 19 Feb 2018 14:05:16 +0000 Subject: Proposal for improvements to the metaspace chunk allocator In-Reply-To: References: Message-ID: Hi Thomas, thanks for posting this change. 
I think it will help a lot, especially with the class space. I agree that this does not necessarily require a JEP. So could you please open a bug and post a RFR to hotspot-runtime-dev? Thanks for the thorough documentation; the code is easy to understand that way! Maybe put the text from your mail into the bug? It's very helpful and easier to locate there than in the mail archive. My comments: take_from_committed(): Do I understand correctly that this only takes the next needed piece of memory? And because, if the size passed to the current call is bigger than that of the last call, the alignment must be fixed, you add what you call padding? Is this also called for humongous chunks? If not, for simplicity, I would have implemented this by just taking the next medium chunk (which would always be aligned), splitting it into the needed size, and adding all the rest to the corresponding free lists. But no change needed here, I just want to understand. (Probably this is not feasible because the humongous ones are not aligned to the medium chunk size...) I think the naming "padding chunks" is a bit misleading. It sounds as if the chunks would be wasted, but as they are added to the free lists they are not lost. dict.leo gives "offcut" for "Verschnitt" ... not a word common to me, but at least the German translation and the word-for-word translation better fit the situation, I think. Feel free to keep it as is, though. In your mail you discuss the additional fields you add. In case adding _is_class to metachunk is considered a problem (I don't think so), can't you compute the property "is_class()" by comparing the metachunk address with the possible range of the compressed class space? These 3GB are only reserved for the class space ... TestVirtualSpaceNode_test() is empty. Maybe remove it altogether? A lot of the methods are passed 'true' or 'false' to indicate whether it is for the class or metaspace manager.
Maybe you could define enum is_class and is_metaspace or the like, to make these calls more self-explanatory? Minor nit: as you anyway normalize #defines to ASSERT, you might want to fix the remaining two or three #defines in metaspace.cpp from PRODUCT to ASSERT/DEBUG, too. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Thomas Stüfe Sent: Thursday, February 8, 2018 12:58 PM To: HotSpot Open Source Developers Subject: RFR: Proposal for improvements to the metaspace chunk allocator Hi, We would like to contribute a patch developed at SAP which has been live in our VM for some time. It improves the metaspace chunk allocation: reduces fragmentation and raises the chance of reusing free metaspace chunks. The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-05--2/webrev/ In short, this patch helps with a number of pathological cases where metaspace chunks are free but cannot be reused because they are of the wrong size. For example, the metaspace freelist could be full of small chunks, which would not be reusable if we need larger chunks. So, we could get metaspace OOMs even in situations where the metaspace was far from exhausted. Our patch adds the ability to split and merge metaspace chunks dynamically and thus remove the "size-lock-in" problem. Note that there have been other attempts to get a grip on this problem, see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably our patch attempts a more complete solution. In 2016 I discussed the idea for this patch with some folks off-list, among them Jon Matsimutso. He then advised me to create a JEP. So I did: [1]. However, meanwhile changes to the JEP process were discussed [2], and I am not sure anymore this patch even needs a JEP.
It may be moderately complex and hence carries the risk inherent in any patch, but its effects would not be externally visible (if you discount seeing fewer metaspace OOMs). So, I'd prefer to handle this as a simple RFE. -- How this patch works: 1) When a class loader dies, its metaspace chunks are freed and returned to the freelist for reuse by the next class loader. With the patch, upon returning a chunk to the freelist, an attempt is made to merge it with its neighboring chunks - should they happen to be free too - to form a larger chunk, which is then placed in the free list. As a result, the freelist should be populated by larger chunks at the expense of smaller chunks. In other words, all free chunks should always be as "coalesced as possible". 2) When a class loader needs a new chunk and a chunk of the requested size cannot be found in the free list, before carving out a new chunk from the virtual space, we first check if there is a larger chunk in the free list. If there is, that larger chunk is chopped up into n smaller chunks. One of them is returned to the caller, the others are re-added to the freelist. (1) and (2) together have the effect of removing the size-lock-in for chunks. If fragmentation allows it, small chunks are dynamically combined to form larger chunks, and larger chunks are split on demand. -- What this patch does not: This is not a rewrite of the chunk allocator - most of the mechanisms stay intact. Specifically, chunk sizes remain unchanged, and so do chunk allocation processes (when which class loaders get handed which chunk size). Almost everything this patch does affects only internal workings of the ChunkManager. Also note that I refrained from doing any cleanups, since I wanted reviewers to be able to gauge this patch without filtering noise. Unfortunately this patch adds some complexity. But there are many future opportunities for code cleanup and simplification, some of which we already discussed in existing RFEs ([3], [4]).
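The split step (2) described above can be modeled with a small sketch. Java is used here purely for illustration - the real implementation is HotSpot C++ - and every class name, method name, and size below is an invented stand-in, not actual ChunkManager code; only the 1:2:32 specialized/small/medium ratio is taken from the mail.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hedged model of split-on-demand: if no chunk of the requested size is
// free, chop up a larger free chunk instead of carving a new one from the
// virtual space. Addresses are plain word offsets for simplicity.
public class ChunkSplitDemo {
    // word sizes with the 1:2:32 specialized/small/medium class-space ratio
    static final int SPECIALIZED = 128, SMALL = 256, MEDIUM = 4096;

    final Map<Integer, Deque<Long>> freelist = new HashMap<>();

    void addFree(int size, long addr) {
        freelist.computeIfAbsent(size, k -> new ArrayDeque<>()).add(addr);
    }

    // Hand out a free chunk of 'size' words, splitting a larger one if needed.
    Long allocate(int size) {
        Deque<Long> q = freelist.get(size);
        if (q != null && !q.isEmpty()) return q.poll();
        for (int larger : new int[]{SMALL, MEDIUM}) {
            if (larger <= size) continue;
            Deque<Long> lq = freelist.get(larger);
            if (lq == null || lq.isEmpty()) continue;
            long base = lq.poll();
            // re-add the tail pieces of the chopped-up chunk to the freelist
            for (long a = base + size; a < base + larger; a += size) {
                addFree(size, a);
            }
            return base; // the caller gets the first piece
        }
        return null; // would fall back to carving from the VirtualSpaceNode
    }
}
```

With one free medium chunk, a request for a small chunk returns the head piece and re-adds the remaining 15 small-sized pieces to the freelist.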
All of them are out of the scope for this particular patch. -- Details: Before the patch, the following rules held: - All chunk sizes are multiples of the smallest chunk size ("specialized chunks") - All chunk sizes of larger chunks are also clean multiples of the next smaller chunk size (e.g. for class space, the ratio of specialized/small/medium chunks is 1:2:32) - All chunk start addresses are aligned to the smallest chunk size (more or less accidentally, see metaspace_reserve_alignment). The patch makes the last rule explicit and more strict: - All (non-humongous) chunk start addresses are now aligned to their own chunk size. So, e.g. medium chunks are allocated at addresses which are a multiple of medium chunk size. This rule is not extended to humongous chunks, whose start addresses continue to be aligned to the smallest chunk size. The reason for this new alignment rule is that it makes it cheap both to find chunk predecessors of a chunk and to check which chunks are free. When a class loader dies and its chunk is returned to the freelist, all we have is its address. In order to merge it with its neighbors to form a larger chunk, we need to find those neighbors, including those preceding the returned chunk. Prior to this patch that was not easy - one would have to iterate chunks starting at the beginning of the VirtualSpaceNode. But due to the new alignment rule, we now know where the prospective larger chunk must start - at the next lower larger-chunk-size-aligned boundary. We also know that currently a smaller chunk must start there (*). In order to check the free-ness of chunks quickly, each VirtualSpaceNode now keeps a bitmap which describes its occupancy. One bit in this bitmap corresponds to a range the size of the smallest chunk size and starting at an address aligned to the smallest chunk size. Because of the alignment rules above, such a range belongs to one single chunk. 
The bit is 1 if the associated chunk is in use by a class loader, 0 if it is free. When we have calculated the address range a prospective larger chunk would span, we now need to check if all chunks in that range are free. Only then we can merge them. We do that by querying the bitmap. Note that the most common use case here is forming medium chunks from smaller chunks. With the new alignment rules, the bitmap portion covering a medium chunk now always happens to be 16- or 32bit in size and is 16- or 32bit aligned, so reading the bitmap in many cases becomes a simple 16- or 32bit load. If the range is free, only then we need to iterate the chunks in that range: pull them from the freelist, combine them to one new larger chunk, re-add that one to the freelist. (*) Humongous chunks make this a bit more complicated. Since the new alignment rule does not extend to them, a humongous chunk could still straddle the lower or upper boundary of the prospective larger chunk. So I gave the occupancy map a second layer, which is used to mark the start of chunks. An alternative approach could have been to make humongous chunks size and start address always a multiple of the largest non-humongous chunk size (medium chunks). That would have caused a bit of waste per humongous chunk (<64K) in exchange for simpler coding and a simpler occupancy map. -- The patch shows its best results in scenarios where a lot of smallish class loaders are alive simultaneously. When dying, they leave continuous expanses of metaspace covered in small chunks, which can be merged nicely. However, if class loader life times vary more, we have more interleaving of dead and alive small chunks, and hence chunk merging does not work as well as it could. 
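[Editorially, the occupancy-bitmap query described above can be sketched in a few lines of standalone C++. This is an illustrative toy only - the class name and methods below are invented for the sketch and are not the actual HotSpot OccupancyMap API: one bit per smallest-chunk-sized region, and a range query that must see all zeros before the chunks in that range may be merged.]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy sketch of the occupancy bitmap described above -- names are
// invented for illustration, not the actual HotSpot OccupancyMap API.
// One bit per smallest-chunk-sized region: 1 = in use, 0 = free.
class ToyOccupancyMap {
  std::vector<uint32_t> _words;  // bitmap storage, 32 regions per word
public:
  explicit ToyOccupancyMap(size_t num_regions)
    : _words((num_regions + 31) / 32, 0) {}

  void set_in_use(size_t idx, bool in_use) {
    if (in_use) { _words[idx / 32] |=  (1u << (idx % 32)); }
    else        { _words[idx / 32] &= ~(1u << (idx % 32)); }
  }

  // Are all regions in [start, start + len) free? Only then may the
  // chunks in that range be pulled from the freelist and merged.
  // Because of the alignment rules, the bitmap portion covering a
  // prospective medium chunk is 16 or 32 bits and aligned, so in the
  // real code this often degenerates to a single 16- or 32-bit load.
  bool range_is_free(size_t start, size_t len) const {
    for (size_t i = start; i < start + len; i++) {
      if (_words[i / 32] & (1u << (i % 32))) { return false; }
    }
    return true;
  }
};
```

The second layer mentioned for humongous chunk starts would simply be a second such bitmap alongside this one.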
For an example of a pathological case like this, see the example program: [5] Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 test3.Example2" the test will load 3000 small classes in separate class loaders, then throw them away and start loading large classes. The small classes will have flooded the metaspace with small chunks, which are unusable for the large classes. When executing with the rather limited CompressedClassSpaceSize=10M, we will run into an OOM after loading about 800 large classes, having used only 40% of the class space; the rest is wasted on unused small chunks. However, with our patch the example program will manage to allocate ~2900 large classes before running into an OOM, and class space will show almost no waste. To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into an OOM, statistics and an ASCII representation of the class space will be shown. The unpatched version will show large expanses of unused small chunks, the patched variant will show almost no waste. Note that the patch could be made more effective with a different size ratio between small and medium chunks: in class space, that ratio is 1:16, so 16 small chunks must happen to be free to form one larger chunk. With a smaller ratio the chance for coalescation would be larger. So there may be room for future improvement here: Since we now can merge and split chunks on demand, we could introduce more chunk sizes. Potentially arriving at a buddy-ish allocator style where we drop hard-wired chunk sizes for a dynamic model where the ratio between chunk sizes is always 1:2 and we could in theory have no limit to the chunk size? But this is just a thought and well out of the scope of this patch. -- What does this patch cost (memory): - the occupancy bitmap adds 1 byte per 4K metaspace. - MetaChunk headers get larger, since we add an enum and two bools to it.
Depending on what the C++ compiler does with that, chunk headers grow by one or two MetaWords, reducing the payload size by that amount. - The new alignment rules mean we may need to create padding chunks to precede larger chunks. But since these padding chunks are added to the freelist, they should be used up before the need for new padding chunks arises. So, the maximum possible number of unused padding chunks should be limited by design to about 64K. The expectation is that the memory savings by this patch far outweigh its added memory costs. .. (performance): We did not see measurable drops in standard benchmarks rising above the normal noise. I also measured times for a program which stresses metaspace chunk coalescation, with the same result. I am open to suggestions on what else I should measure, and/or independent measurements. -- Other details: I removed SpaceManager::get_small_chunk_and_allocate() to reduce complexity somewhat, because it was made mostly obsolete by this patch: since small chunks are combined to larger chunks upon return to the freelist, in theory we should not have that many free small chunks anymore anyway. However, there may still be cases where we could benefit from this workaround, so I am asking your opinion on this one. About tests: There were two native tests - ChunkManagerReturnTest and TestVirtualSpaceNode (the former was added by me last year) - which did not make much sense anymore, since they relied heavily on internal behavior which was made unpredictable with this patch. To make up for these lost tests, I added a new gtest which attempts to stress the many combinations of allocation patterns but does so from a layer above the old tests. It now uses Metaspace::allocate() and friends. By using that point as entry for tests, I am less dependent on implementation internals and still cover a lot of scenarios.
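[As an editorial aside, the two pieces of arithmetic the description above relies on - finding the aligned start of a prospective larger chunk, and splitting a larger free chunk into allowed smaller sizes - can be sketched standalone. The names, the greedy remainder decomposition, and the 1:2:32 unit sizes (the class-space specialized:small:medium ratio stated earlier) are used here for illustration only; this is not the actual ChunkManager code.]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Chunk sizes in units of the smallest ("specialized") chunk; the
// class-space ratio specialized:small:medium is 1:2:32 as described
// above. Illustrative sketch only, not the real ChunkManager.
static const size_t kChunkSizes[] = { 32, 2, 1 };  // descending

// With chunks aligned to their own (power-of-two) size, the prospective
// larger chunk containing 'addr' starts at the next lower boundary
// aligned to that larger size.
uintptr_t prospective_larger_chunk_start(uintptr_t addr, uintptr_t larger_size) {
  return addr & ~(larger_size - 1);
}

// Split-on-demand: satisfy a request of 'requested' units from a larger
// free chunk; the remainder would be re-added to the freelist as chunks
// of allowed sizes (greedy, largest first, in this toy model).
std::vector<size_t> split_remainder(size_t larger, size_t requested) {
  std::vector<size_t> remainder_chunks;
  size_t remainder = larger - requested;
  for (size_t s : kChunkSizes) {
    while (remainder >= s) {
      remainder_chunks.push_back(s);
      remainder -= s;
    }
  }
  return remainder_chunks;
}
```

For example, carving a small chunk (2 units) out of a medium chunk (32 units) leaves 30 units for the freelist, and the merge path is this in reverse: once the occupancy query reports the whole aligned range free, the constituent chunks are recombined into one.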
-- Review pointers: Good points to start are - ChunkManager::return_single_chunk() - specifically, ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks upon return to the free list - ChunkManager::free_chunks_get(): Here we now split large chunks into smaller chunks on demand - VirtualSpaceNode::take_from_committed(): chunks are allocated according to align rules now, padding chunks are handled - The OccupancyMap class is the helper class implementing the new occupancy bitmap The rest is mostly chaff: helper functions, added tests and verifications. -- Thanks and Best Regards, Thomas [1] https://bugs.openjdk.java.net/browse/JDK-8166690 [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November/000128.html [3] https://bugs.openjdk.java.net/browse/JDK-8185034 [4] https://bugs.openjdk.java.net/browse/JDK-8176808 [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip From shade at redhat.com Mon Feb 19 14:14:57 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 19 Feb 2018 15:14:57 +0100 Subject: Build failures after JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <8f32b660-b69f-b639-dcce-f7dc64cca98f@redhat.com> Message-ID: <859ce772-3caf-d8c6-86bc-01b6c8833b3d@redhat.com> On 02/19/2018 01:41 PM, David Holmes wrote: > On 19/02/2018 9:33 PM, Aleksey Shipilev wrote: >> On 02/19/2018 12:32 PM, Doerr, Martin wrote: >>> your fix is good. Reviewed and successfully tested on Windows 32 bit (and Windows 64 bit for >>> safety, but 64 bit is not affected). >>> So if you have tested linux 32 bit and aarch64 it should be safe to push it. >>> >>> Other platforms look good. >> >> Ok, thanks for testing! >> >> I am blurry on the process though: am I allowed to push this directly, without the Oracle sponsor? > > Given this only affects platforms we do not build or test I'd say you are fine to push it.
Thanks, pushed: http://hg.openjdk.java.net/jdk/hs/rev/f7caa2aecc86 -Aleksey From stuart.monteith at linaro.org Mon Feb 19 14:24:43 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 19 Feb 2018 14:24:43 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: I've tried building with Aleksey's patch (http://cr.openjdk.java.net/~shade/8198341/fixes.patch), but came across a JVM crash when building OpenJDK. I need to look a bit closer, but the patch "8194084: Obsolete FastTLABRefill and remove the related code" is causing SIGBUS BUS_ADRALN errors. The stack pointer is becoming unaligned, and so breaks on aarch64. For example, in your patch you do: - __ ldp(r5, r19, Address(__ post(sp, 2 * wordSize))); + __ ldr(r19, Address(__ post(sp, wordSize))); You can only have a 16-byte aligned stack pointer, and you replaced two loads with one, resulting in an unaligned SP. Here's some extracts from an hs_err during my build: # A fatal error has been detected by the Java Runtime Environment: # # SIGBUS (0x7) at pc=0x0000010008991764, pid=10622, tid=10623 Stack: [0x0000010001950000,0x0000010001b50000], sp=0x0000010001b4c758, free space=2033k Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) v ~RuntimeStub::fast_new_instance Runtime1 stub C 0x0000010001b4c900 siginfo: si_signo: 7 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: 0x00000100042afc08 0x000001000899175c: sub x0, x0, #0x10 0x0000010008991760: dmb ishst 0x0000010008991764: ldr x19, [sp],#8 0x0000010008991768: ret You shouldn't ever have an "8" as the least-significant-hex-digit for the stack pointer. 
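[Editorial illustration of the constraint Stuart describes - a toy model in plain C++, nothing JVM-specific: a post-indexed "ldr x19, [sp], #8" bumps sp by 8, but AArch64 requires sp to stay 16-byte aligned, so only stack adjustments in multiples of 16 are safe.]

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the AArch64 stack-pointer rule: sp must stay 16-byte
// aligned, so pushes/pops must adjust sp in multiples of 16.
bool sp_aligned(uintptr_t sp) { return (sp & 0xf) == 0; }

// A post-indexed load like "ldr x19, [sp], #8" adds 8 to sp;
// "ldp x5, x19, [sp], #16" adds 16.
uintptr_t pop_bytes(uintptr_t sp, uintptr_t bytes) { return sp + bytes; }
```

Popping a single 8-byte register leaves sp ending in 8 - exactly the BUS_ADRALN symptom in the hs_err extract above - while the original ldp pair keeps it aligned.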
BR, Stuart On 18 February 2018 at 22:38, JC Beyler wrote: > I just checked and it seems good to me for the other architectures but as I > say in the bug, my track record for this fix is not so good ;-) > Jc > > On Sun, Feb 18, 2018 at 12:55 PM, Aleksey Shipilev wrote: > >> On 02/16/2018 06:06 PM, JC Beyler wrote: >> > I updated the webrev to: >> > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.01/ >> > >> > - arm: missing a looks good and test >> > - x86: missing a looks good and test :) >> >> Well, current jdk/hs builds do not look very good ;) >> https://bugs.openjdk.java.net/browse/JDK-8198341 >> >> JC, can you eyeball the patch for these kinds of failures on other >> platforms and follow up? >> >> Thanks, >> -Aleksey >> >> From adinn at redhat.com Mon Feb 19 15:15:46 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 19 Feb 2018 15:15:46 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> Message-ID: <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> On 19/02/18 14:24, Stuart Monteith wrote: > I've tried building with Aleksey's patch > (http://cr.openjdk.java.net/~shade/8198341/fixes.patch), but came > across a JVM crash when building OpenJDK. > > I need to look a bit closer, but the patch "8194084: Obsolete > FastTLABRefill and remove the related code" is causing SIGBUS > BUS_ADRALN errors. The stack pointer is becoming unaligned, and so > breaks on aarch64. > For example, in your patch you do: > > - __ ldp(r5, r19, Address(__ post(sp, 2 * wordSize))); > + __ ldr(r19, Address(__ post(sp, wordSize))); > > You can only have a 16-byte aligned stack pointer, and you replaced > two loads with one, resulting in an unaligned SP. Yes, I believe Stuart has diagnosed this correctly. The problem is in the changes in c1_Runtime1_aarch64.cpp. The original stp with pre-decrement instruction that save r19+r5 retained 16-byte alignment for rsp. 
The replacement single str with pre-decrement instruction misaligns sp -- and AArch64 hw gets /very/ unhappy when that happens. There may be no need to save and restore r5, per se, but there is still a need to push and restore 16 bytes' worth of stack data. The str and ldr instructions which currently save/restore r19 could simply be reverted to stp and ldp of r5+r19 (it does no harm to save/restore r5). However, it would be better to save and restore zr+r19. That would better indicate the uselessness of the zr stack slot. regards, Andrew Dinn ----------- From stuart.monteith at linaro.org Mon Feb 19 15:53:52 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 19 Feb 2018 15:53:52 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: Hi, I've tried with: __ stp(r19, zr, Address(__ pre(sp, -wordSize*2))); and __ ldp(r19, zr, Address(__ post(sp, wordSize*2))); To save and restore r19.
We now get: # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x0000010008e1ce5c, pid=15063, tid=15091 # # JRE version: OpenJDK Runtime Environment (11.0) (fastdebug build 11-internal+0-adhoc.stuart.hs) # Java VM: OpenJDK 64-Bit Server VM (fastdebug 11-internal+0-adhoc.stuart.hs, mixed mode, tiered, compressed oops, serial gc, linux-aarch64) # Problematic frame: # J 473 c1 java.io.BufferedReader.readLine(Z)Ljava/lang/String; java.base (304 bytes) @ 0x0000010008e1ce5c [0x0000010008e1cb40+0x000000000000031c] # siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x000000000000002c java/io/BufferedReader.readLine(Z)Ljava/lang/String; [0x0000010008e1cb40, 0x0000010008e1d9c0] 3712 bytes [Entry Point] [Constants] # {method} {0x00000100025663b0} 'readLine' '(Z)Ljava/lang/String;' in 'java/io/BufferedReader' # this: c_rarg1:c_rarg1 = 'java/io/BufferedReader' # parm0: c_rarg2 = boolean # [sp+0x120] (sp of caller) ;; block B45 [0, 0] 0x0000010008e1cb40: ldr w8, [x1,#8] 0x0000010008e1cb44: cmp x9, x8, lsl #3 0x0000010008e1cb48: b.eq 0x0000010008e1cb80 0x0000010008e1cb4c: adrp x8, 0x00000100080a5000 [error occurred during error reporting (printing code blob if possible), id 0xb] It looks like more is wrong here than just the saving/restoration of r19. I'll review the changes a bit more closely. BR, Stuart On 19 February 2018 at 15:15, Andrew Dinn wrote: > On 19/02/18 14:24, Stuart Monteith wrote: >> I've tried building with Aleksey's patch >> (http://cr.openjdk.java.net/~shade/8198341/fixes.patch), but came >> across a JVM crash when building OpenJDK. >> >> I need to look a bit closer, but the patch "8194084: Obsolete >> FastTLABRefill and remove the related code" is causing SIGBUS >> BUS_ADRALN errors. The stack pointer is becoming unaligned, and so >> breaks on aarch64. 
>> For example, in your patch you do: >> >> - __ ldp(r5, r19, Address(__ post(sp, 2 * wordSize))); >> + __ ldr(r19, Address(__ post(sp, wordSize))); >> >> You can only have a 16-byte aligned stack pointer, and you replaced >> two loads with one, resulting in an unaligned SP. > Yes, I believe Stuart has diagnosed this correctly. The problem is in > the changes in c1_Runtime1_aarch64.cpp. The original stp with > pre-decrement instruction that save r19+r5 retained 16-byte alignment > for rsp. The replacement single str with pre-decrement instruction > misaligns sp -- and AArch64 hw gets /very/ unhappy when that happens. > > Three may be no need to save and restore r5, per se, but there is still > a need to push and restore 16 byte's worth of stack data. The str and > ldr instructions which currently save/restore r19 could simply be > reverted to stp and ldp of r5+r19 (it does no harm to save/restore r5). > However, it would be better to save and restore zr+r19. That would > better indicate the uselessness of the zr stack slot. > > regards, > > > Andrew Dinn > ----------- > From jcbeyler at google.com Tue Feb 20 01:17:51 2018 From: jcbeyler at google.com (JC Beyler) Date: Mon, 19 Feb 2018 17:17:51 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: I apologize for the architecture dependent issues. Let me know if there is anything I can do on my side to help :). And if you know what I can do in the future to reduce the risk of these errors, let me know that as well. Jc On Mon, Feb 19, 2018 at 7:53 AM, Stuart Monteith wrote: > Hi, > I've tried with: > > __ stp(r19, zr, Address(__ pre(sp, -wordSize*2))); > > and > > __ ldp(r19, zr, Address(__ post(sp, wordSize*2))); > > To save and restore r19. 
We now get: > > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGSEGV (0xb) at pc=0x0000010008e1ce5c, pid=15063, tid=15091 > # > # JRE version: OpenJDK Runtime Environment (11.0) (fastdebug build > 11-internal+0-adhoc.stuart.hs) > # Java VM: OpenJDK 64-Bit Server VM (fastdebug > 11-internal+0-adhoc.stuart.hs, mixed mode, tiered, compressed oops, > serial gc, linux-aarch64) > # Problematic frame: > # J 473 c1 java.io.BufferedReader.readLine(Z)Ljava/lang/String; > java.base (304 bytes) @ 0x0000010008e1ce5c > [0x0000010008e1cb40+0x000000000000031c] > # > siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: > 0x000000000000002c > > java/io/BufferedReader.readLine(Z)Ljava/lang/String; > [0x0000010008e1cb40, 0x0000010008e1d9c0] 3712 bytes > [Entry Point] > [Constants] > # {method} {0x00000100025663b0} 'readLine' '(Z)Ljava/lang/String;' > in 'java/io/BufferedReader' > # this: c_rarg1:c_rarg1 > = 'java/io/BufferedReader' > # parm0: c_rarg2 = boolean > # [sp+0x120] (sp of caller) > ;; block B45 [0, 0] > > 0x0000010008e1cb40: ldr w8, [x1,#8] > 0x0000010008e1cb44: cmp x9, x8, lsl #3 > 0x0000010008e1cb48: b.eq 0x0000010008e1cb80 > 0x0000010008e1cb4c: adrp x8, 0x00000100080a5000 > [error occurred during error reporting (printing code blob if possible), > id 0xb] > > > It looks like more is wrong here than just the saving/restoration of > r19. I'll review the changes a bit more closely. > > BR, > Stuart > > > On 19 February 2018 at 15:15, Andrew Dinn wrote: > > On 19/02/18 14:24, Stuart Monteith wrote: > >> I've tried building with Aleksey's patch > >> (http://cr.openjdk.java.net/~shade/8198341/fixes.patch), but came > >> across a JVM crash when building OpenJDK. > >> > >> I need to look a bit closer, but the patch "8194084: Obsolete > >> FastTLABRefill and remove the related code" is causing SIGBUS > >> BUS_ADRALN errors. The stack pointer is becoming unaligned, and so > >> breaks on aarch64. 
> >> For example, in your patch you do: > >> > >> - __ ldp(r5, r19, Address(__ post(sp, 2 * wordSize))); > >> + __ ldr(r19, Address(__ post(sp, wordSize))); > >> > >> You can only have a 16-byte aligned stack pointer, and you replaced > >> two loads with one, resulting in an unaligned SP. > > Yes, I believe Stuart has diagnosed this correctly. The problem is in > > the changes in c1_Runtime1_aarch64.cpp. The original stp with > > pre-decrement instruction that save r19+r5 retained 16-byte alignment > > for rsp. The replacement single str with pre-decrement instruction > > misaligns sp -- and AArch64 hw gets /very/ unhappy when that happens. > > > > Three may be no need to save and restore r5, per se, but there is still > > a need to push and restore 16 byte's worth of stack data. The str and > > ldr instructions which currently save/restore r19 could simply be > > reverted to stp and ldp of r5+r19 (it does no harm to save/restore r5). > > However, it would be better to save and restore zr+r19. That would > > better indicate the uselessness of the zr stack slot. > > > > regards, > > > > > > Andrew Dinn > > ----------- > > > From kim.barrett at oracle.com Tue Feb 20 14:58:22 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 20 Feb 2018 09:58:22 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp Message-ID: Please review this change to the Windows port, removing emulation of certain C99 features and instead simply including the appropriate headers. The headers are (available since VS2013) and (available before VS2013). 
CR: https://bugs.openjdk.java.net/browse/JDK-8197859 Webrev: http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ Testing: hs-tier{1,2} From lois.foltan at oracle.com Tue Feb 20 15:24:37 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 20 Feb 2018 10:24:37 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp In-Reply-To: References: Message-ID: <93a0fea6-bc48-d450-46d4-4e7dcf312bc8@oracle.com> On 2/20/2018 9:58 AM, Kim Barrett wrote: > Please review this change to the Windows port, removing emulation of > certain C99 features and instead simply including the appropriate > headers. The headers are (available since VS2013) and > (available before VS2013). > > CR: > https://bugs.openjdk.java.net/browse/JDK-8197859 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ > > Testing: > hs-tier{1,2} > > Looks good. Minor comment to update the copyright. Lois From george.triantafillou at oracle.com Tue Feb 20 15:32:37 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Tue, 20 Feb 2018 10:32:37 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp In-Reply-To: References: Message-ID: Hi Kim, Your changes look good. -George On 2/20/2018 9:58 AM, Kim Barrett wrote: > Please review this change to the Windows port, removing emulation of > certain C99 features and instead simply including the appropriate > headers. The headers are (available since VS2013) and > (available before VS2013).
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8197859 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ > > Testing: > hs-tier{1,2} > > From thomas.stuefe at gmail.com Tue Feb 20 15:42:57 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 20 Feb 2018 16:42:57 +0100 Subject: Proposal for improvements to the metaspace chunk allocator In-Reply-To: References: Message-ID: Hi Goetz, thank you for taking the time to review this change! As per your suggestion, I created an RFE for this: https://bugs.openjdk.java.net/browse/JDK-8198423 I'll use this one to track work on this patch, unless there are strong objections in favour of going with the real JEP process. Please find more comments inline. On Mon, Feb 19, 2018 at 3:05 PM, Lindenmaier, Goetz < goetz.lindenmaier at sap.com> wrote: > Hi Thomas, > > thanks for posting this change. I think it will help a > lot, especially with the class space. > > I agree that this not necessarily requires a JEP. So could > you please open a bug and post a RFR to hotspot-runtime-dev? > > Thanks for the laborious documentation, the code is well > to understand that way! > Maybe put the text from this your mail into the bug? > It's very helpful and easier to locate there than to find it in > the mail archive. > > My comments: > > take_from_committed(): > Do I understand correctly that this only takes the next > needed piece of memory? And because if the size passed to > the current call is bigger than that of the last call, > the alignment must be fixed you add what you call padding? > Yes. > > Is this also called for humongous chunks? > Yes. > If not, for simplicity, I would have implemented this by just taking > the next medium chunk (which would always be aligned) and > split it into the needed size and add all the rest to > the corresponding free lists. But no change needed here, > I just want to understand. 
(Probably this is not feasible > because the humongous ones are not aliged to medium chunks size...) > You understand everything correctly. As for your proposal, I am not sure it would make matters much simpler. Maybe I do not fully understand: Now, we do: - is watermark aligned to chunk size? No -> carve out padding chunks, add them to freelist, then - with the watermark now properly aligned - carve out the desired chunk we wanted in the first place. After your proposal: - the watermark should always be correctly aligned. So, first, carve out desired chunk. Then, if it is smaller than a medium chunk, carve out n padding chunks until the watermark is properly aligned again. Not sure this is better. Only the order of operations is reversed. Also, yes, the one thorn is that Humongous chunks are still unaligned, but we could change the alignment rules for humongous chunks - that would be not difficult. > I think the naming "padding chunks" is a bit misleading. > It sounds as if the chunks would be wasted, but as they > are added to the free lists they are not lost. > dict.leo gives "offcut" for "Verschnitt" ... not a word > common to me, but at least the german translation and the > wordwise translation better fit the situation I think. > Feel free to keep it as is, though. > I agree. "Alignment chunks"? > > In your mail you are discussing the additional fields you > add. In case adding _is_class to metachunk is considered > a problem (I don't think so), can't you compute the property > "is_class()" by comparing the metachunk address with the > possible range of the compressed class space? These 3GB are > only reserved for the class space ... > > Sure, that would be possible. > TestVirtualSpaceNode_test() is empty. Maybe remove it altogether? > > Makes sense. > A lot of the methods are passed 'true' or 'false' to indicate > whether it is for the class or metaspace manager. 
Maybe you > could define enum is_class and is_metaspace or the like, to > make these calls more speaking? > > There is already one, "MetadataType". One could use that throughout the code. However, there already was a mixture of "MetadataType" and "bool is_class" predating this patch - so, my patch did not add to the confusion, I just choose one of the prevalent forms. Unifying those two forms makes sense and can be done in a later cleanup (or? Opinions?). > Minor nit: as you anyways normalize #defines to ASSERT, you > might want to fix the remaining two or three #defines in metaspace.cpp > from PRODUCT to ASSERT/DEBUG, too. > > Sure! > Best regards, > Goetz. > > I'll wait a bit if more opinions are forthcoming; if not, I'll prepare a new patch based on your suggestions. Thanks again for the review work, Best Regards, Thomas > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf > Of Thomas St?fe > Sent: Thursday, February 8, 2018 12:58 PM > To: HotSpot Open Source Developers > Subject: RFR: Proposal for improvements to the metaspace chunk allocator > > Hi, > > We would like to contribute a patch developed at SAP which has been live in > our VM for some time. It improves the metaspace chunk allocation: reduces > fragmentation and raises the chance of reusing free metaspace chunks. > > The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace- > coalescation/2018-02-05--2/webrev/ > > In very short, this patch helps with a number of pathological cases where > metaspace chunks are free but cannot be reused because they are of the > wrong size. For example, the metaspace freelist could be full of small > chunks, which would not be reusable if we need larger chunks. So, we could > get metaspace OOMs even in situations where the metaspace was far from > exhausted. Our patch adds the ability to split and merge metaspace chunks > dynamically and thus remove the "size-lock-in" problem. 
> > Note that there have been other attempts to get a grip on this problem, see > e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably our > patch attempts a more complete solution. > > In 2016 I discussed the idea for this patch with some folks off-list, among > them Jon Matsimutso. He then did advice me to create a JEP. So I did: [1]. > However, meanwhile changes to the JEP process were discussed [2], and I am > not sure anymore this patch needs even needs a JEP. It may be moderately > complex and hence carries the risk inherent in any patch, but its effects > would not be externally visible (if you discount seeing fewer metaspace > OOMs). So, I'd prefer to handle this as a simple RFE. > > -- > > How this patch works: > > 1) When a class loader dies, its metaspace chunks are freed and returned to > the freelist for reuse by the next class loader. With the patch, upon > returning a chunk to the freelist, an attempt is made to merge it with its > neighboring chunks - should they happen to be free too - to form a larger > chunk. Which then is placed in the free list. > > As a result, the freelist should be populated by larger chunks at the > expense of smaller chunks. In other words, all free chunks should always be > as "coalesced as possible". > > 2) When a class loader needs a new chunk and a chunk of the requested size > cannot be found in the free list, before carving out a new chunk from the > virtual space, we first check if there is a larger chunk in the free list. > If there is, that larger chunk is chopped up into n smaller chunks. One of > them is returned to the callers, the others are re-added to the freelist. > > (1) and (2) together have the effect of removing the size-lock-in for > chunks. If fragmentation allows it, small chunks are dynamically combined > to form larger chunks, and larger chunks are split on demand. 
> > -- > > What this patch does not: > > This is not a rewrite of the chunk allocator - most of the mechanisms stay > intact. Specifically, chunk sizes remain unchanged, and so do chunk > allocation processes (when do which class loaders get handed which chunk > size). Almost everthing this patch does affects only internal workings of > the ChunkManager. > > Also note that I refrained from doing any cleanups, since I wanted > reviewers to be able to gauge this patch without filtering noise. > Unfortunately this patch adds some complexity. But there are many future > opportunities for code cleanup and simplification, some of which we already > discussed in existing RFEs ([3], [4]). All of them are out of the scope for > this particular patch. > > -- > > Details: > > Before the patch, the following rules held: > - All chunk sizes are multiples of the smallest chunk size ("specialized > chunks") > - All chunk sizes of larger chunks are also clean multiples of the next > smaller chunk size (e.g. for class space, the ratio of > specialized/small/medium chunks is 1:2:32) > - All chunk start addresses are aligned to the smallest chunk size (more or > less accidentally, see metaspace_reserve_alignment). > The patch makes the last rule explicit and more strict: > - All (non-humongous) chunk start addresses are now aligned to their own > chunk size. So, e.g. medium chunks are allocated at addresses which are a > multiple of medium chunk size. This rule is not extended to humongous > chunks, whose start addresses continue to be aligned to the smallest chunk > size. > > The reason for this new alignment rule is that it makes it cheap both to > find chunk predecessors of a chunk and to check which chunks are free. > > When a class loader dies and its chunk is returned to the freelist, all we > have is its address. In order to merge it with its neighbors to form a > larger chunk, we need to find those neighbors, including those preceding > the returned chunk. 
Prior to this patch that was not easy - one would have
> to iterate chunks starting at the beginning of the VirtualSpaceNode. But
> due to the new alignment rule, we now know where the prospective larger
> chunk must start - at the next lower larger-chunk-size-aligned boundary. We
> also know that currently a smaller chunk must start there (*).
>
> In order to check the free-ness of chunks quickly, each VirtualSpaceNode
> now keeps a bitmap which describes its occupancy. One bit in this bitmap
> corresponds to a range the size of the smallest chunk, starting at an
> address aligned to the smallest chunk size. Because of the alignment
> rules above, such a range belongs to a single chunk. The bit is 1 if the
> associated chunk is in use by a class loader, 0 if it is free.
>
> When we have calculated the address range a prospective larger chunk would
> span, we now need to check if all chunks in that range are free. Only then
> can we merge them. We do that by querying the bitmap. Note that the most
> common use case here is forming medium chunks from smaller chunks. With the
> new alignment rules, the bitmap portion covering a medium chunk now always
> happens to be 16 or 32 bits in size and 16- or 32-bit aligned, so reading
> the bitmap in many cases becomes a simple 16- or 32-bit load.
>
> If the range is free, only then do we iterate the chunks in that
> range: pull them from the freelist, combine them into one new larger chunk,
> and re-add that one to the freelist.
>
> (*) Humongous chunks make this a bit more complicated. Since the new
> alignment rule does not extend to them, a humongous chunk could still
> straddle the lower or upper boundary of the prospective larger chunk. So I
> gave the occupancy map a second layer, which is used to mark the start of
> chunks.
> An alternative approach could have been to make humongous chunk sizes and
> start addresses always a multiple of the largest non-humongous chunk size
> (medium chunks).
That would have caused a bit of waste per humongous chunk
> (<64K) in exchange for simpler coding and a simpler occupancy map.
>
> --
>
> The patch shows its best results in scenarios where a lot of smallish class
> loaders are alive simultaneously. When dying, they leave continuous
> expanses of metaspace covered in small chunks, which can be merged nicely.
> However, if class loader lifetimes vary more, we have more interleaving of
> dead and live small chunks, and hence chunk merging does not work as well
> as it could.
>
> For an example of a pathological case like this, see the example
> program: [5]
>
> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3
> test3.Example2" the test will load 3000 small classes in separate class
> loaders, then throw them away and start loading large classes. The small
> classes will have flooded the metaspace with small chunks, which are
> unusable for the large classes. When executing with the rather limited
> CompressedClassSpaceSize=10M, we will run into an OOM after loading about
> 800 large classes, having used only 40% of the class space; the rest is
> wasted on unused small chunks. However, with our patch the example program
> will manage to allocate ~2900 large classes before running into an OOM, and
> the class space will show almost no waste.
>
> To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into an
> OOM, statistics and an ASCII representation of the class space will be
> shown. The unpatched version will show large expanses of unused small
> chunks, the patched variant will show almost no waste.
>
> Note that the patch could be made more effective with a different size
> ratio between small and medium chunks: in class space, that ratio is 1:16,
> so 16 small chunks must happen to be free to form one larger chunk. With a
> smaller ratio the chance for coalescing would be larger.
So there may be
> room for future improvement here: since we can now merge and split chunks
> on demand, we could introduce more chunk sizes, potentially arriving at a
> buddy-ish allocator style where we drop hard-wired chunk sizes for a
> dynamic model in which the ratio between chunk sizes is always 1:2 and we
> could in theory have no limit on the chunk size. But this is just a thought
> and well out of the scope of this patch.
>
> --
>
> What does this patch cost (memory):
>
> - The occupancy bitmap adds 1 byte per 4K of metaspace.
> - MetaChunk headers get larger, since we add an enum and two bools to them.
> Depending on what the C++ compiler does with that, chunk headers grow by
> one or two MetaWords, reducing the payload size by that amount.
> - The new alignment rules mean we may need to create padding chunks to
> precede larger chunks. But since these padding chunks are added to the
> freelist, they should be used up before the need for new padding chunks
> arises. So, the maximum possible number of unused padding chunks should
> be limited by design to about 64K.
>
> The expectation is that the memory savings of this patch far outweigh its
> added memory costs.
>
> .. (performance):
>
> We did not see measurable drops in standard benchmarks rising above the
> normal noise. I also measured times for a program which stresses metaspace
> chunk coalescing, with the same result.
>
> I am open to suggestions on what else I should measure, and/or independent
> measurements.
>
> --
>
> Other details:
>
> I removed SpaceManager::get_small_chunk_and_allocate() to reduce
> complexity somewhat, because it was made mostly obsolete by this patch:
> since small chunks are combined into larger chunks upon return to the
> freelist, in theory we should not have that many free small chunks anymore
> anyway. However, there may still be cases where we could benefit from this
> workaround, so I am asking your opinion on this one.
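[Editorial note: the padding-chunk bookkeeping mentioned above follows from simple alignment arithmetic. The sketch below uses invented helper names; per the review pointers later in this mail, the real handling lives in VirtualSpaceNode::take_from_committed().]

```cpp
#include <cassert>
#include <cstddef>

// With the new rule, a non-humongous chunk must start at an address
// aligned to its own size. If the current top of the virtual space node
// is misaligned for the requested chunk size, a padding chunk covers the
// gap; since it goes onto the freelist, it can be merged or reused later.
inline size_t align_up(size_t value, size_t alignment) {
  // 'alignment' is assumed to be a power of two.
  return (value + alignment - 1) & ~(alignment - 1);
}

// Size of the padding chunk needed before a chunk of 'chunk_size' can be
// carved out at or after 'top'. Zero when 'top' is already aligned.
inline size_t padding_needed(size_t top, size_t chunk_size) {
  return align_up(top, chunk_size) - top;
}
```

Because every chunk size is a clean multiple of the next smaller one, a top that is aligned for a larger chunk is automatically aligned for all smaller ones, so padding is only ever needed when "stepping up" to a larger chunk size.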
>
> About tests: There were two native tests - ChunkManagerReturnTest and
> TestVirtualSpaceNode (the former was added by me last year) - which did not
> make much sense anymore, since they relied heavily on internal behavior
> which was made unpredictable by this patch.
> To make up for these lost tests, I added a new gtest which attempts to
> stress the many combinations of allocation patterns, but does so from a
> layer above the old tests. It now uses Metaspace::allocate() and friends.
> By using that point as the entry for tests, I am less dependent on
> implementation internals and still cover a lot of scenarios.
>
> --
>
> Review pointers:
>
> Good points to start are
> - ChunkManager::return_single_chunk() - specifically,
> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks
> upon return to the free list
> - ChunkManager::free_chunks_get(): here we now split large chunks into
> smaller chunks on demand
> - VirtualSpaceNode::take_from_committed(): chunks are allocated according
> to the alignment rules now, and padding chunks are handled
> - The OccupancyMap class is the helper class implementing the new occupancy
> bitmap
>
> The rest is mostly chaff: helper functions, added tests and verifications.
>
> --
>
> Thanks and Best Regards, Thomas
>
> [1] https://bugs.openjdk.java.net/browse/JDK-8166690
> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November
> /000128.html
> [3] https://bugs.openjdk.java.net/browse/JDK-8185034
> [4] https://bugs.openjdk.java.net/browse/JDK-8176808
> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip
>

From jcbeyler at google.com Tue Feb 20 16:22:52 2018
From: jcbeyler at google.com (JC Beyler)
Date: Tue, 20 Feb 2018 08:22:52 -0800
Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code
In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com>
Message-ID:

Hi all,

I fixed this issue after Stuart + Andrew diagnosed the issue for aarch64.
After looking at the code and trying to get the spill/fills out of the way
when possible, x86 also had the possibility to skip a case of spill/fill.

Let me know what you think:
http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.05

I added a no_pop slowpath, fixed the str to stp for aarch64, and then did
the same no_pop slowpath for x86. I can also remove the no_pop label and
just fix the aarch64 code by moving the str/ldr to stp/ldp and moving the
stp before the jump to the slowpath.

Let me know and thanks!
Jc

On Mon, Feb 19, 2018 at 7:53 AM, Stuart Monteith
 wrote:
> Hi,
> I've tried with:
>
> __ stp(r19, zr, Address(__ pre(sp, -wordSize*2)));
>
> and
>
> __ ldp(r19, zr, Address(__ post(sp, wordSize*2)));
>
> To save and restore r19.
We now get: > > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGSEGV (0xb) at pc=0x0000010008e1ce5c, pid=15063, tid=15091 > # > # JRE version: OpenJDK Runtime Environment (11.0) (fastdebug build > 11-internal+0-adhoc.stuart.hs) > # Java VM: OpenJDK 64-Bit Server VM (fastdebug > 11-internal+0-adhoc.stuart.hs, mixed mode, tiered, compressed oops, > serial gc, linux-aarch64) > # Problematic frame: > # J 473 c1 java.io.BufferedReader.readLine(Z)Ljava/lang/String; > java.base (304 bytes) @ 0x0000010008e1ce5c > [0x0000010008e1cb40+0x000000000000031c] > # > siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: > 0x000000000000002c > > java/io/BufferedReader.readLine(Z)Ljava/lang/String; > [0x0000010008e1cb40, 0x0000010008e1d9c0] 3712 bytes > [Entry Point] > [Constants] > # {method} {0x00000100025663b0} 'readLine' '(Z)Ljava/lang/String;' > in 'java/io/BufferedReader' > # this: c_rarg1:c_rarg1 > = 'java/io/BufferedReader' > # parm0: c_rarg2 = boolean > # [sp+0x120] (sp of caller) > ;; block B45 [0, 0] > > 0x0000010008e1cb40: ldr w8, [x1,#8] > 0x0000010008e1cb44: cmp x9, x8, lsl #3 > 0x0000010008e1cb48: b.eq 0x0000010008e1cb80 > 0x0000010008e1cb4c: adrp x8, 0x00000100080a5000 > [error occurred during error reporting (printing code blob if possible), > id 0xb] > > > It looks like more is wrong here than just the saving/restoration of > r19. I'll review the changes a bit more closely. > > BR, > Stuart > > > On 19 February 2018 at 15:15, Andrew Dinn wrote: > > On 19/02/18 14:24, Stuart Monteith wrote: > >> I've tried building with Aleksey's patch > >> (http://cr.openjdk.java.net/~shade/8198341/fixes.patch), but came > >> across a JVM crash when building OpenJDK. > >> > >> I need to look a bit closer, but the patch "8194084: Obsolete > >> FastTLABRefill and remove the related code" is causing SIGBUS > >> BUS_ADRALN errors. The stack pointer is becoming unaligned, and so > >> breaks on aarch64. 
> >> For example, in your patch you do:
> >>
> >> - __ ldp(r5, r19, Address(__ post(sp, 2 * wordSize)));
> >> + __ ldr(r19, Address(__ post(sp, wordSize)));
> >>
> >> You can only have a 16-byte aligned stack pointer, and you replaced
> >> two loads with one, resulting in an unaligned SP.
> > Yes, I believe Stuart has diagnosed this correctly. The problem is in
> > the changes in c1_Runtime1_aarch64.cpp. The original stp with
> > pre-decrement instruction that saves r19+r5 retained 16-byte alignment
> > for rsp. The replacement single str with pre-decrement instruction
> > misaligns sp -- and AArch64 hw gets /very/ unhappy when that happens.
> >
> > There may be no need to save and restore r5, per se, but there is still
> > a need to push and restore 16 bytes' worth of stack data. The str and
> > ldr instructions which currently save/restore r19 could simply be
> > reverted to stp and ldp of r5+r19 (it does no harm to save/restore r5).
> > However, it would be better to save and restore zr+r19. That would
> > better indicate the uselessness of the zr stack slot.
> >
> > regards,
> >
> >
> > Andrew Dinn
> > -----------
> >

From adinn at redhat.com Tue Feb 20 16:33:44 2018
From: adinn at redhat.com (Andrew Dinn)
Date: Tue, 20 Feb 2018 16:33:44 +0000
Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code
In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com>
Message-ID: <981d79d4-98c1-2557-e86e-f6417e8c36a2@redhat.com>

Hi JC,

On 20/02/18 16:22, JC Beyler wrote:
> I fixed this issue after Stuart + Andrew diagnosed the issue for
> aarch64. After looking at the code and trying to get the spill/fills out
> of the way when possible, x86 also had the possibility to skip a case
> of spill/fill.
> > Let me know what you think: > http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.05 > > I added a no_pop slowpath, fixed the str to stp for aarch64, and then > did the same no_pop slowpath for x86. I can also remove the no_pop label > and just fix the aarch64 code by moving the str/ldr to stp/ldp and > moving the stp before the jump to the slowpath. > > Let me know and thanks! That patch is fine for AArch64 and with it the build now completes and runs ok. I have not tested the change on x86 -- it looks fine by eyeball but it really needs a proper check. I assume you will raise a separate JIRA for this new patch? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From kim.barrett at oracle.com Tue Feb 20 16:39:06 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 20 Feb 2018 11:39:06 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp In-Reply-To: <93a0fea6-bc48-d450-46d4-4e7dcf312bc8@oracle.com> References: <93a0fea6-bc48-d450-46d4-4e7dcf312bc8@oracle.com> Message-ID: > On Feb 20, 2018, at 10:24 AM, Lois Foltan wrote: > > On 2/20/2018 9:58 AM, Kim Barrett wrote: > >> Please review this change to the Windows port, removing emulation of >> certain C99 features and instead simply including the appropriate >> headers. The headers are (available since VS2013) and >> (available before VS2013). >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8197859 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ >> >> Testing: >> hs-tier{1,2} >> >> > Looks good. Minor comment to update the copyright. > Lois Thanks. Will fix the copyright before pushing. 
From erik.osterlund at oracle.com Tue Feb 20 16:41:25 2018
From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=)
Date: Tue, 20 Feb 2018 17:41:25 +0100
Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API
In-Reply-To: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com>
References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com>
Message-ID: <5A8C4FB5.4090706@oracle.com>

Hi Per,

(looping in hotspot-dev as this seems to touch more than runtime)

On 2018-02-20 17:03, Per Liden wrote:
> Hi Erik,
>
> As we discussed, coming up with a good name for the new Access call is
> really hard. All good/descriptive alternatives I can come up with tend
> to be way too long. So, the next strategy is to pick something that fits
> into the rest of the API. With this in mind I'd like to suggest we
> just name it: oop Access<>::resolve(oop obj)
>
> The justification would be that this matches the one-verb style we
> have for the other functions (load/store/clone) and it seems that you
> anyway named the internal parts just "resolve", such as
> BARRIER_RESOLVE and resolve_func_t.

Sure.

Here is a full webrev with my proposal for this RFE, now that we agree on
the direction:
http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/

Incremental from the prototype I sent out for early turnaround yesterday:
http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/

It is now enforced that *_addr_raw() functions are to be used by the GC
only, when the GC knows addresses are stable. All other address resolution
goes through non-raw address resolution functions that at a lower level end
up calling the resolve barrier on Access, which can be overridden by
Shenandoah. There are in total two callers of Access<>::resolve:
oopDesc::field_addr and arrayOop::base. The rest is derived from that.

@Roman: Hope this works well for Shenandoah.

@Per: Hope you like the new shorter name.

Thanks,
/Erik

> What do you think?
>
> cheers,
> Per
>
> On 02/19/2018 06:08 PM, Erik Osterlund wrote:
>> Hi Roman,
>>
>> I see there is a need to resolve a stable address for some objects to
>> bulk access primitives. The code base is full of assumptions that no
>> barriers are needed for such address resolution. It looks like the
>> proposed approach is to one by one hunt down all such callsites. I
>> could find some places where such barriers are missing.
>>
>> To make the code as maintainable as possible, I would like to propose
>> a slightly different take on this, and would love to hear if this
>> works for Shenandoah or not. The main idea is to annotate places
>> where we do *not* want GC address resolution for internal pointers to
>> objects, instead of where we want it, as it seems to be the common
>> case that we do want to resolve the address.
>>
>> In some more detail:
>>
>> 1) Rip out the *_addr facilities not used (a whole bunch on oopDesc).
>> 2) Ignore the difference between read/write resolution (write
>> resolution handles both reads and writes). Instead introduce an oop
>> resolve_stable_addr(oop) function in Access. This makes it easier to
>> use.
>> 3) Identify as few callsites as possible for this function. I'm
>> thinking arrayOop::base() and a few strange exceptions.
>> 4) Identify the few places where we explicitly do *not* want address
>> resolution, like calls from the GC, and replace them with *_addr_raw
>> variants.
>> 5) Have a switch in barrierSetConfig.hpp that determines whether the
>> build needs to support non-to-space-invariant GCs or not.
>>
>> With these changes, the number of callsites has been kept down to
>> what I believe to be a minimum. And yet it covers some callsites that
>> you accidentally missed (e.g. jvmciCodeInstaller.cpp). Existing uses
>> of the various *_addr facilities can in most cases continue to do
>> what they have done in the past. And new uses will not be surprised
>> that they accidentally missed some barriers.
It will be solved >> automagically. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >> >> Please let me know what you think about this style and whether that >> works for you or not. I have not done proper testing yet, but >> presented this patch for quicker turn-around so we can synchronize >> the direction first. >> >> Thanks, >> /Erik >> >>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>> >>> The direct memory accessors in typeArrayOop.hpp, which are usually >>> used for bulk memory access operations, should use the Access API, in >>> order to give the garbage collector a chance to intercept the access >>> (for example, employ read- or write-barriers on the target array). >>> This also means it's necessary to distinguish between write-accesses >>> and read-accesses (for example, GCs might want to use a >>> copy-on-write-barrier for write-accesses only). >>> >>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>> and store_at_addr(), and links it up to the corresponding X_get_addr() >>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>> X_addr() accessors have been renamed to match their use (load or store >>> of primitive array elements). >>> >>> The changeset is based on the previously proposed: >>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>> >>> >>> Webrev: >>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>> >>> Please review! 
>>>
>>> Thanks,
>>> Roman

From kim.barrett at oracle.com Tue Feb 20 16:39:18 2018
From: kim.barrett at oracle.com (Kim Barrett)
Date: Tue, 20 Feb 2018 11:39:18 -0500
Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp
In-Reply-To: References: Message-ID: <87CEAE86-0F9D-473F-B658-42395B5FAF55@oracle.com>

> On Feb 20, 2018, at 10:32 AM, George Triantafillou wrote:
>
> Hi Kim,
>
> Your changes look good.

Thanks.

> -George
>
> On 2/20/2018 9:58 AM, Kim Barrett wrote:
>> Please review this change to the Windows port, removing emulation of
>> certain C99 features and instead simply including the appropriate
>> headers. The headers are (available since VS2013) and
>> (available before VS2013).
>>
>> CR:
>> https://bugs.openjdk.java.net/browse/JDK-8197859
>>
>> Webrev:
>> http://cr.openjdk.java.net/~kbarrett/8197859/open.00/
>>
>> Testing:
>> hs-tier{1,2}

From stuart.monteith at linaro.org Tue Feb 20 16:39:49 2018
From: stuart.monteith at linaro.org (Stuart Monteith)
Date: Tue, 20 Feb 2018 16:39:49 +0000
Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code
In-Reply-To: <981d79d4-98c1-2557-e86e-f6417e8c36a2@redhat.com>
References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> <981d79d4-98c1-2557-e86e-f6417e8c36a2@redhat.com>
Message-ID:

Hi,
JC's patch looks good. This provides an explanation for the spurious
ldp I was seeing. I'm currently building aarch64/x86_64, and will run
some tests.

Thanks!
Stuart

On 20 February 2018 at 16:33, Andrew Dinn wrote:
> Hi JC,
>
> On 20/02/18 16:22, JC Beyler wrote:
>> I fixed this issue after Stuart + Andrew diagnosed the issue for
>> aarch64. After looking at the code and trying to get the spill/fills out
>> of the way when possible, x86 also had the possibility to skip a case
>> of spill/fill.
>> >> Let me know what you think: >> http://cr.openjdk.java.net/~jcbeyler/8194084/webrev.05 >> >> I added a no_pop slowpath, fixed the str to stp for aarch64, and then >> did the same no_pop slowpath for x86. I can also remove the no_pop label >> and just fix the aarch64 code by moving the str/ldr to stp/ldp and >> moving the stp before the jump to the slowpath. >> >> Let me know and thanks! > That patch is fine for AArch64 and with it the build now completes and > runs ok. > > I have not tested the change on x86 -- it looks fine by eyeball but it > really needs a proper check. > > I assume you will raise a separate JIRA for this new patch? > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From rkennke at redhat.com Tue Feb 20 16:44:02 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Feb 2018 17:44:02 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <5A8C4FB5.4090706@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> Message-ID: Hi Eric, On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund wrote: > Hi Per, > > (looping in hotspot-dev as this seems to touch more than runtime) > > On 2018-02-20 17:03, Per Liden wrote: >> >> Hi Erik, >> >> As we discussed, coming up with a good name for the new Access call is >> really hard. All good/descriptive alternatives I can come up with tend to be >> way to long. So, next strategy is to pick something that fits into the reset >> of the API. 
With this in mind I'd like to suggest we just name it: oop >> Access<>::resolve(oop obj) >> >> The justification would that this this matches the one-verb style we have >> for the other functions (load/store/clone) and it seems that you anyway >> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >> resolve_func_t. > > > Sure. > > Here is a full webrev with my proposal for this RFE, now that we agree on > the direction: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ > > Incremental from the prototype I sent out for early turnaround yesterday: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ > > It is now enforced that *_addr_raw() functions are to be used by the GC > only, when the GC knows addresses are stable. All other address resolution > goes through non-raw address resolution functions that at a lower level end > up calling the resolve barrier on Access, which can be overridden by > Shenandoah. There are in total two callers of Access<>::resolve: on > oopDesc::field_addr and arrayOop::base. The rest is derived from that. > > @Roman: Hope this works well for Shenandoah. I will review it ASAP. Just a quick note: In Shenandoah we have introduced an API in BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly the to-space invariant, i.e. if it's safe to access that oop for writing. I'd like that to be upstream if possible, and maybe this would be a good opportunity to add it? Maybe rename is_stable() or assert_stable() (and maybe keep it under #ifdef ASSERT)? Thanks for doing this! 
Roman From aph at redhat.com Tue Feb 20 16:56:32 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 20 Feb 2018 16:56:32 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: On 20/02/18 16:22, JC Beyler wrote: > I added a no_pop slowpath, fixed the str to stp for aarch64, and then did > the same no_pop slowpath for x86. I can also remove the no_pop label and > just fix the aarch64 code by moving the str/ldr to stp/ldp and moving the > stp before the jump to the slowpath. > > Let me know and thanks! I'd do the latter. There's no real justification for changing the x86 code, is there? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From jcbeyler at google.com Tue Feb 20 17:10:29 2018 From: jcbeyler at google.com (JC Beyler) Date: Tue, 20 Feb 2018 09:10:29 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: Apart from better code for x86 as well, no not really. Any other opinions? Slightly optimize that piece of code or leave as is? On Tue, Feb 20, 2018 at 8:56 AM, Andrew Haley wrote: > On 20/02/18 16:22, JC Beyler wrote: > > I added a no_pop slowpath, fixed the str to stp for aarch64, and then did > > the same no_pop slowpath for x86. I can also remove the no_pop label and > > just fix the aarch64 code by moving the str/ldr to stp/ldp and moving the > > stp before the jump to the slowpath. > > > > Let me know and thanks! > > I'd do the latter. There's no real justification for changing the x86 > code, is there? > > -- > Andrew Haley > Java Platform Lead Engineer > Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 > From shade at redhat.com Tue Feb 20 17:13:32 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 20 Feb 2018 18:13:32 +0100 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: On 02/20/2018 06:10 PM, JC Beyler wrote: > Apart from better code for x86 as well, no not really. Any other opinions? > Slightly optimize that piece of code or leave as is? Leave x86 code alone for now. We need to fix AArch64 bug first, not introduce another x86 bug. -Aleksey From erik.osterlund at oracle.com Tue Feb 20 17:25:52 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Tue, 20 Feb 2018 18:25:52 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> Message-ID: <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> Hi Roman, > On 20 Feb 2018, at 17:44, Roman Kennke wrote: > > Hi Eric, > > On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund > wrote: >> Hi Per, >> >> (looping in hotspot-dev as this seems to touch more than runtime) >> >>> On 2018-02-20 17:03, Per Liden wrote: >>> >>> Hi Erik, >>> >>> As we discussed, coming up with a good name for the new Access call is >>> really hard. All good/descriptive alternatives I can come up with tend to be >>> way to long. So, next strategy is to pick something that fits into the reset >>> of the API. With this in mind I'd like to suggest we just name it: oop >>> Access<>::resolve(oop obj) >>> >>> The justification would that this this matches the one-verb style we have >>> for the other functions (load/store/clone) and it seems that you anyway >>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>> resolve_func_t. >> >> >> Sure. 
>>
>> Here is a full webrev with my proposal for this RFE, now that we agree on
>> the direction:
>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/
>>
>> Incremental from the prototype I sent out for early turnaround yesterday:
>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/
>>
>> It is now enforced that *_addr_raw() functions are to be used by the GC
>> only, when the GC knows addresses are stable. All other address resolution
>> goes through non-raw address resolution functions that at a lower level end
>> up calling the resolve barrier on Access, which can be overridden by
>> Shenandoah. There are in total two callers of Access<>::resolve: on
>> oopDesc::field_addr and arrayOop::base. The rest is derived from that.
>>
>> @Roman: Hope this works well for Shenandoah.
>
> I will review it ASAP.
>
> Just a quick note: In Shenandoah we have introduced an API in
> BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly
> the to-space invariant, i.e. if it's safe to access that oop for
> writing. I'd like that to be upstream if possible, and maybe this
> would be a good opportunity to add it? Maybe rename it to is_stable() or
> assert_stable() (and maybe keep it under #ifdef ASSERT)?

I see. That sounds like a useful tool to have. I'm not sure where I should
be calling it from, though. In the _addr calls that go through resolve(),
is_stable() trivially returns true. That leaves only a few GC callsites
that use *_addr_raw() - oop iterate and reference processing - to put the
assert in.

As for oop iterate, I don't think we should check is_stable. Imagine,
e.g., a generational concurrently compacting GC that uses oop iterate to
scan cards and expects to see only addresses in that card boundary,
whether they are to-space invariant or not (been there, done that). The
addresses in that card boundary may or may not be stable.

As for reference processing, I can see that it could be nice to know the
addresses are stable.
You probably know better than me where to put these asserts as you have already done it. Thanks, /Erik > Thanks for doing this! > > Roman From jcbeyler at google.com Tue Feb 20 17:33:19 2018 From: jcbeyler at google.com (JC Beyler) Date: Tue, 20 Feb 2018 09:33:19 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: Done here and a new JIRA item for it: http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.00/ I put Andrew and Stuart as reviewers, let me know if anything else is needed. Thanks, Jc On Tue, Feb 20, 2018 at 9:13 AM, Aleksey Shipilev wrote: > On 02/20/2018 06:10 PM, JC Beyler wrote: > > Apart from better code for x86 as well, no not really. Any other > opinions? > > Slightly optimize that piece of code or leave as is? > > Leave x86 code alone for now. We need to fix AArch64 bug first, not > introduce another x86 bug. > > -Aleksey > > From adinn at redhat.com Tue Feb 20 17:42:13 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 20 Feb 2018 17:42:13 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> Message-ID: <07732449-1b34-d30e-9e93-c0a0196abf32@redhat.com> On 20/02/18 17:33, JC Beyler wrote: > Done here and a new JIRA item for it: > http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.00/ > > I put Andrew and Stuart as reviewers, let me know if anything else is > needed. That's actually the wrong Andrew (should be adinn not aph) but never mind as Andrew Haley did look at it. 
regards, Andrew Dinn ----------- From jcbeyler at google.com Tue Feb 20 17:47:56 2018 From: jcbeyler at google.com (JC Beyler) Date: Tue, 20 Feb 2018 09:47:56 -0800 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: <07732449-1b34-d30e-9e93-c0a0196abf32@redhat.com> References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> <07732449-1b34-d30e-9e93-c0a0196abf32@redhat.com> Message-ID: Fixed :) http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.01/ Jc On Tue, Feb 20, 2018 at 9:42 AM, Andrew Dinn wrote: > On 20/02/18 17:33, JC Beyler wrote: > > Done here and a new JIRA item for it: > > http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.00/ > > > > I put Andrew and Stuart as reviewers, let me know if anything else is > > needed. > That's actually the wrong Andrew (should be adinn not aph) but never > mind as Andrew Haley did look at it. > > regards, > > > Andrew Dinn > ----------- > > From coleen.phillimore at oracle.com Tue Feb 20 20:07:00 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 20 Feb 2018 15:07:00 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp In-Reply-To: References: Message-ID: <6fc522a9-1e88-0687-b2f6-b25f5c592f73@oracle.com> Looks good! Coleen On 2/20/18 9:58 AM, Kim Barrett wrote: > Please review this change to the Windows port, removing emulation of > certain C99 features and instead simply including the appropriate > headers. The headers are (available since VS2013) and > (available before VS2013). 
> > CR: > https://bugs.openjdk.java.net/browse/JDK-8197859 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ > > Testing: > hs-tier{1,2} > > From mikhailo.seledtsov at oracle.com Tue Feb 20 20:51:04 2018 From: mikhailo.seledtsov at oracle.com (mikhailo) Date: Tue, 20 Feb 2018 12:51:04 -0800 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> Message-ID: Hi Dmitry, On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: > Mikhailo, > > Here is the changes rebased to recent sources. > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ Changes look good to me. > > Could you sponsor the push? I can sponsor the change, once the updated change is reviewed. Once it is ready, please send me the latest hg changeset (with usual fields, description, reviewers). Thank you, Misha > > -Dmitry > > On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >> Changes look good from my point of view. >> >> Misha >> >> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>> Everybody, >>> >>> Please review small changes, that enables docker testing on Linux/AArch64 >>> >>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>> >>> PS: >>> >>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>> readable, please check that it doesn't brake your work. >>> >>> -Dmitry >>> >>> -- >>> Dmitry Samersoff >>> http://devnull.samersoff.net >>> * There will come soft rains ... 
From per.liden at oracle.com Tue Feb 20 22:00:39 2018 From: per.liden at oracle.com (Per Liden) Date: Tue, 20 Feb 2018 23:00:39 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> Message-ID: <3f91b397-903e-284c-5561-dbb05ec3543e@oracle.com> Hi Roman, On 02/20/2018 06:25 PM, Erik Osterlund wrote: > Hi Roman, > >> On 20 Feb 2018, at 17:44, Roman Kennke wrote: >> >> Hi Eric, >> >> On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund >> wrote: >>> Hi Per, >>> >>> (looping in hotspot-dev as this seems to touch more than runtime) >>> >>>> On 2018-02-20 17:03, Per Liden wrote: >>>> >>>> Hi Erik, >>>> >>>> As we discussed, coming up with a good name for the new Access call is >>>> really hard. All good/descriptive alternatives I can come up with tend to be >>>> way to long. So, next strategy is to pick something that fits into the reset >>>> of the API. With this in mind I'd like to suggest we just name it: oop >>>> Access<>::resolve(oop obj) >>>> >>>> The justification would that this this matches the one-verb style we have >>>> for the other functions (load/store/clone) and it seems that you anyway >>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>> resolve_func_t. >>> >>> >>> Sure. >>> >>> Here is a full webrev with my proposal for this RFE, now that we agree on >>> the direction: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>> >>> Incremental from the prototype I sent out for early turnaround yesterday: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>> >>> It is now enforced that *_addr_raw() functions are to be used by the GC >>> only, when the GC knows addresses are stable. 
All other address resolution >>> goes through non-raw address resolution functions that at a lower level end >>> up calling the resolve barrier on Access, which can be overridden by >>> Shenandoah. There are in total two callers of Access<>::resolve: on >>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>> >>> @Roman: Hope this works well for Shenandoah. >> >> I will review it ASAP. >> >> Just a quick note: In Shenandoah we have introduced an API in >> BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly >> the to-space invariant, i.e. if it's safe to access that oop for >> writing. I'd like that to be upstream if possible, and maybe this >> would be a good opportunity to add it? Maybe rename is_stable() or >> assert_stable() (and maybe keep it under #ifdef ASSERT)? > > I see. That sounds like a useful tool to have. I?m not sure where I should be calling it from though. In the _addr calls that go through resolve(), is_stable() trivially returns true. That leaves only a few gc callsites that use *_addr_raw() - oop iterate and reference processing to put the assert in. As for oop iterate, I don?t think we should check is_stable. Imagine, e.g. a generational concurrently compacting GC that uses oop iterate to scan cards, and expects to see only addresses in that card boundary, whether they are to-space invariant or not (been there, done that). The addresses in that card boundary may or may not be stable. As for reference processing, I can see that it could be nice to know the addresses are stable. You probably know better than me where to put these asserts as you have already done it. I think I agree with Erik. It sounds like something that is useful to Shenandoah, but it also sounds strange that someone outside of Shenandoah code would be calling it? I.e. I'm not sure this should be part of the public facing BarrierSet, as it more sounds like something that should only be used internally in Shenandoah? 
cheers, Per > > Thanks, > /Erik > >> Thanks for doing this! >> >> Roman > From rkennke at redhat.com Tue Feb 20 22:06:46 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Feb 2018 23:06:46 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <3f91b397-903e-284c-5561-dbb05ec3543e@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> <3f91b397-903e-284c-5561-dbb05ec3543e@oracle.com> Message-ID: Indeed, I just checked our code base, and the only non-GC-internal use of that API is inside referenceProcessor.cpp, and we can probably live without that. Thanks, Roman On Tue, Feb 20, 2018 at 11:00 PM, Per Liden wrote: > Hi Roman, > > > On 02/20/2018 06:25 PM, Erik Osterlund wrote: >> >> Hi Roman, >> >>> On 20 Feb 2018, at 17:44, Roman Kennke wrote: >>> >>> Hi Eric, >>> >>> On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund >>> wrote: >>>> >>>> Hi Per, >>>> >>>> (looping in hotspot-dev as this seems to touch more than runtime) >>>> >>>>> On 2018-02-20 17:03, Per Liden wrote: >>>>> >>>>> Hi Erik, >>>>> >>>>> As we discussed, coming up with a good name for the new Access call is >>>>> really hard. All good/descriptive alternatives I can come up with tend >>>>> to be >>>>> way to long. So, next strategy is to pick something that fits into the >>>>> reset >>>>> of the API. With this in mind I'd like to suggest we just name it: oop >>>>> Access<>::resolve(oop obj) >>>>> >>>>> The justification would that this this matches the one-verb style we >>>>> have >>>>> for the other functions (load/store/clone) and it seems that you anyway >>>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>>> resolve_func_t. >>>> >>>> >>>> >>>> Sure. 
>>>> >>>> Here is a full webrev with my proposal for this RFE, now that we agree >>>> on >>>> the direction: >>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>>> >>>> Incremental from the prototype I sent out for early turnaround >>>> yesterday: >>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>>> >>>> It is now enforced that *_addr_raw() functions are to be used by the GC >>>> only, when the GC knows addresses are stable. All other address >>>> resolution >>>> goes through non-raw address resolution functions that at a lower level >>>> end >>>> up calling the resolve barrier on Access, which can be overridden by >>>> Shenandoah. There are in total two callers of Access<>::resolve: on >>>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>>> >>>> @Roman: Hope this works well for Shenandoah. >>> >>> >>> I will review it ASAP. >>> >>> Just a quick note: In Shenandoah we have introduced an API in >>> BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly >>> the to-space invariant, i.e. if it's safe to access that oop for >>> writing. I'd like that to be upstream if possible, and maybe this >>> would be a good opportunity to add it? Maybe rename is_stable() or >>> assert_stable() (and maybe keep it under #ifdef ASSERT)? >> >> >> I see. That sounds like a useful tool to have. I?m not sure where I should >> be calling it from though. In the _addr calls that go through resolve(), >> is_stable() trivially returns true. That leaves only a few gc callsites that >> use *_addr_raw() - oop iterate and reference processing to put the assert >> in. As for oop iterate, I don?t think we should check is_stable. Imagine, >> e.g. a generational concurrently compacting GC that uses oop iterate to scan >> cards, and expects to see only addresses in that card boundary, whether they >> are to-space invariant or not (been there, done that). The addresses in that >> card boundary may or may not be stable. 
As for reference processing, I can >> see that it could be nice to know the addresses are stable. You probably >> know better than me where to put these asserts as you have already done it. > > > I think I agree with Erik. It sounds like something that is useful to > Shenandoah, but it also sounds strange that someone outside of Shenandoah > code would be calling it? I.e. I'm not sure this should be part of the > public facing BarrierSet, as it more sounds like something that should only > be used internally in Shenandoah? > > cheers, > Per > > >> >> Thanks, >> /Erik >> >>> Thanks for doing this! >>> >>> Roman >> >> > From rkennke at redhat.com Tue Feb 20 22:31:30 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Feb 2018 23:31:30 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> <3f91b397-903e-284c-5561-dbb05ec3543e@oracle.com> Message-ID: Alright, I figured it out. The humongous handling is correct, but the accounting of seqnum_at_gc_start is not. We currently set it in ShenandoahGCPauseMark, which updates the numbers at the beginning and end of each *pause*, not GC cycle. However, doing it in ShenandoahGCSession, which would cover the whole cycle, is not safe because it happens from the scheduler thread while Java is running (and updating the counter). It needs to be done during the init- and final- pauses of the cycle in order to be safe and correct. I've done it in traversal using this patch: https://paste.fedoraproject.org/paste/SZ2tqntg4l5T3TAIdtQ2uw The only other user of that counter seems to be the generational and LRU heuristics, and I'm wondering if they work by accident, or not really at all, but a complete fix would do what I did to traversal also to partial GC. Something for tomorrow. 
Cheers, Roman On Tue, Feb 20, 2018 at 11:06 PM, Roman Kennke wrote: > Indeed, I just checked our code base, and the only non-GC-internal use > of that API is inside referenceProcessor.cpp, and we can probably live > without that. > Thanks, Roman > > On Tue, Feb 20, 2018 at 11:00 PM, Per Liden wrote: >> Hi Roman, >> >> >> On 02/20/2018 06:25 PM, Erik Osterlund wrote: >>> >>> Hi Roman, >>> >>>> On 20 Feb 2018, at 17:44, Roman Kennke wrote: >>>> >>>> Hi Eric, >>>> >>>> On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund >>>> wrote: >>>>> >>>>> Hi Per, >>>>> >>>>> (looping in hotspot-dev as this seems to touch more than runtime) >>>>> >>>>>> On 2018-02-20 17:03, Per Liden wrote: >>>>>> >>>>>> Hi Erik, >>>>>> >>>>>> As we discussed, coming up with a good name for the new Access call is >>>>>> really hard. All good/descriptive alternatives I can come up with tend >>>>>> to be >>>>>> way to long. So, next strategy is to pick something that fits into the >>>>>> reset >>>>>> of the API. With this in mind I'd like to suggest we just name it: oop >>>>>> Access<>::resolve(oop obj) >>>>>> >>>>>> The justification would that this this matches the one-verb style we >>>>>> have >>>>>> for the other functions (load/store/clone) and it seems that you anyway >>>>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>>>> resolve_func_t. >>>>> >>>>> >>>>> >>>>> Sure. >>>>> >>>>> Here is a full webrev with my proposal for this RFE, now that we agree >>>>> on >>>>> the direction: >>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>>>> >>>>> Incremental from the prototype I sent out for early turnaround >>>>> yesterday: >>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>>>> >>>>> It is now enforced that *_addr_raw() functions are to be used by the GC >>>>> only, when the GC knows addresses are stable. 
All other address >>>>> resolution >>>>> goes through non-raw address resolution functions that at a lower level >>>>> end >>>>> up calling the resolve barrier on Access, which can be overridden by >>>>> Shenandoah. There are in total two callers of Access<>::resolve: on >>>>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>>>> >>>>> @Roman: Hope this works well for Shenandoah. >>>> >>>> >>>> I will review it ASAP. >>>> >>>> Just a quick note: In Shenandoah we have introduced an API in >>>> BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly >>>> the to-space invariant, i.e. if it's safe to access that oop for >>>> writing. I'd like that to be upstream if possible, and maybe this >>>> would be a good opportunity to add it? Maybe rename is_stable() or >>>> assert_stable() (and maybe keep it under #ifdef ASSERT)? >>> >>> >>> I see. That sounds like a useful tool to have. I?m not sure where I should >>> be calling it from though. In the _addr calls that go through resolve(), >>> is_stable() trivially returns true. That leaves only a few gc callsites that >>> use *_addr_raw() - oop iterate and reference processing to put the assert >>> in. As for oop iterate, I don?t think we should check is_stable. Imagine, >>> e.g. a generational concurrently compacting GC that uses oop iterate to scan >>> cards, and expects to see only addresses in that card boundary, whether they >>> are to-space invariant or not (been there, done that). The addresses in that >>> card boundary may or may not be stable. As for reference processing, I can >>> see that it could be nice to know the addresses are stable. You probably >>> know better than me where to put these asserts as you have already done it. >> >> >> I think I agree with Erik. It sounds like something that is useful to >> Shenandoah, but it also sounds strange that someone outside of Shenandoah >> code would be calling it? I.e. 
I'm not sure this should be part of the >> public facing BarrierSet, as it more sounds like something that should only >> be used internally in Shenandoah? >> >> cheers, >> Per >> >> >>> >>> Thanks, >>> /Erik >>> >>>> Thanks for doing this! >>>> >>>> Roman >>> >>> >> From kim.barrett at oracle.com Wed Feb 21 00:05:11 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 20 Feb 2018 19:05:11 -0500 Subject: RFR: 8197859: VS2017 Complains about UINTPTR_MAX definition in globalDefinitions_VisCPP.hpp In-Reply-To: <6fc522a9-1e88-0687-b2f6-b25f5c592f73@oracle.com> References: <6fc522a9-1e88-0687-b2f6-b25f5c592f73@oracle.com> Message-ID: > On Feb 20, 2018, at 3:07 PM, coleen.phillimore at oracle.com wrote: > > Looks good! > Coleen Thanks. > On 2/20/18 9:58 AM, Kim Barrett wrote: >> Please review this change to the Windows port, removing emulation of >> certain C99 features and instead simply including the appropriate >> headers. The headers are (available since VS2013) and >> (available before VS2013). >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8197859 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8197859/open.00/ >> >> Testing: >> hs-tier{1,2} From kim.barrett at oracle.com Wed Feb 21 00:08:56 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 20 Feb 2018 19:08:56 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error Message-ID: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Please review this change to how HotSpot handles platform variations of vsnprintf and snprintf. We propose to provide os::vsnprintf and os::snprintf, which have the agreed behavior for such functions for use in HotSpot; see jio_vsnprintf and friends. Specifically, (1) always NUL-terminate the target buffer unless its indicated size is zero, and (2) return -1 on output truncation. 
This places the platform dependent code in OS-specific files where it belongs (os_windows.cpp and os_posix.cpp), while giving the rest of HotSpot a consistent API to use. An additional benefit is that these new os functions are decorated with format attributes, so will produce warnings for incorrect calls (on platforms which support the decorations and associated checking). The jio_ variants have the attributes in jvm.cpp rather than jvm.h, making them largely useless. (Maybe that's just a bug? But jvm.h doesn't have the infrastructure for platform-dependent configuration that is available for in-HotSpot code.) However, to really get the benefit of this, we will need to change HotSpot code to consistently use the os functions, rather than the jio_ equivalents. (That would have the additional benefit of not needing to include jvm.h all over the place just to have access to the jio_ functions.) That's a change for later, not part of this change. We still provide os::log_vsnprintf, which differs from the new os::vsnprintf in the return value when output truncation occurs. (It returns the size the output would have been had it not been truncated.) It has been changed to always NUL-terminate the output, and documented as such. None of the current uses care, and this makes it consistent with os::vsnprintf. Note that os::log_vsnprintf was added as part of UL and is presently only used by it. Maybe it could have a better name? This change leaves no direct calls to vsnprintf in HotSpot, outside of the relevant parts of the os implementation. However, there are a lot of direct calls to snprintf, potentially resulting in strings that are not NUL-terminated. Those should perhaps be calling os::snprintf (or previously, jio_snprintf), but that's another change for later. CR: https://bugs.openjdk.java.net/browse/JDK-8196882 Webrev: http://cr.openjdk.java.net/~kbarrett/8196882/open.00/ Testing: Mach5 {hs,jdk}-tier{1,2,3} (used VS2013 for Windows testing) Also built with VS2017. 
From kim.barrett at oracle.com Wed Feb 21 03:02:21 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 20 Feb 2018 22:02:21 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp Message-ID: Please review this change to split jniHandles.hpp, moving the inline definition of JNIHandles::resolve and related functions to the new jniHandles.inline.hpp. This is being done in preparation for JDK-8195972 "Refactor oops in JNI to use the Access API". This is needed so we can include access.inline.hpp as part of that change, making the implementation of Access available for reference by JNIHandles::resolve &etc. This was accomplished largly by a simple copy of the code, and updating the #includes of lots of files. However, resolve_external_guard was changed to no longer be inline. It doesn't seem to be performance critical, and this change reduced the fanout on #include updates. CR: https://bugs.openjdk.java.net/browse/JDK-8198474 Webrev: http://cr.openjdk.java.net/~kbarrett/8198474/open.00/ [Note to Oracle reviewers: There is a closed part to this change too.] Testing: hs-tier1 in isolation {hs,jdk}-tier{1,2,3} JDK-8195972 as part of changes for JDK-8195972 From thomas.stuefe at gmail.com Wed Feb 21 07:30:51 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Feb 2018 08:30:51 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: Hi Kim, this is good. Please find comments inline / below. On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett wrote: > Please review this change to how HotSpot handles platform variations > of vsnprintf and snprintf. > > We propose to provide os::vsnprintf and os::snprintf, which have the > agreed behavior for such functions for use in HotSpot; see > jio_vsnprintf and friends. 
Specifically, (1) always NUL-terminate the > target buffer unless its indicated size is zero, and (2) return -1 on > output truncation. This places the platform dependent code in > OS-specific files where it belongs (os_windows.cpp and os_posix.cpp), > while giving the rest of HotSpot a consistent API to use. > > An additional benefit is that these new os functions are decorated > with format attributes, so will produce warnings for incorrect calls > (on platforms which support the decorations and associated > checking). The jio_ variants have the attributes in jvm.cpp rather > than jvm.h, making them largely useless. (Maybe that's just a bug? But > jvm.h doesn't have the infrastructure for platform-dependent > configuration that is available for in-HotSpot code.) However, to > really get the benefit of this, we will need to change HotSpot code to > consistently use the os functions, rather than the jio_ equivalents. > (That would have the additional benefit of not needing to include > jvm.h all over the place just to have access to the jio_ functions.) > os.hpp is no lightweight alternative though. It includes quite a lot of other headers, including jvm.h :) Also system headers like . And then, whatever comes with the os_xxx_xxx.h files. So, this is the part I do not like much about this change, it forces us to include a lot of stuff where before we would just include jvm.h or just roll with raw ::snprintf(). Can we disentangle the header dep better? > That's a change for later, not part of this change. > > We still provide os::log_vsnprintf, which differs from the new > os::vsnprintf in the return value when output truncation occurs. (It > returns the size the output would have been had it not been > truncated.) It has been changed to always NUL-terminate the output, > and documented as such. None of the current uses care, and this makes > it consistent with os::vsnprintf. Note that os::log_vsnprintf was > added as part of UL and is presently only used by it. 
Maybe it could > have a better name? > I totally agree. Possible alternatives: 1 rename it 2 move it into the log sub project as an internal implementation detail 3 Or, provide a platform independent version of _vcsprintf ( https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So whoever really wants to count characters in resolved format string should first use that function, then alloc the appropiate buffer, then do the real printing. Posix variant for _vcsprintf could just be vsnprintf with a zero byte output buffer. I personally like (3) best, followed by (2) > This change leaves no direct calls to vsnprintf in HotSpot, outside of > the relevant parts of the os implementation. However, there are a lot > of direct calls to snprintf, potentially resulting in strings that are > not NUL-terminated. Those should perhaps be calling os::snprintf (or > previously, jio_snprintf), but that's another change for later. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8196882 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8196882/open.00/ > > Testing: > Mach5 {hs,jdk}-tier{1,2,3} (used VS2013 for Windows testing) > Also built with VS2017. > > The changes: src/hotspot/os/posix/os_posix.cpp + if (len > 0) buf[len - 1] = '\0'; Style nit: brackets ? src/hotspot/share/prims/jvm.cpp Small behaviour change for input buffers of 0 and format strings of "". Before we returned -1, now (I guess) 0. Does any existing caller care? I looked but could not find anyone even checking the return code, so it is probably nothing. src/hotspot/share/runtime/os.hpp Comment to os::(v)snprintf: "These functions return -1 if the + // output has been truncated, rather than returning the number of characters + // that would have been written (exclusive of the terminating NUL) if the + // output had not been truncated." I would not describe what the functions do NOT. Only what they do - would be a bit clearer. 
Proposal for a more concise version (feel free to reformulate, I am no native speaker): "os::snprintf and os::vsnprintf are identical to snprintf(3) and vsnprintf(3) except in the following points: - On truncation they will return -1. - On truncation, the output string will be zero terminated, unless the input buffer length is 0. " test/hotspot/gtest/runtime/test_os.cpp: +// Test os::vmsprintf and friends Typo. -- Thinking about it I think the test could be made a bit denser and simpler and have more coverage. How about this instead: static void test_snprintf(int (*pf)(char*, size_t, const char*, ...), bool expect_count) { const char expected[] = "0123456789012345678901234567890123456789"; char buffer[sizeof(expected) + 4]; const int lengths_to_test[] = { sizeof(buffer), ......, 1, 0, -1 }; for (int i = 0; lengths_to_test[i] != -1; i ++) { int length = lengths_to_test[i]; memset(buffer, 'x', sizeof(buffer)); // to catch overwriters int result = pf(buffer, length, "%s", expected); if (length > 0) { ASSERT_EQ('\0', buffer[length - 1]); // expect terminating zero if (length > sizeof(expected)) { ASSERT_EQ(0, strcmp(buffer, expected)); // expect the whole string to fit ASSERT_EQ(result, strlen(expected)) } else { ASSERT_EQ(0, strncmp(buffer, expected, length)); // expect truncation ASSERT_EQ(result, expect_count ? strlen(expected) : -1) } } ASSERT_EQ('x', buffer[lengths[i]]); // canary } } (for real paranoia one could check also for leading overwriters by writing to buffer[1] and checking buffer[0] for 'x', but I do not think this is necessary.) And should we move log_snprintf to logging, the tests would get a bit simpler too. 
Thanks and Kind Regards, Thomas From rkennke at redhat.com Wed Feb 21 07:51:09 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 21 Feb 2018 08:51:09 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <71102C74-8AA7-4FED-B085-D90BCA16A99C@oracle.com> <3f91b397-903e-284c-5561-dbb05ec3543e@oracle.com> Message-ID: Sorry, I replied that to the wrong thread. Discard it please ;-) On Tue, Feb 20, 2018 at 11:31 PM, Roman Kennke wrote: > Alright, I figured it out. The humongous handling is correct, but the > accounting of seqnum_at_gc_start is not. We currently set it in > ShenandoahGCPauseMark, which updates the numbers at the beginning and > end of each *pause*, not GC cycle. However, doing it in > ShenandoahGCSession, which would cover the whole cycle, is not safe > because it happens from the scheduler thread while Java is running > (and updating the counter). It needs to be done during the init- and > final- pauses of the cycle in order to be safe and correct. I've done > it in traversal using this patch: > > https://paste.fedoraproject.org/paste/SZ2tqntg4l5T3TAIdtQ2uw > > The only other user of that counter seems to be the generational and > LRU heuristics, and I'm wondering if they work by accident, or not > really at all, but a complete fix would do what I did to traversal > also to partial GC. Something for tomorrow. > > Cheers, Roman > > On Tue, Feb 20, 2018 at 11:06 PM, Roman Kennke wrote: >> Indeed, I just checked our code base, and the only non-GC-internal use >> of that API is inside referenceProcessor.cpp, and we can probably live >> without that. 
>> Thanks, Roman >> >> On Tue, Feb 20, 2018 at 11:00 PM, Per Liden wrote: >>> Hi Roman, >>> >>> >>> On 02/20/2018 06:25 PM, Erik Osterlund wrote: >>>> >>>> Hi Roman, >>>> >>>>> On 20 Feb 2018, at 17:44, Roman Kennke wrote: >>>>> >>>>> Hi Eric, >>>>> >>>>> On Tue, Feb 20, 2018 at 5:41 PM, Erik ?sterlund >>>>> wrote: >>>>>> >>>>>> Hi Per, >>>>>> >>>>>> (looping in hotspot-dev as this seems to touch more than runtime) >>>>>> >>>>>>> On 2018-02-20 17:03, Per Liden wrote: >>>>>>> >>>>>>> Hi Erik, >>>>>>> >>>>>>> As we discussed, coming up with a good name for the new Access call is >>>>>>> really hard. All good/descriptive alternatives I can come up with tend >>>>>>> to be >>>>>>> way to long. So, next strategy is to pick something that fits into the >>>>>>> reset >>>>>>> of the API. With this in mind I'd like to suggest we just name it: oop >>>>>>> Access<>::resolve(oop obj) >>>>>>> >>>>>>> The justification would that this this matches the one-verb style we >>>>>>> have >>>>>>> for the other functions (load/store/clone) and it seems that you anyway >>>>>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>>>>> resolve_func_t. >>>>>> >>>>>> >>>>>> >>>>>> Sure. >>>>>> >>>>>> Here is a full webrev with my proposal for this RFE, now that we agree >>>>>> on >>>>>> the direction: >>>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>>>>> >>>>>> Incremental from the prototype I sent out for early turnaround >>>>>> yesterday: >>>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>>>>> >>>>>> It is now enforced that *_addr_raw() functions are to be used by the GC >>>>>> only, when the GC knows addresses are stable. All other address >>>>>> resolution >>>>>> goes through non-raw address resolution functions that at a lower level >>>>>> end >>>>>> up calling the resolve barrier on Access, which can be overridden by >>>>>> Shenandoah. 
There are in total two callers of Access<>::resolve: on >>>>>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>>>>> >>>>>> @Roman: Hope this works well for Shenandoah. >>>>> >>>>> >>>>> I will review it ASAP. >>>>> >>>>> Just a quick note: In Shenandoah we have introduced an API in >>>>> BarrierSet to check bool BarrierSet::is_safe(oop) which checks exactly >>>>> the to-space invariant, i.e. if it's safe to access that oop for >>>>> writing. I'd like that to be upstream if possible, and maybe this >>>>> would be a good opportunity to add it? Maybe rename is_stable() or >>>>> assert_stable() (and maybe keep it under #ifdef ASSERT)? >>>> >>>> >>>> I see. That sounds like a useful tool to have. I?m not sure where I should >>>> be calling it from though. In the _addr calls that go through resolve(), >>>> is_stable() trivially returns true. That leaves only a few gc callsites that >>>> use *_addr_raw() - oop iterate and reference processing to put the assert >>>> in. As for oop iterate, I don?t think we should check is_stable. Imagine, >>>> e.g. a generational concurrently compacting GC that uses oop iterate to scan >>>> cards, and expects to see only addresses in that card boundary, whether they >>>> are to-space invariant or not (been there, done that). The addresses in that >>>> card boundary may or may not be stable. As for reference processing, I can >>>> see that it could be nice to know the addresses are stable. You probably >>>> know better than me where to put these asserts as you have already done it. >>> >>> >>> I think I agree with Erik. It sounds like something that is useful to >>> Shenandoah, but it also sounds strange that someone outside of Shenandoah >>> code would be calling it? I.e. I'm not sure this should be part of the >>> public facing BarrierSet, as it more sounds like something that should only >>> be used internally in Shenandoah? 
>>> >>> cheers, >>> Per >>> >>> >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> Thanks for doing this! >>>>> >>>>> Roman >>>> >>>> >>> From erik.helin at oracle.com Wed Feb 21 08:18:11 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 21 Feb 2018 09:18:11 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A855376.5090203@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> Message-ID: <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> Hi Erik, this is a very nice improvement, thanks for working on this! A few minor comments thus far: - in stubGenerator_ppc.cpp: you seem to have lost a `const` in the refactoring - in psCardTable.hpp: I don't think card_mark_must_follow_store() is needed, since PSCardTable passes `false` for `conc_scan` to the CardTable constructor - in g1CollectedHeap.hpp: could you store the G1CardTable as a field in G1CollectedHeap? Also, could you name the "getter" just card_table()? (I see that g1_hot_card_cache method above, but that one should also be renamed to just hot_card_cache, but in another patch) - in cardTable.hpp and cardTable.cpp: could you use `hg cp` when constructing these files from cardTableModRefBS.{hpp,cpp} so the history is preserved? Thanks, Erik On 02/15/2018 10:31 AM, Erik Österlund wrote: > Hi, > > Here is an updated revision of this webrev after internal feedback from > StefanK who helped looking through my changes - thanks a lot for the > help with that. > > The changes to the new revision are a bunch of minor clean up changes, > e.g. copyright headers, indentation issues, sorting includes, > adding/removing newlines, reverting an assert error message, fixing > constructor initialization orders, and things like that. > > The problem I mentioned last time about the version number of our repo > not yet being bumped to 11 and resulting awkwardness in JVMCI has been > resolved by simply waiting. 
So now I changed the JVMCI logic to get the > card values from the new location in the corresponding card tables when > observing JDK version 11 or above. > > New full webrev (rebased onto a month fresher jdk-hs): > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ > > Incremental webrev (over the rebase): > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ > > This new version has run through hs-tier1-5 and jdk-tier1-3 without any > issues. > > Thanks, > /Erik > > On 2018-01-17 13:54, Erik Österlund wrote: >> Hi, >> >> Today, both Parallel, CMS and Serial share the same code for its card >> marking barrier. However, they have different requirements how to >> manage its card tables by the GC. And as the card table itself is >> embedded as a part of the CardTableModRefBS barrier set, this has led >> to an unnecessary inheritance hierarchy for CardTableModRefBS, where >> for example CardTableModRefBSForCTRS and CardTableExtension are >> CardTableModRefBS subclasses that do not change anything to do with >> the barriers. >> >> To clean up the code, there should really be a separate CardTable >> hierarchy that contains the differences how to manage the card table >> from the GC point of view, and simply let CardTableModRefBS have a >> CardTable. This would allow removing CardTableModRefBSForCTRS and >> CardTableExtension and their references from shared code (that really >> have nothing to do with the barriers, despite being barrier sets), and >> significantly simplify the barrier set code. >> >> This patch mechanically performs this refactoring. A new CardTable >> class has been created with a PSCardTable subclass for Parallel, a >> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >> references to card tables and their values have been updated accordingly. >> >> This touches a lot of platform specific code, so would be fantastic if >> port maintainers could have a look that I have not broken anything. 
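The refactoring described in the quoted RFR, giving CardTableModRefBS a CardTable member instead of GC-specific barrier-set subclasses, can be sketched roughly like this. All names and members here are simplified illustrations for readers following the thread, not the actual HotSpot declarations:

```cpp
#include <cstdint>

// Sketch of the direction described above: GC-specific card-table policy
// moves into a CardTable hierarchy, and the barrier set merely owns a
// CardTable. Illustrative only; not the real HotSpot classes.
class CardTable {
public:
  static constexpr uint8_t clean_card = 0xff;
  static constexpr uint8_t dirty_card = 0x00;
  explicit CardTable(bool conc_scan) : _scanned_concurrently(conc_scan) {}
  virtual ~CardTable() {}
  // Policy that used to force BarrierSet subclasses now lives here.
  bool scanned_concurrently() const { return _scanned_concurrently; }
private:
  const bool _scanned_concurrently;
};

// Parallel passes `false` for conc_scan, so no extra override is needed
// (the point made in the psCardTable.hpp review comment above).
class PSCardTable : public CardTable {
public:
  PSCardTable() : CardTable(false) {}
};

// The barrier set no longer needs CardTableExtension-style subclasses;
// it just delegates to whatever CardTable the GC handed it.
class CardTableModRefBS {
public:
  explicit CardTableModRefBS(CardTable* ct) : _card_table(ct) {}
  CardTable* card_table() const { return _card_table; }
private:
  CardTable* _card_table;
};
```

With this shape, a G1CardTable or CardTableRS subclass only changes card-table management, while the barrier set code stays shared, which is the simplification the patch aims for.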
>> >> There is a slight problem that should be pointed out. There is an >> unfortunate interaction between Graal and hotspot. Graal needs to know >> the values of g1 young cards and dirty cards. This is queried in >> different ways in different versions of the JDK in the >> GraalHotSpotVMConfig.java file. Now these values will move from >> their barrier set class to their card table class. That means we have >> at least three cases how to find the correct values. There is one for >> JDK8, one for JDK9, and now a new one for JDK11. Except, we have not >> yet bumped the version number to 11 in the repo, and therefore it has >> to be from JDK10 - 11 for now and updated after incrementing the >> version number. But that means that it will be temporarily >> incompatible with JDK10. That is okay for our own copy of Graal, but >> can not be used by upstream Graal as they are given the choice whether >> to support the public JDK10 or the JDK11 that does not quite admit to >> being 11 yet. I chose the solution that works in our repository. I >> will notify Graal folks of this issue. In the long run, it would be >> nice if we could have a more solid interface here. >> >> However, as an added benefit, this changeset brings about a hundred >> copyright headers up to date, so others do not have to update them for >> a while. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8195142 >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >> >> Testing: mach5 hs-tier1-5 plus local AoT testing. >> >> Thanks, >> /Erik > From marcus.larsson at oracle.com Wed Feb 21 08:23:24 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Wed, 21 Feb 2018 09:23:24 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: Hi, On 2018-02-21 08:30, Thomas Stüfe wrote: > Hi Kim, > > this is good. 
Please find comments inline / below. > > On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett wrote: > >> Please review this change to how HotSpot handles platform variations >> of vsnprintf and snprintf. >> >> We propose to provide os::vsnprintf and os::snprintf, which have the >> agreed behavior for such functions for use in HotSpot; see >> jio_vsnprintf and friends. Specifically, (1) always NUL-terminate the >> target buffer unless its indicated size is zero, and (2) return -1 on >> output truncation. This places the platform dependent code in >> OS-specific files where it belongs (os_windows.cpp and os_posix.cpp), >> while giving the rest of HotSpot a consistent API to use. >> >> An additional benefit is that these new os functions are decorated >> with format attributes, so will produce warnings for incorrect calls >> (on platforms which support the decorations and associated >> checking). The jio_ variants have the attributes in jvm.cpp rather >> than jvm.h, making them largely useless. (Maybe that's just a bug? But >> jvm.h doesn't have the infrastructure for platform-dependent >> configuration that is available for in-HotSpot code.) However, to >> really get the benefit of this, we will need to change HotSpot code to >> consistently use the os functions, rather than the jio_ equivalents. >> (That would have the additional benefit of not needing to include >> jvm.h all over the place just to have access to the jio_ functions.) >> > os.hpp is no lightweight alternative though. It includes quite a lot of > other headers, including jvm.h :) > > Also system headers like . And then, whatever comes with the > os_xxx_xxx.h files. > > So, this is the part I do not like much about this change, it forces us to > include a lot of stuff where before we would just include jvm.h or just > roll with raw ::snprintf(). Can we disentangle the header dep better? > > >> That's a change for later, not part of this change. 
>> >> We still provide os::log_vsnprintf, which differs from the new >> os::vsnprintf in the return value when output truncation occurs. (It >> returns the size the output would have been had it not been >> truncated.) It has been changed to always NUL-terminate the output, >> and documented as such. None of the current uses care, and this makes >> it consistent with os::vsnprintf. Note that os::log_vsnprintf was >> added as part of UL and is presently only used by it. Maybe it could >> have a better name? >> > I totally agree. Possible alternatives: > > 1 rename it > 2 move it into the log sub project as an internal implementation detail > 3 Or, provide a platform independent version of _vscprintf ( > https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So whoever > really wants to count characters in resolved format string should first use > that function, then alloc the appropriate buffer, then do the real printing. > Posix variant for _vscprintf could just be vsnprintf with a zero byte > output buffer. > > I personally like (3) best, followed by (2) The best alternative, IMHO, would be to make os::vsnprintf behave just like log_vsnprintf (C99 standard vsnprintf), thus removing the need for log_vsnprintf completely. Also, that behavior is strictly better than always returning -1 on error. Thanks, Marcus > > >> This change leaves no direct calls to vsnprintf in HotSpot, outside of >> the relevant parts of the os implementation. However, there are a lot >> of direct calls to snprintf, potentially resulting in strings that are >> not NUL-terminated. Those should perhaps be calling os::snprintf (or >> previously, jio_snprintf), but that's another change for later. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8196882 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8196882/open.00/ >> >> Testing: >> Mach5 {hs,jdk}-tier{1,2,3} (used VS2013 for Windows testing) >> Also built with VS2017. 
>> >> > The changes: > > src/hotspot/os/posix/os_posix.cpp > > + if (len > 0) buf[len - 1] = '\0'; > Style nit: brackets ? > > src/hotspot/share/prims/jvm.cpp > > Small behaviour change for input buffers of 0 and format strings of "". > Before we returned -1, now (I guess) 0. Does any existing caller care? I > looked but could not find anyone even checking the return code, so it is > probably nothing. > > src/hotspot/share/runtime/os.hpp > > Comment to os::(v)snprintf: "These functions return -1 if the > + // output has been truncated, rather than returning the number of > characters > + // that would have been written (exclusive of the terminating NUL) if the > + // output had not been truncated." > > I would not describe what the functions do NOT. Only what they do - would > be a bit clearer. > > Proposal for a more concise version (feel free to reformulate, I am no > native speaker): > > "os::snprintf and os::vsnprintf are identical to snprintf(3) and > vsnprintf(3) except in the following points: > - On truncation they will return -1. > - On truncation, the output string will be zero terminated, unless the > input buffer length is 0. > " > > test/hotspot/gtest/runtime/test_os.cpp: > > +// Test os::vmsprintf and friends > > Typo. > > -- > Thinking about it I think the test could be made a bit denser and simpler > and have more coverage. 
How about this instead: > > static void test_snprintf(int (*pf)(char*, size_t, const char*, ...), bool > expect_count) { > > const char expected[] = "0123456789012345678901234567890123456789"; > char buffer[sizeof(expected) + 4]; > const int lengths_to_test[] = { sizeof(buffer), ......, 1, 0, -1 }; > for (int i = 0; lengths_to_test[i] != -1; i ++) { > int length = lengths_to_test[i]; > memset(buffer, 'x', sizeof(buffer)); // to catch overwriters > int result = pf(buffer, length, "%s", expected); > if (length > 0) { > ASSERT_EQ('\0', buffer[length - 1]); // expect terminating zero > if (length > sizeof(expected)) { > ASSERT_EQ(0, strcmp(buffer, expected)); // expect the whole string > to fit > ASSERT_EQ(result, strlen(expected)); > } else { > ASSERT_EQ(0, strncmp(buffer, expected, length)); // expect > truncation > ASSERT_EQ(result, expect_count ? strlen(expected) : -1); > } > } > ASSERT_EQ('x', buffer[lengths_to_test[i]]); // canary > } > } > > (for real paranoia one could check also for leading overwriters by writing > to buffer[1] and checking buffer[0] for 'x', but I do not think this is > necessary.) > > And should we move log_snprintf to logging, the tests would get a bit > simpler too. > > Thanks and Kind Regards, Thomas From thomas.stuefe at gmail.com Wed Feb 21 08:30:53 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Feb 2018 09:30:53 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: On Wed, Feb 21, 2018 at 9:23 AM, Marcus Larsson wrote: > Hi, > > > > On 2018-02-21 08:30, Thomas Stüfe wrote: > >> Hi Kim, >> >> this is good. Please find comments inline / below. >> >> On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett >> wrote: >> >> Please review this change to how HotSpot handles platform variations >>> of vsnprintf and snprintf. 
>>> >>> We propose to provide os::vsnprintf and os::snprintf, which have the >>> agreed behavior for such functions for use in HotSpot; see >>> jio_vsnprintf and friends. Specifically, (1) always NUL-terminate the >>> target buffer unless its indicated size is zero, and (2) return -1 on >>> output truncation. This places the platform dependent code in >>> OS-specific files where it belongs (os_windows.cpp and os_posix.cpp), >>> while giving the rest of HotSpot a consistent API to use. >>> >>> An additional benefit is that these new os functions are decorated >>> with format attributes, so will produce warnings for incorrect calls >>> (on platforms which support the decorations and associated >>> checking). The jio_ variants have the attributes in jvm.cpp rather >>> than jvm.h, making them largely useless. (Maybe that's just a bug? But >>> jvm.h doesn't have the infrastructure for platform-dependent >>> configuration that is available for in-HotSpot code.) However, to >>> really get the benefit of this, we will need to change HotSpot code to >>> consistently use the os functions, rather than the jio_ equivalents. >>> (That would have the additional benefit of not needing to include >>> jvm.h all over the place just to have access to the jio_ functions.) >>> >>> os.hpp is no lightweight alternative though. It includes quite a lot of >> other headers, including jvm.h :) >> >> Also system headers like . And then, whatever comes with the >> os_xxx_xxx.h files. >> >> So, this is the part I do not like much about this change, it forces us to >> include a lot of stuff where before we would just include jvm.h or just >> roll with raw ::snprintf(). Can we disentangle the header dep better? >> >> >> That's a change for later, not part of this change. >>> >>> We still provide os::log_vsnprintf, which differs from the new >>> os::vsnprintf in the return value when output truncation occurs. (It >>> returns the size the output would have been had it not been >>> truncated.) 
It has been changed to always NUL-terminate the output, >>> and documented as such. None of the current uses care, and this makes >>> it consistent with os::vsnprintf. Note that os::log_vsnprintf was >>> added as part of UL and is presently only used by it. Maybe it could >>> have a better name? >>> >>> I totally agree. Possible alternatives: >> >> 1 rename it >> 2 move it into the log sub project as an internal implementation detail >> 3 Or, provide a platform independent version of _vscprintf ( >> https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So >> whoever >> really wants to count characters in resolved format string should first >> use >> that function, then alloc the appropriate buffer, then do the real >> printing. >> Posix variant for _vscprintf could just be vsnprintf with a zero byte >> output buffer. >> >> I personally like (3) best, followed by (2) >> > > The best alternative, IMHO, would be to make os::vsnprintf behave just > like log_vsnprintf (C99 standard vsnprintf), thus removing the need for > log_vsnprintf completely. Also, that behavior is strictly better than > always returning -1 on error. > > Arguably, people tend to check - if they check at all - for -1 rather than for result > sizeof input buffer. But I also see that this argument goes both ways, because strictly speaking you should check for both -1 and truncation and handle them differently, so your way would be more correct. Best Regards, Thomas > Thanks, > Marcus > > > >> >> This change leaves no direct calls to vsnprintf in HotSpot, outside of >>> the relevant parts of the os implementation. However, there are a lot >>> of direct calls to snprintf, potentially resulting in strings that are >>> not NUL-terminated. Those should perhaps be calling os::snprintf (or >>> previously, jio_snprintf), but that's another change for later. 
>>> >>> CR: >>> https://bugs.openjdk.java.net/browse/JDK-8196882 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~kbarrett/8196882/open.00/ >>> >>> Testing: >>> Mach5 {hs,jdk}-tier{1,2,3} (used VS2013 for Windows testing) >>> Also built with VS2017. >>> >>> >>> The changes: >> >> src/hotspot/os/posix/os_posix.cpp >> >> + if (len > 0) buf[len - 1] = '\0'; >> Style nit: brackets ? >> >> src/hotspot/share/prims/jvm.cpp >> >> Small behaviour change for input buffers of 0 and format strings of "". >> Before we returned -1, now (I guess) 0. Does any existing caller care? I >> looked but could not find anyone even checking the return code, so it is >> probably nothing. >> >> src/hotspot/share/runtime/os.hpp >> >> Comment to os::(v)snprintf: "These functions return -1 if the >> + // output has been truncated, rather than returning the number of >> characters >> + // that would have been written (exclusive of the terminating NUL) if >> the >> + // output had not been truncated." >> >> I would not describe what the functions do NOT. Only what they do - would >> be a bit clearer. >> >> Proposal for a more concise version (feel free to reformulate, I am no >> native speaker): >> >> "os::snprintf and os::vsnprintf are identical to snprintf(3) and >> vsnprintf(3) except in the following points: >> - On truncation they will return -1. >> - On truncation, the output string will be zero terminated, unless the >> input buffer length is 0. >> " >> >> test/hotspot/gtest/runtime/test_os.cpp: >> >> +// Test os::vmsprintf and friends >> >> Typo. >> >> -- >> Thinking about it I think the test could be made a bit denser and simpler >> and have more coverage. 
How about this instead: >> >> static void test_snprintf(int (*pf)(char*, size_t, const char*, ...), bool >> expect_count) { >> >> const char expected[] = "0123456789012345678901234567890123456789"; >> char buffer[sizeof(expected) + 4]; >> const int lengths_to_test[] = { sizeof(buffer), ......, 1, 0, -1 }; >> for (int i = 0; lengths_to_test[i] != -1; i ++) { >> int length = lengths_to_test[i]; >> memset(buffer, 'x', sizeof(buffer)); // to catch overwriters >> int result = pf(buffer, length, "%s", expected); >> if (length > 0) { >> ASSERT_EQ('\0', buffer[length - 1]); // expect terminating zero >> if (length > sizeof(expected)) { >> ASSERT_EQ(0, strcmp(buffer, expected)); // expect the whole >> string >> to fit >> ASSERT_EQ(result, strlen(expected)); >> } else { >> ASSERT_EQ(0, strncmp(buffer, expected, length)); // expect >> truncation >> ASSERT_EQ(result, expect_count ? strlen(expected) : -1); >> } >> } >> ASSERT_EQ('x', buffer[lengths_to_test[i]]); // canary >> } >> } >> >> (for real paranoia one could check also for leading overwriters by writing >> to buffer[1] and checking buffer[0] for 'x', but I do not think this is >> necessary.) >> >> And should we move log_snprintf to logging, the tests would get a bit >> simpler too. >> >> Thanks and Kind Regards, Thomas >> > > From per.liden at oracle.com Wed Feb 21 09:00:49 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 21 Feb 2018 10:00:49 +0100 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: Message-ID: Hi Kim, On 02/21/2018 04:02 AM, Kim Barrett wrote: > Please review this change to split jniHandles.hpp, moving the inline > definition of JNIHandles::resolve and related functions to the new > jniHandles.inline.hpp. This is being done in preparation for > JDK-8195972 "Refactor oops in JNI to use the Access API". 
This is > needed so we can include access.inline.hpp as part of that change, > making the implementation of Access available for reference by > JNIHandles::resolve &etc. > > This was accomplished largely by a simple copy of the code, and > updating the #includes of lots of files. However, > resolve_external_guard was changed to no longer be inline. It doesn't > seem to be performance critical, and this change reduced the fanout on > #include updates. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198474 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198474/open.00/ > [Note to Oracle reviewers: There is a closed part to this change too.] src/hotspot/share/ci/ciObject.cpp --------------------------------- +// Get the oop of this ciObject. +oop ciObject::get_oop() const { + assert(_handle != NULL, "null oop"); + return JNIHandles::resolve_non_null(_handle); +} I know your change didn't add it, but the above assert is unnecessary, since JNIHandles::resolve_non_null() already does exactly that. I suggest we just remove it. src/hotspot/share/jvmci/jvmciCodeInstaller.hpp ---------------------------------------------- +// --- FIXME +#include "runtime/jniHandles.inline.hpp" Looks like a FIXME was left here, and the include line sits outside of the main include block. Is there still something to fix here? src/hotspot/share/jvmci/jvmciJavaClasses.hpp -------------------------------------------- +// --- FIXME +#include "runtime/jniHandles.inline.hpp" Same here. 
src/hotspot/share/runtime/jniHandles.inline.hpp ----------------------------------------------- + +#endif // include guard + Should be: #endif // SHARE_RUNTIME_JNIHANDLES_INLINE_HPP cheers, Per > > Testing: > hs-tier1 in isolation > {hs,jdk}-tier{1,2,3} JDK-8195972 as part of changes for JDK-8195972 > > From adinn at redhat.com Wed Feb 21 09:47:20 2018 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 21 Feb 2018 09:47:20 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> <07732449-1b34-d30e-9e93-c0a0196abf32@redhat.com> Message-ID: On 20/02/18 17:47, JC Beyler wrote: > Fixed :) > http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.01/ Ok, that's good to go. I've pushed this to hs on your behalf. Since it is AArch64 only it doesn't need any further approval. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From stuart.monteith at linaro.org Wed Feb 21 11:12:07 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Wed, 21 Feb 2018 11:12:07 +0000 Subject: RFR JDK-8194084: Obsolete FastTLABRefill and remove the related code In-Reply-To: References: <20c0a51d-e65f-f6c1-7322-a2536214b660@oracle.com> <6ce3fd81-8eb0-2b4c-369a-b5f70b46af4c@redhat.com> <07732449-1b34-d30e-9e93-c0a0196abf32@redhat.com> Message-ID: Looks ok to me too. I've just been eyeballing the output and running the hotspot tests against it. I've not found any issues. Thanks, Stuart On 21 February 2018 at 09:47, Andrew Dinn wrote: > On 20/02/18 17:47, JC Beyler wrote: >> Fixed :) >> http://cr.openjdk.java.net/~jcbeyler/8198439/webrev.01/ > Ok, that's good to go. > > I've pushed this to hs on your behalf. 
Since it is AArch64 only it > doesn't need any further approval. > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From erik.osterlund at oracle.com Wed Feb 21 11:33:16 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Feb 2018 12:33:16 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> Message-ID: <5A8D58FC.10603@oracle.com> Hi Erik, Thank you for reviewing this. New full webrev: http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ New incremental webrev: http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ On 2018-02-21 09:18, Erik Helin wrote: > Hi Erik, > > this is a very nice improvement, thanks for working on this! > > A few minor comments thus far: > - in stubGenerator_ppc.cpp: > you seem to have lost a `const` in the refactoring Fixed. > - in psCardTable.hpp: > I don't think card_mark_must_follow_store() is needed, since > PSCardTable passes `false` for `conc_scan` to the CardTable > constructor Fixed. I took the liberty of also making the condition for card_mark_must_follow_store() more precise on CMS by making the condition for scanned_concurrently consider whether CMSPrecleaningEnabled is set or not (like other generated code does). > - in g1CollectedHeap.hpp: > could you store the G1CardTable as a field in G1CollectedHeap? Also, > could you name the "getter" just card_table()? (I see that > g1_hot_card_cache method above, but that one should also be renamed to > just hot_card_cache, but in another patch) Fixed. 
> - in cardTable.hpp and cardTable.cpp: > could you use `hg cp` when constructing these files from > cardTableModRefBS.{hpp,cpp} so the history is preserved? Yes, I will do this before pushing to make sure the history is preserved. Thanks, /Erik > > Thanks, > Erik > > On 02/15/2018 10:31 AM, Erik Österlund wrote: >> Hi, >> >> Here is an updated revision of this webrev after internal feedback >> from StefanK who helped looking through my changes - thanks a lot for >> the help with that. >> >> The changes to the new revision are a bunch of minor clean up >> changes, e.g. copyright headers, indentation issues, sorting >> includes, adding/removing newlines, reverting an assert error >> message, fixing constructor initialization orders, and things like that. >> >> The problem I mentioned last time about the version number of our >> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >> has been resolved by simply waiting. So now I changed the JVMCI logic >> to get the card values from the new location in the corresponding >> card tables when observing JDK version 11 or above. >> >> New full webrev (rebased onto a month fresher jdk-hs): >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >> >> Incremental webrev (over the rebase): >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >> >> This new version has run through hs-tier1-5 and jdk-tier1-3 without >> any issues. >> >> Thanks, >> /Erik >> >> On 2018-01-17 13:54, Erik Österlund wrote: >>> Hi, >>> >>> Today, both Parallel, CMS and Serial share the same code for its >>> card marking barrier. However, they have different requirements how >>> to manage its card tables by the GC. 
And as the card table itself is >>> embedded as a part of the CardTableModRefBS barrier set, this has >>> led to an unnecessary inheritance hierarchy for CardTableModRefBS, >>> where for example CardTableModRefBSForCTRS and CardTableExtension >>> are CardTableModRefBS subclasses that do not change anything to do >>> with the barriers. >>> >>> To clean up the code, there should really be a separate CardTable >>> hierarchy that contains the differences how to manage the card table >>> from the GC point of view, and simply let CardTableModRefBS have a >>> CardTable. This would allow removing CardTableModRefBSForCTRS and >>> CardTableExtension and their references from shared code (that >>> really have nothing to do with the barriers, despite being barrier >>> sets), and significantly simplify the barrier set code. >>> >>> This patch mechanically performs this refactoring. A new CardTable >>> class has been created with a PSCardTable subclass for Parallel, a >>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>> references to card tables and their values have been updated >>> accordingly. >>> >>> This touches a lot of platform specific code, so would be fantastic >>> if port maintainers could have a look that I have not broken anything. >>> >>> There is a slight problem that should be pointed out. There is an >>> unfortunate interaction between Graal and hotspot. Graal needs to >>> know the values of g1 young cards and dirty cards. This is queried >>> in different ways in different versions of the JDK in the >>> GraalHotSpotVMConfig.java file. Now these values will move from >>> their barrier set class to their card table class. That means we >>> have at least three cases how to find the correct values. There is >>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, we >>> have not yet bumped the version number to 11 in the repo, and >>> therefore it has to be from JDK10 - 11 for now and updated after >>> incrementing the version number. 
But that means that it will be >>> temporarily incompatible with JDK10. That is okay for our own copy >>> of Graal, but can not be used by upstream Graal as they are given >>> the choice whether to support the public JDK10 or the JDK11 that >>> does not quite admit to being 11 yet. I chose the solution that >>> works in our repository. I will notify Graal folks of this issue. In >>> the long run, it would be nice if we could have a more solid >>> interface here. >>> >>> However, as an added benefit, this changeset brings about a hundred >>> copyright headers up to date, so others do not have to update them >>> for a while. >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>> >>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>> >>> Thanks, >>> /Erik >> From per.liden at oracle.com Wed Feb 21 11:31:44 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 21 Feb 2018 12:31:44 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <5A8C4FB5.4090706@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> Message-ID: <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> Hi Erik, Looks good, just one small-ish request. Can we please push the default implementation down to the raw layer, so that the two instances of "return obj;" becomes "return Raw::resolve(obj);". From my point of view, that makes this symmetric with the other functions and helps me think/reason about this. cheers, Per On 02/20/2018 05:41 PM, Erik Österlund wrote: > Hi Per, > > (looping in hotspot-dev as this seems to touch more than runtime) > > On 2018-02-20 17:03, Per Liden wrote: >> Hi Erik, >> >> As we discussed, coming up with a good name for the new Access call is >> really hard. All good/descriptive alternatives I can come up with tend >> to be way too long. 
So, next strategy is to pick something that fits >> into the rest of the API. With this in mind I'd like to suggest we >> just name it: oop Access<>::resolve(oop obj) >> >> The justification would be that this matches the one-verb style we >> have for the other functions (load/store/clone) and it seems that you >> anyway named the internal parts just "resolve", such as >> BARRIER_RESOLVE, and resolve_func_t. > > Sure. > > Here is a full webrev with my proposal for this RFE, now that we agree > on the direction: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ > > Incremental from the prototype I sent out for early turnaround yesterday: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ > > It is now enforced that *_addr_raw() functions are to be used by the GC > only, when the GC knows addresses are stable. All other address > resolution goes through non-raw address resolution functions that at a > lower level end up calling the resolve barrier on Access, which can be > overridden by Shenandoah. There are in total two callers of > Access<>::resolve: on oopDesc::field_addr and arrayOop::base. The rest > is derived from that. > > @Roman: Hope this works well for Shenandoah. > @Per: Hope you like the new shorter name. > > Thanks, > /Erik > >> What do you think? >> >> cheers, >> Per >> >> On 02/19/2018 06:08 PM, Erik Osterlund wrote: >>> Hi Roman, >>> >>> I see there is a need to resolve a stable address for some objects to >>> bulk access primitives. The code base is full of assumptions that no >>> barriers are needed for such address resolution. It looks like the >>> proposed approach is to one by one hunt down all such callsites. I >>> could find some places where such barriers are missing. >>> >>> To make the code as maintainable as possible, I would like to propose >>> a slightly different take on this, and would love to hear if this >>> works for Shenandoah or not. 
The main idea is to annotate >>> places where we do *not* want GC address resolution for internal pointers to >>> objects, instead of where we want it, as it seems to be the common >>> case that we do want to resolve the address. >>> >>> In some more detail: >>> >>> 1) Rip out the *_addr facilities not used (a whole bunch on oopDesc). >>> 2) Ignore the difference between read/write resolution (write >>> resolution handles both reads and writes). Instead introduce an oop >>> resolve_stable_addr(oop) function in Access. This makes it easier to >>> use. >>> 3) Identify as few callsites as possible for this function. I'm >>> thinking arrayOop::base() and a few strange exceptions. >>> 4) Identify the few places where we explicitly do *not* want address >>> resolution, like calls from GC, and replace them with *_addr_raw >>> variants. >>> 5) Have a switch in barrierSetConfig.hpp that determines whether the >>> build needs to support not to-space invariant GCs or not. >>> >>> With these changes, the number of callsites have been kept down to >>> what I believe to be a minimum. And yet it covers some callsites that >>> you accidentally missed (e.g. jvmciCodeInstaller.cpp). Existing uses >>> of the various *_addr facilities can in most cases continue to do >>> what they have done in the past. And new uses will not be surprised >>> that they accidentally missed some barriers. It will be solved >>> automagically. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >>> >>> Please let me know what you think about this style and whether that >>> works for you or not. I have not done proper testing yet, but >>> presented this patch for quicker turn-around so we can synchronize >>> the direction first. 
>>> >>> Thanks, >>> /Erik >>> >>>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>>> >>>> The direct memory accessors in typeArrayOop.hpp, which are usually >>>> used for bulk memory access operations, should use the Access API, in >>>> order to give the garbage collector a chance to intercept the access >>>> (for example, employ read- or write-barriers on the target array). >>>> This also means it's necessary to distinguish between write-accesses >>>> and read-accesses (for example, GCs might want to use a >>>> copy-on-write-barrier for write-accesses only). >>>> >>>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>>> and store_at_addr(), and links it up to the corresponding X_get_addr() >>>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>>> X_addr() accessors have been renamed to match their use (load or store >>>> of primitive array elements). >>>> >>>> The changeset is based on the previously proposed: >>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>>> >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>>> >>>> Please review! >>>> >>>> Thanks, >>>> Roman > From erik.osterlund at oracle.com Wed Feb 21 11:51:00 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Feb 2018 12:51:00 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> Message-ID: <5A8D5D24.4030605@oracle.com> Hi Per, While I think any potential callsite to RawAccess<>::resolve() would always be rather nonsense, I do not mind moving it to Raw. 
Incremental webrev: http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_01/ Full webrev: http://cr.openjdk.java.net/~eosterlund/8198286/webrev.01/ Thanks, /Erik On 2018-02-21 12:31, Per Liden wrote: > Hi Erik, > > Looks good, just one small-ish request. Can we please push the default > implementation down to the raw layer, so that the two instances of > "return obj;" become "return Raw::resolve(obj);". From my point of > view, that makes this symmetric with the other functions and helps me > think/reason about this. > > cheers, > Per > > On 02/20/2018 05:41 PM, Erik Österlund wrote: >> Hi Per, >> >> (looping in hotspot-dev as this seems to touch more than runtime) >> >> On 2018-02-20 17:03, Per Liden wrote: >>> Hi Erik, >>> >>> As we discussed, coming up with a good name for the new Access call >>> is really hard. All good/descriptive alternatives I can come up with >>> tend to be way too long. So, next strategy is to pick something that >>> fits into the rest of the API. With this in mind I'd like to >>> suggest we just name it: oop Access<>::resolve(oop obj) >>> >>> The justification would be that this matches the one-verb style we >>> have for the other functions (load/store/clone) and it seems that >>> you anyway named the internal parts just "resolve", such as >>> BARRIER_RESOLVE, and resolve_func_t. >>> >> Sure. >> >> Here is a full webrev with my proposal for this RFE, now that we >> agree on the direction: >> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >> >> Incremental from the prototype I sent out for early turnaround >> yesterday: >> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >> >> It is now enforced that *_addr_raw() functions are to be used by the >> GC only, when the GC knows addresses are stable. All other address >> resolution goes through non-raw address resolution functions that at >> a lower level end up calling the resolve barrier on Access, which can >> be overridden by Shenandoah. 
There are in total two callers of >> Access<>::resolve: on oopDesc::field_addr and arrayOop::base. The >> rest is derived from that. >> >> @Roman: Hope this works well for Shenandoah. >> @Per: Hope you like the new shorter name. >> >> Thanks, >> /Erik >> >>> What do you think? >>> >>> cheers, >>> Per >>> >>> On 02/19/2018 06:08 PM, Erik Osterlund wrote: >>>> Hi Roman, >>>> >>>> I see there is a need to resolve a stable address for some objects >>>> to bulk access primitives. The code base is full of assumptions >>>> that no barriers are needed for such address resolution. It looks >>>> like the proposed approach is to one by one hunt down all such >>>> callsites. I could find some places where such barriers are missing. >>>> >>>> To make the code as maintainable as possible, I would like to >>>> propose a slightly different take on this, and would love to hear >>>> if this works for Shenandoah or not. The main idea is to annotate >>>> places where we do *not* want GC address resolution for internal >>>> pointers to objects, instead of where we want it, as it seems to be >>>> the common case that we do want to resolve the address. >>>> >>>> In some more detail: >>>> >>>> 1) Rip out the *_addr fascilities not used (a whole bunch on oopDesc). >>>> 2) Ignore the difference between read/write resolution (write >>>> resolution handles both reads and writes). Instead introduce an oop >>>> resolve_stable_addr(oop) function in Access. This makes it easier >>>> to use. >>>> 3) Identify as few callsites as possible for this function. I'm >>>> thinking arrayOop::base() and a few strange exceptions. >>>> 4) Identify the few places where we explicitly do *not* want >>>> address resolution, like calls from GC, and replace them with >>>> *_addr_raw variants. >>>> 5) Have a switch in barrierSetConfig.hpp that determines whether >>>> the build needs to support not to-space invariant GCs or not. 
>>>> >>>> With these changes, the number of callsites have been kept down to >>>> what I believe to be a minimum. And yet it covers some callsites >>>> that you accidentally missed (e.g. jvmciCodeInstaller.cpp). >>>> Existing uses of the various *_addr fascilities can in most cases >>>> continue to do what they have done in the past. And new uses will >>>> not be surprised that they accidentally missed some barriers. It >>>> will be solved automagically. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >>>> >>>> Please let me know what you think about this style and whether that >>>> works for you or not. I have not done proper testing yet, but >>>> presented this patch for quicker turn-around so we can synchronize >>>> the direction first. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>>>> >>>>> The direct memory accessors in typeArrayOop.hpp, which are usually >>>>> used for bulk memory access operations, should use the Access API, in >>>>> order to give the garbage collector a chance to intercept the access >>>>> (for example, employ read- or write-barriers on the target array). >>>>> This also means it's necessary to distinguish between write-accesses >>>>> and read-accesses (for example, GCs might want to use a >>>>> copy-on-write-barrier for write-accesses only). >>>>> >>>>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>>>> and store_at_addr(), and links it up to the corresponding >>>>> X_get_addr() >>>>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>>>> X_addr() accessors have been renamed to match their use (load or >>>>> store >>>>> of primitive array elements). 
>>>> >>>> The changeset is based on the previously proposed: >>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>>> >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>>> >>>> Please review! >>>> >>>> Thanks, >>>> Roman > From coleen.phillimore at oracle.com Wed Feb 21 12:33:26 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Feb 2018 07:33:26 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: Message-ID: <04d54205-61da-9f7b-84be-a0956cc68bcd@oracle.com> http://cr.openjdk.java.net/~kbarrett/8198474/open.00/src/hotspot/share/jvmci/jvmciCodeInstaller.hpp.udiff.html I think you should move these oop functions to the cpp file, rather than leave FIXME. This seems very limited. objArrayOop sites() { return (objArrayOop) JNIHandles::resolve(_sites_handle); } arrayOop code() { return (arrayOop) JNIHandles::resolve(_code_handle); } arrayOop data_section() { return (arrayOop) JNIHandles::resolve(_data_section_handle); } objArrayOop data_section_patches() { return (objArrayOop) JNIHandles::resolve(_data_section_patches_handle); } #ifndef PRODUCT objArrayOop comments() { return (objArrayOop) JNIHandles::resolve(_comments_handle); } #endif oop word_kind() { return (oop) JNIHandles::resolve(_word_kind_handle); } http://cr.openjdk.java.net/~kbarrett/8198474/open.00/src/hotspot/share/jvmci/jvmciJavaClasses.hpp.udiff.html This one is not so easy, so can you add to FIXME some short comment to not include .inline.hpp in hpp files? The rest looks good. Thanks, Coleen On 2/20/18 10:02 PM, Kim Barrett wrote: > Please review this change to split jniHandles.hpp, moving the inline > definition of JNIHandles::resolve and related functions to the new > jniHandles.inline.hpp. 
This is being done in preparation for > JDK-8195972 "Refactor oops in JNI to use the Access API". This is > needed so we can include access.inline.hpp as part of that change, > making the implementation of Access available for reference by > JNIHandles::resolve &etc. > > This was accomplished largely by a simple copy of the code, and > updating the #includes of lots of files. However, > resolve_external_guard was changed to no longer be inline. It doesn't > seem to be performance critical, and this change reduced the fanout on > #include updates. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198474 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198474/open.00/ > [Note to Oracle reviewers: There is a closed part to this change too.] > > Testing: > hs-tier1 in isolation > {hs,jdk}-tier{1,2,3} JDK-8195972 as part of changes for JDK-8195972 > > From thomas.schatzl at oracle.com Wed Feb 21 12:44:05 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 21 Feb 2018 13:44:05 +0100 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: Message-ID: <1519217045.2401.14.camel@oracle.com> Hi Kim, seems good, two minor comments: - in jvmciCodeInstaller.hpp and jvmciJavaClasses.hpp, can the "FIXME" comment elaborate a bit more what's broken, and file a CR, maybe even detailing how this could be fixed. If it is not current any more, please remove the comments. I just really really do not like "FIXME" comments, nobody is going to remember next time what the issue was, whether it has been fixed, etc. - copyright update in jvmtiEnvBase.hpp missing Thanks, Thomas On Tue, 2018-02-20 at 22:02 -0500, Kim Barrett wrote: > Please review this change to split jniHandles.hpp, moving the inline > definition of JNIHandles::resolve and related functions to the new > jniHandles.inline.hpp. This is being done in preparation for > JDK-8195972 "Refactor oops in JNI to use the Access API". 
This is > needed so we can include access.inline.hpp as part of that change, > making the implementation of Access available for reference by > JNIHandles::resolve &etc. > > This was accomplished largely by a simple copy of the code, and > updating the #includes of lots of files. However, > resolve_external_guard was changed to no longer be inline. It > doesn't > seem to be performance critical, and this change reduced the fanout > on > #include updates. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198474 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198474/open.00/ > [Note to Oracle reviewers: There is a closed part to this change > too.] > > Testing: > hs-tier1 in isolation > {hs,jdk}-tier{1,2,3} JDK-8195972 as part of changes for JDK-8195972 > > From coleen.phillimore at oracle.com Wed Feb 21 13:17:09 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Feb 2018 08:17:09 -0500 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A8D58FC.10603@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> Message-ID: <1f234e18-9226-f6a2-d912-76ef3cb75e21@oracle.com> Hi Erik, I started looking at this but was quickly overwhelmed by the changes. It looks like the case for BarrierSet::ModRef is removed in the stubGenerator code(s) but not in templateTable do_oop_store. Should the case of BarrierSet::ModRef get a ShouldNotReachHere in stubGenerator in the places where they are removed? Some platforms have code for this in do_oop_store in templateTable and some platforms get ShouldNotReachHere(), which does not pattern match for me. 
- case BarrierSet::CardTableForRS: - case BarrierSet::CardTableExtension: - case BarrierSet::ModRef: + case BarrierSet::CardTableModRef: I think SAP should test this out on the other platforms to hopefully avoid any issues we've been seeing lately with multi-platform changes. CCing Thomas. thanks, Coleen On 2/21/18 6:33 AM, Erik Österlund wrote: > Hi Erik, > > Thank you for reviewing this. > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ > > New incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ > > On 2018-02-21 09:18, Erik Helin wrote: >> Hi Erik, >> >> this is a very nice improvement, thanks for working on this! >> >> A few minor comments thus far: >> - in stubGenerator_ppc.cpp: >> you seem to have lost a `const` in the refactoring > > Fixed. > >> - in psCardTable.hpp: >> I don't think card_mark_must_follow_store() is needed, since >> PSCardTable passes `false` for `conc_scan` to the CardTable >> constructor > > Fixed. I took the liberty of also making the condition for > card_mark_must_follow_store() more precise on CMS by making the > condition for scanned_concurrently consider whether > CMSPrecleaningEnabled is set or not (like other generated code does). > >> - in g1CollectedHeap.hpp: >> could you store the G1CardTable as a field in G1CollectedHeap? Also, >> could you name the "getter" just card_table()? (I see that >> g1_hot_card_cache method above, but that one should also be renamed to >> just hot_card_cache, but in another patch) > > Fixed. > >> - in cardTable.hpp and cardTable.cpp: >> could you use `hg cp` when constructing these files from >> cardTableModRefBS.{hpp,cpp} so the history is preserved? > > Yes, I will do this before pushing to make sure the history is preserved. 
> > Thanks, > /Erik > >> >> Thanks, >> Erik >> >> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>> Hi, >>> >>> Here is an updated revision of this webrev after internal feedback >>> from StefanK who helped looking through my changes - thanks a lot >>> for the help with that. >>> >>> The changes to the new revision are a bunch of minor clean up >>> changes, e.g. copy right headers, indentation issues, sorting >>> includes, adding/removing newlines, reverting an assert error >>> message, fixing constructor initialization orders, and things like >>> that. >>> >>> The problem I mentioned last time about the version number of our >>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>> has been resolved by simply waiting. So now I changed the JVMCI >>> logic to get the card values from the new location in the >>> corresponding card tables when observing JDK version 11 or above. >>> >>> New full webrev (rebased onto a month fresher jdk-hs): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>> >>> Incremental webrev (over the rebase): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>> >>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>> any issues. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-01-17 13:54, Erik Österlund wrote: >>>> Hi, >>>> >>>> Today, both Parallel, CMS and Serial share the same code for its >>>> card marking barrier. However, they have different requirements how >>>> to manage its card tables by the GC. And as the card table itself >>>> is embedded as a part of the CardTableModRefBS barrier set, this >>>> has led to an unnecessary inheritance hierarchy for >>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>> change anything to do with the barriers. 
>>>> >>>> To clean up the code, there should really be a separate CardTable >>>> hierarchy that contains the differences how to manage the card >>>> table from the GC point of view, and simply let CardTableModRefBS >>>> have a CardTable. This would allow removing >>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>> references from shared code (that really have nothing to do with >>>> the barriers, despite being barrier sets), and significantly >>>> simplify the barrier set code. >>>> >>>> This patch mechanically performs this refactoring. A new CardTable >>>> class has been created with a PSCardTable subclass for Parallel, a >>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>> references to card tables and their values have been updated >>>> accordingly. >>>> >>>> This touches a lot of platform specific code, so would be fantastic >>>> if port maintainers could have a look that I have not broken anything. >>>> >>>> There is a slight problem that should be pointed out. There is an >>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>> know the values of g1 young cards and dirty cards. This is queried >>>> in different ways in different versions of the JDK in the >>>> GraalHotSpotVMConfig.java file. Now these values will move from >>>> their barrier set class to their card table class. That means we >>>> have at least three cases how to find the correct values. There is >>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, we >>>> have not yet bumped the version number to 11 in the repo, and >>>> therefore it has to be from JDK10 - 11 for now and updated after >>>> incrementing the version number. But that means that it will be >>>> temporarily incompatible with JDK10. That is okay for our own copy >>>> of Graal, but can not be used by upstream Graal as they are given >>>> the choice whether to support the public JDK10 or the JDK11 that >>>> does not quite admit to being 11 yet. 
I chose the solution that >>>> works in our repository. I will notify Graal folks of this issue. >>>> In the long run, it would be nice if we could have a more solid >>>> interface here. >>>> >>>> However, as an added benefit, this changeset brings about a hundred >>>> copyright headers up to date, so others do not have to update them >>>> for a while. >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>> >>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>> >>>> Thanks, >>>> /Erik >>> > From per.liden at oracle.com Wed Feb 21 13:18:10 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 21 Feb 2018 14:18:10 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: <5A8D5D24.4030605@oracle.com> References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> <5A8D5D24.4030605@oracle.com> Message-ID: Thanks Erik! Looks good! /Per On 02/21/2018 12:51 PM, Erik Österlund wrote: > Hi Per, > > While I think any potential callsite to RawAccess<>::resolve() would > always be rather nonsense, I do not mind moving it to Raw. > > Incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_01/ > > Full webrev: > http://cr.openjdk.java.net/~eosterlund/8198286/webrev.01/ > > Thanks, > /Erik > > On 2018-02-21 12:31, Per Liden wrote: >> Hi Erik, >> >> Looks good, just one small-ish request. Can we please push the default >> implementation down to the raw layer, so that the two instances of >> "return obj;" become "return Raw::resolve(obj);". From my point of >> view, that makes this symmetric with the other functions and helps me >> think/reason about this. 
>> >> cheers, >> Per >> >> On 02/20/2018 05:41 PM, Erik ?sterlund wrote: >>> Hi Per, >>> >>> (looping in hotspot-dev as this seems to touch more than runtime) >>> >>> On 2018-02-20 17:03, Per Liden wrote: >>>> Hi Erik, >>>> >>>> As we discussed, coming up with a good name for the new Access call >>>> is really hard. All good/descriptive alternatives I can come up with >>>> tend to be way to long. So, next strategy is to pick something that >>>> fits into the reset of the API. With this in mind I'd like to >>>> suggest we just name it: oop Access<>::resolve(oop obj) >>>> >>>> The justification would that this this matches the one-verb style we >>>> have for the other functions (load/store/clone) and it seems that >>>> you anyway named the internal parts just "resolve", such as >>>> BARRIER_RESOLVE, and resolve_func_t. >>> >>> Sure. >>> >>> Here is a full webrev with my proposal for this RFE, now that we >>> agree on the direction: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>> >>> Incremental from the prototype I sent out for early turnaround >>> yesterday: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>> >>> It is now enforced that *_addr_raw() functions are to be used by the >>> GC only, when the GC knows addresses are stable. All other address >>> resolution goes through non-raw address resolution functions that at >>> a lower level end up calling the resolve barrier on Access, which can >>> be overridden by Shenandoah. There are in total two callers of >>> Access<>::resolve: on oopDesc::field_addr and arrayOop::base. The >>> rest is derived from that. >>> >>> @Roman: Hope this works well for Shenandoah. >>> @Per: Hope you like the new shorter name. >>> >>> Thanks, >>> /Erik >>> >>>> What do you think? >>>> >>>> cheers, >>>> Per >>>> >>>> On 02/19/2018 06:08 PM, Erik Osterlund wrote: >>>>> Hi Roman, >>>>> >>>>> I see there is a need to resolve a stable address for some objects >>>>> to bulk access primitives. 
The code base is full of assumptions >>>>> that no barriers are needed for such address resolution. It looks >>>>> like the proposed approach is to one by one hunt down all such >>>>> callsites. I could find some places where such barriers are missing. >>>>> >>>>> To make the code as maintainable as possible, I would like to >>>>> propose a slightly different take on this, and would love to hear >>>>> if this works for Shenandoah or not. The main idea is to annotate >>>>> places where we do *not* want GC address resolution for internal >>>>> pointers to objects, instead of where we want it, as it seems to be >>>>> the common case that we do want to resolve the address. >>>>> >>>>> In some more detail: >>>>> >>>>> 1) Rip out the *_addr fascilities not used (a whole bunch on oopDesc). >>>>> 2) Ignore the difference between read/write resolution (write >>>>> resolution handles both reads and writes). Instead introduce an oop >>>>> resolve_stable_addr(oop) function in Access. This makes it easier >>>>> to use. >>>>> 3) Identify as few callsites as possible for this function. I'm >>>>> thinking arrayOop::base() and a few strange exceptions. >>>>> 4) Identify the few places where we explicitly do *not* want >>>>> address resolution, like calls from GC, and replace them with >>>>> *_addr_raw variants. >>>>> 5) Have a switch in barrierSetConfig.hpp that determines whether >>>>> the build needs to support not to-space invariant GCs or not. >>>>> >>>>> With these changes, the number of callsites have been kept down to >>>>> what I believe to be a minimum. And yet it covers some callsites >>>>> that you accidentally missed (e.g. jvmciCodeInstaller.cpp). >>>>> Existing uses of the various *_addr fascilities can in most cases >>>>> continue to do what they have done in the past. And new uses will >>>>> not be surprised that they accidentally missed some barriers. It >>>>> will be solved automagically. 
>>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >>>>> >>>>> Please let me know what you think about this style and whether that >>>>> works for you or not. I have not done proper testing yet, but >>>>> presented this patch for quicker turn-around so we can synchronize >>>>> the direction first. >>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>>>>> >>>>>> The direct memory accessors in typeArrayOop.hpp, which are usually >>>>>> used for bulk memory access operations, should use the Access API, in >>>>>> order to give the garbage collector a chance to intercept the access >>>>>> (for example, employ read- or write-barriers on the target array). >>>>>> This also means it's necessary to distinguish between write-accesses >>>>>> and read-accesses (for example, GCs might want to use a >>>>>> copy-on-write-barrier for write-accesses only). >>>>>> >>>>>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>>>>> and store_at_addr(), and links it up to the corresponding >>>>>> X_get_addr() >>>>>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>>>>> X_addr() accessors have been renamed to match their use (load or >>>>>> store >>>>>> of primitive array elements). >>>>>> >>>>>> The changeset is based on the previously proposed: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>>>>> >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>>>>> Bug: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>>>>> >>>>>> Please review! 
>>>>>> >>>>>> Thanks, >>>>>> Roman >>> > From rkennke at redhat.com Wed Feb 21 13:28:01 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 21 Feb 2018 14:28:01 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> <5A8D5D24.4030605@oracle.com> Message-ID: Looks good to me too. Thanks!! Roman On Wed, Feb 21, 2018 at 2:18 PM, Per Liden wrote: > Thanks Erik! > > Looks good! > > /Per > > > On 02/21/2018 12:51 PM, Erik ?sterlund wrote: >> >> Hi Per, >> >> While I think any potential callsite to RawAccess<>::resolve() would >> always be rather nonsense, I do not mind moving it to Raw. >> >> Incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_01/ >> >> Full webrev: >> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.01/ >> >> Thanks, >> /Erik >> >> On 2018-02-21 12:31, Per Liden wrote: >>> >>> Hi Erik, >>> >>> Looks good, just one small-ish request. Can we please push the default >>> implementation down to the raw layer, so that the two instances of "return >>> obj;" becomes "return Raw::resolve(obj);". From my point of view, that makes >>> this symmetric with the other functions and helps me think/reason about >>> this. >>> >>> cheers, >>> Per >>> >>> On 02/20/2018 05:41 PM, Erik ?sterlund wrote: >>>> >>>> Hi Per, >>>> >>>> (looping in hotspot-dev as this seems to touch more than runtime) >>>> >>>> On 2018-02-20 17:03, Per Liden wrote: >>>>> >>>>> Hi Erik, >>>>> >>>>> As we discussed, coming up with a good name for the new Access call is >>>>> really hard. All good/descriptive alternatives I can come up with tend to be >>>>> way to long. So, next strategy is to pick something that fits into the reset >>>>> of the API. 
With this in mind I'd like to suggest we just name it: oop >>>>> Access<>::resolve(oop obj) >>>>> >>>>> The justification would that this this matches the one-verb style we >>>>> have for the other functions (load/store/clone) and it seems that you anyway >>>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>>> resolve_func_t. >>>> >>>> >>>> Sure. >>>> >>>> Here is a full webrev with my proposal for this RFE, now that we agree >>>> on the direction: >>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>>> >>>> Incremental from the prototype I sent out for early turnaround >>>> yesterday: >>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>>> >>>> It is now enforced that *_addr_raw() functions are to be used by the GC >>>> only, when the GC knows addresses are stable. All other address resolution >>>> goes through non-raw address resolution functions that at a lower level end >>>> up calling the resolve barrier on Access, which can be overridden by >>>> Shenandoah. There are in total two callers of Access<>::resolve: on >>>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>>> >>>> @Roman: Hope this works well for Shenandoah. >>>> @Per: Hope you like the new shorter name. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> What do you think? >>>>> >>>>> cheers, >>>>> Per >>>>> >>>>> On 02/19/2018 06:08 PM, Erik Osterlund wrote: >>>>>> >>>>>> Hi Roman, >>>>>> >>>>>> I see there is a need to resolve a stable address for some objects to >>>>>> bulk access primitives. The code base is full of assumptions that no >>>>>> barriers are needed for such address resolution. It looks like the proposed >>>>>> approach is to one by one hunt down all such callsites. I could find some >>>>>> places where such barriers are missing. 
>>>>>> >>>>>> To make the code as maintainable as possible, I would like to propose >>>>>> a slightly different take on this, and would love to hear if this works for >>>>>> Shenandoah or not. The main idea is to annotate places where we do *not* >>>>>> want GC address resolution for internal pointers to objects, instead of >>>>>> where we want it, as it seems to be the common case that we do want to >>>>>> resolve the address. >>>>>> >>>>>> In some more detail: >>>>>> >>>>>> 1) Rip out the *_addr fascilities not used (a whole bunch on oopDesc). >>>>>> 2) Ignore the difference between read/write resolution (write >>>>>> resolution handles both reads and writes). Instead introduce an oop >>>>>> resolve_stable_addr(oop) function in Access. This makes it easier to use. >>>>>> 3) Identify as few callsites as possible for this function. I'm >>>>>> thinking arrayOop::base() and a few strange exceptions. >>>>>> 4) Identify the few places where we explicitly do *not* want address >>>>>> resolution, like calls from GC, and replace them with *_addr_raw variants. >>>>>> 5) Have a switch in barrierSetConfig.hpp that determines whether the >>>>>> build needs to support not to-space invariant GCs or not. >>>>>> >>>>>> With these changes, the number of callsites have been kept down to >>>>>> what I believe to be a minimum. And yet it covers some callsites that you >>>>>> accidentally missed (e.g. jvmciCodeInstaller.cpp). Existing uses of the >>>>>> various *_addr fascilities can in most cases continue to do what they have >>>>>> done in the past. And new uses will not be surprised that they accidentally >>>>>> missed some barriers. It will be solved automagically. >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >>>>>> >>>>>> Please let me know what you think about this style and whether that >>>>>> works for you or not. 
I have not done proper testing yet, but presented this >>>>>> patch for quicker turn-around so we can synchronize the direction first. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>>>>>> >>>>>>> The direct memory accessors in typeArrayOop.hpp, which are usually >>>>>>> used for bulk memory access operations, should use the Access API, in >>>>>>> order to give the garbage collector a chance to intercept the access >>>>>>> (for example, employ read- or write-barriers on the target array). >>>>>>> This also means it's necessary to distinguish between write-accesses >>>>>>> and read-accesses (for example, GCs might want to use a >>>>>>> copy-on-write-barrier for write-accesses only). >>>>>>> >>>>>>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>>>>>> and store_at_addr(), and links it up to the corresponding >>>>>>> X_get_addr() >>>>>>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>>>>>> X_addr() accessors have been renamed to match their use (load or >>>>>>> store >>>>>>> of primitive array elements). >>>>>>> >>>>>>> The changeset is based on the previously proposed: >>>>>>> >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>>>>>> Bug: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>>>>>> >>>>>>> Please review! 
>>>>>>> >>>>>>> Thanks, >>>>>>> Roman >>>> >>>> >> > From erik.osterlund at oracle.com Wed Feb 21 13:49:44 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Feb 2018 14:49:44 +0100 Subject: RFR: 8198286: Direct memory accessors in typeArrayOop.hpp should use Access API In-Reply-To: References: <82c0ce41-6a34-9f87-ec45-6543bde97624@oracle.com> <5A8C4FB5.4090706@oracle.com> <6e3a8240-a304-5fbc-e273-1f5dbdced79f@oracle.com> <5A8D5D24.4030605@oracle.com> Message-ID: <5A8D78F8.3090409@oracle.com> Hi Roman, Thanks for the review. /Erik On 2018-02-21 14:28, Roman Kennke wrote: > Looks good to me too. Thanks!! > > Roman > > On Wed, Feb 21, 2018 at 2:18 PM, Per Liden wrote: >> Thanks Erik! >> >> Looks good! >> >> /Per >> >> >> On 02/21/2018 12:51 PM, Erik Österlund wrote: >>> Hi Per, >>> >>> While I think any potential callsite to RawAccess<>::resolve() would >>> always be rather nonsense, I do not mind moving it to Raw. >>> >>> Incremental webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_01/ >>> >>> Full webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.01/ >>> >>> Thanks, >>> /Erik >>> >>> On 2018-02-21 12:31, Per Liden wrote: >>>> Hi Erik, >>>> >>>> Looks good, just one small-ish request. Can we please push the default >>>> implementation down to the raw layer, so that the two instances of "return >>>> obj;" becomes "return Raw::resolve(obj);". From my point of view, that makes >>>> this symmetric with the other functions and helps me think/reason about >>>> this. >>>> >>>> cheers, >>>> Per >>>> >>>> On 02/20/2018 05:41 PM, Erik Österlund wrote: >>>>> Hi Per, >>>>> >>>>> (looping in hotspot-dev as this seems to touch more than runtime) >>>>> >>>>> On 2018-02-20 17:03, Per Liden wrote: >>>>>> Hi Erik, >>>>>> >>>>>> As we discussed, coming up with a good name for the new Access call is >>>>>> really hard. All good/descriptive alternatives I can come up with tend to be >>>>>> way too long. 
So, next strategy is to pick something that fits into the rest >>>>>> of the API. With this in mind I'd like to suggest we just name it: oop >>>>>> Access<>::resolve(oop obj) >>>>>> >>>>>> The justification would be that this matches the one-verb style we >>>>>> have for the other functions (load/store/clone) and it seems that you anyway >>>>>> named the internal parts just "resolve", such as BARRIER_RESOLVE, and >>>>>> resolve_func_t. >>>>> >>>>> Sure. >>>>> >>>>> Here is a full webrev with my proposal for this RFE, now that we agree >>>>> on the direction: >>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00/ >>>>> >>>>> Incremental from the prototype I sent out for early turnaround >>>>> yesterday: >>>>> http://cr.openjdk.java.net/~eosterlund/8198286/webrev.00_inc/ >>>>> >>>>> It is now enforced that *_addr_raw() functions are to be used by the GC >>>>> only, when the GC knows addresses are stable. All other address resolution >>>>> goes through non-raw address resolution functions that at a lower level end >>>>> up calling the resolve barrier on Access, which can be overridden by >>>>> Shenandoah. There are in total two callers of Access<>::resolve: on >>>>> oopDesc::field_addr and arrayOop::base. The rest is derived from that. >>>>> >>>>> @Roman: Hope this works well for Shenandoah. >>>>> @Per: Hope you like the new shorter name. >>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> What do you think? >>>>>> >>>>>> cheers, >>>>>> Per >>>>>> >>>>>> On 02/19/2018 06:08 PM, Erik Osterlund wrote: >>>>>>> Hi Roman, >>>>>>> >>>>>>> I see there is a need to resolve a stable address for some objects to >>>>>>> bulk access primitives. The code base is full of assumptions that no >>>>>>> barriers are needed for such address resolution. It looks like the proposed >>>>>>> approach is to one by one hunt down all such callsites. I could find some >>>>>>> places where such barriers are missing. 
>>>>>>> >>>>>>> To make the code as maintainable as possible, I would like to propose >>>>>>> a slightly different take on this, and would love to hear if this works for >>>>>>> Shenandoah or not. The main idea is to annotate places where we do *not* >>>>>>> want GC address resolution for internal pointers to objects, instead of >>>>>>> where we want it, as it seems to be the common case that we do want to >>>>>>> resolve the address. >>>>>>> >>>>>>> In some more detail: >>>>>>> >>>>>>> 1) Rip out the *_addr facilities not used (a whole bunch on oopDesc). >>>>>>> 2) Ignore the difference between read/write resolution (write >>>>>>> resolution handles both reads and writes). Instead introduce an oop >>>>>>> resolve_stable_addr(oop) function in Access. This makes it easier to use. >>>>>>> 3) Identify as few callsites as possible for this function. I'm >>>>>>> thinking arrayOop::base() and a few strange exceptions. >>>>>>> 4) Identify the few places where we explicitly do *not* want address >>>>>>> resolution, like calls from GC, and replace them with *_addr_raw variants. >>>>>>> 5) Have a switch in barrierSetConfig.hpp that determines whether the >>>>>>> build needs to support not to-space invariant GCs or not. >>>>>>> >>>>>>> With these changes, the number of callsites has been kept down to >>>>>>> what I believe to be a minimum. And yet it covers some callsites that you >>>>>>> accidentally missed (e.g. jvmciCodeInstaller.cpp). Existing uses of the >>>>>>> various *_addr facilities can in most cases continue to do what they have >>>>>>> done in the past. And new uses will not be surprised that they accidentally >>>>>>> missed some barriers. It will be solved automagically. >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~eosterlund/typearray_resolve/webrev.00/ >>>>>>> >>>>>>> Please let me know what you think about this style and whether that >>>>>>> works for you or not. 
I have not done proper testing yet, but presented this >>>>>>> patch for quicker turn-around so we can synchronize the direction first. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>>> >>>>>>>> On 16 Feb 2018, at 17:18, Roman Kennke wrote: >>>>>>>> >>>>>>>> The direct memory accessors in typeArrayOop.hpp, which are usually >>>>>>>> used for bulk memory access operations, should use the Access API, in >>>>>>>> order to give the garbage collector a chance to intercept the access >>>>>>>> (for example, employ read- or write-barriers on the target array). >>>>>>>> This also means it's necessary to distinguish between write-accesses >>>>>>>> and read-accesses (for example, GCs might want to use a >>>>>>>> copy-on-write-barrier for write-accesses only). >>>>>>>> >>>>>>>> This changeset introduces two new APIs in access.hpp: load_at_addr() >>>>>>>> and store_at_addr(), and links it up to the corresponding >>>>>>>> X_get_addr() >>>>>>>> and X_put_addr() in typeArrayOop.hpp. All uses of the previous >>>>>>>> X_addr() accessors have been renamed to match their use (load or >>>>>>>> store >>>>>>>> of primitive array elements). >>>>>>>> >>>>>>>> The changeset is based on the previously proposed: >>>>>>>> >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-February/026426.html >>>>>>>> >>>>>>>> Webrev: >>>>>>>> http://cr.openjdk.java.net/~rkennke/8198286/webrev.00/ >>>>>>>> Bug: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8198286 >>>>>>>> >>>>>>>> Please review! 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> Roman >>>>> From harold.seigel at oracle.com Wed Feb 21 14:16:01 2018 From: harold.seigel at oracle.com (harold seigel) Date: Wed, 21 Feb 2018 09:16:01 -0500 Subject: (11) RFR (S) JDK-8197868: VS2017 (C2065) 'timezone': Undeclared Identifier in share/runtime/os.cpp In-Reply-To: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> References: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> Message-ID: <7ebfe18b-b469-90be-ae65-de95b0808c19@oracle.com> Looks good to me, also. Harold On 2/16/2018 3:12 PM, coleen.phillimore at oracle.com wrote: > This seems good. > Coleen > > On 2/16/18 11:53 AM, Lois Foltan wrote: >> Please review this change to use the functional version of >> _get_timezone for VS2017. The global variable timezone has been >> deprecated. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197868/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8197868 >> contributed-by: Kim Barrett & Lois Foltan >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois > From erik.osterlund at oracle.com Wed Feb 21 14:53:35 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Feb 2018 15:53:35 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <1f234e18-9226-f6a2-d912-76ef3cb75e21@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <1f234e18-9226-f6a2-d912-76ef3cb75e21@oracle.com> Message-ID: <5A8D87EF.5070101@oracle.com> Hi Coleen, Thank you for having a look at this. The BarrierSet switch statements in platform specific code are going away relatively soon. Do you still want me to synchronize all the switches to handle such errors the same way now before we get there, or wait a few patches and have them removed? 
Thanks, /Erik On 2018-02-21 14:17, coleen.phillimore at oracle.com wrote: > > Hi Erik, I started looking at this but was quickly overwhelmed by the > changes. It looks like the case for BarrierSet::ModRef is removed in > the stubGenerator code(s) but not in templateTable do_oop_store. > Should the case of BarrierSet::ModRef get a ShouldNotReachHere in > stubGenerator in the places where they are removed? > > Some platforms have code for this in do_oop_store in templateTable and > some platforms get ShouldNotReachHere(), which does not pattern match > for me. > > - case BarrierSet::CardTableForRS: > - case BarrierSet::CardTableExtension: > - case BarrierSet::ModRef: > + case BarrierSet::CardTableModRef: > > > I think SAP should test this out on the other platforms to hopefully > avoid any issues we've been seeing lately with multi-platform > changes. CCing Thomas. > > thanks, > Coleen > > On 2/21/18 6:33 AM, Erik Österlund wrote: >> Hi Erik, >> >> Thank you for reviewing this. >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >> >> New incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >> >> On 2018-02-21 09:18, Erik Helin wrote: >>> Hi Erik, >>> >>> this is a very nice improvement, thanks for working on this! >>> >>> A few minor comments thus far: >>> - in stubGenerator_ppc.cpp: >>> you seem to have lost a `const` in the refactoring >> >> Fixed. >> >>> - in psCardTable.hpp: >>> I don't think card_mark_must_follow_store() is needed, since >>> PSCardTable passes `false` for `conc_scan` to the CardTable >>> constructor >> >> Fixed. I took the liberty of also making the condition for >> card_mark_must_follow_store() more precise on CMS by making the >> condition for scanned_concurrently consider whether >> CMSPrecleaningEnabled is set or not (like other generated code does). >> >>> - in g1CollectedHeap.hpp: >>> could you store the G1CardTable as a field in G1CollectedHeap? 
Also, >>> could you name the "getter" just card_table()? (I see that >>> g1_hot_card_cache method above, but that one should also be >>> renamed to >>> just hot_card_cache, but in another patch) >> >> Fixed. >> >>> - in cardTable.hpp and cardTable.cpp: >>> could you use `hg cp` when constructing these files from >>> cardTableModRefBS.{hpp,cpp} so the history is preserved? >> >> Yes, I will do this before pushing to make sure the history is >> preserved. >> >> Thanks, >> /Erik >> >>> >>> Thanks, >>> Erik >>> >>> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>>> Hi, >>>> >>>> Here is an updated revision of this webrev after internal feedback >>>> from StefanK who helped looking through my changes - thanks a lot >>>> for the help with that. >>>> >>>> The changes to the new revision are a bunch of minor clean up >>>> changes, e.g. copy right headers, indentation issues, sorting >>>> includes, adding/removing newlines, reverting an assert error >>>> message, fixing constructor initialization orders, and things like >>>> that. >>>> >>>> The problem I mentioned last time about the version number of our >>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>> has been resolved by simply waiting. So now I changed the JVMCI >>>> logic to get the card values from the new location in the >>>> corresponding card tables when observing JDK version 11 or above. >>>> >>>> New full webrev (rebased onto a month fresher jdk-hs): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>> >>>> Incremental webrev (over the rebase): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>> >>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>>> any issues. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-01-17 13:54, Erik Österlund wrote: >>>>> Hi, >>>>> >>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>> card marking barrier. 
However, they have different requirements >>>>> how to manage its card tables by the GC. And as the card table >>>>> itself is embedded as a part of the CardTableModRefBS barrier set, >>>>> this has led to an unnecessary inheritance hierarchy for >>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>> change anything to do with the barriers. >>>>> >>>>> To clean up the code, there should really be a separate CardTable >>>>> hierarchy that contains the differences how to manage the card >>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>> have a CardTable. This would allow removing >>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>> references from shared code (that really have nothing to do with >>>>> the barriers, despite being barrier sets), and significantly >>>>> simplify the barrier set code. >>>>> >>>>> This patch mechanically performs this refactoring. A new CardTable >>>>> class has been created with a PSCardTable subclass for Parallel, a >>>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>>> references to card tables and their values have been updated >>>>> accordingly. >>>>> >>>>> This touches a lot of platform specific code, so would be >>>>> fantastic if port maintainers could have a look that I have not >>>>> broken anything. >>>>> >>>>> There is a slight problem that should be pointed out. There is an >>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>> know the values of g1 young cards and dirty cards. This is queried >>>>> in different ways in different versions of the JDK in the >>>>> GraalHotSpotVMConfig.java file. Now these values will move from >>>>> their barrier set class to their card table class. That means we >>>>> have at least three cases how to find the correct values. There is >>>>> one for JDK8, one for JDK9, and now a new one for JDK11. 
Except, >>>>> we have not yet bumped the version number to 11 in the repo, and >>>>> therefore it has to be from JDK10 - 11 for now and updated after >>>>> incrementing the version number. But that means that it will be >>>>> temporarily incompatible with JDK10. That is okay for our own copy >>>>> of Graal, but can not be used by upstream Graal as they are given >>>>> the choice whether to support the public JDK10 or the JDK11 that >>>>> does not quite admit to being 11 yet. I chose the solution that >>>>> works in our repository. I will notify Graal folks of this issue. >>>>> In the long run, it would be nice if we could have a more solid >>>>> interface here. >>>>> >>>>> However, as an added benefit, this changeset brings about a >>>>> hundred copyright headers up to date, so others do not have to >>>>> update them for a while. >>>>> >>>>> Bug: >>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>> >>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>> >>>>> Thanks, >>>>> /Erik >>>> >> > From lois.foltan at oracle.com Wed Feb 21 15:04:15 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 21 Feb 2018 10:04:15 -0500 Subject: (11) RFR (S) JDK-8197868: VS2017 (C2065) 'timezone': Undeclared Identifier in share/runtime/os.cpp In-Reply-To: <7ebfe18b-b469-90be-ae65-de95b0808c19@oracle.com> References: <834f002d-7c88-c2e1-3b54-51ffe764a675@oracle.com> <7ebfe18b-b469-90be-ae65-de95b0808c19@oracle.com> Message-ID: Thanks for the review Harold! Lois On 2/21/2018 9:16 AM, harold seigel wrote: > Looks good to me, also. > > Harold > > > On 2/16/2018 3:12 PM, coleen.phillimore at oracle.com wrote: >> This seems good. >> Coleen >> >> On 2/16/18 11:53 AM, Lois Foltan wrote: >>> Please review this change to use the functional version of >>> _get_timezone for VS2017. The global variable timezone has been >>> deprecated. 
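For context on the JDK-8197868 fix being reviewed here: with VS2017 the CRT deprecates direct use of the `timezone` global in favor of the functional accessor `_get_timezone`. A rough, portable sketch of that pattern (illustrative only, not the actual patch):

```c
#include <time.h>

/* Returns the offset of local standard time from UTC in seconds
 * (positive west of Greenwich). The _MSC_VER branch shows the
 * VS2017-friendly functional accessor; the other branch uses the
 * POSIX `timezone` global, which tzset() initializes. */
static long utc_offset_seconds(void) {
#ifdef _MSC_VER
  long tz = 0;
  _get_timezone(&tz);   /* functional replacement for the deprecated global */
  return tz;
#else
  extern long timezone; /* POSIX global, set by tzset() */
  tzset();
  return timezone;
#endif
}
```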
>>> >>> open webrev at >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197868/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8197868 >>> contributed-by: Kim Barrett & Lois Foltan >>> >>> Testing: hs-tier(1-3), jdk-tier(1-3) complete >>> >>> Thanks, >>> Lois >> > From coleen.phillimore at oracle.com Wed Feb 21 16:52:45 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Feb 2018 11:52:45 -0500 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A8D87EF.5070101@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <1f234e18-9226-f6a2-d912-76ef3cb75e21@oracle.com> <5A8D87EF.5070101@oracle.com> Message-ID: <1e5ec8ed-cddf-3b18-79f2-4dc81854c72a@oracle.com> On 2/21/18 9:53 AM, Erik Österlund wrote: > Hi Coleen, > > Thank you for having a look at this. > > The BarrierSet switch statements in platform specific code are going > away relatively soon. Do you still want me to synchronize all the > switches to handle such errors the same way now before we get there, > or wait a few patches and have them removed? Oh, sure, I'm fine if they're going to be removed (moved?). I still think SAP should make sure their platforms build before you push if they have time though. thanks, Coleen > > Thanks, > /Erik > > On 2018-02-21 14:17, coleen.phillimore at oracle.com wrote: >> >> Hi Erik, I started looking at this but was quickly overwhelmed by >> the changes. It looks like the case for BarrierSet::ModRef is >> removed in the stubGenerator code(s) but not in templateTable >> do_oop_store. Should the case of BarrierSet::ModRef get a >> ShouldNotReachHere in stubGenerator in the places where they are >> removed? 
>> >> Some platforms have code for this in do_oop_store in templateTable >> and some platforms get ShouldNotReachHere(), which does not pattern >> match for me. >> >> - case BarrierSet::CardTableForRS: >> - case BarrierSet::CardTableExtension: >> - case BarrierSet::ModRef: >> + case BarrierSet::CardTableModRef: >> >> >> I think SAP should test this out on the other platforms to hopefully >> avoid any issues we've been seeing lately with multi-platform >> changes. CCing Thomas. >> >> thanks, >> Coleen >> >> On 2/21/18 6:33 AM, Erik Österlund wrote: >>> Hi Erik, >>> >>> Thank you for reviewing this. >>> >>> New full webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>> >>> New incremental webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>> >>> On 2018-02-21 09:18, Erik Helin wrote: >>>> Hi Erik, >>>> >>>> this is a very nice improvement, thanks for working on this! >>>> >>>> A few minor comments thus far: >>>> - in stubGenerator_ppc.cpp: >>>> you seem to have lost a `const` in the refactoring >>> >>> Fixed. >>> >>>> - in psCardTable.hpp: >>>> I don't think card_mark_must_follow_store() is needed, since >>>> PSCardTable passes `false` for `conc_scan` to the CardTable >>>> constructor >>> >>> Fixed. I took the liberty of also making the condition for >>> card_mark_must_follow_store() more precise on CMS by making the >>> condition for scanned_concurrently consider whether >>> CMSPrecleaningEnabled is set or not (like other generated code does). >>> >>>> - in g1CollectedHeap.hpp: >>>> could you store the G1CardTable as a field in G1CollectedHeap? Also, >>>> could you name the "getter" just card_table()? (I see that >>>> g1_hot_card_cache method above, but that one should also be >>>> renamed to >>>> just hot_card_cache, but in another patch) >>> >>> Fixed. >>> >>>> - in cardTable.hpp and cardTable.cpp: >>>> could you use `hg cp` when constructing these files from >>>> 
cardTableModRefBS.{hpp,cpp} so the history is preserved? >>> >>> Yes, I will do this before pushing to make sure the history is >>> preserved. >>> >>> Thanks, >>> /Erik >>> >>>> >>>> Thanks, >>>> Erik >>>> >>>> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>>>> Hi, >>>>> >>>>> Here is an updated revision of this webrev after internal feedback >>>>> from StefanK who helped looking through my changes - thanks a lot >>>>> for the help with that. >>>>> >>>>> The changes to the new revision are a bunch of minor clean up >>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>> includes, adding/removing newlines, reverting an assert error >>>>> message, fixing constructor initialization orders, and things like >>>>> that. >>>>> >>>>> The problem I mentioned last time about the version number of our >>>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>>> has been resolved by simply waiting. So now I changed the JVMCI >>>>> logic to get the card values from the new location in the >>>>> corresponding card tables when observing JDK version 11 or above. >>>>> >>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>> >>>>> Incremental webrev (over the rebase): >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>> >>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 >>>>> without any issues. >>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>> On 2018-01-17 13:54, Erik Österlund wrote: >>>>>> Hi, >>>>>> >>>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>>> card marking barrier. However, they have different requirements >>>>>> how to manage its card tables by the GC. 
And as the card table >>>>>> itself is embedded as a part of the CardTableModRefBS barrier >>>>>> set, this has led to an unnecessary inheritance hierarchy for >>>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>>> change anything to do with the barriers. >>>>>> >>>>>> To clean up the code, there should really be a separate CardTable >>>>>> hierarchy that contains the differences how to manage the card >>>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>>> have a CardTable. This would allow removing >>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>> references from shared code (that really have nothing to do with >>>>>> the barriers, despite being barrier sets), and significantly >>>>>> simplify the barrier set code. >>>>>> >>>>>> This patch mechanically performs this refactoring. A new >>>>>> CardTable class has been created with a PSCardTable subclass for >>>>>> Parallel, a CardTableRS for CMS and Serial, and a G1CardTable for >>>>>> G1. All references to card tables and their values have been >>>>>> updated accordingly. >>>>>> >>>>>> This touches a lot of platform specific code, so would be >>>>>> fantastic if port maintainers could have a look that I have not >>>>>> broken anything. >>>>>> >>>>>> There is a slight problem that should be pointed out. There is an >>>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>>> know the values of g1 young cards and dirty cards. This is >>>>>> queried in different ways in different versions of the JDK in the >>>>>> GraalHotSpotVMConfig.java file. Now these values will move from >>>>>> their barrier set class to their card table class. That means we >>>>>> have at least three cases how to find the correct values. There >>>>>> is one for JDK8, one for JDK9, and now a new one for JDK11. 
>>>>>> Except, we have not yet bumped the version number to 11 in the >>>>>> repo, and therefore it has to be from JDK10 - 11 for now and >>>>>> updated after incrementing the version number. But that means >>>>>> that it will be temporarily incompatible with JDK10. That is okay >>>>>> for our own copy of Graal, but can not be used by upstream Graal >>>>>> as they are given the choice whether to support the public JDK10 >>>>>> or the JDK11 that does not quite admit to being 11 yet. I chose >>>>>> the solution that works in our repository. I will notify Graal >>>>>> folks of this issue. In the long run, it would be nice if we >>>>>> could have a more solid interface here. >>>>>> >>>>>> However, as an added benefit, this changeset brings about a >>>>>> hundred copyright headers up to date, so others do not have to >>>>>> update them for a while. >>>>>> >>>>>> Bug: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>> >>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>> >>> >> > From kim.barrett at oracle.com Wed Feb 21 16:56:13 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Feb 2018 11:56:13 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: <1519217045.2401.14.camel@oracle.com> References: <1519217045.2401.14.camel@oracle.com> Message-ID: <32A93964-BDA3-4A7B-9056-48B18B7F7416@oracle.com> > On Feb 21, 2018, at 7:44 AM, Thomas Schatzl wrote: > > Hi Kim, > > seems good, two minor comments: > > - in jvmciCodeInstaller.hpp and jvmciJavaClasses.hpp, can the "FIXME" > comment elaborate a bit more what's broken, and file a CR, maybe even > detailing how this could be fixed. > If it is not current any more, please remove the comments. 
> > I just really really do not like "FIXME" comments, nobody is going to > remember next time what the issue was, whether it has been fixed, etc. I completely forgot about these FIXMEs, and shouldn't have put out the RFR with them still present. My apologies for this. From vladimir.kozlov at oracle.com Wed Feb 21 17:12:15 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 21 Feb 2018 09:12:15 -0800 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A8D58FC.10603@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> Message-ID: <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> Hi Erik, I looked on compiler and aot changes. I noticed repeated sequence in several files to get byte_map_base() + BarrierSet* bs = Universe::heap()->barrier_set(); + CardTableModRefBS* ctbs = barrier_set_cast(bs); + CardTable* ct = ctbs->card_table(); + assert(sizeof(*(ct->byte_map_base())) == sizeof(jbyte), "adjust this code"); + LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base()); But sometimes it has the assert (graphKit.cpp) and sometimes does not (aotCodeHeap.cpp). Can you factor this sequence into one method which can be used in all such places? Thanks, Vladimir On 2/21/18 3:33 AM, Erik Österlund wrote: > Hi Erik, > > Thank you for reviewing this. > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ > > New incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ > > On 2018-02-21 09:18, Erik Helin wrote: >> Hi Erik, >> >> this is a very nice improvement, thanks for working on this! >> >> A few minor comments thus far: >> - in stubGenerator_ppc.cpp: >> you seem to have lost a `const` in the refactoring > > Fixed. > >> - in psCardTable.hpp: >> I don't think card_mark_must_follow_store() is needed, since >> 
PSCardTable passes `false` for `conc_scan` to the CardTable >> constructor > > Fixed. I took the liberty of also making the condition for > card_mark_must_follow_store() more precise on CMS by making the > condition for scanned_concurrently consider whether > CMSPrecleaningEnabled is set or not (like other generated code does). > >> - in g1CollectedHeap.hpp: >> could you store the G1CardTable as a field in G1CollectedHeap? Also, >> could you name the "getter" just card_table()? (I see that >> g1_hot_card_cache method above, but that one should also be renamed to >> just hot_card_cache, but in another patch) > > Fixed. > >> - in cardTable.hpp and cardTable.cpp: >> could you use `hg cp` when constructing these files from >> cardTableModRefBS.{hpp,cpp} so the history is preserved? > > Yes, I will do this before pushing to make sure the history is preserved. > > Thanks, > /Erik > >> >> Thanks, >> Erik >> >> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>> Hi, >>> >>> Here is an updated revision of this webrev after internal feedback >>> from StefanK who helped looking through my changes - thanks a lot for >>> the help with that. >>> >>> The changes to the new revision are a bunch of minor clean up >>> changes, e.g. copy right headers, indentation issues, sorting >>> includes, adding/removing newlines, reverting an assert error >>> message, fixing constructor initialization orders, and things like that. >>> >>> The problem I mentioned last time about the version number of our >>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>> has been resolved by simply waiting. So now I changed the JVMCI logic >>> to get the card values from the new location in the corresponding >>> card tables when observing JDK version 11 or above. 
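Vladimir's suggestion earlier in the thread — factoring the repeated byte_map_base() lookup into a single method — could take roughly the following shape. The HotSpot types are stubbed out here so the helper can be shown standalone, and the helper name is hypothetical:

```cpp
#include <cassert>

// Minimal stand-ins for the HotSpot types involved; illustrative only.
typedef signed char jbyte;

struct CardTable {
  jbyte* _byte_map_base;
  jbyte* byte_map_base() const { return _byte_map_base; }
};

struct BarrierSet { virtual ~BarrierSet() {} };

struct CardTableModRefBS : BarrierSet {
  CardTable* _card_table;
  CardTable* card_table() const { return _card_table; }
};

// Hypothetical helper in the spirit of the review comment: one place
// that performs the downcast, the card_table() lookup, and the sanity
// assert, instead of repeating the sequence at every compiler call site.
static jbyte* card_table_base(BarrierSet* bs) {
  CardTableModRefBS* ctbs = static_cast<CardTableModRefBS*>(bs);
  CardTable* ct = ctbs->card_table();
  assert(sizeof(*ct->byte_map_base()) == sizeof(jbyte) && "adjust this code");
  return ct->byte_map_base();
}
```

Call sites would then shrink to a single `card_table_base(bs)` call, with the assert applied uniformly rather than only in some files.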
>>> >>> New full webrev (rebased onto a month fresher jdk-hs): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>> >>> Incremental webrev (over the rebase): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>> >>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>> any issues. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-01-17 13:54, Erik Österlund wrote: >>>> Hi, >>>> >>>> Today, both Parallel, CMS and Serial share the same code for its >>>> card marking barrier. However, they have different requirements how >>>> to manage its card tables by the GC. And as the card table itself is >>>> embedded as a part of the CardTableModRefBS barrier set, this has >>>> led to an unnecessary inheritance hierarchy for CardTableModRefBS, >>>> where for example CardTableModRefBSForCTRS and CardTableExtension >>>> are CardTableModRefBS subclasses that do not change anything to do >>>> with the barriers. >>>> >>>> To clean up the code, there should really be a separate CardTable >>>> hierarchy that contains the differences how to manage the card table >>>> from the GC point of view, and simply let CardTableModRefBS have a >>>> CardTable. This would allow removing CardTableModRefBSForCTRS and >>>> CardTableExtension and their references from shared code (that >>>> really have nothing to do with the barriers, despite being barrier >>>> sets), and significantly simplify the barrier set code. >>>> >>>> This patch mechanically performs this refactoring. A new CardTable >>>> class has been created with a PSCardTable subclass for Parallel, a >>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>> references to card tables and their values have been updated >>>> accordingly. >>>> >>>> This touches a lot of platform specific code, so would be fantastic >>>> if port maintainers could have a look that I have not broken anything. >>>> >>>> There is a slight problem that should be pointed out. 
There is an >>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>> know the values of g1 young cards and dirty cards. This is queried >>>> in different ways in different versions of the JDK in the >>>> ||GraalHotSpotVMConfig.java file. Now these values will move from >>>> their barrier set class to their card table class. That means we >>>> have at least three cases how to find the correct values. There is >>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, we >>>> have not yet bumped the version number to 11 in the repo, and >>>> therefore it has to be from JDK10 - 11 for now and updated after >>>> incrementing the version number. But that means that it will be >>>> temporarily incompatible with JDK10. That is okay for our own copy >>>> of Graal, but can not be used by upstream Graal as they are given >>>> the choice whether to support the public JDK10 or the JDK11 that >>>> does not quite admit to being 11 yet. I chose the solution that >>>> works in our repository. I will notify Graal folks of this issue. In >>>> the long run, it would be nice if we could have a more solid >>>> interface here. >>>> >>>> However, as an added benefit, this changeset brings about a hundred >>>> copyright headers up to date, so others do not have to update them >>>> for a while. >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>> >>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
>>>> >>>> Thanks, >>>> /Erik >>> > From kim.barrett at oracle.com Thu Feb 22 01:47:29 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Feb 2018 20:47:29 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: <9552C4D9-3C0C-4AD8-A6A6-18FE8FA5175B@oracle.com> > On Feb 21, 2018, at 2:30 AM, Thomas St?fe wrote: > > Hi Kim, > > this is good. Please find comments inline / below. Thanks for looking at it. Responses inline. > On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett wrote: >> (That would have the additional benefit of not needing to include >> jvm.h all over the place just to have access to the jio_ functions.) >> > os.hpp is no lightweight alternative though. It includes quite a lot of other headers, including jvm.h :) > > Also system headers like . And then, whatever comes with the os_xxx_xxx.h files. > > So, this is the part I do not like much about this change, it forces us to include a lot of stuff where before we would just include jvm.h or just roll with raw ::snprintf(). Can we disentangle the header dep better? os is where HotSpot generally deals with such platform variances, so I think it's the right place for this. jvm.h provides access to some VM facilities by non-VM code. I think it's somewhat strange that HotSpot C++ code is also a user of this *C* API; especially considering the noted deficiencies. That os.hpp drags in a bunch of stuff is a separate issue. I personally liked the suggestion of making os a namespace, allowing it to be broken down into more fine-grained components. That's a discussion that is well out of scope for the problem at hand though. (BTW, it's not clear why os.hpp includes jvm.h.) >> We still provide os::log_vsnprintf [?] >> > I totally agree. 
Possible alternatives: > > 1 rename it > 2 move it into the log sub project as an internal implementation detail > 3 Or, provide a platform independent version of _vscprintf (https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So whoever really wants to count characters in the resolved format string should first use that function, then alloc the appropriate buffer, then do the real printing. Posix variant for _vscprintf could just be vsnprintf with a zero byte output buffer. > > I personally like (3) best, followed by (2) Renaming requires a better name. I'm open to suggestions. Moving it to the logging component means logging then needs to deal with the platform variances, or defer back to os somehow, in which case we're pretty much back here and looking at the other options. Providing a portable wrapper for _vscprintf instead of os::log_vsnprintf seems like it just makes things harder for callers. It probably doesn't matter much, but it also adds overhead on platforms where log_vsnprintf can be implemented directly in terms of ::vsnprintf and _vscprintf functionality would also use that; on truncation a caller would typically end up making 3 calls to vsnprintf, rather than two. > src/hotspot/os/posix/os_posix.cpp > > + if (len > 0) buf[len - 1] = '\0'; > Style nit: brackets? I think it was okay as written, according to the style guide, but I changed it anyway. > src/hotspot/share/prims/jvm.cpp > > Small behaviour change for input buffers of 0 and format strings of "". Before we returned -1, now (I guess) 0. Does any existing caller care? I looked but could not find anyone even checking the return code, so it is probably nothing. You are right, that's a change. Well spotted. I think the old behavior here, which matches the documentation, is probably preferable; it indicates we didn't even write a terminating NUL, so the buffer may not be terminated. I'm going to change os::vsnprintf accordingly.
Of course, this is very much a corner case, since an empty format string is pretty rare. But it could also arise with, for example, a format string of "%s" with an empty argument string. > src/hotspot/share/runtime/os.hpp > > Comment to os::(v)snprintf: "These functions return -1 if the > + // output has been truncated, rather than returning the number of characters > + // that would have been written (exclusive of the terminating NUL) if the > + // output had not been truncated." > > I would not describe what the functions do NOT. Only what they do - would be a bit clearer. > > Proposal for a more concise version (feel free to reformulate, I am no native speaker): > > "os::snprintf and os::vsnprintf are identical to snprintf(3) and vsnprintf(3) except in the following points: > - On truncation they will return -1. > - On truncation, the output string will be zero terminated, unless the input buffer length is 0. > ..." Agreed. Updated based on your suggestion, with adjustment for the previous item. > test/hotspot/gtest/runtime/test_os.cpp: > > +// Test os::vmsprintf and friends > > Typo. Fixed. > Thinking about it I think the test could be made a bit denser and simpler and have more coverage. How about this instead: Agreed the test can be improved along the lines you suggested. I've done so, though not exactly using the proposed replacement. > (for real paranoia one could check also for leading overwriters by writing to buffer[1] and checking buffer[0] for 'x', but I do not think this is necessary.) I ended up doing that anyway. > And should we move log_snprintf to logging, the tests would get a bit simpler too. Not by much, as it turns out.
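[Editorial note: the os::snprintf/os::vsnprintf contract converged on in the exchange above — output always NUL-terminated when the buffer is non-empty, -1 on truncation or error, and -1 for a zero-length buffer — can be sketched roughly as follows. This is a portable illustration layered over C99 ::vsnprintf; the function names and exact details are assumptions for illustration, not the actual HotSpot sources.]

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>

// Sketch of the discussed semantics: NUL-terminate whenever possible,
// and signal truncation (or a zero-length buffer) with -1 instead of
// the C99 "would-have-written" character count.
int os_vsnprintf(char* buf, size_t len, const char* fmt, va_list args) {
  if (len == 0) return -1;  // not even the terminating NUL fits
  int result = ::vsnprintf(buf, len, fmt, args);
  // C99 vsnprintf already NUL-terminates for len > 0; a result >= len
  // means the output was truncated, which this contract maps to -1.
  if (result < 0 || (size_t)result >= len) return -1;
  return result;
}

int os_snprintf(char* buf, size_t len, const char* fmt, ...) {
  va_list args;
  va_start(args, fmt);
  int result = os_vsnprintf(buf, len, fmt, args);
  va_end(args);
  return result;
}
```

Under these assumed semantics, formatting a 17-character string into an 8-byte buffer returns -1 yet still leaves the buffer NUL-terminated, e.g. `assert(os_snprintf(buf, 8, "%s", "longer than eight") == -1)` followed by `assert(buf[7] == '\0')`.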
From paul.sandoz at oracle.com Thu Feb 22 02:18:30 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Wed, 21 Feb 2018 18:18:30 -0800 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> Message-ID: <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> Hi Adam, While the burden is minimal there is a principle here that i think we should adhere to regarding additions to the code base: additions should have value within OpenJDK itself otherwise it can become a thin end of the wedge to more stuff (?well you added these things, why not just add these too??). So i would still be reluctant to add such methods without understanding the larger picture and what you have in mind. Can you send a pointer to your email referring in more detail to the larger change sets? This use-case might also apply in other related areas too with regards to logging/monitoring. I would be interested to understand what Java Flight Recorder (JFR) does in this regard (it being open sourced soon i believe) and how JFR might relate to what you are doing. Should we be adding JFR events to unsafe memory allocation? Can JFR efficiently access part of the Java call stack to determine the origin? Thanks, Paul. > On Feb 19, 2018, at 5:08 AM, Adam Farley8 wrote: > > Hi Paul, > > > Hi Adam, > > > > From reading the thread i cannot tell if this is part of a wider solution including some yet to be proposed HotSpot changes. > > The wider solution would need to include some Hotspot changes, yes. > I'm proposing raising a bug, committing the code we have here to > "set the stage", and then we can invest more time&energy later > if the concept goes down well and the community agrees to pursue > the full solution. 
> > As an aside, I tried submitting a big code set (including hotspot > changes) months ago, and I'm *still* struggling to find someone to > commit the thing, so I figured I'd try a more gradual, staged approach > this time. > > > > > As is i would be resistant to adding such standalone internal wrapper methods to Unsafe that have no apparent benefit within the OpenJDK itself since it's a maintenance burden. > > I'm hoping the fact that the methods are a single line (sans > comments, descriptors and curly braces) will minimise this burden. > > > > > Can you determine if the calls to UNSAFE.freeMemory/allocateMemory come from a DBB by looking at the call stack frame above the unsafe call? > > > > Thanks, > > Paul. > > Yes that is possible, though I would advise against this because: > > A) Checking the call stack is expensive, and doing this every time we > allocate native memory is an easy way to slow down a program, > or rack up mips. > and > B) deciding which code path we're using based on the stack > means the DBB class+method (and anything the parsing code > mistakes for that class+method) can only ever allocate native > memory for DBBs. > > What do you think? > > Best Regards > > Adam Farley > > > > >> On Feb 14, 2018, at 3:32 AM, Adam Farley8 wrote: > >> > >> Hi All, > >> > >> Currently, diagnostic core files generated from OpenJDK seem to lump all > >> of the > >> native memory usages together, making it near-impossible for someone to > >> figure > >> out *what* is using all that memory in the event of a memory leak. > >> > >> The OpenJ9 VM has a feature which allows it to track the allocation of > >> native > >> memory for Direct Byte Buffers (DBBs), and to supply that information into > >> the > >> cores when they are generated. This makes it a *lot* easier to find out > >> what is using > >> all that native memory, making memory leak resolution less like some dark > >> art, and > >> more like logical debugging. 
> >> > >> To use this feature, there is a native method referenced in Unsafe.java. > >> To open > >> up this feature so that any VM can make use of it, the java code below > >> sets the > >> stage for it. This change starts letting people call DBB-specific methods > >> when > >> allocating native memory, and getting into the habit of using it. > >> > >> Thoughts? > >> > >> Best Regards > >> > >> Adam Farley > >> > >> P.S. Code: > >> > >> diff --git > >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> @@ -85,7 +85,7 @@ > >> // Paranoia > >> return; > >> } > >> - UNSAFE.freeMemory(address); > >> + UNSAFE.freeDBBMemory(address); > >> address = 0; > >> Bits.unreserveMemory(size, capacity); > >> } > >> @@ -118,7 +118,7 @@ > >> > >> long base = 0; > >> try { > >> - base = UNSAFE.allocateMemory(size); > >> + base = UNSAFE.allocateDBBMemory(size); > >> } catch (OutOfMemoryError x) { > >> Bits.unreserveMemory(size, cap); > >> throw x; > >> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> @@ -632,6 +632,26 @@ > >> } > >> > >> /** > >> + * Allocates a new block of native memory for DirectByteBuffers, of > >> the > >> + * given size in bytes. The contents of the memory are > >> uninitialized; > >> + * they will generally be garbage. The resulting native pointer will > >> + * never be zero, and will be aligned for all value types. Dispose > >> of > >> + * this memory by calling {@link #freeDBBMemory} or resize it with > >> + * {@link #reallocateDBBMemory}. 
> >> + * > >> + * @throws RuntimeException if the size is negative or too large > >> + * for the native size_t type > >> + * > >> + * @throws OutOfMemoryError if the allocation is refused by the > >> system > >> + * > >> + * @see #getByte(long) > >> + * @see #putByte(long, byte) > >> + */ > >> + public long allocateDBBMemory(long bytes) { > >> + return allocateMemory(bytes); > >> + } > >> + > >> + /** > >> * Resizes a new block of native memory, to the given size in bytes. > >> The > >> * contents of the new block past the size of the old block are > >> * uninitialized; they will generally be garbage. The resulting > >> native > >> @@ -687,6 +707,27 @@ > >> } > >> > >> /** > >> + * Resizes a new block of native memory for DirectByteBuffers, to the > >> + * given size in bytes. The contents of the new block past the size > >> of > >> + * the old block are uninitialized; they will generally be garbage. > >> The > >> + * resulting native pointer will be zero if and only if the requested > >> size > >> + * is zero. The resulting native pointer will be aligned for all > >> value > >> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, > >> or > >> + * resize it with {@link #reallocateDBBMemory}. The address passed > >> to > >> + * this method may be null, in which case an allocation will be > >> performed. > >> + * > >> + * @throws RuntimeException if the size is negative or too large > >> + * for the native size_t type > >> + * > >> + * @throws OutOfMemoryError if the allocation is refused by the > >> system > >> + * > >> + * @see #allocateDBBMemory > >> + */ > >> + public long reallocateDBBMemory(long address, long bytes) { > >> + return reallocateMemory(address, bytes); > >> + } > >> + > >> + /** > >> * Sets all bytes in a given block of memory to a fixed value > >> * (usually zero). 
> >> * > >> @@ -918,6 +959,17 @@ > >> checkPointer(null, address); > >> } > >> > >> + /** > >> + * Disposes of a block of native memory, as obtained from {@link > >> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The address > >> passed > >> + * to this method may be null, in which case no action is taken. > >> + * > >> + * @see #allocateDBBMemory > >> + */ > >> + public void freeDBBMemory(long address) { > >> + freeMemory(address); > >> + } > >> + > >> /// random queries > >> > >> /** > >> > >> Unless stated otherwise above: > >> IBM United Kingdom Limited - Registered in England and Wales with number > >> 741598. > >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From kim.barrett at oracle.com Thu Feb 22 02:34:22 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Feb 2018 21:34:22 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <9552C4D9-3C0C-4AD8-A6A6-18FE8FA5175B@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <9552C4D9-3C0C-4AD8-A6A6-18FE8FA5175B@oracle.com> Message-ID: > On Feb 21, 2018, at 8:47 PM, Kim Barrett wrote: > >> On Feb 21, 2018, at 2:30 AM, Thomas St?fe wrote: >> src/hotspot/share/prims/jvm.cpp >> >> Small behaviour change for input buffers of 0 and format strings of "". Before we returned -1, now (I guess) 0. Does any existing caller care? I looked but could not find anyone even checking the return code, so it is probably nothing. > > You are right, that?s a change. Well spotted. > > I think the old behavior here, which matches the documentation, is > probably preferable; it indicates we didn't even write a terminating > NUL, so the buffer may not be terminated. 
I'm going to change > os::vsnprintf accordingly. Of course, this is very much a corner case, > since an empty format string is pretty rare. But it could also arise with, > for example, a format string of ?%s? with an empty argument string. While dealing with this I noticed the Visual Studio documentation for _vsnprintf varies quite a bit from version to version in the description of what happens with a 0 buffer size (including being an error in some versions). As a result, both the VS and POSIX versions of os::vsnprintf will now just special case 0 buffer size and always return -1 for that, indicating that not even the terminating NUL was written. Good thing you made me revisit that. From gnu.andrew at redhat.com Thu Feb 22 04:01:42 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Thu, 22 Feb 2018 04:01:42 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header Message-ID: [CCing hotspot list for review] Bug: https://bugs.openjdk.java.net/browse/JDK-8078628 Webrev: http://cr.openjdk.java.net/~andrew/openjdk8/8078628/webrev.01/ Review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018239.html When testing a slowdebug build of Zero for the backport of 8194739, my build failed because I don't have pre-compiled headers enabled. It seems this was fixed in OpenJDK 9, but never backported. The backported version is pretty similar with a few adjustments for context in the older OpenJDK 8 version. The src/cpu/zero/vm/methodHandles_zero.hpp are my own addition from the same fix I came up with independently, and stops multiple inclusions of that header. Please review and approve for OpenJDK 8 so Zero builds without precompiled headers work there. Thanks, -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. 
(http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From kim.barrett at oracle.com Thu Feb 22 04:20:40 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 21 Feb 2018 23:20:40 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: > On Feb 21, 2018, at 3:23 AM, Marcus Larsson wrote: > > Hi, > > > On 2018-02-21 08:30, Thomas St?fe wrote: >> Hi Kim, >> >> this is good. Please find comments inline / below. >> >> On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett wrote: >>> We still provide os::log_vsnprintf, which differs from the new >>> os::vsnprintf in the return value when output truncation occurs. [?] >>> >> I totally agree. Possible alternatives: >> >> 1 rename it >> 2 move it into the log sub project as an internal implementation detail >> 3 Or, provide a platform independent version of _vcsprintf ( >> https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So whoever >> really wants to count characters in resolved format string should first use >> that function, then alloc the appropiate buffer, then do the real printing. >> Posix variant for _vcsprintf could just be vsnprintf with a zero byte >> output buffer. >> >> I personally like (3) best, followed by (2) > > The best alternative, IMHO, would be to make os::vsnprintf behave just like log_vsnprintf (C99 standard vsnprintf), thus removing the need for log_vsnprintf completely. Also, that behavior is strictly better than always returning -1 on error. There was discussion of the behavior in this area around JDK-8062370 http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-November/015779.html The consensus then was that a guarantee of NUL-termination was desirable. 
That's already at variance with C99 vsnprintf. It was noted at the time that most calls to these functions in HotSpot ignored the return value. That's still true. I looked at the 57 I found via egrep "[=,(]\s*(jio_|::|)v?snprintf\(", and only found 2 that seem to want the C99 vsnprintf return value behavior. (Also found a handful that appear to be mishandling the result in one way or another; will be filing some more bugs...) I didn't look at the roughly 350 calls that are ignoring the result. So being consistent with C99 vsnprintf doesn't appear to be useful in this code base, would add some cost to converting away from using the jio_ functions in order to get the -Wformat warnings, and might add some (likely not really measurable) performance cost for Windows. So I'm not inclined toward making the return value from os::vsnprintf consistent with C99 vsnprintf. If there's a groundswell of support, it can be done, but I'd rather keep such an effort separate from this change, which is intended to address a problem with building using recent versions of Visual Studio, doing a little bit of cleanup in the process. From thomas.schatzl at oracle.com Thu Feb 22 07:16:31 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 22 Feb 2018 08:16:31 +0100 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: <32A93964-BDA3-4A7B-9056-48B18B7F7416@oracle.com> References: <1519217045.2401.14.camel@oracle.com> <32A93964-BDA3-4A7B-9056-48B18B7F7416@oracle.com> Message-ID: <1519283791.2348.1.camel@oracle.com> Hi, On Wed, 2018-02-21 at 11:56 -0500, Kim Barrett wrote: > > On Feb 21, 2018, at 7:44 AM, Thomas Schatzl > com> wrote: > > > > Hi Kim, > > > > seem good, two minor comments: > > > > - in jvmciCodeInstaller.hpp and jvmciJavaClasses.hpp, can the > > "FIXME" comment elaborate a bit more what's broken, and file a CR, > > maybe even detailing how this could be fixed. > > If it is not current any more, please remove the comments. 
> > > > I just really really do not like "FIXME" comments, nobody is going > > to remember next time what the issue was, whether it has been > > fixed, etc. > > I completely forgot about these FIXMEs, and shouldn?t have put out > the RFR > with them still present. My apologies for this. > okay, looks good without the fixme's and the copyright update. No need to see another webrev for these trivial changes. Thanks, Thomas From marcus.larsson at oracle.com Thu Feb 22 08:30:13 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 22 Feb 2018 09:30:13 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> Message-ID: <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> Hi, On 2018-02-22 05:20, Kim Barrett wrote: >> On Feb 21, 2018, at 3:23 AM, Marcus Larsson wrote: >> >> Hi, >> >> >> On 2018-02-21 08:30, Thomas St?fe wrote: >>> Hi Kim, >>> >>> this is good. Please find comments inline / below. >>> >>> On Wed, Feb 21, 2018 at 1:08 AM, Kim Barrett wrote: >>>> We still provide os::log_vsnprintf, which differs from the new >>>> os::vsnprintf in the return value when output truncation occurs. [?] >>>> >>> I totally agree. Possible alternatives: >>> >>> 1 rename it >>> 2 move it into the log sub project as an internal implementation detail >>> 3 Or, provide a platform independent version of _vcsprintf ( >>> https://msdn.microsoft.com/en-us/library/w05tbk72.aspx) instead. So whoever >>> really wants to count characters in resolved format string should first use >>> that function, then alloc the appropiate buffer, then do the real printing. >>> Posix variant for _vcsprintf could just be vsnprintf with a zero byte >>> output buffer. 
>>> >>> I personally like (3) best, followed by (2) >> The best alternative, IMHO, would be to make os::vsnprintf behave just like log_vsnprintf (C99 standard vsnprintf), thus removing the need for log_vsnprintf completely. Also, that behavior is strictly better than always returning -1 on error. > There was discussion of the behavior in this area around JDK-8062370 > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-November/015779.html > > The consensus then was that a guarantee of NUL-termination was > desirable. That's already at variance with C99 vsnprintf. From what I can tell, C99 guarantees NUL-termination (for n > 0, naturally). > > It was noted at the time that most calls to these functions in HotSpot > ignored the return value. That's still true. I looked at the 57 I > found via egrep "[=,(]\s*(jio_|::|)v?snprintf\(", and only found 2 > that seem to want the C99 vsnprintf return value behavior. (Also found > a handful that appear to be mishandling the result in one way or > another; will be filing some more bugs...) I didn't look at the > roughly 350 calls that are ignoring the result. > > So being consistent with C99 vsnprintf doesn't appear to be useful in > this code base, would add some cost to converting away from using the > jio_ functions in order to get the -Wformat warnings, and might add > some (likely not really measurable) performance cost for Windows. Just for the record: Starting with VS2015, (v)snprintf implementations are C99 conforming. > > So I'm not inclined toward making the return value from os::vsnprintf > consistent with C99 vsnprintf. If there's a groundswell of support, it > can be done, but I'd rather keep such an effort separate from this > change, which is intended to address a problem with building using > recent versions of Visual Studio, doing a little bit of cleanup in the > process. That's fine with me, I just wanted to voice my opinion on the API I would prefer. 
Also that keeping two implementations of essentially the same thing seems weird to me (vsnprintf vs log_vsnprintf). Thanks, Marcus From kim.barrett at oracle.com Thu Feb 22 09:52:05 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 22 Feb 2018 04:52:05 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> Message-ID: <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> > On Feb 22, 2018, at 3:30 AM, Marcus Larsson wrote: > On 2018-02-22 05:20, Kim Barrett wrote: >>> >> There was discussion of the behavior in this area around JDK-8062370 >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-November/015779.html >> >> The consensus then was that a guarantee of NUL-termination was >> desirable. That's already at variance with C99 vsnprintf. > > From what I can tell, C99 guarantees NUL-termination (for n > 0, naturally). Hm, you are right. C99 is quite explicit about that. I should have looked there, rather than relying on the man page description. I've been reading the Linux snprintf(3) man page as saying it doesn't nul-terminate on truncation. But looking at it again, in light of the C99 text, I'd now say the man page is ambiguous. Okay, that should simplify things a bit. >> So being consistent with C99 vsnprintf doesn't appear to be useful in >> this code base, would add some cost to converting away from using the >> jio_ functions in order to get the -Wformat warnings, and might add >> some (likely not really measurable) performance cost for Windows. > > Just for the record: Starting with VS2015, (v)snprintf implementations are C99 conforming. Yes. Not yet throwing out support for at least VS2013 though. >> So I'm not inclined toward making the return value from os::vsnprintf
If there's a groundswell of support, it >> can be done, but I'd rather keep such an effort separate from this >> change, which is intended to address a problem with building using >> recent versions of Visual Studio, doing a little bit of cleanup in the >> process. > > That's fine with me, I just wanted to voice my opinion on the API I would prefer. Also that keeping two implementations of essentially the same thing seems weird to me (vsnprintf vs log_vsnprintf). I?d be okay with the C99 behavior. It means the jio_ functions differ though, which makes it harder to do the suggested conversion, as there are ~55 callers that use the return value. From erik.osterlund at oracle.com Thu Feb 22 11:13:42 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 22 Feb 2018 12:13:42 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <1e5ec8ed-cddf-3b18-79f2-4dc81854c72a@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <1f234e18-9226-f6a2-d912-76ef3cb75e21@oracle.com> <5A8D87EF.5070101@oracle.com> <1e5ec8ed-cddf-3b18-79f2-4dc81854c72a@oracle.com> Message-ID: <940429a5-27c1-aae0-3ea8-e58829c6f0cf@oracle.com> Hi Coleen, Okay great. Thanks for the review. /Erik On 2018-02-21 17:52, coleen.phillimore at oracle.com wrote: > > > On 2/21/18 9:53 AM, Erik ?sterlund wrote: >> Hi Coleen, >> >> Thank you for having a look at this. >> >> The BarrierSet switch statements in platform specific code are going >> away relatively soon. Do you still want me to synchronize all the >> switches to handle such errors the same way now before we get there, >> or wait a few patches and have them removed? > > Oh, sure, I'm fine if they're going to be removed (moved?).? I still > think SAP should make sure their platforms build before you push if > they have time though. 
> thanks, > Coleen > >> >> Thanks, >> /Erik >> >> On 2018-02-21 14:17, coleen.phillimore at oracle.com wrote: >>> >>> Hi Erik,? I started looking at this but was quickly overwhelmed by >>> the changes.? It looks like the case for BarrierSet::ModRef is >>> removed in the stubGenerator code(s) but not in templateTable >>> do_oop_store.?? Should the case of BarrierSet::ModRef get a >>> ShouldNotReachHere in stubGenerator in the places where they are >>> removed? >>> >>> Some platforms have code for this in do_oop_store in templateTable >>> and some platforms get ShouldNotReachHere(), which does not pattern >>> match for me. >>> >>> - case BarrierSet::CardTableForRS: >>> - case BarrierSet::CardTableExtension: >>> - case BarrierSet::ModRef: >>> + case BarrierSet::CardTableModRef: >>> >>> >>> I think SAP should test this out on the other platforms to hopefully >>> avoid any issues we've been seeing lately with multi-platform >>> changes.? CCing Thomas. >>> >>> thanks, >>> Coleen >>> >>> On 2/21/18 6:33 AM, Erik ?sterlund wrote: >>>> Hi Erik, >>>> >>>> Thank you for reviewing this. >>>> >>>> New full webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>>> >>>> New incremental webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>>> >>>> On 2018-02-21 09:18, Erik Helin wrote: >>>>> Hi Erik, >>>>> >>>>> this is a very nice improvement, thanks for working on this! >>>>> >>>>> A few minor comments thus far: >>>>> - in stubGenerator_ppc.cpp: >>>>> ? you seem to have lost a `const` in the refactoring >>>> >>>> Fixed. >>>> >>>>> - in psCardTable.hpp: >>>>> ? I don't think card_mark_must_follow_store() is needed, since >>>>> ? PSCardTable passes `false` for `conc_scan` to the CardTable >>>>> ? constructor >>>> >>>> Fixed. 
I took the liberty of also making the condition for >>>> card_mark_must_follow_store() more precise on CMS by making the >>>> condition for scanned_concurrently consider whether >>>> CMSPrecleaningEnabled is set or not (like other generated code does). >>>> >>>>> - in g1CollectedHeap.hpp: >>>>> ? could you store the G1CardTable as a field in G1CollectedHeap? >>>>> Also, >>>>> ? could you name the "getter" just card_table()? (I see that >>>>> ? g1_hot_card_cache method above, but that one should also be >>>>> renamed to >>>>> ? just hot_card_cache, but in another patch) >>>> >>>> Fixed. >>>> >>>>> - in cardTable.hpp and cardTable.cpp: >>>>> ? could you use `hg cp` when constructing these files from >>>>> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? >>>> >>>> Yes, I will do this before pushing to make sure the history is >>>> preserved. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> >>>>> Thanks, >>>>> Erik >>>>> >>>>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>>>> Hi, >>>>>> >>>>>> Here is an updated revision of this webrev after internal >>>>>> feedback from StefanK who helped looking through my changes - >>>>>> thanks a lot for the help with that. >>>>>> >>>>>> The changes to the new revision are a bunch of minor clean up >>>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>>> includes, adding/removing newlines, reverting an assert error >>>>>> message, fixing constructor initialization orders, and things >>>>>> like that. >>>>>> >>>>>> The problem I mentioned last time about the version number of our >>>>>> repo not yet being bumped to 11 and resulting awkwardness in >>>>>> JVMCI has been resolved by simply waiting. So now I changed the >>>>>> JVMCI logic to get the card values from the new location in the >>>>>> corresponding card tables when observing JDK version 11 or above. 
>>>>>> >>>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>>> >>>>>> Incremental webrev (over the rebase): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>>> >>>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 >>>>>> without any issues. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>>>> card marking barrier. However, they have different requirements >>>>>>> how to manage its card tables by the GC. And as the card table >>>>>>> itself is embedded as a part of the CardTableModRefBS barrier >>>>>>> set, this has led to an unnecessary inheritance hierarchy for >>>>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS >>>>>>> and CardTableExtension are CardTableModRefBS subclasses that do >>>>>>> not change anything to do with the barriers. >>>>>>> >>>>>>> To clean up the code, there should really be a separate >>>>>>> CardTable hierarchy that contains the differences how to manage >>>>>>> the card table from the GC point of view, and simply let >>>>>>> CardTableModRefBS have a CardTable. This would allow removing >>>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>>> references from shared code (that really have nothing to do with >>>>>>> the barriers, despite being barrier sets), and significantly >>>>>>> simplify the barrier set code. >>>>>>> >>>>>>> This patch mechanically performs this refactoring. A new >>>>>>> CardTable class has been created with a PSCardTable subclass for >>>>>>> Parallel, a CardTableRS for CMS and Serial, and a G1CardTable >>>>>>> for G1. All references to card tables and their values have been >>>>>>> updated accordingly. 
>>>>>>> >>>>>>> This touches a lot of platform specific code, so would be >>>>>>> fantastic if port maintainers could have a look that I have not >>>>>>> broken anything. >>>>>>> >>>>>>> There is a slight problem that should be pointed out. There is >>>>>>> an unfortunate interaction between Graal and hotspot. Graal >>>>>>> needs to know the values of g1 young cards and dirty cards. This >>>>>>> is queried in different ways in different versions of the JDK in >>>>>>> the ||GraalHotSpotVMConfig.java file. Now these values will move >>>>>>> from their barrier set class to their card table class. That >>>>>>> means we have at least three cases how to find the correct >>>>>>> values. There is one for JDK8, one for JDK9, and now a new one >>>>>>> for JDK11. Except, we have not yet bumped the version number to >>>>>>> 11 in the repo, and therefore it has to be from JDK10 - 11 for >>>>>>> now and updated after incrementing the version number. But that >>>>>>> means that it will be temporarily incompatible with JDK10. That >>>>>>> is okay for our own copy of Graal, but can not be used by >>>>>>> upstream Graal as they are given the choice whether to support >>>>>>> the public JDK10 or the JDK11 that does not quite admit to being >>>>>>> 11 yet. I chose the solution that works in our repository. I >>>>>>> will notify Graal folks of this issue. In the long run, it would >>>>>>> be nice if we could have a more solid interface here. >>>>>>> >>>>>>> However, as an added benefit, this changeset brings about a >>>>>>> hundred copyright headers up to date, so others do not have to >>>>>>> update them for a while. >>>>>>> >>>>>>> Bug: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>>> >>>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
>>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>> >>>> >>> >> > From erik.osterlund at oracle.com Thu Feb 22 11:45:29 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 22 Feb 2018 12:45:29 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> Message-ID: Hi Vladimir, Thank you for having a look at this. I created some utility functions in ci/ciUtilities.hpp to get the card table: jbyte* ci_card_table_address() template T ci_card_table_address_as() The compiler code has been updated to use these helpers instead to fetch the card table in a consistent way. Hope this is kind of what you had in mind? New full webrev: http://cr.openjdk.java.net/~eosterlund/8195142/webrev.03/ New incremental webrev: http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02_03/ Thanks, /Erik On 2018-02-21 18:12, Vladimir Kozlov wrote: > Hi Erik, > > I looked on compiler and aot changes. I noticed repeated sequence in > several files to get byte_map_base() > > + BarrierSet* bs = Universe::heap()->barrier_set(); > + CardTableModRefBS* ctbs = barrier_set_cast(bs); > + CardTable* ct = ctbs->card_table(); > + assert(sizeof(*(ct->byte_map_base())) == sizeof(jbyte), "adjust > this code"); > + LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base()); > > But sometimes it has the assert (graphKit.cpp) and sometimes does not > (aotCodeHeap.cpp). > > Can you factor this sequence into one method which can be used in all > such places? > > Thanks, > Vladimir > > On 2/21/18 3:33 AM, Erik Österlund wrote: >> Hi Erik, >> >> Thank you for reviewing this.
>> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >> >> New incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >> >> On 2018-02-21 09:18, Erik Helin wrote: >>> Hi Erik, >>> >>> this is a very nice improvement, thanks for working on this! >>> >>> A few minor comments thus far: >>> - in stubGenerator_ppc.cpp: >>> ? you seem to have lost a `const` in the refactoring >> >> Fixed. >> >>> - in psCardTable.hpp: >>> ? I don't think card_mark_must_follow_store() is needed, since >>> ? PSCardTable passes `false` for `conc_scan` to the CardTable >>> ? constructor >> >> Fixed. I took the liberty of also making the condition for >> card_mark_must_follow_store() more precise on CMS by making the >> condition for scanned_concurrently consider whether >> CMSPrecleaningEnabled is set or not (like other generated code does). >> >>> - in g1CollectedHeap.hpp: >>> ? could you store the G1CardTable as a field in G1CollectedHeap? Also, >>> ? could you name the "getter" just card_table()? (I see that >>> ? g1_hot_card_cache method above, but that one should also be >>> renamed to >>> ? just hot_card_cache, but in another patch) >> >> Fixed. >> >>> - in cardTable.hpp and cardTable.cpp: >>> ? could you use `hg cp` when constructing these files from >>> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? >> >> Yes, I will do this before pushing to make sure the history is >> preserved. >> >> Thanks, >> /Erik >> >>> >>> Thanks, >>> Erik >>> >>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> Here is an updated revision of this webrev after internal feedback >>>> from StefanK who helped looking through my changes - thanks a lot >>>> for the help with that. >>>> >>>> The changes to the new revision are a bunch of minor clean up >>>> changes, e.g. 
copy right headers, indentation issues, sorting >>>> includes, adding/removing newlines, reverting an assert error >>>> message, fixing constructor initialization orders, and things like >>>> that. >>>> >>>> The problem I mentioned last time about the version number of our >>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>> has been resolved by simply waiting. So now I changed the JVMCI >>>> logic to get the card values from the new location in the >>>> corresponding card tables when observing JDK version 11 or above. >>>> >>>> New full webrev (rebased onto a month fresher jdk-hs): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>> >>>> Incremental webrev (over the rebase): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>> >>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>>> any issues. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>> Hi, >>>>> >>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>> card marking barrier. However, they have different requirements >>>>> how to manage its card tables by the GC. And as the card table >>>>> itself is embedded as a part of the CardTableModRefBS barrier set, >>>>> this has led to an unnecessary inheritance hierarchy for >>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>> change anything to do with the barriers. >>>>> >>>>> To clean up the code, there should really be a separate CardTable >>>>> hierarchy that contains the differences how to manage the card >>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>> have a CardTable. 
This would allow removing >>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>> references from shared code (that really have nothing to do with >>>>> the barriers, despite being barrier sets), and significantly >>>>> simplify the barrier set code. >>>>> >>>>> This patch mechanically performs this refactoring. A new CardTable >>>>> class has been created with a PSCardTable subclass for Parallel, a >>>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>>> references to card tables and their values have been updated >>>>> accordingly. >>>>> >>>>> This touches a lot of platform specific code, so would be >>>>> fantastic if port maintainers could have a look that I have not >>>>> broken anything. >>>>> >>>>> There is a slight problem that should be pointed out. There is an >>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>> know the values of g1 young cards and dirty cards. This is queried >>>>> in different ways in different versions of the JDK in the >>>>> ||GraalHotSpotVMConfig.java file. Now these values will move from >>>>> their barrier set class to their card table class. That means we >>>>> have at least three cases how to find the correct values. There is >>>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, >>>>> we have not yet bumped the version number to 11 in the repo, and >>>>> therefore it has to be from JDK10 - 11 for now and updated after >>>>> incrementing the version number. But that means that it will be >>>>> temporarily incompatible with JDK10. That is okay for our own copy >>>>> of Graal, but can not be used by upstream Graal as they are given >>>>> the choice whether to support the public JDK10 or the JDK11 that >>>>> does not quite admit to being 11 yet. I chose the solution that >>>>> works in our repository. I will notify Graal folks of this issue. >>>>> In the long run, it would be nice if we could have a more solid >>>>> interface here. 
>>>>> >>>>> However, as an added benefit, this changeset brings about a >>>>> hundred copyright headers up to date, so others do not have to >>>>> update them for a while. >>>>> >>>>> Bug: >>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>> >>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>> >>>>> Thanks, >>>>> /Erik >>>> >> From thomas.stuefe at gmail.com Thu Feb 22 13:29:26 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 14:29:26 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: Kim, Marcus, On Thu, Feb 22, 2018 at 10:52 AM, Kim Barrett wrote: > > On Feb 22, 2018, at 3:30 AM, Marcus Larsson > wrote: > > On 2018-02-22 05:20, Kim Barrett wrote: > >>> > >> There was discussion of the behavior in this area around JDK-8062370 > >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2014- November/015779.html > >> > >> The consensus then was that a guarantee of NUL-termination was > >> desirable. That's already at variance with C99 vsnprintf. > > > > From what I can tell, C99 guarantees NUL-termination (for n > 0, > naturally). > > Hm, you are right. C99 is quite explicit about that. I should have > looked there, > rather than relying on the man page description. I've been reading the > Linux > snprintf(3) man page as saying it doesn't nul-terminate on truncation. > But looking > at it again, in light of the C99 text, I'd now say the man page is > ambiguous. Okay, > that should simplify things a bit.
> > >> So being consistent with C99 vsnprintf doesn't appear to be useful in > >> this code base, would add some cost to converting away from using the > >> jio_ functions in order to get the -Wformat warnings, and might add > >> some (likely not really measurable) performance cost for Windows. > > > > Just for the record: Starting with VS2015, (v)snprintf implementations > are C99 conforming. > > Yes. Not yet throwing out support for at least VS2013 though. > > >> So I'm not inclined toward making the return value from os::vsnprintf > >> consistent with C99 vsnprintf. If there's a groundswell of support, it > >> can be done, but I'd rather keep such an effort separate from this > >> change, which is intended to address a problem with building using > >> recent versions of Visual Studio, doing a little bit of cleanup in the > >> process. > > > > That's fine with me, I just wanted to voice my opinion on the API I > would prefer. Also that keeping two implementations of essentially the same > thing seems weird to me (vsnprintf vs log_vsnprintf). > > I'd be okay with the C99 behavior. It means the jio_ functions differ > though, > which makes it harder to do the suggested conversion, as there are ~55 > callers > that use the return value. > > Just to voice my preference on this, I am okay with either version (C99 or returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. Best Regards, Thomas From thomas.stuefe at gmail.com Thu Feb 22 13:58:02 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 14:58:02 +0100 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: Message-ID: Looks good.
Should the include guard for src/cpu/zero/vm/methodHandles_zero.hpp also be added to jdk9? Note that I am no reviewer for jdk8, only 9++. Regards, Thomas On Thu, Feb 22, 2018 at 5:01 AM, Andrew Hughes wrote: > [CCing hotspot list for review] > > Bug: https://bugs.openjdk.java.net/browse/JDK-8078628 > Webrev: http://cr.openjdk.java.net/~andrew/openjdk8/8078628/webrev.01/ > Review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015- > April/018239.html > > When testing a slowdebug build of Zero for the backport of 8194739, my > build > failed because I don't have pre-compiled headers enabled. It seems this > was fixed in OpenJDK 9, but never backported. > > The backported version is pretty similar with a few adjustments for context > in the older OpenJDK 8 version. The src/cpu/zero/vm/methodHandles_zero.hpp > changes are my own addition from the same fix I came up with independently, and > stop > multiple inclusions of that header. > > Please review and approve for OpenJDK 8 so Zero builds without > precompiled headers > work there. > > Thanks, > -- > Andrew :) > > Senior Free Java Software Engineer > Red Hat, Inc.
(http://www.redhat.com) > > Web Site: http://fuseyism.com > Twitter: https://twitter.com/gnu_andrew_java > PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) > Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 > From thomas.stuefe at gmail.com Thu Feb 22 15:31:10 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 16:31:10 +0100 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers Message-ID: Hi all, may I please have reviews and a sponsor for this tiny fix: Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 Webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html Thank you, Thomas From marcus.larsson at oracle.com Thu Feb 22 15:35:34 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 22 Feb 2018 16:35:34 +0100 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: References: Message-ID: Looks good. Marcus On 2018-02-22 16:31, Thomas St?fe wrote: > Hi all, > > may I please have reviews and a sponsor for this tiny fix: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > Thank you, Thomas From coleen.phillimore at oracle.com Thu Feb 22 15:39:16 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Feb 2018 10:39:16 -0500 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: References: Message-ID: <8fc9effe-97f1-d6ec-1099-ac9d2983f2d5@oracle.com> Looks good and I will sponsor this. 
thanks, Coleen On 2/22/18 10:31 AM, Thomas Stüfe wrote: > Hi all, > > may I please have reviews and a sponsor for this tiny fix: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > Thank you, Thomas From lois.foltan at oracle.com Thu Feb 22 15:42:00 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 10:42:00 -0500 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: References: Message-ID: <91d7c5b0-7afd-3e37-2004-70c3d87749cc@oracle.com> Looks good. I have noticed this issue as well, thanks for fixing & I can sponsor. Lois On 2/22/2018 10:31 AM, Thomas Stüfe wrote: > Hi all, > > may I please have reviews and a sponsor for this tiny fix: > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > Thank you, Thomas From thomas.stuefe at gmail.com Thu Feb 22 16:15:27 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 16:15:27 +0000 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: <8fc9effe-97f1-d6ec-1099-ac9d2983f2d5@oracle.com> References: <8fc9effe-97f1-d6ec-1099-ac9d2983f2d5@oracle.com> Message-ID: Thanks, Coleen! On Thu 22. Feb 2018 at 16:39, wrote: > > Looks good and I will sponsor this.
> thanks, > Coleen > > On 2/22/18 10:31 AM, Thomas Stüfe wrote: > > Hi all, > > > > may I please have reviews and a sponsor for this tiny fix: > > > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > > Webrev: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > > > Thank you, Thomas > > From thomas.stuefe at gmail.com Thu Feb 22 16:16:00 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 16:16:00 +0000 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: References: Message-ID: Thank you, Marcus. On Thu 22. Feb 2018 at 16:34, Marcus Larsson wrote: > Looks good. > > Marcus > > > On 2018-02-22 16:31, Thomas Stüfe wrote: > > Hi all, > > > > may I please have reviews and a sponsor for this tiny fix: > > > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > > Webrev: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > > > Thank you, Thomas > > From thomas.stuefe at gmail.com Thu Feb 22 16:16:54 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Feb 2018 16:16:54 +0000 Subject: RFR(xxxs): 8198558: Windows does not build without precompiled headers In-Reply-To: <91d7c5b0-7afd-3e37-2004-70c3d87749cc@oracle.com> References: <91d7c5b0-7afd-3e37-2004-70c3d87749cc@oracle.com> Message-ID: Thank you, Lois. I'll take Coleen up on her offer. Best regards, Thomas On Thu 22. Feb 2018 at 16:42, Lois Foltan wrote: > Looks good. I have noticed this issue as well, thanks for fixing & I > can sponsor.
> Lois > > On 2/22/2018 10:31 AM, Thomas St?fe wrote: > > Hi all, > > > > may I please have reviews and a sponsor for this tiny fix: > > > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198558 > > Webrev: > > > http://cr.openjdk.java.net/~stuefe/webrevs/8198558-windows-noprecompiled-headers-broken/webrev.00/webrev/index.html > > > > Thank you, Thomas > > From volker.simonis at gmail.com Thu Feb 22 17:12:45 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 22 Feb 2018 18:12:45 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC Message-ID: Hi, since the push of "8197999: Accessors in typeArrayOopDesc should use new Access API" we see crashes on Solaris/SPARC (see below). The disassembly at the crash instruction looks as follows: ldx [ %fp + 0x7df ], %o4 st %i2, [ %o4 + %i1 ] O4=0x00000007b80e0468 I1=0x0000000000000012 which results in an unaligned access: siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: 0x00000007b80e047a We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. Which compilers are you using for compiling jdk-hs on Sun/SPARC? Do you have seen this as well or do you have any idea what might have caused this? Thank you and best regards, Volker # # A fatal error has been detected by the Java Runtime Environment: # # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 # # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build 11.0.0.1-internal+0-adhoc..jdk-hs) # Java VM: OpenJDK 64-Bit Server VM (fastdebug 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 gc, solaris-sparc) # Problematic frame: # V [libjvm.so+0xcdb4d8] void Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 # # Core dump will be written. 
Default location: /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core or core.321 # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # --------------- S U M M A R Y ------------ Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 -Djava.net.preferIPv4Stack=true -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir -Djavatest.security.allowPropertiesAccess=true -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active -activeHost localhost -activePort 6584 Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 SPARC Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m 52s) --------------- T H R E A D --------------- Current thread (0x0000000108bca000): JavaThread "Thread-41287" [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, free space=1012k Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xcdb4d8] void Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 V [libjvm.so+0x1bd2900] void Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 j 
java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376 java.base at 11.0.0.1-internal j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 java.base at 11.0.0.1-internal j javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 v ~StubRoutines::call_stub V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const methodHandle&,JavaCallArguments*,Thread*)+0x5bc V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 V [libjvm.so+0x1be1084] oop Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 C [libjava.so+0x16458] Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 J 1506 jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 [0xffffffff6f8ad040+0x00000000000002f8] J 6474 c2 jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 [0xffffffff6fd95960+0x0000000000000064] J 5773 c2 jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 [0xffffffff6f83e620+0x0000000000000050] J 4866 c1 com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] J 5654 c1 com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; (397 bytes) @ 
0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] J 6242 c2 com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] J 1689 c1 com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] v ~StubRoutines::call_stub V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const methodHandle&,JavaCallArguments*,Thread*)+0x5bc V [libjvm.so+0x1088220] void JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 V [libjvm.so+0x1088328] void JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 java.base at 11.0.0.1-internal j javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 v ~StubRoutines::call_stub J 1506 jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec [0xffffffff6f8ad040+0x00000000000000ac] J 6474 c2 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 [0xffffffff6fd95960+0x0000000000000064] J 5773 c2 jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 [0xffffffff6f83e620+0x0000000000000050] J 4866 c1 com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] J 5654 c1 com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] J 6242 c2 com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] J 1689 c1 com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] v ~StubRoutines::call_stub siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: 0x00000007b80e047a Register to memory mapping: G1=0x000000000197000c is an unknown value G2=0xfffffffffffffd48 is an unknown value G3=0x00000000c0100400 is an unknown value G4=0x0 is NULL G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 [error occurred during error reporting (printing register 
info), id 0xa] Registers: G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 G4=0x0000000000000000 G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 Y=0x0000000000000000 O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 O3=0xfffffff68194b410 O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 O7=0x00000007b80e0468 L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 L3=0x0000000000000010 L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 L7=0x0000000000092434 I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 I3=0xfffffff6819dd844 I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 I7=0xfffffff680ed28f8 PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc Top of Stack: (sp=0xffffffff3f9fd340) 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 Instructions: (pc=0xfffffff67ffdb4d8) 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 From 
stefan.karlsson at oracle.com Thu Feb 22 17:19:07 2018
From: stefan.karlsson at oracle.com (Stefan Karlsson)
Date: Thu, 22 Feb 2018 18:19:07 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To:
References:
Message-ID:

This looks suspicious:

+inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {
+ ptrdiff_t offset = element_offset(T_BOOLEAN, which);
+ HeapAccess::store_at(as_oop(), offset, contents);
+}

T_BOOLEAN together with jshort ...

StefanK

On 2018-02-22 18:12, Volker Simonis wrote:
> Hi,
>
> since the push of "8197999: Accessors in typeArrayOopDesc should use new
> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at
> the crash instruction looks as follows:
>
> ldx [ %fp + 0x7df ], %o4
> st %i2, [ %o4 + %i1 ]
>
> O4=0x00000007b80e0468
> I1=0x0000000000000012
>
> which results in an unaligned access:
>
> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr:
> 0x00000007b80e047a
>
> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++
> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3.
> Which compilers are you using for compiling jdk-hs on Sun/SPARC?
>
> Do you have seen this as well or do you have any idea what might have
> caused this?
>
> Thank you and best regards,
> Volker
>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934
> #
> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build
> 11.0.0.1-internal+0-adhoc..jdk-hs)
> # Java VM: OpenJDK 64-Bit Server VM (fastdebug
> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1
> gc, solaris-sparc)
> # Problematic frame:
> # V [libjvm.so+0xcdb4d8] void
> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8
> #
> # Core dump will be written.
Default location: > /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core > or core.321 > # > # If you would like to submit a bug report, please visit: > # http://bugreport.java.com/bugreport/crash.jsp > # > > --------------- S U M M A R Y ------------ > > Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m > -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier > -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 > -Djava.net.preferIPv4Stack=true > -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy > -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config > -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy > -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djavatest.security.allowPropertiesAccess=true > -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active > -activeHost localhost -activePort 6584 > > Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 > SPARC > Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m > 52s) > > --------------- T H R E A D --------------- > > Current thread (0x0000000108bca000): JavaThread "Thread-41287" > [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] > > Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, > free space=1012k > Native frames: (J=compiled Java code, A=aot compiled Java code, > j=interpreted, Vv=VM code, C=native code) > V [libjvm.so+0xcdb4d8] void > Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > V [libjvm.so+0x1bd2900] void > Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 > V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 > C 
[libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 > j > java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376 > java.base at 11.0.0.1-internal > j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 > java.base at 11.0.0.1-internal > j > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const > methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 > V [libjvm.so+0x1be1084] oop > Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 > V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 > C [libjava.so+0x16458] > Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 > J 1506 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 > [0xffffffff6f8ad040+0x00000000000002f8] > J 6474 c2 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1088220] void > JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 > V [libjvm.so+0x1088328] void > JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 > V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 > V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 > V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 > V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 > > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 > java.base at 11.0.0.1-internal > j > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > J 1506 > 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec > [0xffffffff6f8ad040+0x00000000000000ac] > J 6474 c2 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > > siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > 
0x00000007b80e047a > > Register to memory mapping: > > G1=0x000000000197000c is an unknown value > G2=0xfffffffffffffd48 is an unknown value > G3=0x00000000c0100400 is an unknown value > G4=0x0 is NULL > G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 > > [error occurred during error reporting (printing register info), id 0xa] > > Registers: > G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 > G4=0x0000000000000000 > G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 > Y=0x0000000000000000 > O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 > O3=0xfffffff68194b410 > O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 > O7=0x00000007b80e0468 > L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 > L3=0x0000000000000010 > L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 > L7=0x0000000000092434 > I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 > I3=0xfffffff6819dd844 > I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 > I7=0xfffffff680ed28f8 > PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc > > > Top of Stack: (sp=0xffffffff3f9fd340) > 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 > 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 > 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 > 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 > 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 > 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 > 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 > 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 > 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 > 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 > 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 > 0xffffffff3f9fd420: 
00000000000002dc 0000000000000000
> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468
>
> Instructions: (pc=0xfffffff67ffdb4d8)
> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df
> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df
> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16
> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00

From jesper.wilhelmsson at oracle.com Thu Feb 22 17:29:55 2018
From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com)
Date: Thu, 22 Feb 2018 18:29:55 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To:
References:
Message-ID: <18E826A0-8E78-4FA4-9F92-F369DB5451DA@oracle.com>

There were multiple SPARC failures in the HS nightly related to this.
I filed JDK-8198564 for this.
/Jesper

> On 22 Feb 2018, at 18:19, Stefan Karlsson wrote:
>
> This looks suspicious:
>
> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {
> + ptrdiff_t offset = element_offset(T_BOOLEAN, which);
> + HeapAccess::store_at(as_oop(), offset, contents);
> +}
>
> T_BOOLEAN together with jshort ...
>
> StefanK
>
> On 2018-02-22 18:12, Volker Simonis wrote:
>> Hi,
>>
>> since the push of "8197999: Accessors in typeArrayOopDesc should use new
>> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at
>> the crash instruction looks as follows:
>>
>> ldx [ %fp + 0x7df ], %o4
>> st %i2, [ %o4 + %i1 ]
>>
>> O4=0x00000007b80e0468
>> I1=0x0000000000000012
>>
>> which results in an unaligned access:
>>
>> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr:
>> 0x00000007b80e047a
>>
>> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++
>> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3.
>> Which compilers are you using for compiling jdk-hs on Sun/SPARC?
>>
>> Do you have seen this as well or do you have any idea what might have
>> caused this?
>> >> Thank you and best regards, >> Volker >> >> # >> # A fatal error has been detected by the Java Runtime Environment: >> # >> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 >> # >> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build >> 11.0.0.1-internal+0-adhoc..jdk-hs) >> # Java VM: OpenJDK 64-Bit Server VM (fastdebug >> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 >> gc, solaris-sparc) >> # Problematic frame: >> # V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> # >> # Core dump will be written. Default location: >> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core >> or core.321 >> # >> # If you would like to submit a bug report, please visit: >> # http://bugreport.java.com/bugreport/crash.jsp >> # >> >> --------------- S U M M A R Y ------------ >> >> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m >> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier >> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 >> -Djava.net.preferIPv4Stack=true >> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy >> -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config >> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy >> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djavatest.security.allowPropertiesAccess=true >> -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active >> -activeHost localhost -activePort 6584 >> >> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 >> SPARC >> Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m >> 52s) >> >> 
--------------- T H R E A D --------------- >> >> Current thread (0x0000000108bca000): JavaThread "Thread-41287" >> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] >> >> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, >> free space=1012k >> Native frames: (J=compiled Java code, A=aot compiled Java code, >> j=interpreted, Vv=VM code, C=native code) >> V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> V [libjvm.so+0x1bd2900] void >> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 >> V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 >> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 >> j >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376 >> java.base at 11.0.0.1-internal >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 >> java.base at 11.0.0.1-internal >> j >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const >> methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 >> V [libjvm.so+0x1be1084] oop >> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 >> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 >> C [libjava.so+0x16458] >> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 >> J 1506 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 >> [0xffffffff6f8ad040+0x00000000000002f8] >> J 6474 c2 >> 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >> J 5654 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1088220] void >> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 >> V [libjvm.so+0x1088328] void >> 
JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 >> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 >> V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 >> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 >> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 >> >> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0 >> java.base at 11.0.0.1-internal >> j >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> J 1506 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec >> [0xffffffff6f8ad040+0x00000000000000ac] >> J 6474 c2 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; >> java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >> J 5654 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >> 0x00000007b80e047a >> >> Register to memory mapping: >> >> G1=0x000000000197000c is an unknown value >> G2=0xfffffffffffffd48 is an unknown value >> G3=0x00000000c0100400 is an unknown value >> G4=0x0 is NULL >> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 >> >> [error occurred during error reporting (printing register info), id 0xa] >> >> Registers: >> G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 >> G4=0x0000000000000000 >> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 >> Y=0x0000000000000000 >> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 >> O3=0xfffffff68194b410 >> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 >> O7=0x00000007b80e0468 >> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 >> L3=0x0000000000000010 >> L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 >> L7=0x0000000000092434 >> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 >> I3=0xfffffff6819dd844 >> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 >> I7=0xfffffff680ed28f8 >> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc >> >> >> Top of Stack: 
(sp=0xffffffff3f9fd340)
>> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468
>> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010
>> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468
>> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434
>> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012
>> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844
>> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400
>> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8
>> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514
>> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880
>> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68
>> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468
>> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468
>> 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844
>> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000
>> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468
>>
>> Instructions: (pc=0xfffffff67ffdb4d8)
>> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df
>> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df
>> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16
>> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00
>
>

From volker.simonis at gmail.com Thu Feb 22 17:33:40 2018
From: volker.simonis at gmail.com (Volker Simonis)
Date: Thu, 22 Feb 2018 18:33:40 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To:
References:
Message-ID:

On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson wrote:
> This looks suspicious:
>
> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {
> + ptrdiff_t offset = element_offset(T_BOOLEAN, which);
> + HeapAccess::store_at(as_oop(), offset, contents);
> +}
>
> T_BOOLEAN together with jshort ...
>

Yes, that seems like a copy/paste error (which should be fixed), but in
the end it is only used here as input for:

Universe::element_type_should_be_aligned(type)

and that one only differentiates between T_DOUBLE/T_LONG and all the
other basic types. So it's probably not the cause for this error.

Thanks,
Volker

> StefanK
>
> On 2018-02-22 18:12, Volker Simonis wrote:
> Hi,
>
> since the push of "8197999: Accessors in typeArrayOopDesc should use new
> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at
> the crash instruction looks as follows:
>
> ldx [ %fp + 0x7df ], %o4
> st %i2, [ %o4 + %i1 ]
>
> O4=0x00000007b80e0468
> I1=0x0000000000000012
>
> which results in an unaligned access:
>
> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr:
> 0x00000007b80e047a
>
> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++
> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3.
> Which compilers are you using for compiling jdk-hs on Sun/SPARC?
>
> Do you have seen this as well or do you have any idea what might have
> caused this?
>
> Thank you and best regards,
> Volker
>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934
> #
> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build
> 11.0.0.1-internal+0-adhoc..jdk-hs)
> # Java VM: OpenJDK 64-Bit Server VM (fastdebug
> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1
> gc, solaris-sparc)
> # Problematic frame:
> # V [libjvm.so+0xcdb4d8] void
> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8
> #
> # Core dump will be written.
Default location: > /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core > or core.321 > # > # If you would like to submit a bug report, please visit: > # http://bugreport.java.com/bugreport/crash.jsp > # > > --------------- S U M M A R Y ------------ > > Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m > -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier > -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 > -Djava.net.preferIPv4Stack=true > -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy > -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config > -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy > -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djavatest.security.allowPropertiesAccess=true > -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active > -activeHost localhost -activePort 6584 > > Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 > SPARC > Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m > 52s) > > --------------- T H R E A D --------------- > > Current thread (0x0000000108bca000): JavaThread "Thread-41287" > [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] > > Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, > free space=1012k > Native frames: (J=compiled Java code, A=aot compiled Java code, > j=interpreted, Vv=VM code, C=native code) > V [libjvm.so+0xcdb4d8] void > Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > V [libjvm.so+0x1bd2900] void > Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 > V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 > C 
[libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 > j > java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal > j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal > j > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const > methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 > V [libjvm.so+0x1be1084] oop > Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 > V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 > C [libjava.so+0x16458] > Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 > J 1506 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 > [0xffffffff6f8ad040+0x00000000000002f8] > J 6474 c2 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1088220] void > JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 > V [libjvm.so+0x1088328] void > JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 > V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 > V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 > V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 > V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 > > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal > j > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > J 1506 > 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec > [0xffffffff6f8ad040+0x00000000000000ac] > J 6474 c2 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > > siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > 
0x00000007b80e047a > > Register to memory mapping: > > G1=0x000000000197000c is an unknown value > G2=0xfffffffffffffd48 is an unknown value > G3=0x00000000c0100400 is an unknown value > G4=0x0 is NULL > G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 > > [error occurred during error reporting (printing register info), id 0xa] > > Registers: > G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 > G4=0x0000000000000000 > G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 > Y=0x0000000000000000 > O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 > O3=0xfffffff68194b410 > O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 > O7=0x00000007b80e0468 > L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 > L3=0x0000000000000010 > L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 > L7=0x0000000000092434 > I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 > I3=0xfffffff6819dd844 > I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 > I7=0xfffffff680ed28f8 > PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc > > > Top of Stack: (sp=0xffffffff3f9fd340) > 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 > 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 > 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 > 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 > 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 > 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 > 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 > 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 > 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 > 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 > 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 > 0xffffffff3f9fd420: 
00000000000002dc 0000000000000000 > 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 > > Instructions: (pc=0xfffffff67ffdb4d8) > 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df > 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df > 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 > 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 > > > From volker.simonis at gmail.com Thu Feb 22 17:35:40 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 22 Feb 2018 18:35:40 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: <18E826A0-8E78-4FA4-9F92-F369DB5451DA@oracle.com> References: <18E826A0-8E78-4FA4-9F92-F369DB5451DA@oracle.com> Message-ID: Thanks for the confirmation and for opening the bug! Regards, Volker On Thu, Feb 22, 2018 at 6:29 PM, wrote: > There were multiple SPARC failures in the HS nightly related to this. I > filed JDK-8198564 for this. > /Jesper > > > On 22 Feb 2018, at 18:19, Stefan Karlsson > wrote: > > > > This looks suspicious: > > > > +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) > { > > + ptrdiff_t offset = element_offset(T_BOOLEAN, which); > > + HeapAccess::store_at(as_oop(), offset, contents); > > +} > > > > > > T_BOOLEAN together with jshort ... > > > > StefanK > > > > > > On 2018-02-22 18:12, Volker Simonis wrote: > >> Hi, > >> > >> since the push of "8197999: Accessors in typeArrayOopDesc should use new > >> Access API" we see crashes on Solaris/SPARC (see below). The > disassembly at > >> the crash instruction looks as follows: > >> > >> ldx [ %fp + 0x7df ], %o4 > >> st %i2, [ %o4 + %i1 ] > >> > >> O4=0x00000007b80e0468 > >> I1=0x0000000000000012 > >> > >> which results in an unaligned access: > >> > >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > >> 0x00000007b80e047a > >> > >> We are compiling with SS12u4 with updates from October 2017 (i.e. 
Sun > C++ > >> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris > 11.3. > >> Which compilers are you using for compiling jdk-hs on Sun/SPARC? > >> > >> Do you have seen this as well or do you have any idea what might have > >> caused this? > >> > >> Thank you and best regards, > >> Volker > >> > >> # > >> # A fatal error has been detected by the Java Runtime Environment: > >> # > >> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 > >> # > >> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build > >> 11.0.0.1-internal+0-adhoc..jdk-hs) > >> # Java VM: OpenJDK 64-Bit Server VM (fastdebug > >> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed > oops, g1 > >> gc, solaris-sparc) > >> # Problematic frame: > >> # V [libjvm.so+0xcdb4d8] void > >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > >> # > >> # Core dump will be written. Default location: > >> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/ > jck_lang_vm_work/core > >> or core.321 > >> # > >> # If you would like to submit a bug report, please visit: > >> # http://bugreport.java.com/bugreport/crash.jsp > >> # > >> > >> --------------- S U M M A R Y ------------ > >> > >> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m > >> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier > >> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 > >> -Djava.net.preferIPv4Stack=true > >> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/ > jck/jck11/JCK-runtime-11/lib/jck.auth.policy > >> -Djava.security.auth.login.config=/sapmnt/hs0131/a/ > sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config > >> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/ > jck/jck11/JCK-runtime-11/lib/jck.policy > >> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_ > jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > >> -Djavatest.security.allowPropertiesAccess=true > >> 
-Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_ > o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > >> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain > -active > >> -activeHost localhost -activePort 6584 > >> > >> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris > 11.3 > >> SPARC > >> Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m > >> 52s) > >> > >> --------------- T H R E A D --------------- > >> > >> Current thread (0x0000000108bca000): JavaThread "Thread-41287" > >> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] > >> > >> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, > >> free space=1012k > >> Native frames: (J=compiled Java code, A=aot compiled Java code, > >> j=interpreted, Vv=VM code, C=native code) > >> V [libjvm.so+0xcdb4d8] void > >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > >> V [libjvm.so+0x1bd2900] void > >> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 > >> V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 > >> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 > >> j > >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/ > Object;)V+-1473468376 > >> java.base at 11.0.0.1-internal > >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/ > Object;)V+0 > >> java.base at 11.0.0.1-internal > >> j > >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001. 
> execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > >> v ~StubRoutines::call_stub > >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc > >> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const > >> methodHandle&,Handle,bool,objArrayHandle,BasicType, > objArrayHandle,bool,Thread*)+0x2c60 > >> V [libjvm.so+0x1be1084] oop > >> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 > >> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 > >> C [libjava.so+0x16458] > >> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 > >> J 1506 > >> jdk.internal.reflect.NativeMethodAccessorImpl. > invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[ > Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 > >> [0xffffffff6f8ad040+0x00000000000002f8] > >> J 6474 c2 > >> jdk.internal.reflect.NativeMethodAccessorImpl. > invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > >> [0xffffffff6fd95960+0x0000000000000064] > >> J 5773 c2 > >> jdk.internal.reflect.DelegatingMethodAccessorImpl. 
> invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > >> [0xffffffff6f83e620+0x0000000000000050] > >> J 4866 c1 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/ > String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/ > sun/javatest/Status; > >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+ > 0x0000000000000e44] > >> J 5654 c1 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute( > Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/ > String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/ > sun/javatest/Status; > >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+ > 0x0000000000002ea0] > >> J 6242 c2 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/ > PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+ > 0x00000000000030b0] > >> J 1689 c1 > >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/ > lang/Object; > >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > >> J 6097 c1 java.util.concurrent.FutureTask.run()V > java.base at 11.0.0.1-internal > >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+ > 0x0000000000000ac0] > >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 > bytes) @ > >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > >> v ~StubRoutines::call_stub > >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc > >> V [libjvm.so+0x1088220] void > >> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*, > Symbol*,JavaCallArguments*,Thread*)+0x1e0 > >> V [libjvm.so+0x1088328] void > >> JavaCalls::call_virtual(JavaValue*,Handle,Klass*, > Symbol*,Symbol*,Thread*)+0xb8 > >> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 > >> V [libjvm.so+0x1de56e4] void 
JavaThread::thread_main_inner()+0x2e4 > >> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 > >> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 > >> > >> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/ > Object;)V+0 > >> java.base at 11.0.0.1-internal > >> j > >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001. > execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > >> v ~StubRoutines::call_stub > >> J 1506 > >> jdk.internal.reflect.NativeMethodAccessorImpl. > invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[ > Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec > >> [0xffffffff6f8ad040+0x00000000000000ac] > >> J 6474 c2 > >> jdk.internal.reflect.NativeMethodAccessorImpl. > invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 > >> [0xffffffff6fd95960+0x0000000000000064] > >> J 5773 c2 > >> jdk.internal.reflect.DelegatingMethodAccessorImpl. 
> invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; > >> java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 > >> [0xffffffff6f83e620+0x0000000000000050] > >> J 4866 c1 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/ > String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/ > sun/javatest/Status; > >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+ > 0x0000000000000e44] > >> J 5654 c1 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute( > Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/ > String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/ > sun/javatest/Status; > >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+ > 0x0000000000002ea0] > >> J 6242 c2 > >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/ > PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+ > 0x00000000000030b0] > >> J 1689 c1 > >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/ > lang/Object; > >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > >> J 6097 c1 java.util.concurrent.FutureTask.run()V > java.base at 11.0.0.1-internal > >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+ > 0x0000000000000ac0] > >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 > bytes) @ > >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > >> v ~StubRoutines::call_stub > >> > >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > >> 0x00000007b80e047a > >> > >> Register to memory mapping: > >> > >> G1=0x000000000197000c is an unknown value > >> G2=0xfffffffffffffd48 is an unknown value > >> G3=0x00000000c0100400 is an unknown value > >> G4=0x0 is NULL > >> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 > >> > >> [error occurred during error reporting (printing register info), id 0xa] > >> > >> Registers: > >> G1=0x000000000197000c 
G2=0xfffffffffffffd48 G3=0x00000000c0100400 > >> G4=0x0000000000000000 > >> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 > >> Y=0x0000000000000000 > >> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 > >> O3=0xfffffff68194b410 > >> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 > >> O7=0x00000007b80e0468 > >> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 > >> L3=0x0000000000000010 > >> L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 > >> L7=0x0000000000092434 > >> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 > >> I3=0xfffffff6819dd844 > >> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 > >> I7=0xfffffff680ed28f8 > >> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc > >> > >> > >> Top of Stack: (sp=0xffffffff3f9fd340) > >> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 > >> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 > >> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 > >> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 > >> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 > >> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 > >> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 > >> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 > >> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 > >> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 > >> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 > >> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 > >> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 > >> 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 > >> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 > >> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 > >> > >> Instructions: (pc=0xfffffff67ffdb4d8) > >> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df > >> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df > 
>> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 > >> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 > > > > > > From lois.foltan at oracle.com Thu Feb 22 18:32:57 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 13:32:57 -0500 Subject: (11) RFR (S) JDK-8198304: VS2017 (C4838, C4312) Various conversion issues with gtest tests Message-ID: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com> Please review this change to fix VS2017 conversion compilation errors within two Hotspot gtest tests. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198304/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8198304 Testing: hs-tier(1-3), jdk-tier(1-3) complete Thanks, Lois From vladimir.kozlov at oracle.com Thu Feb 22 18:30:10 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 22 Feb 2018 10:30:10 -0800 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> Message-ID: <28cd29a7-a7e7-f9a4-6e1f-47bf9eb47ba7@oracle.com> Thank you, Erik I am old C++ guy and using template for just casting is overkill to me. You still specify when you can just use cast (type). So what benefit template has in this case? Otherwise looks good. Thanks, Vladimir On 2/22/18 3:45 AM, Erik ?sterlund wrote: > Hi Vladimir, > > Thank you for having a look at this. > > I created some utility functions in ci/ciUtilities.hpp to get the card > table: > > jbyte* ci_card_table_address() > template T ci_card_table_address_as() > > The compiler code has been updated to use these helpers instead to fetch > the card table in a consistent way. > > Hope this is kind of what you had in mind? 
> > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.03/ > > New incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02_03/ > > Thanks, > /Erik > > On 2018-02-21 18:12, Vladimir Kozlov wrote: >> Hi Erik, >> >> I looked on compiler and aot changes. I noticed repeated sequence in >> several files to get byte_map_base() >> >> +? BarrierSet* bs = Universe::heap()->barrier_set(); >> +? CardTableModRefBS* ctbs = barrier_set_cast(bs); >> +? CardTable* ct = ctbs->card_table(); >> +? assert(sizeof(*(ct->byte_map_base())) == sizeof(jbyte), "adjust >> this code"); >> +? LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base()); >> >> But sometimes it has the assert (graphKit.cpp) and sometimes does not >> (aotCodeHeap.cpp). >> >> Can you factor this sequence into one method which can be used in all >> such places? >> >> Thanks, >> Vladimir >> >> On 2/21/18 3:33 AM, Erik ?sterlund wrote: >>> Hi Erik, >>> >>> Thank you for reviewing this. >>> >>> New full webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>> >>> New incremental webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>> >>> On 2018-02-21 09:18, Erik Helin wrote: >>>> Hi Erik, >>>> >>>> this is a very nice improvement, thanks for working on this! >>>> >>>> A few minor comments thus far: >>>> - in stubGenerator_ppc.cpp: >>>> ? you seem to have lost a `const` in the refactoring >>> >>> Fixed. >>> >>>> - in psCardTable.hpp: >>>> ? I don't think card_mark_must_follow_store() is needed, since >>>> ? PSCardTable passes `false` for `conc_scan` to the CardTable >>>> ? constructor >>> >>> Fixed. I took the liberty of also making the condition for >>> card_mark_must_follow_store() more precise on CMS by making the >>> condition for scanned_concurrently consider whether >>> CMSPrecleaningEnabled is set or not (like other generated code does). >>> >>>> - in g1CollectedHeap.hpp: >>>> ? 
could you store the G1CardTable as a field in G1CollectedHeap? Also, >>>> ? could you name the "getter" just card_table()? (I see that >>>> ? g1_hot_card_cache method above, but that one should also be >>>> renamed to >>>> ? just hot_card_cache, but in another patch) >>> >>> Fixed. >>> >>>> - in cardTable.hpp and cardTable.cpp: >>>> ? could you use `hg cp` when constructing these files from >>>> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? >>> >>> Yes, I will do this before pushing to make sure the history is >>> preserved. >>> >>> Thanks, >>> /Erik >>> >>>> >>>> Thanks, >>>> Erik >>>> >>>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>>> Hi, >>>>> >>>>> Here is an updated revision of this webrev after internal feedback >>>>> from StefanK who helped looking through my changes - thanks a lot >>>>> for the help with that. >>>>> >>>>> The changes to the new revision are a bunch of minor clean up >>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>> includes, adding/removing newlines, reverting an assert error >>>>> message, fixing constructor initialization orders, and things like >>>>> that. >>>>> >>>>> The problem I mentioned last time about the version number of our >>>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>>> has been resolved by simply waiting. So now I changed the JVMCI >>>>> logic to get the card values from the new location in the >>>>> corresponding card tables when observing JDK version 11 or above. >>>>> >>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>> >>>>> Incremental webrev (over the rebase): >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>> >>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>>>> any issues. 
>>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>>> Hi, >>>>>> >>>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>>> card marking barrier. However, they have different requirements >>>>>> how to manage its card tables by the GC. And as the card table >>>>>> itself is embedded as a part of the CardTableModRefBS barrier set, >>>>>> this has led to an unnecessary inheritance hierarchy for >>>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>>> change anything to do with the barriers. >>>>>> >>>>>> To clean up the code, there should really be a separate CardTable >>>>>> hierarchy that contains the differences how to manage the card >>>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>>> have a CardTable. This would allow removing >>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>> references from shared code (that really have nothing to do with >>>>>> the barriers, despite being barrier sets), and significantly >>>>>> simplify the barrier set code. >>>>>> >>>>>> This patch mechanically performs this refactoring. A new CardTable >>>>>> class has been created with a PSCardTable subclass for Parallel, a >>>>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>>>> references to card tables and their values have been updated >>>>>> accordingly. >>>>>> >>>>>> This touches a lot of platform specific code, so would be >>>>>> fantastic if port maintainers could have a look that I have not >>>>>> broken anything. >>>>>> >>>>>> There is a slight problem that should be pointed out. There is an >>>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>>> know the values of g1 young cards and dirty cards. This is queried >>>>>> in different ways in different versions of the JDK in the >>>>>> ||GraalHotSpotVMConfig.java file. 
Now these values will move from >>>>>> their barrier set class to their card table class. That means we >>>>>> have at least three cases how to find the correct values. There is >>>>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, >>>>>> we have not yet bumped the version number to 11 in the repo, and >>>>>> therefore it has to be from JDK10 - 11 for now and updated after >>>>>> incrementing the version number. But that means that it will be >>>>>> temporarily incompatible with JDK10. That is okay for our own copy >>>>>> of Graal, but can not be used by upstream Graal as they are given >>>>>> the choice whether to support the public JDK10 or the JDK11 that >>>>>> does not quite admit to being 11 yet. I chose the solution that >>>>>> works in our repository. I will notify Graal folks of this issue. >>>>>> In the long run, it would be nice if we could have a more solid >>>>>> interface here. >>>>>> >>>>>> However, as an added benefit, this changeset brings about a >>>>>> hundred copyright headers up to date, so others do not have to >>>>>> update them for a while. >>>>>> >>>>>> Bug: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>> >>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
>>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>> >>> > From vladimir.kozlov at oracle.com Thu Feb 22 18:41:02 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 22 Feb 2018 10:41:02 -0800 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <28cd29a7-a7e7-f9a4-6e1f-47bf9eb47ba7@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> <28cd29a7-a7e7-f9a4-6e1f-47bf9eb47ba7@oracle.com> Message-ID: On 2/22/18 10:30 AM, Vladimir Kozlov wrote: > Thank you, Erik > > I am old C++ guy and using template for just casting is overkill to me. > You still specify when you can just use cast (type). So what > benefit template has in this case? Never mind my comment - I missed reinterpret_cast<> you need to cast a pointer to basic types. Changes are good. Thanks, Vladimir > > Otherwise looks good. > > Thanks, > Vladimir > > On 2/22/18 3:45 AM, Erik ?sterlund wrote: >> Hi Vladimir, >> >> Thank you for having a look at this. >> >> I created some utility functions in ci/ciUtilities.hpp to get the card >> table: >> >> jbyte* ci_card_table_address() >> template T ci_card_table_address_as() >> >> The compiler code has been updated to use these helpers instead to >> fetch the card table in a consistent way. >> >> Hope this is kind of what you had in mind? >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.03/ >> >> New incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02_03/ >> >> Thanks, >> /Erik >> >> On 2018-02-21 18:12, Vladimir Kozlov wrote: >>> Hi Erik, >>> >>> I looked on compiler and aot changes. I noticed repeated sequence in >>> several files to get byte_map_base() >>> >>> +? BarrierSet* bs = Universe::heap()->barrier_set(); >>> +? CardTableModRefBS* ctbs = barrier_set_cast(bs); >>> +? 
CardTable* ct = ctbs->card_table(); >>> +? assert(sizeof(*(ct->byte_map_base())) == sizeof(jbyte), "adjust >>> this code"); >>> +? LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base()); >>> >>> But sometimes it has the assert (graphKit.cpp) and sometimes does not >>> (aotCodeHeap.cpp). >>> >>> Can you factor this sequence into one method which can be used in all >>> such places? >>> >>> Thanks, >>> Vladimir >>> >>> On 2/21/18 3:33 AM, Erik ?sterlund wrote: >>>> Hi Erik, >>>> >>>> Thank you for reviewing this. >>>> >>>> New full webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>>> >>>> New incremental webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>>> >>>> On 2018-02-21 09:18, Erik Helin wrote: >>>>> Hi Erik, >>>>> >>>>> this is a very nice improvement, thanks for working on this! >>>>> >>>>> A few minor comments thus far: >>>>> - in stubGenerator_ppc.cpp: >>>>> ? you seem to have lost a `const` in the refactoring >>>> >>>> Fixed. >>>> >>>>> - in psCardTable.hpp: >>>>> ? I don't think card_mark_must_follow_store() is needed, since >>>>> ? PSCardTable passes `false` for `conc_scan` to the CardTable >>>>> ? constructor >>>> >>>> Fixed. I took the liberty of also making the condition for >>>> card_mark_must_follow_store() more precise on CMS by making the >>>> condition for scanned_concurrently consider whether >>>> CMSPrecleaningEnabled is set or not (like other generated code does). >>>> >>>>> - in g1CollectedHeap.hpp: >>>>> ? could you store the G1CardTable as a field in G1CollectedHeap? Also, >>>>> ? could you name the "getter" just card_table()? (I see that >>>>> ? g1_hot_card_cache method above, but that one should also be >>>>> renamed to >>>>> ? just hot_card_cache, but in another patch) >>>> >>>> Fixed. >>>> >>>>> - in cardTable.hpp and cardTable.cpp: >>>>> ? could you use `hg cp` when constructing these files from >>>>> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? 
>>>> >>>> Yes, I will do this before pushing to make sure the history is >>>> preserved. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> >>>>> Thanks, >>>>> Erik >>>>> >>>>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>>>> Hi, >>>>>> >>>>>> Here is an updated revision of this webrev after internal feedback >>>>>> from StefanK who helped looking through my changes - thanks a lot >>>>>> for the help with that. >>>>>> >>>>>> The changes to the new revision are a bunch of minor clean up >>>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>>> includes, adding/removing newlines, reverting an assert error >>>>>> message, fixing constructor initialization orders, and things like >>>>>> that. >>>>>> >>>>>> The problem I mentioned last time about the version number of our >>>>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>>>> has been resolved by simply waiting. So now I changed the JVMCI >>>>>> logic to get the card values from the new location in the >>>>>> corresponding card tables when observing JDK version 11 or above. >>>>>> >>>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>>> >>>>>> Incremental webrev (over the rebase): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>>> >>>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 >>>>>> without any issues. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>>>> card marking barrier. However, they have different requirements >>>>>>> how to manage its card tables by the GC. 
And as the card table >>>>>>> itself is embedded as a part of the CardTableModRefBS barrier >>>>>>> set, this has led to an unnecessary inheritance hierarchy for >>>>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>>>> change anything to do with the barriers. >>>>>>> >>>>>>> To clean up the code, there should really be a separate CardTable >>>>>>> hierarchy that contains the differences how to manage the card >>>>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>>>> have a CardTable. This would allow removing >>>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>>> references from shared code (that really have nothing to do with >>>>>>> the barriers, despite being barrier sets), and significantly >>>>>>> simplify the barrier set code. >>>>>>> >>>>>>> This patch mechanically performs this refactoring. A new >>>>>>> CardTable class has been created with a PSCardTable subclass for >>>>>>> Parallel, a CardTableRS for CMS and Serial, and a G1CardTable for >>>>>>> G1. All references to card tables and their values have been >>>>>>> updated accordingly. >>>>>>> >>>>>>> This touches a lot of platform specific code, so would be >>>>>>> fantastic if port maintainers could have a look that I have not >>>>>>> broken anything. >>>>>>> >>>>>>> There is a slight problem that should be pointed out. There is an >>>>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>>>> know the values of g1 young cards and dirty cards. This is >>>>>>> queried in different ways in different versions of the JDK in the >>>>>>> GraalHotSpotVMConfig.java file. Now these values will move from >>>>>>> their barrier set class to their card table class. That means we >>>>>>> have at least three cases how to find the correct values. There >>>>>>> is one for JDK8, one for JDK9, and now a new one for JDK11. 
>>>>>>> Except, we have not yet bumped the version number to 11 in the >>>>>>> repo, and therefore it has to be from JDK10 - 11 for now and >>>>>>> updated after incrementing the version number. But that means >>>>>>> that it will be temporarily incompatible with JDK10. That is okay >>>>>>> for our own copy of Graal, but can not be used by upstream Graal >>>>>>> as they are given the choice whether to support the public JDK10 >>>>>>> or the JDK11 that does not quite admit to being 11 yet. I chose >>>>>>> the solution that works in our repository. I will notify Graal >>>>>>> folks of this issue. In the long run, it would be nice if we >>>>>>> could have a more solid interface here. >>>>>>> >>>>>>> However, as an added benefit, this changeset brings about a >>>>>>> hundred copyright headers up to date, so others do not have to >>>>>>> update them for a while. >>>>>>> >>>>>>> Bug: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>>> >>>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>> >>>> >> From rkennke at redhat.com Thu Feb 22 18:49:39 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Feb 2018 19:49:39 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: Message-ID: well, if you store a short (2-bytes) into an offset computed for boolean (1-byte) you may store unaligned? Should I take over bug JDK-8198564 (after all, it was my change) or is somebody already on it? Roman On Thu, Feb 22, 2018 at 6:33 PM, Volker Simonis wrote: > On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson > wrote: > >> This looks suspicious: >> >> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {+ ptrdiff_t offset = element_offset(T_BOOLEAN, which);+ HeapAccess::store_at(as_oop(), offset, contents);+} >> >> >> T_BOOLEAN together with jshort ... 
>> >> > Yes, that seems like a copy/paste error (which should be fixed), but in the > end it is only used here as input for: > > Universe::element_type_should_be_aligned(type) > > and that one only differentiates between T_DOUBLE/T_LONG and all the other > basic types. So it's probably not the cause for this error. > > Thanks, > Volker > > >> StefanK >> >> >> >> On 2018-02-22 18:12, Volker Simonis wrote: >> >> Hi, >> >> since the push of "8197999: Accessors in typeArrayOopDesc should use new >> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at >> the crash instruction looks as follows: >> >> ldx [ %fp + 0x7df ], %o4 >> st %i2, [ %o4 + %i1 ] >> >> O4=0x00000007b80e0468 >> I1=0x0000000000000012 >> >> which results in an unaligned access: >> >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >> 0x00000007b80e047a >> >> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ >> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. >> Which compilers are you using for compiling jdk-hs on Sun/SPARC? >> >> Do you have seen this as well or do you have any idea what might have >> caused this? >> >> Thank you and best regards, >> Volker >> >> # >> # A fatal error has been detected by the Java Runtime Environment: >> # >> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 >> # >> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build >> 11.0.0.1-internal+0-adhoc..jdk-hs) >> # Java VM: OpenJDK 64-Bit Server VM (fastdebug >> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 >> gc, solaris-sparc) >> # Problematic frame: >> # V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> # >> # Core dump will be written. 
Default location: >> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core >> or core.321 >> # >> # If you would like to submit a bug report, please visit: >> # http://bugreport.java.com/bugreport/crash.jsp >> # >> >> --------------- S U M M A R Y ------------ >> >> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m >> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier >> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 >> -Djava.net.preferIPv4Stack=true >> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy >> -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config >> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy >> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djavatest.security.allowPropertiesAccess=true >> -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active >> -activeHost localhost -activePort 6584 >> >> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 >> SPARC >> Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m >> 52s) >> >> --------------- T H R E A D --------------- >> >> Current thread (0x0000000108bca000): JavaThread "Thread-41287" >> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] >> >> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, >> free space=1012k >> Native frames: (J=compiled Java code, A=aot compiled Java code, >> j=interpreted, Vv=VM code, C=native code) >> V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> V [libjvm.so+0x1bd2900] void >> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 >> V 
[libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 >> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 >> j >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >> j >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const >> methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 >> V [libjvm.so+0x1be1084] oop >> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 >> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 >> C [libjava.so+0x16458] >> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 >> J 1506 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 >> [0xffffffff6f8ad040+0x00000000000002f8] >> J 6474 c2 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >> J 5654 c1 >> 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1088220] void >> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 >> V [libjvm.so+0x1088328] void >> JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 >> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 >> V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 >> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 >> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 >> >> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) >> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >> j >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> J 1506 >> 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec >> [0xffffffff6f8ad040+0x00000000000000ac] >> J 6474 c2 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >> J 5654 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), 
si_addr: >> 0x00000007b80e047a >> >> Register to memory mapping: >> >> G1=0x000000000197000c is an unknown value >> G2=0xfffffffffffffd48 is an unknown value >> G3=0x00000000c0100400 is an unknown value >> G4=0x0 is NULL >> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 >> >> [error occurred during error reporting (printing register info), id 0xa] >> >> Registers: >> G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 >> G4=0x0000000000000000 >> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 >> Y=0x0000000000000000 >> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 >> O3=0xfffffff68194b410 >> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 >> O7=0x00000007b80e0468 >> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 >> L3=0x0000000000000010 >> L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 >> L7=0x0000000000092434 >> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 >> I3=0xfffffff6819dd844 >> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 >> I7=0xfffffff680ed28f8 >> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc >> >> >> Top of Stack: (sp=0xffffffff3f9fd340) >> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 >> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 >> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 >> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 >> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 >> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 >> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 >> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 >> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 >> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 >> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 >> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 >> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 >> 0xffffffff3f9fd410: 
fffffff68194b410 fffffff6819dd844 >> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 >> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 >> >> Instructions: (pc=0xfffffff67ffdb4d8) >> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df >> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df >> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 >> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 >> >> >> From lois.foltan at oracle.com Thu Feb 22 18:55:14 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 13:55:14 -0500 Subject: (11) RFR (S) JDK-8197864: VS2017 (C4334) Result of 32-bit Shift Implicitly Converted to 64 bits Message-ID: Please review this fix to properly perform a 64-bit shift when setting SlowSignatureHandler::_fp_identifiers within _WIN64 conditional code. Since the compiler determined that the constant could fit into an int and the type of SlowSignatureHandler::_num_args is of type unsigned int as well, a 32-bit shift would result, yielding a VS2017 compilation warning. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197864/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8197864 contributed-by: Kim Barrett & Lois Foltan Testing: hs-tier(1-3), jdk-tier(1-3) complete Thanks, Lois From christian.tornqvist at oracle.com Thu Feb 22 19:01:52 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Thu, 22 Feb 2018 14:01:52 -0500 Subject: RFR: 8198551 - Rename hotspot_tier1 test group to tier1 Message-ID: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> Please review this small change that renames the hotspot_tier1 test group to tier1 in order to match the naming definition of langtools, jdk, jaxp and nashorn. 
This enables the use of run-test to run all of the tier1 tests locally: make run-test-tier1 Building target 'run-test-tier1' in configuration 'macosx-x64' warning: no debug symbols in executable (-arch x86_64) warning: no debug symbols in executable (-arch x86_64) Test selection 'tier1', will run: * jtreg:open/test/hotspot/jtreg:tier1 * jtreg:open/test/jdk:tier1 * jtreg:open/test/langtools:tier1 * jtreg:open/test/nashorn:tier1 * jtreg:open/test/jaxp:tier1 The jdk_svc_sanity test group is part of the Hotspot tier1 definitions but not part of the Hotspot test root, so I moved that group into jdk:tier1 Webrev: http://cr.openjdk.java.net/~ctornqvi/webrev/8198551/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8198551 Thanks, Christian From rkennke at redhat.com Thu Feb 22 19:02:01 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Feb 2018 20:02:01 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: Message-ID: Hmm. None of the tests mentioned in bug fails for me. While the patch is obvious (see below) I have no way to verify that it actually fixes the problem. # HG changeset patch # Parent f05f4b5cea20d69ac4cc56baf63c55c7e6c0f05c diff --git a/src/hotspot/share/oops/typeArrayOop.inline.hpp b/src/hotspot/share/oops/typeArrayOop.inline.hpp --- a/src/hotspot/share/oops/typeArrayOop.inline.hpp +++ b/src/hotspot/share/oops/typeArrayOop.inline.hpp @@ -130,7 +130,7 @@ return HeapAccess::load_at(as_oop(), offset); } inline void typeArrayOopDesc::short_at_put(int which, jshort contents) { - ptrdiff_t offset = element_offset(T_BOOLEAN, which); + ptrdiff_t offset = element_offset(T_SHORT, which); HeapAccess::store_at(as_oop(), offset, contents); } On Thu, Feb 22, 2018 at 7:49 PM, Roman Kennke wrote: > well, if you store a short (2-bytes) into an offset computed for > boolean (1-byte) you may store unaligned? > > Should I take over bug JDK-8198564 (after all, it was my change) or is > somebody already on it? 
> > Roman > > On Thu, Feb 22, 2018 at 6:33 PM, Volker Simonis > wrote: >> On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson >> wrote: >> >>> This looks suspicious: >>> >>> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {+ ptrdiff_t offset = element_offset(T_BOOLEAN, which);+ HeapAccess::store_at(as_oop(), offset, contents);+} >>> >>> >>> T_BOOLEAN together with jshort ... >>> >>> >> Yes, that seems like a copy/paste error (which should be fixed), but in the >> end it is only used here as input for: >> >> Universe::element_type_should_be_aligned(type) >> >> and that one only differentiates between T_DOUBLE/T_LONG and all the other >> basic types. So it's probably not the cause for this error. >> >> Thanks, >> Volker >> >> >>> StefanK >>> >>> >>> >>> On 2018-02-22 18:12, Volker Simonis wrote: >>> >>> Hi, >>> >>> since the push of "8197999: Accessors in typeArrayOopDesc should use new >>> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at >>> the crash instruction looks as follows: >>> >>> ldx [ %fp + 0x7df ], %o4 >>> st %i2, [ %o4 + %i1 ] >>> >>> O4=0x00000007b80e0468 >>> I1=0x0000000000000012 >>> >>> which results in an unaligned access: >>> >>> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >>> 0x00000007b80e047a >>> >>> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ >>> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. >>> Which compilers are you using for compiling jdk-hs on Sun/SPARC? >>> >>> Do you have seen this as well or do you have any idea what might have >>> caused this? 
>>> >>> Thank you and best regards, >>> Volker >>> >>> # >>> # A fatal error has been detected by the Java Runtime Environment: >>> # >>> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 >>> # >>> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build >>> 11.0.0.1-internal+0-adhoc..jdk-hs) >>> # Java VM: OpenJDK 64-Bit Server VM (fastdebug >>> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 >>> gc, solaris-sparc) >>> # Problematic frame: >>> # V [libjvm.so+0xcdb4d8] void >>> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >>> # >>> # Core dump will be written. Default location: >>> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core >>> or core.321 >>> # >>> # If you would like to submit a bug report, please visit: >>> # http://bugreport.java.com/bugreport/crash.jsp >>> # >>> >>> --------------- S U M M A R Y ------------ >>> >>> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m >>> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier >>> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 >>> -Djava.net.preferIPv4Stack=true >>> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy >>> -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config >>> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy >>> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >>> -Djavatest.security.allowPropertiesAccess=true >>> -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >>> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active >>> -activeHost localhost -activePort 6584 >>> >>> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 >>> SPARC >>> Time: Thu Feb 22 09:24:06 2018 CET elapsed 
time: 2872 seconds (0d 0h 47m >>> 52s) >>> >>> --------------- T H R E A D --------------- >>> >>> Current thread (0x0000000108bca000): JavaThread "Thread-41287" >>> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] >>> >>> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, >>> free space=1012k >>> Native frames: (J=compiled Java code, A=aot compiled Java code, >>> j=interpreted, Vv=VM code, C=native code) >>> V [libjvm.so+0xcdb4d8] void >>> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >>> V [libjvm.so+0x1bd2900] void >>> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 >>> V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 >>> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 >>> j >>> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal >>> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >>> j >>> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >>> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >>> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const >>> methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 >>> V [libjvm.so+0x1be1084] oop >>> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 >>> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 >>> C [libjava.so+0x16458] >>> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 >>> J 1506 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 >>> [0xffffffff6f8ad040+0x00000000000002f8] >>> J 6474 c2 >>> 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >>> [0xffffffff6fd95960+0x0000000000000064] >>> J 5773 c2 >>> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >>> [0xffffffff6f83e620+0x0000000000000050] >>> J 4866 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >>> J 5654 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >>> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >>> J 6242 c2 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >>> J 1689 c1 >>> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >>> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >>> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >>> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >>> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >>> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >>> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >>> V [libjvm.so+0x1088220] void >>> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 >>> V [libjvm.so+0x1088328] void >>> 
JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 >>> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 >>> V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 >>> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 >>> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 >>> >>> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) >>> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >>> j >>> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >>> v ~StubRoutines::call_stub >>> J 1506 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec >>> [0xffffffff6f8ad040+0x00000000000000ac] >>> J 6474 c2 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >>> [0xffffffff6fd95960+0x0000000000000064] >>> J 5773 c2 >>> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >>> [0xffffffff6f83e620+0x0000000000000050] >>> J 4866 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >>> J 5654 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >>> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >>> J 6242 c2 >>> 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >>> J 1689 c1 >>> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >>> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >>> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >>> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >>> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >>> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >>> v ~StubRoutines::call_stub >>> >>> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >>> 0x00000007b80e047a >>> >>> Register to memory mapping: >>> >>> G1=0x000000000197000c is an unknown value >>> G2=0xfffffffffffffd48 is an unknown value >>> G3=0x00000000c0100400 is an unknown value >>> G4=0x0 is NULL >>> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 >>> >>> [error occurred during error reporting (printing register info), id 0xa] >>> >>> Registers: >>> G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 >>> G4=0x0000000000000000 >>> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 >>> Y=0x0000000000000000 >>> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 >>> O3=0xfffffff68194b410 >>> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 >>> O7=0x00000007b80e0468 >>> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 >>> L3=0x0000000000000010 >>> L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 >>> L7=0x0000000000092434 >>> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 >>> I3=0xfffffff6819dd844 >>> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 >>> I7=0xfffffff680ed28f8 >>> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc 
>>> >>> >>> Top of Stack: (sp=0xffffffff3f9fd340) >>> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 >>> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 >>> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 >>> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 >>> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 >>> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 >>> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 >>> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 >>> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 >>> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 >>> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 >>> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 >>> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 >>> >>> Instructions: (pc=0xfffffff67ffdb4d8) >>> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df >>> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df >>> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 >>> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 >>> >>> >>> From stefan.karlsson at oracle.com Thu Feb 22 19:03:28 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Feb 2018 20:03:28 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: Message-ID: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> On 2018-02-22 19:49, Roman Kennke wrote: > well, if you store a short (2-bytes) into an offset computed for > boolean (1-byte) you may store unaligned? I think Volker says that this typo is only affecting the base_offset_in_bytes part of element_offset, and therefore is benign (but should be fixed). 
What about this: +inline void typeArrayOopDesc::bool_at_put(int which, jboolean contents) { + ptrdiff_t offset = element_offset(T_BOOLEAN, which); + HeapAccess::store_at(as_oop(), offset, ((jint)contents) & 1); +} The type of ((jint)contents) & 1) is an int, and we end up incorrectly calling store_at. Just like the stack trace in Volker's mail: Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 StefanK > > Should I take over bug JDK-8198564 (after all, it was my change) or is > somebody already on it? > > Roman > > On Thu, Feb 22, 2018 at 6:33 PM, Volker Simonis > wrote: >> On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson >> wrote: >>> This looks suspicious: >>> >>> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {+ ptrdiff_t offset = element_offset(T_BOOLEAN, which);+ HeapAccess::store_at(as_oop(), offset, contents);+} >>> >>> >>> T_BOOLEAN together with jshort ... >>> >>> >> Yes, that seems like a copy/paste error (which should be fixed), but in the >> end it is only used here as input for: >> >> Universe::element_type_should_be_aligned(type) >> >> and that one only differentiates between T_DOUBLE/T_LONG and all the other >> basic types. So it's probably not the cause for this error. >> >> Thanks, >> Volker >> >> >>> StefanK >>> >>> >>> >>> On 2018-02-22 18:12, Volker Simonis wrote: >>> >>> Hi, >>> >>> since the push of "8197999: Accessors in typeArrayOopDesc should use new >>> Access API" we see crashes on Solaris/SPARC (see below). The disassembly at >>> the crash instruction looks as follows: >>> >>> ldx [ %fp + 0x7df ], %o4 >>> st %i2, [ %o4 + %i1 ] >>> >>> O4=0x00000007b80e0468 >>> I1=0x0000000000000012 >>> >>> which results in an unaligned access: >>> >>> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >>> 0x00000007b80e047a >>> >>> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ >>> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. 
>>> Which compilers are you using for compiling jdk-hs on Sun/SPARC? >>> >>> Do you have seen this as well or do you have any idea what might have >>> caused this? >>> >>> Thank you and best regards, >>> Volker >>> >>> # >>> # A fatal error has been detected by the Java Runtime Environment: >>> # >>> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 >>> # >>> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build >>> 11.0.0.1-internal+0-adhoc..jdk-hs) >>> # Java VM: OpenJDK 64-Bit Server VM (fastdebug >>> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 >>> gc, solaris-sparc) >>> # Problematic frame: >>> # V [libjvm.so+0xcdb4d8] void >>> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >>> # >>> # Core dump will be written. Default location: >>> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core >>> or core.321 >>> # >>> # If you would like to submit a bug report, please visit: >>> # http://bugreport.java.com/bugreport/crash.jsp >>> # >>> >>> --------------- S U M M A R Y ------------ >>> >>> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m >>> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier >>> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 >>> -Djava.net.preferIPv4Stack=true >>> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy >>> -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config >>> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy >>> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >>> -Djavatest.security.allowPropertiesAccess=true >>> -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >>> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active >>> -activeHost 
localhost -activePort 6584 >>> >>> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 >>> SPARC >>> Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m >>> 52s) >>> >>> --------------- T H R E A D --------------- >>> >>> Current thread (0x0000000108bca000): JavaThread "Thread-41287" >>> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] >>> >>> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, >>> free space=1012k >>> Native frames: (J=compiled Java code, A=aot compiled Java code, >>> j=interpreted, Vv=VM code, C=native code) >>> V [libjvm.so+0xcdb4d8] void >>> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >>> V [libjvm.so+0x1bd2900] void >>> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 >>> V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 >>> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 >>> j >>> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal >>> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >>> j >>> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >>> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >>> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const >>> methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 >>> V [libjvm.so+0x1be1084] oop >>> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 >>> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 >>> C [libjava.so+0x16458] >>> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 >>> J 1506 >>> 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad338 >>> [0xffffffff6f8ad040+0x00000000000002f8] >>> J 6474 c2 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >>> [0xffffffff6fd95960+0x0000000000000064] >>> J 5773 c2 >>> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >>> [0xffffffff6f83e620+0x0000000000000050] >>> J 4866 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >>> J 5654 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >>> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >>> J 6242 c2 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >>> J 1689 c1 >>> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >>> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >>> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >>> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >>> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >>> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x108989c] void 
JavaCalls::call_helper(JavaValue*,const >>> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >>> V [libjvm.so+0x1088220] void >>> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 >>> V [libjvm.so+0x1088328] void >>> JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 >>> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 >>> V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 >>> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 >>> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 >>> >>> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) >>> j java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >>> j >>> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >>> v ~StubRoutines::call_stub >>> J 1506 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (0 bytes) @ 0xffffffff6f8ad0ec >>> [0xffffffff6f8ad040+0x00000000000000ac] >>> J 6474 c2 >>> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (104 bytes) @ 0xffffffff6fd959c4 >>> [0xffffffff6fd95960+0x0000000000000064] >>> J 5773 c2 >>> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal (10 bytes) @ 0xffffffff6f83e670 >>> [0xffffffff6f83e620+0x0000000000000050] >>> J 4866 c1 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >>> J 5654 c1 >>> 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >>> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >>> J 6242 c2 >>> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >>> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >>> J 1689 c1 >>> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >>> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >>> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >>> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >>> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >>> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >>> v ~StubRoutines::call_stub >>> >>> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >>> 0x00000007b80e047a >>> >>> Register to memory mapping: >>> >>> G1=0x000000000197000c is an unknown value >>> G2=0xfffffffffffffd48 is an unknown value >>> G3=0x00000000c0100400 is an unknown value >>> G4=0x0 is NULL >>> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 >>> >>> [error occurred during error reporting (printing register info), id 0xa] >>> >>> Registers: >>> G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 >>> G4=0x0000000000000000 >>> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 >>> Y=0x0000000000000000 >>> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 >>> O3=0xfffffff68194b410 >>> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 >>> O7=0x00000007b80e0468 >>> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 >>> L3=0x0000000000000010 >>> L4=0x0000000000000000 L5=0x00000007b80e0468 
L6=0xfffffff68194b410 >>> L7=0x0000000000092434 >>> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 >>> I3=0xfffffff6819dd844 >>> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 >>> I7=0xfffffff680ed28f8 >>> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc >>> >>> >>> Top of Stack: (sp=0xffffffff3f9fd340) >>> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 >>> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 >>> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 >>> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 >>> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 >>> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 >>> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 >>> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 >>> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 >>> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 >>> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 >>> 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 >>> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 >>> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 >>> >>> Instructions: (pc=0xfffffff67ffdb4d8) >>> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df >>> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df >>> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 >>> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 >>> >>> >>> From lois.foltan at oracle.com Thu Feb 22 19:08:19 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 14:08:19 -0500 Subject: RFR: 8198551 - Rename hotspot_tier1 test group to tier1 In-Reply-To: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> References: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> Message-ID: Looks good. 
Lois On 2/22/2018 2:01 PM, Christian Tornqvist wrote: > Please review this small change that renames the hotspot_tier1 test group to tier1 in order to match the naming definition of langtools, jdk, jaxp and nashorn. This enables the use of run-test to run all of the tier1 tests locally: > > make run-test-tier1 > Building target 'run-test-tier1' in configuration 'macosx-x64' > warning: no debug symbols in executable (-arch x86_64) > warning: no debug symbols in executable (-arch x86_64) > Test selection 'tier1', will run: > * jtreg:open/test/hotspot/jtreg:tier1 > * jtreg:open/test/jdk:tier1 > * jtreg:open/test/langtools:tier1 > * jtreg:open/test/nashorn:tier1 > * jtreg:open/test/jaxp:tier1 > > The jdk_svc_sanity test group is part of the Hotspot tier1 definitions but not part of the Hotspot test root, so I moved that group into jdk:tier1 > > Webrev: http://cr.openjdk.java.net/~ctornqvi/webrev/8198551/webrev.00/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8198551 > > Thanks, > Christian From rkennke at redhat.com Thu Feb 22 19:14:52 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Feb 2018 20:14:52 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> Message-ID: Right. This looks like possible and likely cause of the problem. And it worked before because of implicit conversion back to jboolean: - void bool_at_put(int which, jboolean contents) { *bool_at_addr(which) = (((jint)contents) & 1); } Can you test it? Because, I can't ;-) Roman On Thu, Feb 22, 2018 at 8:03 PM, Stefan Karlsson wrote: > On 2018-02-22 19:49, Roman Kennke wrote: > > well, if you store a short (2-bytes) into an offset computed for > boolean (1-byte) you may store unaligned? > > > I think Volker says that this typo is only affecting the > base_offset_in_bytes part of element_offset, and therefore is benign (but > should be fixed). 
> > What about this: > > +inline void typeArrayOopDesc::bool_at_put(int which, jboolean contents) { > + ptrdiff_t offset = element_offset(T_BOOLEAN, which); > + HeapAccess::store_at(as_oop(), offset, ((jint)contents) & > 1); > +} > > The type of ((jint)contents) & 1) is an int, and we end up incorrectly > calling store_at. > Just like the stack trace in Volker's mail: > Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > > StefanK > > > > Should I take over bug JDK-8198564 (after all, it was my change) or is > somebody already on it? > > Roman > > On Thu, Feb 22, 2018 at 6:33 PM, Volker Simonis > wrote: > > On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson > wrote: > > This looks suspicious: > > +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {+ > ptrdiff_t offset = element_offset(T_BOOLEAN, which);+ > HeapAccess::store_at(as_oop(), offset, contents);+} > > > T_BOOLEAN together with jshort ... > > > Yes, that seems like a copy/paste error (which should be fixed), but in the > end it is only used here as input for: > > Universe::element_type_should_be_aligned(type) > > and that one only differentiates between T_DOUBLE/T_LONG and all the other > basic types. So it's probably not the cause for this error. > > Thanks, > Volker > > > StefanK > > > > On 2018-02-22 18:12, Volker Simonis wrote: > > Hi, > > since the push of "8197999: Accessors in typeArrayOopDesc should use new > Access API" we see crashes on Solaris/SPARC (see below). The disassembly at > the crash instruction looks as follows: > > ldx [ %fp + 0x7df ], %o4 > st %i2, [ %o4 + %i1 ] > > O4=0x00000007b80e0468 > I1=0x0000000000000012 > > which results in an unaligned access: > > siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > 0x00000007b80e047a > > We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ > 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. > Which compilers are you using for compiling jdk-hs on Sun/SPARC? 
> > Do you have seen this as well or do you have any idea what might have > caused this? > > Thank you and best regards, > Volker > > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 > # > # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build > 11.0.0.1-internal+0-adhoc..jdk-hs) > # Java VM: OpenJDK 64-Bit Server VM (fastdebug > 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 > gc, solaris-sparc) > # Problematic frame: > # V [libjvm.so+0xcdb4d8] void > Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > # > # Core dump will be written. Default location: > /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core > or core.321 > # > # If you would like to submit a bug report, please visit: > # http://bugreport.java.com/bugreport/crash.jsp > # > > --------------- S U M M A R Y ------------ > > Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m > -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier > -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 > -Djava.net.preferIPv4Stack=true > -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy > -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config > -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy > -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djavatest.security.allowPropertiesAccess=true > -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir > -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active > -activeHost localhost -activePort 6584 > > Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 > SPARC > Time: Thu Feb 22 09:24:06 2018 CET elapsed 
time: 2872 seconds (0d 0h 47m > 52s) > > --------------- T H R E A D --------------- > > Current thread (0x0000000108bca000): JavaThread "Thread-41287" > [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] > > Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, > free space=1012k > Native frames: (J=compiled Java code, A=aot compiled Java code, > j=interpreted, Vv=VM code, C=native code) > V [libjvm.so+0xcdb4d8] void > Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 > V [libjvm.so+0x1bd2900] void > Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 > V [libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 > C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 > j > > java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal > j > java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal > j > > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const > methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 > V [libjvm.so+0x1be1084] oop > Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 > V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 > C [libjava.so+0x16458] > Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 > J 1506 > > jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (0 bytes) @ 0xffffffff6f8ad338 > [0xffffffff6f8ad040+0x00000000000002f8] > J 6474 c2 > 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const > methodHandle&,JavaCallArguments*,Thread*)+0x5bc > V [libjvm.so+0x1088220] void > JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 > V [libjvm.so+0x1088328] void > 
JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 > V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 > V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 > V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 > V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 > > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > j > java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal > j > > javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 > v ~StubRoutines::call_stub > J 1506 > > jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (0 bytes) @ 0xffffffff6f8ad0ec > [0xffffffff6f8ad040+0x00000000000000ac] > J 6474 c2 > jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (104 bytes) @ 0xffffffff6fd959c4 > [0xffffffff6fd95960+0x0000000000000064] > J 5773 c2 > jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal > (10 bytes) @ 0xffffffff6f83e670 > [0xffffffff6f83e620+0x0000000000000050] > J 4866 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] > J 5654 c1 > com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; > (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] > J 6242 c2 > 
com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; > (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] > J 1689 c1 > com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; > (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] > J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal > (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] > J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ > 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] > v ~StubRoutines::call_stub > > siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: > 0x00000007b80e047a > > Register to memory mapping: > > G1=0x000000000197000c is an unknown value > G2=0xfffffffffffffd48 is an unknown value > G3=0x00000000c0100400 is an unknown value > G4=0x0 is NULL > G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 > > [error occurred during error reporting (printing register info), id 0xa] > > Registers: > G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 > G4=0x0000000000000000 > G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 > Y=0x0000000000000000 > O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 > O3=0xfffffff68194b410 > O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 > O7=0x00000007b80e0468 > L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 > L3=0x0000000000000010 > L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 > L7=0x0000000000092434 > I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 > I3=0xfffffff6819dd844 > I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 > I7=0xfffffff680ed28f8 > PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc > > > Top of Stack: (sp=0xffffffff3f9fd340) > 0xffffffff3f9fd340: 00000007b80e0468 
00000007b80e0468 > 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 > 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 > 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 > 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 > 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 > 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 > 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 > 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 > 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 > 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 > 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 > 0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 > 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 > 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 > > Instructions: (pc=0xfffffff67ffdb4d8) > 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df > 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df > 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 > 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 > > > > From stefan.karlsson at oracle.com Thu Feb 22 19:22:56 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Feb 2018 20:22:56 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> Message-ID: <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> On 2018-02-22 20:14, Roman Kennke wrote: > Right. This looks like possible and likely cause of the problem. And > it worked before because of implicit conversion back to jboolean: > > - void bool_at_put(int which, jboolean contents) { > *bool_at_addr(which) = (((jint)contents) & 1); } > > > Can you test it? Because, I can't ;-) Yes. I'm kicking of some testing on sparc. Could you write a gtest for this? 
StefanK > > Roman > > On Thu, Feb 22, 2018 at 8:03 PM, Stefan Karlsson > wrote: >> On 2018-02-22 19:49, Roman Kennke wrote: >> >> well, if you store a short (2-bytes) into an offset computed for >> boolean (1-byte) you may store unaligned? >> >> >> I think Volker says that this typo is only affecting the >> base_offset_in_bytes part of element_offset, and therefore is benign (but >> should be fixed). >> >> What about this: >> >> +inline void typeArrayOopDesc::bool_at_put(int which, jboolean contents) { >> + ptrdiff_t offset = element_offset(T_BOOLEAN, which); >> + HeapAccess::store_at(as_oop(), offset, ((jint)contents) & >> 1); >> +} >> >> The type of ((jint)contents) & 1) is an int, and we end up incorrectly >> calling store_at. >> Just like the stack trace in Volker's mail: >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> >> StefanK >> >> >> >> Should I take over bug JDK-8198564 (after all, it was my change) or is >> somebody already on it? >> >> Roman >> >> On Thu, Feb 22, 2018 at 6:33 PM, Volker Simonis >> wrote: >> >> On Thu, Feb 22, 2018 at 6:19 PM, Stefan Karlsson > >> wrote: >> >> This looks suspicious: >> >> +inline void typeArrayOopDesc::short_at_put(int which, jshort contents) {+ >> ptrdiff_t offset = element_offset(T_BOOLEAN, which);+ >> HeapAccess::store_at(as_oop(), offset, contents);+} >> >> >> T_BOOLEAN together with jshort ... >> >> >> Yes, that seems like a copy/paste error (which should be fixed), but in the >> end it is only used here as input for: >> >> Universe::element_type_should_be_aligned(type) >> >> and that one only differentiates between T_DOUBLE/T_LONG and all the other >> basic types. So it's probably not the cause for this error. >> >> Thanks, >> Volker >> >> >> StefanK >> >> >> >> On 2018-02-22 18:12, Volker Simonis wrote: >> >> Hi, >> >> since the push of "8197999: Accessors in typeArrayOopDesc should use new >> Access API" we see crashes on Solaris/SPARC (see below). 
The disassembly at >> the crash instruction looks as follows: >> >> ldx [ %fp + 0x7df ], %o4 >> st %i2, [ %o4 + %i1 ] >> >> O4=0x00000007b80e0468 >> I1=0x0000000000000012 >> >> which results in an unaligned access: >> >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 (BUS_ADRALN), si_addr: >> 0x00000007b80e047a >> >> We are compiling with SS12u4 with updates from October 2017 (i.e. Sun C++ >> 5.13 SunOS_sparc Patch 151845-28 2017/09/19) and running on Solaris 11.3. >> Which compilers are you using for compiling jdk-hs on Sun/SPARC? >> >> Do you have seen this as well or do you have any idea what might have >> caused this? >> >> Thank you and best regards, >> Volker >> >> # >> # A fatal error has been detected by the Java Runtime Environment: >> # >> # SIGBUS (0xa) at pc=0xfffffff67ffdb4d8, pid=321, tid=58934 >> # >> # JRE version: OpenJDK Runtime Environment (11.0.1) (fastdebug build >> 11.0.0.1-internal+0-adhoc..jdk-hs) >> # Java VM: OpenJDK 64-Bit Server VM (fastdebug >> 11.0.0.1-internal+0-adhoc..jdk-hs, mixed mode, tiered, compressed oops, g1 >> gc, solaris-sparc) >> # Problematic frame: >> # V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> # >> # Core dump will be written. 
Default location: >> /priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/core >> or core.321 >> # >> # If you would like to submit a bug report, please visit: >> # http://bugreport.java.com/bugreport/crash.jsp >> # >> >> --------------- S U M M A R Y ------------ >> >> Command Line: -Djava.awt.headless=true -Xms128m -Xmx288m >> -XX:MaxJavaStackTraceDepth=1024 -Xverify:all -XX:+FailOverToOldVerifier >> -Xverify:all -agentlib:jckjvmti=same -Djdk.xml.maxXMLNameLimit=4000 >> -Djava.net.preferIPv4Stack=true >> -Djava.security.auth.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.policy >> -Djava.security.auth.login.config=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.auth.login.config >> -Djava.security.policy=/sapmnt/hs0131/a/sapjvm_dev/jck/jck11/JCK-runtime-11/lib/jck.policy >> -Djava.io.tmpdir=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djavatest.security.allowPropertiesAccess=true >> -Djava.util.prefs.userRoot=/priv/jvmtests/output_sapjvm11_o_jdk-hs_dbgU_sun_64/jck_lang_vm_work/tempdir >> -Djava.rmi.activation.port=6284 com.sun.javatest.agent.AgentMain -active >> -activeHost localhost -activePort 6584 >> >> Host: us04z2, Sparcv9 64 bit 2998 MHz, 128 cores, 100G, Oracle Solaris 11.3 >> SPARC >> Time: Thu Feb 22 09:24:06 2018 CET elapsed time: 2872 seconds (0d 0h 47m >> 52s) >> >> --------------- T H R E A D --------------- >> >> Current thread (0x0000000108bca000): JavaThread "Thread-41287" >> [_thread_in_vm, id=58934, stack(0xffffffff3f900000,0xffffffff3fa00000)] >> >> Stack: [0xffffffff3f900000,0xffffffff3fa00000], sp=0xffffffff3f9fd340, >> free space=1012k >> Native frames: (J=compiled Java code, A=aot compiled Java code, >> j=interpreted, Vv=VM code, C=native code) >> V [libjvm.so+0xcdb4d8] void >> Access<1572864UL>::store_at(oop,long,__type_1)+0xd8 >> V [libjvm.so+0x1bd2900] void >> Reflection::array_set(jvalue*,arrayOop,int,BasicType,Thread*)+0x300 >> V 
[libjvm.so+0x11cf464] JVM_SetArrayElement+0x6e4 >> C [libjava.so+0x147e8] Java_java_lang_reflect_Array_set+0x18 >> j >> >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+-1473468376java.base at 11.0.0.1-internal >> j >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >> j >> >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1be0410] oop invoke(InstanceKlass*,const >> methodHandle&,Handle,bool,objArrayHandle,BasicType,objArrayHandle,bool,Thread*)+0x2c60 >> V [libjvm.so+0x1be1084] oop >> Reflection::invoke_method(oop,Handle,objArrayHandle,Thread*)+0x7b4 >> V [libjvm.so+0x11d2868] JVM_InvokeMethod+0x5d8 >> C [libjava.so+0x16458] >> Java_jdk_internal_reflect_NativeMethodAccessorImpl_invoke0+0x18 >> J 1506 >> >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (0 bytes) @ 0xffffffff6f8ad338 >> [0xffffffff6f8ad040+0x00000000000002f8] >> J 6474 c2 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] 
>> J 5654 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> V [libjvm.so+0x108989c] void JavaCalls::call_helper(JavaValue*,const >> methodHandle&,JavaCallArguments*,Thread*)+0x5bc >> V [libjvm.so+0x1088220] void >> JavaCalls::call_virtual(JavaValue*,Klass*,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x1e0 >> V [libjvm.so+0x1088328] void >> JavaCalls::call_virtual(JavaValue*,Handle,Klass*,Symbol*,Symbol*,Thread*)+0xb8 >> V [libjvm.so+0x11c5140] void thread_entry(JavaThread*,Thread*)+0x1e0 >> V [libjvm.so+0x1de56e4] void JavaThread::thread_main_inner()+0x2e4 >> V [libjvm.so+0x1de53d0] void JavaThread::run()+0x350 >> V [libjvm.so+0x1aa4ff4] thread_native_entry+0x2e4 >> >> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) >> j >> java.lang.reflect.Array.set(Ljava/lang/Object;ILjava/lang/Object;)V+0java.base at 11.0.0.1-internal >> j >> >> javasoft.sqe.tests.vm.concepts.execution.execution080.execution08001.execution08001.run([Ljava/lang/String;Ljava/io/PrintStream;)I+617 >> v ~StubRoutines::call_stub >> J 1506 >> >> 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (0 bytes) @ 0xffffffff6f8ad0ec >> [0xffffffff6f8ad040+0x00000000000000ac] >> J 6474 c2 >> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (104 bytes) @ 0xffffffff6fd959c4 >> [0xffffffff6fd95960+0x0000000000000064] >> J 5773 c2 >> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;java.base at 11.0.0.1-internal >> (10 bytes) @ 0xffffffff6f83e670 >> [0xffffffff6f83e620+0x0000000000000050] >> J 4866 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd$SimpleTest.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (405 bytes) @ 0xffffffff696dfee4 [0xffffffff696df0a0+0x0000000000000e44] >> J 5654 c1 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.execute(Ljava/lang/ClassLoader;Ljava/lang/String;[Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;I)Lcom/sun/javatest/Status; >> (397 bytes) @ 0xffffffff68d4dd40 [0xffffffff68d4aea0+0x0000000000002ea0] >> J 6242 c2 >> com.sun.jck.lib.ExecJCKTestSameJVMCmd.run([Ljava/lang/String;Ljava/io/PrintWriter;Ljava/io/PrintWriter;)Lcom/sun/javatest/Status; >> (1022 bytes) @ 0xffffffff6fef30b0 [0xffffffff6fef0000+0x00000000000030b0] >> J 1689 c1 >> com.sun.jck.lib.ExecInSeparateThreadCmd$StatusCallable.call()Ljava/lang/Object; >> (5 bytes) @ 0xffffffff68d98114 [0xffffffff68d97f00+0x0000000000000214] >> J 6097 c1 java.util.concurrent.FutureTask.run()V java.base at 11.0.0.1-internal >> (123 bytes) @ 0xffffffff68e5f900 [0xffffffff68e5ee40+0x0000000000000ac0] >> J 5653 c2 java.lang.Thread.run()V java.base at 11.0.0.1-internal (17 bytes) @ >> 0xffffffff6f851b78 [0xffffffff6f851b20+0x0000000000000058] >> v ~StubRoutines::call_stub >> >> siginfo: si_signo: 10 (SIGBUS), si_code: 1 
(BUS_ADRALN), si_addr: >> 0x00000007b80e047a >> >> Register to memory mapping: >> >> G1=0x000000000197000c is an unknown value >> G2=0xfffffffffffffd48 is an unknown value >> G3=0x00000000c0100400 is an unknown value >> G4=0x0 is NULL >> G5=0x00000007b80e0468 is pointing into object: 0x00000007b80635b0 >> >> [error occurred during error reporting (printing register info), id 0xa] >> >> Registers: >> G1=0x000000000197000c G2=0xfffffffffffffd48 G3=0x00000000c0100400 >> G4=0x0000000000000000 >> G5=0x00000007b80e0468 G6=0x0000000000000000 G7=0xffffffff5441a240 >> Y=0x0000000000000000 >> O0=0xffffffff3f9fd408 O1=0x0000000000091b61 O2=0x0000000000091800 >> O3=0xfffffff68194b410 >> O4=0x00000007b80e0468 O5=0x0000000000000010 O6=0xffffffff3f9fcb41 >> O7=0x00000007b80e0468 >> L0=0x00000007b80e0468 L1=0x00000007b80e0468 L2=0xfffffff68194b410 >> L3=0x0000000000000010 >> L4=0x0000000000000000 L5=0x00000007b80e0468 L6=0xfffffff68194b410 >> L7=0x0000000000092434 >> I0=0xffffffff3f9fd558 I1=0x0000000000000012 I2=0x0000000000000000 >> I3=0xfffffff6819dd844 >> I4=0x0000000000000010 I5=0x0000000000092400 I6=0xffffffff3f9fcc11 >> I7=0xfffffff680ed28f8 >> PC=0xfffffff67ffdb4d8 nPC=0xfffffff67ffdb4dc >> >> >> Top of Stack: (sp=0xffffffff3f9fd340) >> 0xffffffff3f9fd340: 00000007b80e0468 00000007b80e0468 >> 0xffffffff3f9fd350: fffffff68194b410 0000000000000010 >> 0xffffffff3f9fd360: 0000000000000000 00000007b80e0468 >> 0xffffffff3f9fd370: fffffff68194b410 0000000000092434 >> 0xffffffff3f9fd380: ffffffff3f9fd558 0000000000000012 >> 0xffffffff3f9fd390: 0000000000000000 fffffff6819dd844 >> 0xffffffff3f9fd3a0: 0000000000000010 0000000000092400 >> 0xffffffff3f9fd3b0: ffffffff3f9fcc11 fffffff680ed28f8 >> 0xffffffff3f9fd3c0: ffffffff3f9fcc61 fffffff680af1514 >> 0xffffffff3f9fd3d0: fffffff6819c5d68 0000000100107880 >> 0xffffffff3f9fd3e0: 00000003b80e00d0 fffffff6819c5d68 >> 0xffffffff3f9fd3f0: 00000007b80e0468 00000007b80e0468 >> 0xffffffff3f9fd400: 00000007b80e0468 00000007b80e0468 >> 
0xffffffff3f9fd410: fffffff68194b410 fffffff6819dd844 >> 0xffffffff3f9fd420: 00000000000002dc 0000000000000000 >> 0xffffffff3f9fd430: ffffffff3f9fd558 00000007b80e0468 >> >> Instructions: (pc=0xfffffff67ffdb4d8) >> 0xfffffff67ffdb4b8: 40 36 e0 42 90 07 a7 df 10 80 00 06 d8 5f a7 df >> 0xfffffff67ffdb4c8: e4 77 a7 e7 e6 5f a7 e7 e6 77 a7 df d8 5f a7 df >> 0xfffffff67ffdb4d8: f4 23 00 19 d6 0e e0 00 80 a2 e0 00 02 40 00 16 >> 0xfffffff67ffdb4e8: 01 00 00 00 40 36 e0 89 90 07 a7 df da 0e e0 00 >> >> >> >> From igor.ignatyev at oracle.com Thu Feb 22 19:35:03 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 22 Feb 2018 11:35:03 -0800 Subject: RFR: 8198551 - Rename hotspot_tier1 test group to tier1 In-Reply-To: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> References: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> Message-ID: <96DDF815-1545-4CA9-BBD6-71AC37414FCB@oracle.com> Looks good to me. -- Igor > On Feb 22, 2018, at 11:01 AM, Christian Tornqvist wrote: > > Please review this small change that renames the hotspot_tier1 test group to tier1 in order to match the naming definition of langtools, jdk, jaxp and nashorn. 
This enables the use of run-test to run all of the tier1 tests locally:
>
> make run-test-tier1
> Building target 'run-test-tier1' in configuration 'macosx-x64'
> warning: no debug symbols in executable (-arch x86_64)
> warning: no debug symbols in executable (-arch x86_64)
> Test selection 'tier1', will run:
> * jtreg:open/test/hotspot/jtreg:tier1
> * jtreg:open/test/jdk:tier1
> * jtreg:open/test/langtools:tier1
> * jtreg:open/test/nashorn:tier1
> * jtreg:open/test/jaxp:tier1
>
> The jdk_svc_sanity test group is part of the Hotspot tier1 definitions but not part of the Hotspot test root, so I moved that group into jdk:tier1.
>
> Webrev: http://cr.openjdk.java.net/~ctornqvi/webrev/8198551/webrev.00/
> Bug: https://bugs.openjdk.java.net/browse/JDK-8198551
>
> Thanks,
> Christian

From rkennke at redhat.com Thu Feb 22 19:47:37 2018
From: rkennke at redhat.com (Roman Kennke)
Date: Thu, 22 Feb 2018 20:47:37 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To: <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com>
References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com>
Message-ID:

On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson wrote:
> On 2018-02-22 20:14, Roman Kennke wrote:
>>
>> Right. This looks like a possible and likely cause of the problem. And
>> it worked before because of the implicit conversion back to jboolean:
>>
>> - void bool_at_put(int which, jboolean contents) {
>>     *bool_at_addr(which) = (((jint)contents) & 1); }
>>
>> Can you test it? Because, I can't ;-)
>
> Yes. I'm kicking off some testing on sparc. Could you write a gtest for this?

I can try. I never wrote a gtest before ;-) Is there an existing one
that I could use as a template, and/or pointers on how to start?
Roman

From stefan.karlsson at oracle.com Thu Feb 22 20:00:47 2018
From: stefan.karlsson at oracle.com (Stefan Karlsson)
Date: Thu, 22 Feb 2018 21:00:47 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com>
Message-ID: <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com>

On 2018-02-22 20:47, Roman Kennke wrote:
> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson wrote:
>> On 2018-02-22 20:14, Roman Kennke wrote:
>>> Right. This looks like a possible and likely cause of the problem. And
>>> it worked before because of the implicit conversion back to jboolean:
>>>
>>> - void bool_at_put(int which, jboolean contents) {
>>>     *bool_at_addr(which) = (((jint)contents) & 1); }
>>>
>>> Can you test it? Because, I can't ;-)
>>
>> Yes. I'm kicking off some testing on sparc. Could you write a gtest for this?
> I can try. I never wrote a gtest before ;-) Is there an existing one
> that I could use as a template, and/or pointers on how to start?

You can look at the existing tests in test/hotspot/gtest. I suggest you read
the official googletest doc to get started. There might be some other
document about our adaptation of googletest, but I don't know where it is.

Maybe something like this would work:

diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp b/test/hotspot/gtest/oops/test_arrayOop.cpp
--- a/test/hotspot/gtest/oops/test_arrayOop.cpp
+++ b/test/hotspot/gtest/oops/test_arrayOop.cpp
@@ -22,6 +22,7 @@
  */
 #include "precompiled.hpp"
+#include "memory/universe.hpp"
 #include "oops/arrayOop.hpp"
 #include "oops/oop.inline.hpp"
 #include "unittest.hpp"
@@ -86,4 +87,37 @@
 TEST_VM(arrayOopDesc, narrowOop) {
   ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP);
 }
+
+TEST_VM(arrayOopDesc, bool_at_put) {
+  char mem[100];
+  memset(mem, 0, ARRAY_SIZE(mem));
+
+  char* addr = align_up(mem, 16);
+
+  typeArrayOop o = (typeArrayOop) addr;
+  o->set_klass(Universe::boolArrayKlassObj());
+  o->set_length(10);
+
+  ASSERT_EQ((jboolean)0, o->bool_at(0));
+  ASSERT_EQ((jboolean)0, o->bool_at(1));
+  ASSERT_EQ((jboolean)0, o->bool_at(2));
+  ASSERT_EQ((jboolean)0, o->bool_at(3));
+  ASSERT_EQ((jboolean)0, o->bool_at(4));
+  ASSERT_EQ((jboolean)0, o->bool_at(5));
+  ASSERT_EQ((jboolean)0, o->bool_at(6));
+  ASSERT_EQ((jboolean)0, o->bool_at(7));
+
+  o->bool_at_put(0, 1);
+
+  ASSERT_EQ((jboolean)1, o->bool_at(0));
+  ASSERT_EQ((jboolean)0, o->bool_at(1));
+  ASSERT_EQ((jboolean)0, o->bool_at(2));
+  ASSERT_EQ((jboolean)0, o->bool_at(3));
+  ASSERT_EQ((jboolean)0, o->bool_at(4));
+  ASSERT_EQ((jboolean)0, o->bool_at(5));
+  ASSERT_EQ((jboolean)0, o->bool_at(6));
+  ASSERT_EQ((jboolean)0, o->bool_at(7));
+}
+
 // T_VOID and T_ADDRESS are not supported by max_array_length()

And then run with:
../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*"

StefanK

>
> Roman

From harold.seigel at oracle.com Thu Feb 22 20:11:55 2018
From: harold.seigel at oracle.com (harold seigel)
Date: Thu, 22 Feb 2018 15:11:55 -0500
Subject: (11) RFR (S) JDK-8198304: VS2017 (C4838, C4312) Various conversion issues with gtest tests
In-Reply-To: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com>
References: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com>
Message-ID: <8b753cae-7422-d736-ce43-6987750a8eef@oracle.com>

Hi Lois,

This change looks good.

Harold

On 2/22/2018 1:32 PM, Lois Foltan wrote:
> Please review this change to fix VS2017 conversion compilation errors
> within two Hotspot gtest tests.
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198304/webrev/
> > bug link https://bugs.openjdk.java.net/browse/JDK-8198304
> >
> > Testing: hs-tier(1-3), jdk-tier(1-3) complete
> >
> > Thanks,
> > Lois

From harold.seigel at oracle.com Thu Feb 22 20:18:25 2018
From: harold.seigel at oracle.com (harold seigel)
Date: Thu, 22 Feb 2018 15:18:25 -0500
Subject: (11) RFR (S) JDK-8197864: VS2017 (C4334) Result of 32-bit Shift Implicitly Converted to 64 bits
In-Reply-To: References: Message-ID:

Hi Lois,

This looks good!

Thanks, Harold

On 2/22/2018 1:55 PM, Lois Foltan wrote:
> Please review this fix to properly perform a 64-bit shift when setting
> SlowSignatureHandler::_fp_identifiers within _WIN64 conditional code.
> Since the compiler determined that the constant could fit into an int
> and the type of SlowSignatureHandler::_num_args is of unsigned int as
> well, a 32-bit shift would result yielding a VS2017 compilation warning.
>
> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197864/webrev/
> bug link https://bugs.openjdk.java.net/browse/JDK-8197864
> contributed-by: Kim Barrett & Lois Foltan
>
> Testing: hs-tier(1-3), jdk-tier(1-3) complete
>
> Thanks,
> Lois

From rkennke at redhat.com Thu Feb 22 20:41:00 2018
From: rkennke at redhat.com (Roman Kennke)
Date: Thu, 22 Feb 2018 21:41:00 +0100
Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC
In-Reply-To: <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com>
References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com>
Message-ID:

Ok thank you.
I tried your patch and can confirm that it works/passes. :-) It also
gives me some ideas how gtest works.

I modified the test so that it fails without the fix, and passes with the fix:

http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/

If you think that's good, then I can post a formal RFR and take over the bug.
Roman On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson wrote: > On 2018-02-22 20:47, Roman Kennke wrote: >> >> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >> wrote: >>> >>> On 2018-02-22 20:14, Roman Kennke wrote: >>>> >>>> Right. This looks like possible and likely cause of the problem. And >>>> it worked before because of implicit conversion back to jboolean: >>>> >>>> - void bool_at_put(int which, jboolean contents) { >>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>> >>>> >>>> Can you test it? Because, I can't ;-) >>> >>> >>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>> this? >> >> I can try. I never wrote a gtest before ;-) Is there an existing one >> that I could use as template, and/or pointers how to start? > > > You can look at the existing tests in test/hotspot/gtest. I suggest you read > the official googletest doc to get started. There might be some other > document about our adaption of googletest, but I don't know where it is. > > Maybe something like this would work: > > diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp > b/test/hotspot/gtest/oops/test_arrayOop.cpp > --- a/test/hotspot/gtest/oops/test_arrayOop.cpp > +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp > @@ -22,6 +22,7 @@ > */ > > #include "precompiled.hpp" > +#include "memory/universe.hpp" > #include "oops/arrayOop.hpp" > #include "oops/oop.inline.hpp" > #include "unittest.hpp" > @@ -86,4 +87,37 @@ > TEST_VM(arrayOopDesc, narrowOop) { > ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); > } > + > +TEST_VM(arrayOopDesc, bool_at_put) { > + char mem[100]; > + memset(mem, 0, ARRAY_SIZE(mem)); > + > + char* addr = align_up(mem, 16); > + > + typeArrayOop o = (typeArrayOop) addr; > + o->set_klass(Universe::boolArrayKlassObj()); > + o->set_length(10); > + > + > + ASSERT_EQ((jboolean)0, o->bool_at(0)); > + ASSERT_EQ((jboolean)0, o->bool_at(1)); > + ASSERT_EQ((jboolean)0, o->bool_at(2)); > + ASSERT_EQ((jboolean)0, o->bool_at(3)); > + 
ASSERT_EQ((jboolean)0, o->bool_at(4)); > + ASSERT_EQ((jboolean)0, o->bool_at(5)); > + ASSERT_EQ((jboolean)0, o->bool_at(6)); > + ASSERT_EQ((jboolean)0, o->bool_at(7)); > + > + o->bool_at_put(0, 1); > + > + ASSERT_EQ((jboolean)1, o->bool_at(0)); > + ASSERT_EQ((jboolean)0, o->bool_at(1)); > + ASSERT_EQ((jboolean)0, o->bool_at(2)); > + ASSERT_EQ((jboolean)0, o->bool_at(3)); > + ASSERT_EQ((jboolean)0, o->bool_at(4)); > + ASSERT_EQ((jboolean)0, o->bool_at(5)); > + ASSERT_EQ((jboolean)0, o->bool_at(6)); > + ASSERT_EQ((jboolean)0, o->bool_at(7)); > +} > + > // T_VOID and T_ADDRESS are not supported by max_array_length() > > And then run with: > ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk > ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" > > StefanK > >> >> Roman > > > From stefan.karlsson at oracle.com Thu Feb 22 21:04:46 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Feb 2018 22:04:46 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> Message-ID: On 2018-02-22 21:41, Roman Kennke wrote: > Ok thank you. > I tried your patch and can confirm that it works/passes. :-) It also > gives me some ideas how gtest works. > > I modified the test so that it fails without the fix, and passes with the fix: > > http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ > > If you think that's good, then I can post a formal RFR and take over the bug. Yes, this seems good. A similar patch using (jboolean)(((jint)contents) & 1) passes tests that used to fail on sparc. You might want to consider moving the test to a test_typeArrayOop.cpp file. 
Thanks, StefanK > > Roman > > On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson > wrote: >> On 2018-02-22 20:47, Roman Kennke wrote: >>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>> wrote: >>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>> Right. This looks like possible and likely cause of the problem. And >>>>> it worked before because of implicit conversion back to jboolean: >>>>> >>>>> - void bool_at_put(int which, jboolean contents) { >>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>> >>>>> >>>>> Can you test it? Because, I can't ;-) >>>> >>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>>> this? >>> I can try. I never wrote a gtest before ;-) Is there an existing one >>> that I could use as template, and/or pointers how to start? >> >> You can look at the existing tests in test/hotspot/gtest. I suggest you read >> the official googletest doc to get started. There might be some other >> document about our adaption of googletest, but I don't know where it is. 
>> >> Maybe something like this would work: >> >> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >> b/test/hotspot/gtest/oops/test_arrayOop.cpp >> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >> @@ -22,6 +22,7 @@ >> */ >> >> #include "precompiled.hpp" >> +#include "memory/universe.hpp" >> #include "oops/arrayOop.hpp" >> #include "oops/oop.inline.hpp" >> #include "unittest.hpp" >> @@ -86,4 +87,37 @@ >> TEST_VM(arrayOopDesc, narrowOop) { >> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >> } >> + >> +TEST_VM(arrayOopDesc, bool_at_put) { >> + char mem[100]; >> + memset(mem, 0, ARRAY_SIZE(mem)); >> + >> + char* addr = align_up(mem, 16); >> + >> + typeArrayOop o = (typeArrayOop) addr; >> + o->set_klass(Universe::boolArrayKlassObj()); >> + o->set_length(10); >> + >> + >> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >> + >> + o->bool_at_put(0, 1); >> + >> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >> +} >> + >> // T_VOID and T_ADDRESS are not supported by max_array_length() >> >> And then run with: >> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >> >> StefanK >> >>> Roman >> >> From rkennke at redhat.com Thu Feb 22 21:17:48 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Feb 2018 22:17:48 +0100 Subject: 
SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> Message-ID: I took the bug. I tried with casting similar to what you suggested, but that failed my test. Maybe I had the parenthesis differently? In any case, I made it so that it matches what is in oop.inline.hpp boolean accessor. I'll move the test to a new file and post an RFR separately. Thanks, Roman On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson wrote: > On 2018-02-22 21:41, Roman Kennke wrote: >> >> Ok thank you. >> I tried your patch and can confirm that it works/passes. :-) It also >> gives me some ideas how gtest works. >> >> I modified the test so that it fails without the fix, and passes with the >> fix: >> >> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ >> >> If you think that's good, then I can post a formal RFR and take over the >> bug. > > > Yes, this seems good. A similar patch using (jboolean)(((jint)contents) & 1) > passes tests that used to fail on sparc. > > You might want to consider moving the test to a test_typeArrayOop.cpp file. > > Thanks, > StefanK > > >> >> Roman >> >> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson >> wrote: >>> >>> On 2018-02-22 20:47, Roman Kennke wrote: >>>> >>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>>> wrote: >>>>> >>>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>>> >>>>>> Right. This looks like possible and likely cause of the problem. And >>>>>> it worked before because of implicit conversion back to jboolean: >>>>>> >>>>>> - void bool_at_put(int which, jboolean contents) { >>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>>> >>>>>> >>>>>> Can you test it? Because, I can't ;-) >>>>> >>>>> >>>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>>>> this? >>>> >>>> I can try. 
I never wrote a gtest before ;-) Is there an existing one >>>> that I could use as template, and/or pointers how to start? >>> >>> >>> You can look at the existing tests in test/hotspot/gtest. I suggest you >>> read >>> the official googletest doc to get started. There might be some other >>> document about our adaption of googletest, but I don't know where it is. >>> >>> Maybe something like this would work: >>> >>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >>> b/test/hotspot/gtest/oops/test_arrayOop.cpp >>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >>> @@ -22,6 +22,7 @@ >>> */ >>> >>> #include "precompiled.hpp" >>> +#include "memory/universe.hpp" >>> #include "oops/arrayOop.hpp" >>> #include "oops/oop.inline.hpp" >>> #include "unittest.hpp" >>> @@ -86,4 +87,37 @@ >>> TEST_VM(arrayOopDesc, narrowOop) { >>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >>> } >>> + >>> +TEST_VM(arrayOopDesc, bool_at_put) { >>> + char mem[100]; >>> + memset(mem, 0, ARRAY_SIZE(mem)); >>> + >>> + char* addr = align_up(mem, 16); >>> + >>> + typeArrayOop o = (typeArrayOop) addr; >>> + o->set_klass(Universe::boolArrayKlassObj()); >>> + o->set_length(10); >>> + >>> + >>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>> + >>> + o->bool_at_put(0, 1); >>> + >>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); 
>>> +} >>> + >>> // T_VOID and T_ADDRESS are not supported by max_array_length() >>> >>> And then run with: >>> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >>> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >>> >>> StefanK >>> >>>> Roman >>> >>> >>> > From george.triantafillou at oracle.com Thu Feb 22 21:36:54 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Thu, 22 Feb 2018 16:36:54 -0500 Subject: (11) RFR (S) JDK-8197864: VS2017 (C4334) Result of 32-bit Shift Implicitly Converted to 64 bits In-Reply-To: References: Message-ID: <1da66c0a-75a4-8b2b-660b-4f2cddb36498@oracle.com> Hi Lois, Looks good! -George On 2/22/2018 1:55 PM, Lois Foltan wrote: > Please review this fix to properly perform a 64-bit shift when setting > SlowSignatureHandler::_fp_identifiers within _WIN64 conditional code.? > Since the compiler determined that the constant could fit into an int > and the type of SlowSignatureHandler::_num_args is of unsigned int as > well, a 32-bit shift would result yielding a VS2017 compilation warning. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8197864/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8197864 > contributed-by: Kim Barrett & Lois Foltan > > Testing: hs-tier(1-3), jdk-tier(1-3) complete > > Thanks, > Lois > > From george.triantafillou at oracle.com Thu Feb 22 21:38:16 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Thu, 22 Feb 2018 16:38:16 -0500 Subject: (11) RFR (S) JDK-8198304: VS2017 (C4838, C4312) Various conversion issues with gtest tests In-Reply-To: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com> References: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com> Message-ID: <0d65aac2-9dd3-081d-cb02-f61ef2e6d105@oracle.com> Hi Lois, Your changes look good. -George On 2/22/2018 1:32 PM, Lois Foltan wrote: > Please review this change to fix VS2017 conversion compilation errors > within two Hotspot gtest tests. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198304/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8198304 > > Testing: hs-tier(1-3), jdk-tier(1-3) complete > > Thanks, > Lois From lois.foltan at oracle.com Thu Feb 22 21:44:01 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 16:44:01 -0500 Subject: (11) RFR (S) JDK-8197864: VS2017 (C4334) Result of 32-bit Shift Implicitly Converted to 64 bits In-Reply-To: References: Message-ID: Thanks Harold. Lois On 2/22/2018 3:18 PM, harold seigel wrote: > Hi Lois, > > This looks good! > > Thanks, Harold > > > On 2/22/2018 1:55 PM, Lois Foltan wrote: >> Please review this fix to properly perform a 64-bit shift when >> setting SlowSignatureHandler::_fp_identifiers within _WIN64 >> conditional code.? Since the compiler determined that the constant >> could fit into an int and the type of SlowSignatureHandler::_num_args >> is of unsigned int as well, a 32-bit shift would result yielding a >> VS2017 compilation warning. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197864/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8197864 >> contributed-by: Kim Barrett & Lois Foltan >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois >> >> > From lois.foltan at oracle.com Thu Feb 22 21:44:35 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 16:44:35 -0500 Subject: (11) RFR (S) JDK-8198304: VS2017 (C4838, C4312) Various conversion issues with gtest tests In-Reply-To: <0d65aac2-9dd3-081d-cb02-f61ef2e6d105@oracle.com> References: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com> <0d65aac2-9dd3-081d-cb02-f61ef2e6d105@oracle.com> Message-ID: <15518170-a9c2-3a0a-1cf2-ad1e6ced6a62@oracle.com> Thanks George. Lois On 2/22/2018 4:38 PM, George Triantafillou wrote: > Hi Lois, > > Your changes look good. 
> > -George > > On 2/22/2018 1:32 PM, Lois Foltan wrote: >> Please review this change to fix VS2017 conversion compilation errors >> within two Hotspot gtest tests. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198304/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198304 >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois > From lois.foltan at oracle.com Thu Feb 22 21:43:45 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 16:43:45 -0500 Subject: (11) RFR (S) JDK-8198304: VS2017 (C4838, C4312) Various conversion issues with gtest tests In-Reply-To: <8b753cae-7422-d736-ce43-6987750a8eef@oracle.com> References: <3b5b2b0f-87ca-f563-1dc4-8c612b4dee14@oracle.com> <8b753cae-7422-d736-ce43-6987750a8eef@oracle.com> Message-ID: <07cb2345-de3e-b81b-173d-a5c83f9a58ca@oracle.com> Thanks Harold. Lois On 2/22/2018 3:11 PM, harold seigel wrote: > Hi Lois, > > This change looks good. > > Harold > > > On 2/22/2018 1:32 PM, Lois Foltan wrote: >> Please review this change to fix VS2017 conversion compilation errors >> within two Hotspot gtest tests. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198304/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198304 >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois > From lois.foltan at oracle.com Thu Feb 22 21:44:18 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Thu, 22 Feb 2018 16:44:18 -0500 Subject: (11) RFR (S) JDK-8197864: VS2017 (C4334) Result of 32-bit Shift Implicitly Converted to 64 bits In-Reply-To: <1da66c0a-75a4-8b2b-660b-4f2cddb36498@oracle.com> References: <1da66c0a-75a4-8b2b-660b-4f2cddb36498@oracle.com> Message-ID: <3e88f54a-e95d-8273-08b7-dc7b44645b76@oracle.com> Thanks George. Lois On 2/22/2018 4:36 PM, George Triantafillou wrote: > Hi Lois, > > Looks good! 
> > -George > > On 2/22/2018 1:55 PM, Lois Foltan wrote: >> Please review this fix to properly perform a 64-bit shift when >> setting SlowSignatureHandler::_fp_identifiers within _WIN64 >> conditional code. Since the compiler determined that the constant >> could fit into an int and the type of SlowSignatureHandler::_num_args >> is of unsigned int as well, a 32-bit shift would result yielding a >> VS2017 compilation warning. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8197864/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8197864 >> contributed-by: Kim Barrett & Lois Foltan >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois >> >> > From david.holmes at oracle.com Fri Feb 23 00:40:56 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Feb 2018 10:40:56 +1000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: Message-ID: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> Hi Andrew, The imported copyright year changes need to be updated to reflect current year. The ifdef cleanups seem fine. The actual include changes also seem fine. I assume non-zero builds have also been tested? Thanks, David On 22/02/2018 2:01 PM, Andrew Hughes wrote: > [CCing hotspot list for review] > > Bug: https://bugs.openjdk.java.net/browse/JDK-8078628 > Webrev: http://cr.openjdk.java.net/~andrew/openjdk8/8078628/webrev.01/ > Review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018239.html > > When testing a slowdebug build of Zero for the backport of 8194739, my build > failed because I don't have pre-compiled headers enabled. It seems this > was fixed in OpenJDK 9, but never backported. > > The backported version is pretty similar with a few adjustments for context > in the older OpenJDK 8 version. 
The src/cpu/zero/vm/methodHandles_zero.hpp > are my own addition from the same fix I came up with independently, and stops > multiple inclusions of that header. > > Please review and approve for OpenJDK 8 so Zero builds without > precompiled headers > work there. > > Thanks, > From igor.ignatyev at oracle.com Fri Feb 23 01:19:08 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 22 Feb 2018 17:19:08 -0800 Subject: RFR(XS) : 8198568 : clean up test/hotspot/jtreg/ProblemList.txt Message-ID: <849071FB-3DD4-4CAA-B225-869C2AFD50B8@oracle.com> http://cr.openjdk.java.net/~iignatyev//8198568/webrev.00/index.html > 11 lines changed: 1 ins; 9 del; 1 mod; Hi all, could you please review this clean up in hotspot ProblemList? 8180324, 8173936, 8166548 are resolved, 8134286, 8175791 are closed as CNR, 8163805 is closed as WNF, 8179226 is a dup of 8180622, I have updated the ProblemList correspondingly. as a result 6 tests are un-quarantined (removed from the problem list), since we haven't run them for some time, there might be new (or old) failures. if they occur, new bugs should be filed and used to re-quarantine affected tests. JBS: https://bugs.openjdk.java.net/browse/JDK-8198568 webrev: http://cr.openjdk.java.net/~iignatyev/8198568/webrev.00/index.html testing: run the tests several times in mach5 + hs-tier[1-2] Thanks, -- Igor From gnu.andrew at redhat.com Fri Feb 23 05:39:54 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Fri, 23 Feb 2018 05:39:54 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: Message-ID: On 22 February 2018 at 13:58, Thomas Stüfe wrote: > Looks good. Should the include guard for > src/cpu/zero/vm/methodHandles_zero.hpp also be added to jdk9? > > Note that I am no reviewer for jdk8, only 9++. > > Regards, Thomas > Hmm... that's a good point I hadn't considered. 
Maybe I should split that out into a separate fix and apply it to 9 as well? The rest is already in 9 as part of your changes. -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From gnu.andrew at redhat.com Fri Feb 23 05:51:06 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Fri, 23 Feb 2018 05:51:06 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> Message-ID: On 23 February 2018 at 00:40, David Holmes wrote: > Hi Andrew, > > The imported copyright year changes need to be updated to reflect current > year. > The changes were actually my own rather than being part of the import, and 2015 is used because that's when the original patch was written. I assume in 9 the current year was already in use. In backporting patches, I've always kept the copyright dates used in the patch as I don't believe that moving a patch from one tree to another warrants new copyright. Is there a policy on handling this? > The ifdef cleanups seem fine. > > The actual include changes also seem fine. > > I assume non-zero builds have also been tested? > Yes, they're fine. I should mention that the patch has been in IcedTea for at least a year, so it's had a fair bit of testing already on different builds. > Thanks, > David > Thanks, -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. 
(http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From david.holmes at oracle.com Fri Feb 23 06:17:21 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Feb 2018 16:17:21 +1000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> Message-ID: <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> On 23/02/2018 3:51 PM, Andrew Hughes wrote: > On 23 February 2018 at 00:40, David Holmes wrote: >> Hi Andrew, >> >> The imported copyright year changes need to be updated to reflect current >> year. >> > > The changes were actually my own rather than being part of the import, and > 2015 is used because that's when the original patch was written. I assume > in 9 the current year was already in use. In backporting patches, I've > always kept > the copyright dates used in the patch as I don't believe that moving a > patch from > one tree to another warrants new copyright. Is there a policy on handling this? Our internal policy is that any change to a file requires we update the copyright year. If you refactor code you move it from one file to another but that still requires a copyright update. Cheers, David >> The ifdef cleanups seem fine. >> >> The actual include changes also seem fine. >> >> I assume non-zero builds have also been tested? >> > > Yes, they're fine. I should mention that the patch has been in IcedTea > for at least > a year, so it's had a fair bit of testing already on different builds. 
> >> Thanks, >> David >> > > Thanks, > From volker.simonis at gmail.com Fri Feb 23 07:28:18 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 23 Feb 2018 07:28:18 +0000 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> Message-ID: Hi Stefan, Roman, thanks for figuring out this so quickly! Regards, Volker Roman Kennke schrieb am Do. 22. Feb. 2018 um 22:17: > I took the bug. > I tried with casting similar to what you suggested, but that failed my > test. Maybe I had the parenthesis differently? In any case, I made it > so that it matches what is in oop.inline.hpp boolean accessor. > > I'll move the test to a new file and post an RFR separately. > > Thanks, Roman > > > On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson > wrote: > > On 2018-02-22 21:41, Roman Kennke wrote: > >> > >> Ok thank you. > >> I tried your patch and can confirm that it works/passes. :-) It also > >> gives me some ideas how gtest works. > >> > >> I modified the test so that it fails without the fix, and passes with > the > >> fix: > >> > >> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ > >> > >> If you think that's good, then I can post a formal RFR and take over the > >> bug. > > > > > > Yes, this seems good. A similar patch using (jboolean)(((jint)contents) > & 1) > > passes tests that used to fail on sparc. > > > > You might want to consider moving the test to a test_typeArrayOop.cpp > file. > > > > Thanks, > > StefanK > > > > > >> > >> Roman > >> > >> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson > >> wrote: > >>> > >>> On 2018-02-22 20:47, Roman Kennke wrote: > >>>> > >>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson > >>>> wrote: > >>>>> > >>>>> On 2018-02-22 20:14, Roman Kennke wrote: > >>>>>> > >>>>>> Right. This looks like possible and likely cause of the problem. 
And > >>>>>> it worked before because of implicit conversion back to jboolean: > >>>>>> > >>>>>> - void bool_at_put(int which, jboolean contents) { > >>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } > >>>>>> > >>>>>> > >>>>>> Can you test it? Because, I can't ;-) > >>>>> > >>>>> > >>>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest > for > >>>>> this? > >>>> > >>>> I can try. I never wrote a gtest before ;-) Is there an existing one > >>>> that I could use as template, and/or pointers how to start? > >>> > >>> > >>> You can look at the existing tests in test/hotspot/gtest. I suggest you > >>> read > >>> the official googletest doc to get started. There might be some other > >>> document about our adaption of googletest, but I don't know where it > is. > >>> > >>> Maybe something like this would work: > >>> > >>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp > >>> b/test/hotspot/gtest/oops/test_arrayOop.cpp > >>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp > >>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp > >>> @@ -22,6 +22,7 @@ > >>> */ > >>> > >>> #include "precompiled.hpp" > >>> +#include "memory/universe.hpp" > >>> #include "oops/arrayOop.hpp" > >>> #include "oops/oop.inline.hpp" > >>> #include "unittest.hpp" > >>> @@ -86,4 +87,37 @@ > >>> TEST_VM(arrayOopDesc, narrowOop) { > >>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); > >>> } > >>> + > >>> +TEST_VM(arrayOopDesc, bool_at_put) { > >>> + char mem[100]; > >>> + memset(mem, 0, ARRAY_SIZE(mem)); > >>> + > >>> + char* addr = align_up(mem, 16); > >>> + > >>> + typeArrayOop o = (typeArrayOop) addr; > >>> + o->set_klass(Universe::boolArrayKlassObj()); > >>> + o->set_length(10); > >>> + > >>> + > >>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); > >>> + ASSERT_EQ((jboolean)0, 
o->bool_at(5)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); > >>> + > >>> + o->bool_at_put(0, 1); > >>> + > >>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); > >>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); > >>> +} > >>> + > >>> // T_VOID and T_ADDRESS are not supported by max_array_length() > >>> > >>> And then run with: > >>> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher > -jdk > >>> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" > >>> > >>> StefanK > >>> > >>>> Roman > >>> > >>> > >>> > > > From tobias.hartmann at oracle.com Fri Feb 23 09:09:00 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 23 Feb 2018 10:09:00 +0100 Subject: RFR(XS) : 8198568 : clean up test/hotspot/jtreg/ProblemList.txt In-Reply-To: <849071FB-3DD4-4CAA-B225-869C2AFD50B8@oracle.com> References: <849071FB-3DD4-4CAA-B225-869C2AFD50B8@oracle.com> Message-ID: <99c4902d-dd10-3075-7841-57944a17fb5f@oracle.com> Hi Igor, this looks good to me. Best regards, Tobias On 23.02.2018 02:19, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8198568/webrev.00/index.html >> 11 lines changed: 1 ins; 9 del; 1 mod; > > Hi all, > > could you please review this clean up in hotspot ProblemList? > > 8180324, 8173936, 8166548 are resolved, 8134286, 8175791 are closed an CNR, 8163805 is closed as WNF, 8179226 is a dup of 8180622, I have updated the ProblemList correspondingly. as a result 6 tests are un-qurantined (removed from the problem list), since we haven't run them for some time, there might be new (or old) failures. if they occur, new bugs should be filed and used to re-quarantine affected tests. 
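[Editorial note: for readers unfamiliar with quarantining, each jtreg ProblemList entry names a test, the bug it is quarantined under, and the platforms it is excluded on; un-quarantining a test simply deletes its line. An illustrative, hypothetical entry (not one of the tests touched by this change):]

```
# Test name                          Bug id     Platforms
compiler/example/TestExample.java    8000000    generic-all
```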
> > JBS: https://bugs.openjdk.java.net/browse/JDK-8198568 > webrev: http://cr.openjdk.java.net/~iignatyev/8198568/webrev.00/index.html > testing: run the tests several times in mach5 + hs-tier[1-2] > > Thanks, > -- Igor > From erik.osterlund at oracle.com Fri Feb 23 09:31:36 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 23 Feb 2018 10:31:36 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <79ef5154-98d3-2578-b997-e179e8f9f634@oracle.com> <28cd29a7-a7e7-f9a4-6e1f-47bf9eb47ba7@oracle.com> Message-ID: <8d5ab06f-087f-9459-9df9-64eed47005b5@oracle.com> Hi Vladimir, Thank you for the review. /Erik On 2018-02-22 19:41, Vladimir Kozlov wrote: > On 2/22/18 10:30 AM, Vladimir Kozlov wrote: >> Thank you, Erik >> >> I am old C++ guy and using template for just casting is overkill to >> me. You still specify when you can just use cast (type). So >> what benefit template has in this case? > > Never mind my comment - I missed reinterpret_cast<> you need to cast a > pointer to basic types. > > Changes are good. > > Thanks, > Vladimir > >> >> Otherwise looks good. >> >> Thanks, >> Vladimir >> >> On 2/22/18 3:45 AM, Erik ?sterlund wrote: >>> Hi Vladimir, >>> >>> Thank you for having a look at this. >>> >>> I created some utility functions in ci/ciUtilities.hpp to get the >>> card table: >>> >>> jbyte* ci_card_table_address() >>> template T ci_card_table_address_as() >>> >>> The compiler code has been updated to use these helpers instead to >>> fetch the card table in a consistent way. >>> >>> Hope this is kind of what you had in mind? 
>>> >>> New full webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.03/ >>> >>> New incremental webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02_03/ >>> >>> Thanks, >>> /Erik >>> >>> On 2018-02-21 18:12, Vladimir Kozlov wrote: >>>> Hi Erik, >>>> >>>> I looked on compiler and aot changes. I noticed repeated sequence >>>> in several files to get byte_map_base() >>>> >>>> +? BarrierSet* bs = Universe::heap()->barrier_set(); >>>> +? CardTableModRefBS* ctbs = barrier_set_cast(bs); >>>> +? CardTable* ct = ctbs->card_table(); >>>> +? assert(sizeof(*(ct->byte_map_base())) == sizeof(jbyte), "adjust >>>> this code"); >>>> +? LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base()); >>>> >>>> But sometimes it has the assert (graphKit.cpp) and sometimes does >>>> not (aotCodeHeap.cpp). >>>> >>>> Can you factor this sequence into one method which can be used in >>>> all such places? >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 2/21/18 3:33 AM, Erik ?sterlund wrote: >>>>> Hi Erik, >>>>> >>>>> Thank you for reviewing this. >>>>> >>>>> New full webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>>>> >>>>> New incremental webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>>>> >>>>> On 2018-02-21 09:18, Erik Helin wrote: >>>>>> Hi Erik, >>>>>> >>>>>> this is a very nice improvement, thanks for working on this! >>>>>> >>>>>> A few minor comments thus far: >>>>>> - in stubGenerator_ppc.cpp: >>>>>> ? you seem to have lost a `const` in the refactoring >>>>> >>>>> Fixed. >>>>> >>>>>> - in psCardTable.hpp: >>>>>> ? I don't think card_mark_must_follow_store() is needed, since >>>>>> ? PSCardTable passes `false` for `conc_scan` to the CardTable >>>>>> ? constructor >>>>> >>>>> Fixed. 
I took the liberty of also making the condition for >>>>> card_mark_must_follow_store() more precise on CMS by making the >>>>> condition for scanned_concurrently consider whether >>>>> CMSPrecleaningEnabled is set or not (like other generated code does). >>>>> >>>>>> - in g1CollectedHeap.hpp: >>>>>> ? could you store the G1CardTable as a field in G1CollectedHeap? >>>>>> Also, >>>>>> ? could you name the "getter" just card_table()? (I see that >>>>>> ? g1_hot_card_cache method above, but that one should also be >>>>>> renamed to >>>>>> ? just hot_card_cache, but in another patch) >>>>> >>>>> Fixed. >>>>> >>>>>> - in cardTable.hpp and cardTable.cpp: >>>>>> ? could you use `hg cp` when constructing these files from >>>>>> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? >>>>> >>>>> Yes, I will do this before pushing to make sure the history is >>>>> preserved. >>>>> >>>>> Thanks, >>>>> /Erik >>>>> >>>>>> >>>>>> Thanks, >>>>>> Erik >>>>>> >>>>>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Here is an updated revision of this webrev after internal >>>>>>> feedback from StefanK who helped looking through my changes - >>>>>>> thanks a lot for the help with that. >>>>>>> >>>>>>> The changes to the new revision are a bunch of minor clean up >>>>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>>>> includes, adding/removing newlines, reverting an assert error >>>>>>> message, fixing constructor initialization orders, and things >>>>>>> like that. >>>>>>> >>>>>>> The problem I mentioned last time about the version number of >>>>>>> our repo not yet being bumped to 11 and resulting awkwardness in >>>>>>> JVMCI has been resolved by simply waiting. So now I changed the >>>>>>> JVMCI logic to get the card values from the new location in the >>>>>>> corresponding card tables when observing JDK version 11 or above. 
>>>>>>> >>>>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>>>> >>>>>>> Incremental webrev (over the rebase): >>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>>>> >>>>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 >>>>>>> without any issues. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>>> >>>>>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Today, both Parallel, CMS and Serial share the same code for >>>>>>>> its card marking barrier. However, they have different >>>>>>>> requirements how to manage its card tables by the GC. And as >>>>>>>> the card table itself is embedded as a part of the >>>>>>>> CardTableModRefBS barrier set, this has led to an unnecessary >>>>>>>> inheritance hierarchy for CardTableModRefBS, where for example >>>>>>>> CardTableModRefBSForCTRS and CardTableExtension are >>>>>>>> CardTableModRefBS subclasses that do not change anything to do >>>>>>>> with the barriers. >>>>>>>> >>>>>>>> To clean up the code, there should really be a separate >>>>>>>> CardTable hierarchy that contains the differences how to manage >>>>>>>> the card table from the GC point of view, and simply let >>>>>>>> CardTableModRefBS have a CardTable. This would allow removing >>>>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>>>> references from shared code (that really have nothing to do >>>>>>>> with the barriers, despite being barrier sets), and >>>>>>>> significantly simplify the barrier set code. >>>>>>>> >>>>>>>> This patch mechanically performs this refactoring. A new >>>>>>>> CardTable class has been created with a PSCardTable subclass >>>>>>>> for Parallel, a CardTableRS for CMS and Serial, and a >>>>>>>> G1CardTable for G1. All references to card tables and their >>>>>>>> values have been updated accordingly. 
>>>>>>>> >>>>>>>> This touches a lot of platform specific code, so would be >>>>>>>> fantastic if port maintainers could have a look that I have not >>>>>>>> broken anything. >>>>>>>> >>>>>>>> There is a slight problem that should be pointed out. There is >>>>>>>> an unfortunate interaction between Graal and hotspot. Graal >>>>>>>> needs to know the values of g1 young cards and dirty cards. >>>>>>>> This is queried in different ways in different versions of the >>>>>>>> JDK in the ||GraalHotSpotVMConfig.java file. Now these values >>>>>>>> will move from their barrier set class to their card table >>>>>>>> class. That means we have at least three cases how to find the >>>>>>>> correct values. There is one for JDK8, one for JDK9, and now a >>>>>>>> new one for JDK11. Except, we have not yet bumped the version >>>>>>>> number to 11 in the repo, and therefore it has to be from JDK10 >>>>>>>> - 11 for now and updated after incrementing the version number. >>>>>>>> But that means that it will be temporarily incompatible with >>>>>>>> JDK10. That is okay for our own copy of Graal, but can not be >>>>>>>> used by upstream Graal as they are given the choice whether to >>>>>>>> support the public JDK10 or the JDK11 that does not quite admit >>>>>>>> to being 11 yet. I chose the solution that works in our >>>>>>>> repository. I will notify Graal folks of this issue. In the >>>>>>>> long run, it would be nice if we could have a more solid >>>>>>>> interface here. >>>>>>>> >>>>>>>> However, as an added benefit, this changeset brings about a >>>>>>>> hundred copyright headers up to date, so others do not have to >>>>>>>> update them for a while. >>>>>>>> >>>>>>>> Bug: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>>>> >>>>>>>> Webrev: >>>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>>>> >>>>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> /Erik >>>>>>> >>>>> >>> From erik.helin at oracle.com Fri Feb 23 09:56:14 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 23 Feb 2018 10:56:14 +0100 Subject: RFR: 8198551 - Rename hotspot_tier1 test group to tier1 In-Reply-To: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> References: <91A3E5B6-CF08-41ED-A404-52080C902032@oracle.com> Message-ID: On 02/22/2018 08:01 PM, Christian Tornqvist wrote: > Please review this small change that renames the hotspot_tier1 test group to tier1 in order to match the naming definition of langtools, jdk, jaxp and nashorn. This enables the use of run-test to run all of the tier1 tests locally: > > make run-test-tier1 > Building target 'run-test-tier1' in configuration 'macosx-x64' > warning: no debug symbols in executable (-arch x86_64) > warning: no debug symbols in executable (-arch x86_64) > Test selection 'tier1', will run: > * jtreg:open/test/hotspot/jtreg:tier1 > * jtreg:open/test/jdk:tier1 > * jtreg:open/test/langtools:tier1 > * jtreg:open/test/nashorn:tier1 > * jtreg:open/test/jaxp:tier1 > > The jdk_svc_sanity test group is part of the Hotspot tier1 definitions but not part of the Hotspot test root, so I moved that group into jdk:tier1 > > Webrev: http://cr.openjdk.java.net/~ctornqvi/webrev/8198551/webrev.00/ Looks good, Reviewed. Maybe you should ping core-libs-dev at openjdk.java.net just to give them a heads up on the changes to tier1 in test/jdk/TEST.groups? 
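[Editorial note: jtreg test groups are defined in TEST.groups files as name = entries with backslash continuations and `:group` references; the rename means the hotspot file now defines `tier1` directly, matching the other test roots. A hypothetical excerpt, not the actual file contents:]

```
# Hypothetical TEST.groups excerpt (illustrative only)
tier1 = \
    :tier1_common \
    :tier1_compiler \
    :tier1_gc \
    :tier1_runtime
```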
Thanks, Erik > Bug: https://bugs.openjdk.java.net/browse/JDK-8198551 > > Thanks, > Christian > From erik.helin at oracle.com Fri Feb 23 10:15:57 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 23 Feb 2018 11:15:57 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <5A8D58FC.10603@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> Message-ID: <7c032434-d10a-3e39-2e89-1ea698b4563e@oracle.com> On 02/21/2018 12:33 PM, Erik ?sterlund wrote: > Hi Erik, > > Thank you for reviewing this. > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ > > New incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ The changes looks good, just a few very minor nits: - g1CollectedHeap.hpp: please make the method card_table in G1CollectedHeap const, as in: G1CardTable* card_table() const { return _card_table; } - g1CollectedHeap.cpp: when you are changing methods in G1CollectedHeap, and have access to a private field, please use the field instead of the getter. For example: + _card_table->initialize(cardtable_storage); instead of: + card_table()->initialize(cardtable_storage); - stubGenerator_ppc.cpp maybe add a space before the const qualifier? + CardTableModRefBS*const ctbs = + CardTable*const ct = That is, change the above to: + CardTableModRefBS* const ctbs = + CardTable* const ct = This is just my personal preference, but the code gets a bit dense otherwise IMHO :) I don't need to see a new webrev for the above changes, just please do these changes before you push. I also had a look at the patch after the comments from Coleen and Vladimir, and it looks good. Reviewed from my part. Thanks, Erik > On 2018-02-21 09:18, Erik Helin wrote: >> Hi Erik, >> >> this is a very nice improvement, thanks for working on this! 
>> >> A few minor comments thus far: >> - in stubGenerator_ppc.cpp: >> ? you seem to have lost a `const` in the refactoring > > Fixed. > >> - in psCardTable.hpp: >> ? I don't think card_mark_must_follow_store() is needed, since >> ? PSCardTable passes `false` for `conc_scan` to the CardTable >> ? constructor > > Fixed. I took the liberty of also making the condition for > card_mark_must_follow_store() more precise on CMS by making the > condition for scanned_concurrently consider whether > CMSPrecleaningEnabled is set or not (like other generated code does). > >> - in g1CollectedHeap.hpp: >> ? could you store the G1CardTable as a field in G1CollectedHeap? Also, >> ? could you name the "getter" just card_table()? (I see that >> ? g1_hot_card_cache method above, but that one should also be renamed to >> ? just hot_card_cache, but in another patch) > > Fixed. > >> - in cardTable.hpp and cardTable.cpp: >> ? could you use `hg cp` when constructing these files from >> ? cardTableModRefBS.{hpp,cpp} so the history is preserved? > > Yes, I will do this before pushing to make sure the history is preserved. > > Thanks, > /Erik > >> >> Thanks, >> Erik >> >> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>> Hi, >>> >>> Here is an updated revision of this webrev after internal feedback >>> from StefanK who helped looking through my changes - thanks a lot for >>> the help with that. >>> >>> The changes to the new revision are a bunch of minor clean up >>> changes, e.g. copy right headers, indentation issues, sorting >>> includes, adding/removing newlines, reverting an assert error >>> message, fixing constructor initialization orders, and things like that. >>> >>> The problem I mentioned last time about the version number of our >>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>> has been resolved by simply waiting. 
So now I changed the JVMCI logic >>> to get the card values from the new location in the corresponding >>> card tables when observing JDK version 11 or above. >>> >>> New full webrev (rebased onto a month fresher jdk-hs): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>> >>> Incremental webrev (over the rebase): >>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>> >>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>> any issues. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> Today, both Parallel, CMS and Serial share the same code for its >>>> card marking barrier. However, they have different requirements how >>>> to manage its card tables by the GC. And as the card table itself is >>>> embedded as a part of the CardTableModRefBS barrier set, this has >>>> led to an unnecessary inheritance hierarchy for CardTableModRefBS, >>>> where for example CardTableModRefBSForCTRS and CardTableExtension >>>> are CardTableModRefBS subclasses that do not change anything to do >>>> with the barriers. >>>> >>>> To clean up the code, there should really be a separate CardTable >>>> hierarchy that contains the differences how to manage the card table >>>> from the GC point of view, and simply let CardTableModRefBS have a >>>> CardTable. This would allow removing CardTableModRefBSForCTRS and >>>> CardTableExtension and their references from shared code (that >>>> really have nothing to do with the barriers, despite being barrier >>>> sets), and significantly simplify the barrier set code. >>>> >>>> This patch mechanically performs this refactoring. A new CardTable >>>> class has been created with a PSCardTable subclass for Parallel, a >>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>> references to card tables and their values have been updated >>>> accordingly. 
>>>> >>>> This touches a lot of platform specific code, so would be fantastic >>>> if port maintainers could have a look that I have not broken anything. >>>> >>>> There is a slight problem that should be pointed out. There is an >>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>> know the values of g1 young cards and dirty cards. This is queried >>>> in different ways in different versions of the JDK in the >>>> ||GraalHotSpotVMConfig.java file. Now these values will move from >>>> their barrier set class to their card table class. That means we >>>> have at least three cases how to find the correct values. There is >>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, we >>>> have not yet bumped the version number to 11 in the repo, and >>>> therefore it has to be from JDK10 - 11 for now and updated after >>>> incrementing the version number. But that means that it will be >>>> temporarily incompatible with JDK10. That is okay for our own copy >>>> of Graal, but can not be used by upstream Graal as they are given >>>> the choice whether to support the public JDK10 or the JDK11 that >>>> does not quite admit to being 11 yet. I chose the solution that >>>> works in our repository. I will notify Graal folks of this issue. In >>>> the long run, it would be nice if we could have a more solid >>>> interface here. >>>> >>>> However, as an added benefit, this changeset brings about a hundred >>>> copyright headers up to date, so others do not have to update them >>>> for a while. >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>> >>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
>>>> >>>> Thanks, >>>> /Erik >>> > From erik.osterlund at oracle.com Fri Feb 23 10:21:31 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 23 Feb 2018 11:21:31 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <7c032434-d10a-3e39-2e89-1ea698b4563e@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <7c032434-d10a-3e39-2e89-1ea698b4563e@oracle.com> Message-ID: <358f1734-6288-e482-07a5-b048599c73e2@oracle.com> Hi Erik, Thank you for the review. I will apply your proposed tweaks before pushing. Thanks, /Erik On 2018-02-23 11:15, Erik Helin wrote: > > > On 02/21/2018 12:33 PM, Erik Österlund wrote: >> Hi Erik, >> >> Thank you for reviewing this. >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >> >> New incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ > > The changes look good, just a few very minor nits: > > - g1CollectedHeap.hpp: >   please make the method card_table in G1CollectedHeap const, as in: >   G1CardTable* card_table() const { >     return _card_table; >   } > > - g1CollectedHeap.cpp: >   when you are changing methods in G1CollectedHeap, and have access to a >   private field, please use the field instead of the getter. For >   example: > >   + _card_table->initialize(cardtable_storage); > >   instead of: > >   + card_table()->initialize(cardtable_storage); > > - stubGenerator_ppc.cpp: >   maybe add a space before the const qualifier? > >   + CardTableModRefBS*const ctbs = >   + CardTable*const ct = > >   That is, change the above to: > >   + CardTableModRefBS* const ctbs = >   + CardTable* const ct = > >   This is just my personal preference, but the code gets a bit dense >
otherwise IMHO :) > > I don't need to see a new webrev for the above changes, just please do > these changes before you push. I also had a look at the patch after > the comments from Coleen and Vladimir, and it looks good. Reviewed > from my part. > > Thanks, > Erik > >> On 2018-02-21 09:18, Erik Helin wrote: >>> Hi Erik, >>> >>> this is a very nice improvement, thanks for working on this! >>> >>> A few minor comments thus far: >>> - in stubGenerator_ppc.cpp: >>>   you seem to have lost a `const` in the refactoring >> >> Fixed. >> >>> - in psCardTable.hpp: >>>   I don't think card_mark_must_follow_store() is needed, since >>>   PSCardTable passes `false` for `conc_scan` to the CardTable >>>   constructor >> >> Fixed. I took the liberty of also making the condition for >> card_mark_must_follow_store() more precise on CMS by making the >> condition for scanned_concurrently consider whether >> CMSPrecleaningEnabled is set or not (like other generated code does). >> >>> - in g1CollectedHeap.hpp: >>>   could you store the G1CardTable as a field in G1CollectedHeap? Also, >>>   could you name the "getter" just card_table()? (I see that >>>   g1_hot_card_cache method above, but that one should also be >>> renamed to >>>   just hot_card_cache, but in another patch) >> >> Fixed. >> >>> - in cardTable.hpp and cardTable.cpp: >>>   could you use `hg cp` when constructing these files from >>>   cardTableModRefBS.{hpp,cpp} so the history is preserved? >> >> Yes, I will do this before pushing to make sure the history is >> preserved. >> >> Thanks, >> /Erik >> >>> >>> Thanks, >>> Erik >>> >>> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>>> Hi, >>>> >>>> Here is an updated revision of this webrev after internal feedback >>>> from StefanK who helped looking through my changes - thanks a lot >>>> for the help with that. >>>> >>>> The changes to the new revision are a bunch of minor clean up >>>> changes, e.g.
copyright headers, indentation issues, sorting >>>> includes, adding/removing newlines, reverting an assert error >>>> message, fixing constructor initialization orders, and things like >>>> that. >>>> >>>> The problem I mentioned last time about the version number of our >>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>> has been resolved by simply waiting. So now I changed the JVMCI >>>> logic to get the card values from the new location in the >>>> corresponding card tables when observing JDK version 11 or above. >>>> >>>> New full webrev (rebased onto a month fresher jdk-hs): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>> >>>> Incremental webrev (over the rebase): >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>> >>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>>> any issues. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-01-17 13:54, Erik Österlund wrote: >>>>> Hi, >>>>> >>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>> card marking barrier. However, they have different requirements >>>>> how to manage its card tables by the GC. And as the card table >>>>> itself is embedded as a part of the CardTableModRefBS barrier set, >>>>> this has led to an unnecessary inheritance hierarchy for >>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and >>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>> change anything to do with the barriers. >>>>> >>>>> To clean up the code, there should really be a separate CardTable >>>>> hierarchy that contains the differences how to manage the card >>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>> have a CardTable.
This would allow removing >>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>> references from shared code (that really have nothing to do with >>>>> the barriers, despite being barrier sets), and significantly >>>>> simplify the barrier set code. >>>>> >>>>> This patch mechanically performs this refactoring. A new CardTable >>>>> class has been created with a PSCardTable subclass for Parallel, a >>>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>>> references to card tables and their values have been updated >>>>> accordingly. >>>>> >>>>> This touches a lot of platform specific code, so would be >>>>> fantastic if port maintainers could have a look that I have not >>>>> broken anything. >>>>> >>>>> There is a slight problem that should be pointed out. There is an >>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>> know the values of g1 young cards and dirty cards. This is queried >>>>> in different ways in different versions of the JDK in the >>>>> GraalHotSpotVMConfig.java file. Now these values will move from >>>>> their barrier set class to their card table class. That means we >>>>> have at least three cases how to find the correct values. There is >>>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, >>>>> we have not yet bumped the version number to 11 in the repo, and >>>>> therefore it has to be from JDK10 - 11 for now and updated after >>>>> incrementing the version number. But that means that it will be >>>>> temporarily incompatible with JDK10. That is okay for our own copy >>>>> of Graal, but can not be used by upstream Graal as they are given >>>>> the choice whether to support the public JDK10 or the JDK11 that >>>>> does not quite admit to being 11 yet. I chose the solution that >>>>> works in our repository. I will notify Graal folks of this issue. >>>>> In the long run, it would be nice if we could have a more solid >>>>> interface here.
>>>>> >>>>> However, as an added benefit, this changeset brings about a >>>>> hundred copyright headers up to date, so others do not have to >>>>> update them for a while. >>>>> >>>>> Bug: >>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>> >>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>> >>>>> Thanks, >>>>> /Erik >>>> >> From jesper.wilhelmsson at oracle.com Fri Feb 23 11:45:25 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Fri, 23 Feb 2018 12:45:25 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> Message-ID: <59D82EB2-F9F1-42F1-979D-A984E61C40E2@oracle.com> Hi Roman, Will this fix be pushed today? If not I would prefer to back out JDK-8197999 to get clean results in the Friday nightly. Thanks, /Jesper > On 22 Feb 2018, at 22:17, Roman Kennke wrote: > > I took the bug. > I tried with casting similar to what you suggested, but that failed my > test. Maybe I had the parenthesis differently? In any case, I made it > so that it matches what is in oop.inline.hpp boolean accessor. > > I'll move the test to a new file and post an RFR separately. > > Thanks, Roman > > > On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson > wrote: >> On 2018-02-22 21:41, Roman Kennke wrote: >>> >>> Ok thank you. >>> I tried your patch and can confirm that it works/passes. :-) It also >>> gives me some ideas how gtest works. >>> >>> I modified the test so that it fails without the fix, and passes with the >>> fix: >>> >>> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ >>> >>> If you think that's good, then I can post a formal RFR and take over the >>> bug. >> >> >> Yes, this seems good. 
A similar patch using (jboolean)(((jint)contents) & 1) >> passes tests that used to fail on sparc. >> >> You might want to consider moving the test to a test_typeArrayOop.cpp file. >> >> Thanks, >> StefanK >> >> >>> >>> Roman >>> >>> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson >>> wrote: >>>> >>>> On 2018-02-22 20:47, Roman Kennke wrote: >>>>> >>>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>>>> wrote: >>>>>> >>>>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>>>> >>>>>>> Right. This looks like a possible and likely cause of the problem. And >>>>>>> it worked before because of implicit conversion back to jboolean: >>>>>>> >>>>>>> - void bool_at_put(int which, jboolean contents) { >>>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>>>> >>>>>>> >>>>>>> Can you test it? Because, I can't ;-) >>>>>> >>>>>> >>>>>> Yes. I'm kicking off some testing on sparc. Could you write a gtest for >>>>>> this? >>>>> >>>>> I can try. I never wrote a gtest before ;-) Is there an existing one >>>>> that I could use as template, and/or pointers how to start? >>>> >>>> >>>> You can look at the existing tests in test/hotspot/gtest. I suggest you >>>> read >>>> the official googletest doc to get started. There might be some other >>>> document about our adaptation of googletest, but I don't know where it is.
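The masking idiom behind this fix can be shown in isolation. The following is a stand-alone sketch, not the real typeArrayOop code; the JNI-style typedefs and the BoolStore type are inlined here just to make it self-contained:

```cpp
#include <cassert>

typedef unsigned char jboolean;  // matches the JNI typedef
typedef int jint;

// Stand-in for a boolean array element store, illustrating the fix:
// widen to jint, mask down to the low bit, and cast back to jboolean
// before storing, so an out-of-range input such as 2 cannot leave a
// non-0/1 value in a boolean array element.
struct BoolStore {
  jboolean elems[8];
  void bool_at_put(int which, jboolean contents) {
    elems[which] = (jboolean)(((jint)contents) & 1);
  }
  jboolean bool_at(int which) const { return elems[which]; }
};
```

Without the explicit cast back to jboolean, some compilers store the intermediate jint, which is what broke the strongly typed store path on SPARC.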
>>>> >>>> Maybe something like this would work: >>>> >>>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>> b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>> @@ -22,6 +22,7 @@ >>>> */ >>>> >>>> #include "precompiled.hpp" >>>> +#include "memory/universe.hpp" >>>> #include "oops/arrayOop.hpp" >>>> #include "oops/oop.inline.hpp" >>>> #include "unittest.hpp" >>>> @@ -86,4 +87,37 @@ >>>> TEST_VM(arrayOopDesc, narrowOop) { >>>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >>>> } >>>> + >>>> +TEST_VM(arrayOopDesc, bool_at_put) { >>>> + char mem[100]; >>>> + memset(mem, 0, ARRAY_SIZE(mem)); >>>> + >>>> + char* addr = align_up(mem, 16); >>>> + >>>> + typeArrayOop o = (typeArrayOop) addr; >>>> + o->set_klass(Universe::boolArrayKlassObj()); >>>> + o->set_length(10); >>>> + >>>> + >>>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>> + >>>> + o->bool_at_put(0, 1); >>>> + >>>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>> +} >>>> + >>>> // T_VOID and T_ADDRESS are not supported by max_array_length() >>>> >>>> And then run with: >>>> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >>>> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >>>> >>>> StefanK >>>> >>>>> Roman >>>> >>>> >>>> >> From rkennke 
at redhat.com Fri Feb 23 11:49:30 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 23 Feb 2018 12:49:30 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: <59D82EB2-F9F1-42F1-979D-A984E61C40E2@oracle.com> References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> <59D82EB2-F9F1-42F1-979D-A984E61C40E2@oracle.com> Message-ID: I think it will. Roman On Fri, Feb 23, 2018 at 12:45 PM, wrote: > Hi Roman, > > Will this fix be pushed today? If not I would prefer to back out JDK-8197999 to get clean results in the Friday nightly. > > Thanks, > /Jesper > > >> On 22 Feb 2018, at 22:17, Roman Kennke wrote: >> >> I took the bug. >> I tried with casting similar to what you suggested, but that failed my >> test. Maybe I had the parenthesis differently? In any case, I made it >> so that it matches what is in oop.inline.hpp boolean accessor. >> >> I'll move the test to a new file and post an RFR separately. >> >> Thanks, Roman >> >> >> On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson >> wrote: >>> On 2018-02-22 21:41, Roman Kennke wrote: >>>> >>>> Ok thank you. >>>> I tried your patch and can confirm that it works/passes. :-) It also >>>> gives me some ideas how gtest works. >>>> >>>> I modified the test so that it fails without the fix, and passes with the >>>> fix: >>>> >>>> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ >>>> >>>> If you think that's good, then I can post a formal RFR and take over the >>>> bug. >>> >>> >>> Yes, this seems good. A similar patch using (jboolean)(((jint)contents) & 1) >>> passes tests that used to fail on sparc. >>> >>> You might want to consider moving the test to a test_typeArrayOop.cpp file. 
>>> >>> Thanks, >>> StefanK >>> >>> >>>> >>>> Roman >>>> >>>> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson >>>> wrote: >>>>> >>>>> On 2018-02-22 20:47, Roman Kennke wrote: >>>>>> >>>>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>>>>> wrote: >>>>>>> >>>>>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>>>>> >>>>>>>> Right. This looks like possible and likely cause of the problem. And >>>>>>>> it worked before because of implicit conversion back to jboolean: >>>>>>>> >>>>>>>> - void bool_at_put(int which, jboolean contents) { >>>>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>>>>> >>>>>>>> >>>>>>>> Can you test it? Because, I can't ;-) >>>>>>> >>>>>>> >>>>>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>>>>>> this? >>>>>> >>>>>> I can try. I never wrote a gtest before ;-) Is there an existing one >>>>>> that I could use as template, and/or pointers how to start? >>>>> >>>>> >>>>> You can look at the existing tests in test/hotspot/gtest. I suggest you >>>>> read >>>>> the official googletest doc to get started. There might be some other >>>>> document about our adaption of googletest, but I don't know where it is. 
>>>>> >>>>> Maybe something like this would work: >>>>> >>>>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>> b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>> @@ -22,6 +22,7 @@ >>>>> */ >>>>> >>>>> #include "precompiled.hpp" >>>>> +#include "memory/universe.hpp" >>>>> #include "oops/arrayOop.hpp" >>>>> #include "oops/oop.inline.hpp" >>>>> #include "unittest.hpp" >>>>> @@ -86,4 +87,37 @@ >>>>> TEST_VM(arrayOopDesc, narrowOop) { >>>>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >>>>> } >>>>> + >>>>> +TEST_VM(arrayOopDesc, bool_at_put) { >>>>> + char mem[100]; >>>>> + memset(mem, 0, ARRAY_SIZE(mem)); >>>>> + >>>>> + char* addr = align_up(mem, 16); >>>>> + >>>>> + typeArrayOop o = (typeArrayOop) addr; >>>>> + o->set_klass(Universe::boolArrayKlassObj()); >>>>> + o->set_length(10); >>>>> + >>>>> + >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>> + >>>>> + o->bool_at_put(0, 1); >>>>> + >>>>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>> +} >>>>> + >>>>> // T_VOID and T_ADDRESS are not supported by max_array_length() >>>>> >>>>> And then run with: >>>>> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >>>>> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >>>>> 
>>>>> StefanK >>>>> >>>>>> Roman >>>>> >>>>> >>>>> >>> > From david.holmes at oracle.com Fri Feb 23 11:54:53 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Feb 2018 21:54:53 +1000 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> <59D82EB2-F9F1-42F1-979D-A984E61C40E2@oracle.com> Message-ID: <95931bf7-3609-a3a9-9dc9-f22244f4d479@oracle.com> Fix has been pushed. David On 23/02/2018 9:49 PM, Roman Kennke wrote: > I think it will. > > Roman > > On Fri, Feb 23, 2018 at 12:45 PM, wrote: >> Hi Roman, >> >> Will this fix be pushed today? If not I would prefer to back out JDK-8197999 to get clean results in the Friday nightly. >> >> Thanks, >> /Jesper >> >> >>> On 22 Feb 2018, at 22:17, Roman Kennke wrote: >>> >>> I took the bug. >>> I tried with casting similar to what you suggested, but that failed my >>> test. Maybe I had the parenthesis differently? In any case, I made it >>> so that it matches what is in oop.inline.hpp boolean accessor. >>> >>> I'll move the test to a new file and post an RFR separately. >>> >>> Thanks, Roman >>> >>> >>> On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson >>> wrote: >>>> On 2018-02-22 21:41, Roman Kennke wrote: >>>>> >>>>> Ok thank you. >>>>> I tried your patch and can confirm that it works/passes. :-) It also >>>>> gives me some ideas how gtest works. >>>>> >>>>> I modified the test so that it fails without the fix, and passes with the >>>>> fix: >>>>> >>>>> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ >>>>> >>>>> If you think that's good, then I can post a formal RFR and take over the >>>>> bug. >>>> >>>> >>>> Yes, this seems good. A similar patch using (jboolean)(((jint)contents) & 1) >>>> passes tests that used to fail on sparc. >>>> >>>> You might want to consider moving the test to a test_typeArrayOop.cpp file. 
>>>> >>>> Thanks, >>>> StefanK >>>> >>>> >>>>> >>>>> Roman >>>>> >>>>> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson >>>>> wrote: >>>>>> >>>>>> On 2018-02-22 20:47, Roman Kennke wrote: >>>>>>> >>>>>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>>>>>> wrote: >>>>>>>> >>>>>>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>>>>>> >>>>>>>>> Right. This looks like possible and likely cause of the problem. And >>>>>>>>> it worked before because of implicit conversion back to jboolean: >>>>>>>>> >>>>>>>>> - void bool_at_put(int which, jboolean contents) { >>>>>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>>>>>> >>>>>>>>> >>>>>>>>> Can you test it? Because, I can't ;-) >>>>>>>> >>>>>>>> >>>>>>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>>>>>>> this? >>>>>>> >>>>>>> I can try. I never wrote a gtest before ;-) Is there an existing one >>>>>>> that I could use as template, and/or pointers how to start? >>>>>> >>>>>> >>>>>> You can look at the existing tests in test/hotspot/gtest. I suggest you >>>>>> read >>>>>> the official googletest doc to get started. There might be some other >>>>>> document about our adaption of googletest, but I don't know where it is. 
>>>>>> >>>>>> Maybe something like this would work: >>>>>> >>>>>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>> b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>> @@ -22,6 +22,7 @@ >>>>>> */ >>>>>> >>>>>> #include "precompiled.hpp" >>>>>> +#include "memory/universe.hpp" >>>>>> #include "oops/arrayOop.hpp" >>>>>> #include "oops/oop.inline.hpp" >>>>>> #include "unittest.hpp" >>>>>> @@ -86,4 +87,37 @@ >>>>>> TEST_VM(arrayOopDesc, narrowOop) { >>>>>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >>>>>> } >>>>>> + >>>>>> +TEST_VM(arrayOopDesc, bool_at_put) { >>>>>> + char mem[100]; >>>>>> + memset(mem, 0, ARRAY_SIZE(mem)); >>>>>> + >>>>>> + char* addr = align_up(mem, 16); >>>>>> + >>>>>> + typeArrayOop o = (typeArrayOop) addr; >>>>>> + o->set_klass(Universe::boolArrayKlassObj()); >>>>>> + o->set_length(10); >>>>>> + >>>>>> + >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>>> + >>>>>> + o->bool_at_put(0, 1); >>>>>> + >>>>>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>>> +} >>>>>> + >>>>>> // T_VOID and T_ADDRESS are not supported by max_array_length() >>>>>> >>>>>> And then run with: >>>>>> ../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >>>>>> 
../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >>>>>> >>>>>> StefanK >>>>>> >>>>>>> Roman >>>>>> >>>>>> >>>>>> >>>> >> From jesper.wilhelmsson at oracle.com Fri Feb 23 12:03:44 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Fri, 23 Feb 2018 13:03:44 +0100 Subject: SIGBUS in Access<1572864UL>::store_at on Solaris/SPARC In-Reply-To: <95931bf7-3609-a3a9-9dc9-f22244f4d479@oracle.com> References: <9b28dda5-2ce5-ade0-e3aa-c7c48dc53d86@oracle.com> <7db7a8ff-5484-80c0-2cfa-9ea0a6a964be@oracle.com> <000b2fd4-3455-3030-5a63-49ce841663d2@oracle.com> <59D82EB2-F9F1-42F1-979D-A984E61C40E2@oracle.com> <95931bf7-3609-a3a9-9dc9-f22244f4d479@oracle.com> Message-ID: <5EB5DF93-FAC1-4547-B7D7-B8857DCB5755@oracle.com> Awesome! Thanks, /Jesper > On 23 Feb 2018, at 12:54, David Holmes wrote: > > Fix has been pushed. > > David > > On 23/02/2018 9:49 PM, Roman Kennke wrote: >> I think it will. >> Roman >> On Fri, Feb 23, 2018 at 12:45 PM, wrote: >>> Hi Roman, >>> >>> Will this fix be pushed today? If not I would prefer to back out JDK-8197999 to get clean results in the Friday nightly. >>> >>> Thanks, >>> /Jesper >>> >>> >>>> On 22 Feb 2018, at 22:17, Roman Kennke wrote: >>>> >>>> I took the bug. >>>> I tried with casting similar to what you suggested, but that failed my >>>> test. Maybe I had the parenthesis differently? In any case, I made it >>>> so that it matches what is in oop.inline.hpp boolean accessor. >>>> >>>> I'll move the test to a new file and post an RFR separately. >>>> >>>> Thanks, Roman >>>> >>>> >>>> On Thu, Feb 22, 2018 at 10:04 PM, Stefan Karlsson >>>> wrote: >>>>> On 2018-02-22 21:41, Roman Kennke wrote: >>>>>> >>>>>> Ok thank you. >>>>>> I tried your patch and can confirm that it works/passes. :-) It also >>>>>> gives me some ideas how gtest works. 
>>>>>> >>>>>> I modified the test so that it fails without the fix, and passes with the >>>>>> fix: >>>>>> >>>>>> http://cr.openjdk.java.net/~rkennke/8198564/webrev.00/ >>>>>> >>>>>> If you think that's good, then I can post a formal RFR and take over the >>>>>> bug. >>>>> >>>>> >>>>> Yes, this seems good. A similar patch using (jboolean)(((jint)contents) & 1) >>>>> passes tests that used to fail on sparc. >>>>> >>>>> You might want to consider moving the test to a test_typeArrayOop.cpp file. >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>> >>>>>> >>>>>> Roman >>>>>> >>>>>> On Thu, Feb 22, 2018 at 9:00 PM, Stefan Karlsson >>>>>> wrote: >>>>>>> >>>>>>> On 2018-02-22 20:47, Roman Kennke wrote: >>>>>>>> >>>>>>>> On Thu, Feb 22, 2018 at 8:22 PM, Stefan Karlsson >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> On 2018-02-22 20:14, Roman Kennke wrote: >>>>>>>>>> >>>>>>>>>> Right. This looks like possible and likely cause of the problem. And >>>>>>>>>> it worked before because of implicit conversion back to jboolean: >>>>>>>>>> >>>>>>>>>> - void bool_at_put(int which, jboolean contents) { >>>>>>>>>> *bool_at_addr(which) = (((jint)contents) & 1); } >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Can you test it? Because, I can't ;-) >>>>>>>>> >>>>>>>>> >>>>>>>>> Yes. I'm kicking of some testing on sparc. Could you write a gtest for >>>>>>>>> this? >>>>>>>> >>>>>>>> I can try. I never wrote a gtest before ;-) Is there an existing one >>>>>>>> that I could use as template, and/or pointers how to start? >>>>>>> >>>>>>> >>>>>>> You can look at the existing tests in test/hotspot/gtest. I suggest you >>>>>>> read >>>>>>> the official googletest doc to get started. There might be some other >>>>>>> document about our adaption of googletest, but I don't know where it is. 
>>>>>>> >>>>>>> Maybe something like this would work: >>>>>>> >>>>>>> diff --git a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>>> b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>>> --- a/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>>> +++ b/test/hotspot/gtest/oops/test_arrayOop.cpp >>>>>>> @@ -22,6 +22,7 @@ >>>>>>> */ >>>>>>> >>>>>>> #include "precompiled.hpp" >>>>>>> +#include "memory/universe.hpp" >>>>>>> #include "oops/arrayOop.hpp" >>>>>>> #include "oops/oop.inline.hpp" >>>>>>> #include "unittest.hpp" >>>>>>> @@ -86,4 +87,37 @@ >>>>>>> TEST_VM(arrayOopDesc, narrowOop) { >>>>>>> ASSERT_PRED1(check_max_length_overflow, T_NARROWOOP); >>>>>>> } >>>>>>> + >>>>>>> +TEST_VM(arrayOopDesc, bool_at_put) { >>>>>>> + char mem[100]; >>>>>>> + memset(mem, 0, ARRAY_SIZE(mem)); >>>>>>> + >>>>>>> + char* addr = align_up(mem, 16); >>>>>>> + >>>>>>> + typeArrayOop o = (typeArrayOop) addr; >>>>>>> + o->set_klass(Universe::boolArrayKlassObj()); >>>>>>> + o->set_length(10); >>>>>>> + >>>>>>> + >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(0)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>>>> + >>>>>>> + o->bool_at_put(0, 1); >>>>>>> + >>>>>>> + ASSERT_EQ((jboolean)1, o->bool_at(0)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(1)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(2)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(3)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(4)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(5)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(6)); >>>>>>> + ASSERT_EQ((jboolean)0, o->bool_at(7)); >>>>>>> +} >>>>>>> + >>>>>>> // T_VOID and T_ADDRESS are not supported by max_array_length() >>>>>>> >>>>>>> And then run with: >>>>>>> 
../build/fastdebug/hotspot/variant-server/libjvm/gtest/gtestLauncher -jdk >>>>>>> ../build/fastdebug/jdk --gtest_filter="arrayOopDesc*" >>>>>>> >>>>>>> StefanK >>>>>>> >>>>>>>> Roman >>>>>>> >>>>>>> >>>>>>> >>>>> >>> From adam.farley at uk.ibm.com Fri Feb 23 12:09:17 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Fri, 23 Feb 2018 12:09:17 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> Message-ID: Hi Paul, The larger picture for (read: effect of) these changes is best explained in my email here: http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-February/051441.html See the hyperlink I posted, and the few lines before it. Unfortunately the only understanding I have regarding the workings of the native code would be derived from the OpenJ9 implementation. I figured I wouldn't be thanked for posting that code here, so I posted what code I could share, with the additional note that the Hotspot native side of this should be implemented by: 1) Turning those Unsafe.java methods into native methods, and making them abstract (descriptor only, for the uninitiated). 2) Finding the Hotspot native code for native memory allocation, reallocation, and freeing. Basically create a method that stores the sum total amount of native memory used by the DBBs, and then calls the regular allocate/reallocate/free methods. E.g. NM Allocate (Java) - NM Allocate (Native) DBB NM Allocate (Java) - DBB NM Allocate (Native) - NM allocate (Native) 3) Finding the code that prints the current native memory usage in core files, and adding a similar bit to show the native memory usage for DBBs as a subset (see the aforementioned link for an example). This seems like a straightforward task, though that's easy for me to say.
:) Does that answer your question? Also, I'm unfamiliar with Java Flight Recorder. Are other developers on the list familiar with JFR who can answer this? I'll put the message in IRC as well, and update here if I get any answers. Best Regards Adam Farley From: Paul Sandoz To: Adam Farley8 Cc: core-libs-dev , hotspot-dev developers Date: 22/02/2018 02:20 Subject: Re: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers Hi Adam, While the burden is minimal there is a principle here that I think we should adhere to regarding additions to the code base: additions should have value within OpenJDK itself otherwise it can become a thin end of the wedge to more stuff ("well you added these things, why not just add these too?"). So I would still be reluctant to add such methods without understanding the larger picture and what you have in mind. Can you send a pointer to your email referring in more detail to the larger change sets? This use-case might also apply in other related areas too with regards to logging/monitoring. I would be interested to understand what Java Flight Recorder (JFR) does in this regard (it being open sourced soon I believe) and how JFR might relate to what you are doing. Should we be adding JFR events to unsafe memory allocation? Can JFR efficiently access part of the Java call stack to determine the origin? Thanks, Paul. On Feb 19, 2018, at 5:08 AM, Adam Farley8 wrote: Hi Paul, > Hi Adam, > > From reading the thread I cannot tell if this is part of a wider solution including some yet to be proposed HotSpot changes. The wider solution would need to include some Hotspot changes, yes. I'm proposing raising a bug, committing the code we have here to "set the stage", and then we can invest more time & energy later if the concept goes down well and the community agrees to pursue the full solution.
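The accounting scheme in the numbered steps earlier in this thread can be sketched roughly as follows. All names here are hypothetical, invented for illustration; neither HotSpot nor OpenJ9 code is shown, and a real implementation would additionally need thread safety and per-pointer size tracking rather than trusting the caller:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Running total of native memory held by direct byte buffers. A core-file
// writer could then report this as the "used by DirectByteBuffers" subset
// of overall native memory.
static size_t dbb_allocated_bytes = 0;

// DBB-specific entry point: account for the block, then delegate to the
// ordinary allocator.
void* dbb_allocate(size_t size) {
  void* p = std::malloc(size);
  if (p != NULL) {
    dbb_allocated_bytes += size;
  }
  return p;
}

// For simplicity this sketch makes the caller pass back the original size;
// a real implementation would record it per pointer at allocation time.
void dbb_free(void* p, size_t size) {
  if (p != NULL) {
    std::free(p);
    dbb_allocated_bytes -= size;
  }
}
```

This mirrors the layering Adam describes: the DBB path is a thin wrapper over the regular native-memory path, so only the bookkeeping differs.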
As an aside, I tried submitting a big code set (including hotspot changes) months ago, and I'm *still* struggling to find someone to commit the thing, so I figured I'd try a more gradual, staged approach this time. > > As is i would be resistant to adding such standalone internal wrapper methods to Unsafe that have no apparent benefit within the OpenJDK itself since it's a maintenance burden. I'm hoping the fact that the methods are a single line (sans comments, descriptors and curly braces) will minimise this burden. > > Can you determine if the calls to UNSAFE.freeMemory/allocateMemory come from a DBB by looking at the call stack frame above the unsafe call? > > Thanks, > Paul. Yes that is possible, though I would advise against this because: A) Checking the call stack is expensive, and doing this every time we allocate native memory is an easy way to slow down a program, or rack up mips. and B) deciding which code path we're using based on the stack means the DBB class+method (and anything the parsing code mistakes for that class+method) can only ever allocate native memory for DBBs. What do you think? Best Regards Adam Farley > >> On Feb 14, 2018, at 3:32 AM, Adam Farley8 wrote: >> >> Hi All, >> >> Currently, diagnostic core files generated from OpenJDK seem to lump all >> of the >> native memory usages together, making it near-impossible for someone to >> figure >> out *what* is using all that memory in the event of a memory leak. >> >> The OpenJ9 VM has a feature which allows it to track the allocation of >> native >> memory for Direct Byte Buffers (DBBs), and to supply that information into >> the >> cores when they are generated. This makes it a *lot* easier to find out >> what is using >> all that native memory, making memory leak resolution less like some dark >> art, and >> more like logical debugging. >> >> To use this feature, there is a native method referenced in Unsafe.java. 
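For reference, the stack-inspection approach Paul asks about above (checking the call stack frame above the Unsafe call) could be sketched with the Java 9+ StackWalker API. The frame depth and the "java.nio.Direct" class-name prefix below are illustrative assumptions, not the actual HotSpot mechanism:

```java
import java.util.Optional;

public class CallerCheck {
    // RETAIN_CLASS_REFERENCE is required to obtain Class objects from frames.
    private static final StackWalker WALKER =
            StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE);

    // Hypothetical check: was this allocation entered from a direct-buffer class?
    // The "java.nio.Direct" prefix test is an assumption for illustration only.
    static boolean calledFromDirectBuffer() {
        Optional<Class<?>> caller = WALKER.walk(frames -> frames
                .skip(1) // skip this method's own frame; the next frame is the caller
                .findFirst()
                .map(StackWalker.StackFrame::getDeclaringClass));
        return caller.map(c -> c.getName().startsWith("java.nio.Direct"))
                     .orElse(false);
    }

    public static void main(String[] args) {
        // Called from main, so the caller is not a java.nio.Direct* class.
        System.out.println(calledFromDirectBuffer()); // prints "false"
    }
}
```

As Adam notes below, running even a shallow walk like this on every allocation adds per-call overhead, which is the core of his objection A).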
>> To open >> up this feature so that any VM can make use of it, the java code below >> sets the >> stage for it. This change starts letting people call DBB-specific methods >> when >> allocating native memory, and getting into the habit of using it. >> >> Thoughts? >> >> Best Regards >> >> Adam Farley >> >> P.S. Code: >> >> diff --git >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> --- a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> +++ b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template >> @@ -85,7 +85,7 @@ >> // Paranoia >> return; >> } >> - UNSAFE.freeMemory(address); >> + UNSAFE.freeDBBMemory(address); >> address = 0; >> Bits.unreserveMemory(size, capacity); >> } >> @@ -118,7 +118,7 @@ >> >> long base = 0; >> try { >> - base = UNSAFE.allocateMemory(size); >> + base = UNSAFE.allocateDBBMemory(size); >> } catch (OutOfMemoryError x) { >> Bits.unreserveMemory(size, cap); >> throw x; >> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java >> @@ -632,6 +632,26 @@ >> } >> >> /** >> + * Allocates a new block of native memory for DirectByteBuffers, of >> the >> + * given size in bytes. The contents of the memory are >> uninitialized; >> + * they will generally be garbage. The resulting native pointer will >> + * never be zero, and will be aligned for all value types. Dispose >> of >> + * this memory by calling {@link #freeDBBMemory} or resize it with >> + * {@link #reallocateDBBMemory}. 
>> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #getByte(long) >> + * @see #putByte(long, byte) >> + */ >> + public long allocateDBBMemory(long bytes) { >> + return allocateMemory(bytes); >> + } >> + >> + /** >> * Resizes a new block of native memory, to the given size in bytes. >> The >> * contents of the new block past the size of the old block are >> * uninitialized; they will generally be garbage. The resulting >> native >> @@ -687,6 +707,27 @@ >> } >> >> /** >> + * Resizes a new block of native memory for DirectByteBuffers, to the >> + * given size in bytes. The contents of the new block past the size >> of >> + * the old block are uninitialized; they will generally be garbage. >> The >> + * resulting native pointer will be zero if and only if the requested >> size >> + * is zero. The resulting native pointer will be aligned for all >> value >> + * types. Dispose of this memory by calling {@link #freeDBBMemory}, >> or >> + * resize it with {@link #reallocateDBBMemory}. The address passed >> to >> + * this method may be null, in which case an allocation will be >> performed. >> + * >> + * @throws RuntimeException if the size is negative or too large >> + * for the native size_t type >> + * >> + * @throws OutOfMemoryError if the allocation is refused by the >> system >> + * >> + * @see #allocateDBBMemory >> + */ >> + public long reallocateDBBMemory(long address, long bytes) { >> + return reallocateMemory(address, bytes); >> + } >> + >> + /** >> * Sets all bytes in a given block of memory to a fixed value >> * (usually zero). >> * >> @@ -918,6 +959,17 @@ >> checkPointer(null, address); >> } >> >> + /** >> + * Disposes of a block of native memory, as obtained from {@link >> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. 
The address >> passed >> + * to this method may be null, in which case no action is taken. >> + * >> + * @see #allocateDBBMemory >> + */ >> + public void freeDBBMemory(long address) { >> + freeMemory(address); >> + } >> + >> /// random queries >> >> /** >> >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From george.triantafillou at oracle.com Fri Feb 23 14:58:18 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Fri, 23 Feb 2018 09:58:18 -0500 Subject: RFR(XS) : 8198568 : clean up test/hotspot/jtreg/ProblemList.txt In-Reply-To: <849071FB-3DD4-4CAA-B225-869C2AFD50B8@oracle.com> References: <849071FB-3DD4-4CAA-B225-869C2AFD50B8@oracle.com> Message-ID: <5f58ceff-0b72-1205-833f-38debd0d13d3@oracle.com> Igor, Looks good. -George On 2/22/2018 8:19 PM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8198568/webrev.00/index.html >> 11 lines changed: 1 ins; 9 del; 1 mod; > Hi all, > > could you please review this clean up in hotspot ProblemList? > > 8180324, 8173936, 8166548 are resolved, 8134286, 8175791 are closed an CNR, 8163805 is closed as WNF, 8179226 is a dup of 8180622, I have updated the ProblemList correspondingly. as a result 6 tests are un-qurantined (removed from the problem list), since we haven't run them for some time, there might be new (or old) failures. if they occur, new bugs should be filed and used to re-quarantine affected tests. 
> > JBS: https://bugs.openjdk.java.net/browse/JDK-8198568 > webrev: http://cr.openjdk.java.net/~iignatyev/8198568/webrev.00/index.html > testing: run the tests several times in mach5 + hs-tier[1-2] > > Thanks, > -- Igor From peter.levart at gmail.com Fri Feb 23 15:28:34 2018 From: peter.levart at gmail.com (Peter Levart) Date: Fri, 23 Feb 2018 16:28:34 +0100 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> Message-ID: Hi Adam, Did you know that native memory is already tracked on the Java side for direct ByteBuffers? See class java.nio.Bits. Could you make use of it? Regards, Peter On 23 Feb 2018 1:09 pm, "Adam Farley8" wrote: > Hi Paul, > > The larger picture for (read: effect of) these changes is best explained > in my email here: > > http://mail.openjdk.java.net/pipermail/core-libs-dev/2018- > February/051441.html > > See the hyperlink I posted, and the few lines before it. > > Unfortunately the only understanding I have regarding the workings of the > native code > is would be derived from the OpenJ9 implimentation. I figured I wouldn't > be thanked for > posting that code here, so I posted what code I could share, with the > additional note > that the Hotspot native side of this should be implimented by: > > 1) Turning those Unsafe.java methods into native methods, and make them > abstract > (descriptor only, for the uninitiated). > > 2) Find the Hotspot native code for native memory allocation, > reallocation, and > freeing. Basically create a method that stores the sum total amount of > native memory > used by the DBBs, and then calls the regular allocate/reallocate/free > methods. > > E.g. 
> > NM Allocate (Java) - NM Allocate (Native) > > DBB NM Allocate (Java) - DBB NM Allocate (Native) - NM allocate (Native) > > 3) Find the code that prints the current native memory usage in core > files, and > add a similar bit to show the native memory usage for DBBs as a subset > (see the > aforementioned linked link for an example). > > This seems like a straightforward task, though that's easy for me to say. > :) > > Does that answer your question? > > Also, I'm unfamiliar with Java Flight Recorder. Are other developers on > the list > familiar with JFR that can snwer this? I'll put the message in IRC as > well, and > update here if I get any answers. > > Best Regards > > Adam Farley > > > > From: Paul Sandoz > To: Adam Farley8 > Cc: core-libs-dev , hotspot-dev > developers > Date: 22/02/2018 02:20 > Subject: Re: [PATCH] RFR Bug-pending: Enable Hotspot to Track > Native Memory Usage for Direct Byte Buffers > > > > Hi Adam, > > While the burden is minimal there is a principle here that i think we > should adhere to regarding additions to the code base: additions should > have value within OpenJDK itself otherwise it can become a thin end of the > wedge to more stuff (?well you added these things, why not just add these > too??). > > So i would still be reluctant to add such methods without understanding > the larger picture and what you have in mind. > > Can you send a pointer to your email referring in more detail to the > larger change sets? > > This use-case might also apply in other related areas too with regards to > logging/monitoring. I would be interested to understand what Java Flight > Recorder (JFR) does in this regard (it being open sourced soon i believe) > and how JFR might relate to what you are doing. Should we be adding JFR > events to unsafe memory allocation? Can JFR efficiently access part of the > Java call stack to determine the origin? > > Thanks, > Paul. 
> > On Feb 19, 2018, at 5:08 AM, Adam Farley8 wrote: > > Hi Paul, > > > Hi Adam, > > > > From reading the thread i cannot tell if this is part of a wider > solution including some yet to be proposed HotSpot changes. > > The wider solution would need to include some Hotspot changes, yes. > I'm proposing raising a bug, committing the code we have here to > "set the stage", and then we can invest more time&energy later > if the concept goes down well and the community agrees to pursue > the full solution. > > As an aside, I tried submitting a big code set (including hotspot > changes) months ago, and I'm *still* struggling to find someone to > commit the thing, so I figured I'd try a more gradual, staged approach > this time. > > > > > As is i would be resistant to adding such standalone internal wrapper > methods to Unsafe that have no apparent benefit within the OpenJDK itself > since it's a maintenance burden. > > I'm hoping the fact that the methods are a single line (sans > comments, descriptors and curly braces) will minimise this burden. > > > > > Can you determine if the calls to UNSAFE.freeMemory/allocateMemory come > from a DBB by looking at the call stack frame above the unsafe call? > > > > Thanks, > > Paul. > > Yes that is possible, though I would advise against this because: > > A) Checking the call stack is expensive, and doing this every time we > allocate native memory is an easy way to slow down a program, > or rack up mips. > and > B) deciding which code path we're using based on the stack > means the DBB class+method (and anything the parsing code > mistakes for that class+method) can only ever allocate native > memory for DBBs. > > What do you think? 
> > Best Regards > > Adam Farley > > > > >> On Feb 14, 2018, at 3:32 AM, Adam Farley8 > wrote: > >> > >> Hi All, > >> > >> Currently, diagnostic core files generated from OpenJDK seem to lump > all > >> of the > >> native memory usages together, making it near-impossible for someone to > > >> figure > >> out *what* is using all that memory in the event of a memory leak. > >> > >> The OpenJ9 VM has a feature which allows it to track the allocation of > >> native > >> memory for Direct Byte Buffers (DBBs), and to supply that information > into > >> the > >> cores when they are generated. This makes it a *lot* easier to find out > > >> what is using > >> all that native memory, making memory leak resolution less like some > dark > >> art, and > >> more like logical debugging. > >> > >> To use this feature, there is a native method referenced in > Unsafe.java. > >> To open > >> up this feature so that any VM can make use of it, the java code below > >> sets the > >> stage for it. This change starts letting people call DBB-specific > methods > >> when > >> allocating native memory, and getting into the habit of using it. > >> > >> Thoughts? > >> > >> Best Regards > >> > >> Adam Farley > >> > >> P.S. 
Code: > >> > >> diff --git > >> a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> --- > a/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> +++ > b/src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template > >> @@ -85,7 +85,7 @@ > >> // Paranoia > >> return; > >> } > >> - UNSAFE.freeMemory(address); > >> + UNSAFE.freeDBBMemory(address); > >> address = 0; > >> Bits.unreserveMemory(size, capacity); > >> } > >> @@ -118,7 +118,7 @@ > >> > >> long base = 0; > >> try { > >> - base = UNSAFE.allocateMemory(size); > >> + base = UNSAFE.allocateDBBMemory(size); > >> } catch (OutOfMemoryError x) { > >> Bits.unreserveMemory(size, cap); > >> throw x; > >> diff --git a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> --- a/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> +++ b/src/java.base/share/classes/jdk/internal/misc/Unsafe.java > >> @@ -632,6 +632,26 @@ > >> } > >> > >> /** > >> + * Allocates a new block of native memory for DirectByteBuffers, > of > >> the > >> + * given size in bytes. The contents of the memory are > >> uninitialized; > >> + * they will generally be garbage. The resulting native pointer > will > >> + * never be zero, and will be aligned for all value types. Dispose > > >> of > >> + * this memory by calling {@link #freeDBBMemory} or resize it with > > >> + * {@link #reallocateDBBMemory}. 
> >> + * > >> + * @throws RuntimeException if the size is negative or too large > >> + * for the native size_t type > >> + * > >> + * @throws OutOfMemoryError if the allocation is refused by the > >> system > >> + * > >> + * @see #getByte(long) > >> + * @see #putByte(long, byte) > >> + */ > >> + public long allocateDBBMemory(long bytes) { > >> + return allocateMemory(bytes); > >> + } > >> + > >> + /** > >> * Resizes a new block of native memory, to the given size in > bytes. > >> The > >> * contents of the new block past the size of the old block are > >> * uninitialized; they will generally be garbage. The resulting > >> native > >> @@ -687,6 +707,27 @@ > >> } > >> > >> /** > >> + * Resizes a new block of native memory for DirectByteBuffers, to > the > >> + * given size in bytes. The contents of the new block past the > size > >> of > >> + * the old block are uninitialized; they will generally be > garbage. > >> The > >> + * resulting native pointer will be zero if and only if the > requested > >> size > >> + * is zero. The resulting native pointer will be aligned for all > >> value > >> + * types. Dispose of this memory by calling {@link > #freeDBBMemory}, > >> or > >> + * resize it with {@link #reallocateDBBMemory}. The address > passed > >> to > >> + * this method may be null, in which case an allocation will be > >> performed. > >> + * > >> + * @throws RuntimeException if the size is negative or too large > >> + * for the native size_t type > >> + * > >> + * @throws OutOfMemoryError if the allocation is refused by the > >> system > >> + * > >> + * @see #allocateDBBMemory > >> + */ > >> + public long reallocateDBBMemory(long address, long bytes) { > >> + return reallocateMemory(address, bytes); > >> + } > >> + > >> + /** > >> * Sets all bytes in a given block of memory to a fixed value > >> * (usually zero). 
> >> * > >> @@ -918,6 +959,17 @@ > >> checkPointer(null, address); > >> } > >> > >> + /** > >> + * Disposes of a block of native memory, as obtained from {@link > >> + * #allocateDBBMemory} or {@link #reallocateDBBMemory}. The > address > >> passed > >> + * to this method may be null, in which case no action is taken. > >> + * > >> + * @see #allocateDBBMemory > >> + */ > >> + public void freeDBBMemory(long address) { > >> + freeMemory(address); > >> + } > >> + > >> /// random queries > >> > >> /** > >> > >> Unless stated otherwise above: > >> IBM United Kingdom Limited - Registered in England and Wales with > number > >> 741598. > >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 > 3AU > > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > From Alan.Bateman at oracle.com Fri Feb 23 17:44:20 2018 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Fri, 23 Feb 2018 17:44:20 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> Message-ID: <712770ae-72ee-10b8-cd93-fb1fc34bd5b3@oracle.com> On 23/02/2018 15:28, Peter Levart wrote: > Hi Adam, > > Did you know that native memory is already tracked on the Java side for > direct ByteBuffers? See class java.nio.Bits. Could you make use of it? > Right, these are the fields that are exposed at runtime via BufferPoolMXBean. A SA based tool could read from a core file. 
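The java.nio.Bits counters Peter refers to are surfaced at runtime through the standard management API, so the Java-side numbers can be read without any VM changes; a minimal sketch ("direct" is the pool name the JDK registers for direct buffers):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectPoolStats {
    // Returns the bytes currently tracked for the named buffer pool,
    // or -1 if no such pool is registered.
    static long memoryUsed(String poolName) {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if (pool.getName().equals(poolName)) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Allocate 1 MiB directly; java.nio.Bits records the reservation.
        ByteBuffer bb = ByteBuffer.allocateDirect(1 << 20);
        System.out.println("direct pool tracks >= 1 MiB: "
                + (memoryUsed("direct") >= (1 << 20)));
        // Keep the buffer reachable so the reservation is not released early.
        System.out.println(bb.capacity());
    }
}
```

This covers the live-JVM view; Alan's point is that a Serviceability Agent tool could read the same fields out of a core file post-mortem.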
I can't tell if this is enough for Adam, it may be that his tool reveals more details on the buffers in the pools. -Alan From lois.foltan at oracle.com Fri Feb 23 17:54:52 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 12:54:52 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp Message-ID: Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly for VS2017. open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8198312 Testing: hs-tier(1-3), jdk-tier(1-3) complete Thanks, Lois From erik.joelsson at oracle.com Fri Feb 23 18:05:58 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 23 Feb 2018 10:05:58 -0800 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: References: Message-ID: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> Hello Lois, This looks good, but I would suggest to also add 1900 for VS2015, for completeness. /Erik On 2018-02-23 09:54, Lois Foltan wrote: > Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly > for VS2017. > > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8198312 > > Testing: hs-tier(1-3), jdk-tier(1-3) complete > > Thanks, > Lois > From george.triantafillou at oracle.com Fri Feb 23 18:14:34 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Fri, 23 Feb 2018 13:14:34 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: References: Message-ID: <304c6b29-80c9-cf2f-96f7-672a1c05375a@oracle.com> Hi Lois, Looks good. -George On 2/23/2018 12:54 PM, Lois Foltan wrote: > Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly > for VS2017. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8198312 > > Testing: hs-tier(1-3), jdk-tier(1-3) complete > > Thanks, > Lois > From lois.foltan at oracle.com Fri Feb 23 19:17:18 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 14:17:18 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: <304c6b29-80c9-cf2f-96f7-672a1c05375a@oracle.com> References: <304c6b29-80c9-cf2f-96f7-672a1c05375a@oracle.com> Message-ID: Thanks George. Lois On 2/23/2018 1:14 PM, George Triantafillou wrote: > Hi Lois, > > Looks good. > > -George > > On 2/23/2018 12:54 PM, Lois Foltan wrote: >> Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly >> for VS2017. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois >> > From lois.foltan at oracle.com Fri Feb 23 19:16:33 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 14:16:33 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> References: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> Message-ID: On 2/23/2018 1:05 PM, Erik Joelsson wrote: > Hello Lois, > > This looks good, but I would suggest to also add 1900 for VS2015, for > completeness. Thanks for the review Erik! I have updated the webrev to add 1900, however, I couldn't find a release # for VS2015, since all documentation I could find seemed to indicate that only 2015 and updates 1-3 were released. If you have more info on this let me know! 
http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.1/webrev/ Thanks, Lois > > /Erik > > > On 2018-02-23 09:54, Lois Foltan wrote: >> Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly >> for VS2017. >> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >> >> Testing: hs-tier(1-3), jdk-tier(1-3) complete >> >> Thanks, >> Lois >> > From erik.joelsson at oracle.com Fri Feb 23 19:31:22 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 23 Feb 2018 11:31:22 -0800 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: References: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> Message-ID: On 2018-02-23 11:16, Lois Foltan wrote: > On 2/23/2018 1:05 PM, Erik Joelsson wrote: > >> Hello Lois, >> >> This looks good, but I would suggest to also add 1900 for VS2015, for >> completeness. > Thanks for the review Erik! I have updated the webrev to add 1900, > however, I couldn't find a release # for VS2015, since all > documentation I could find seemed to indicated that only 2015 and > updates 1-3 were released. If you have more info on this let me know! > My installation of 2015 was put in "Microsoft Visual Studio 14.0" following the pattern of previous versions (12.0, 11.0, 10.0 etc), so I think that would be the appropriate number here. Otherwise I think this looks good. /Erik > http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.1/webrev/ > > Thanks, > Lois > >> >> /Erik >> >> >> On 2018-02-23 09:54, Lois Foltan wrote: >>> Please review this small fix to set HOTSPOT_BUILD_COMPILER correctly >>> for VS2017. 
>>> >>> open webrev at >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >>> >>> Testing: hs-tier(1-3), jdk-tier(1-3) complete >>> >>> Thanks, >>> Lois >>> >> > From lois.foltan at oracle.com Fri Feb 23 19:48:54 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 14:48:54 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: <62c9ef99-5a6f-40e8-4fb5-9f8a0e9ff5c7@oracle.com> References: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> <8994b07f-794f-a986-972b-6c8adad854fb@oracle.com> <62c9ef99-5a6f-40e8-4fb5-9f8a0e9ff5c7@oracle.com> Message-ID: <766a33d2-11ce-7af6-fbfb-5e4130560fc7@oracle.com> Thanks again for the review Erik! Lois On 2/23/2018 2:44 PM, Erik Joelsson wrote: > Looks good! > > /Erik > > > On 2018-02-23 11:39, Lois Foltan wrote: >> On 2/23/2018 2:31 PM, Erik Joelsson wrote: >> >>> On 2018-02-23 11:16, Lois Foltan wrote: >>>> On 2/23/2018 1:05 PM, Erik Joelsson wrote: >>>> >>>>> Hello Lois, >>>>> >>>>> This looks good, but I would suggest to also add 1900 for VS2015, >>>>> for completeness. >>>> Thanks for the review Erik! I have updated the webrev to add 1900, >>>> however, I couldn't find a release # for VS2015, since all >>>> documentation I could find seemed to indicated that only 2015 and >>>> updates 1-3 were released. If you have more info on this let me know! >>>> >>> My installation of 2015 was put in "Microsoft Visual Studio 14.0" >>> following the pattern of previous versions (12.0, 11.0, 10.0 etc), >>> so I think that would be the appropriate number here. Otherwise I >>> think this looks good. >> >> Got it, hopefully final webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.2/webrev/ >> Thanks again! 
>> Lois >> >>> >>> /Erik >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.1/webrev/ >>>> >>>> Thanks, >>>> Lois >>>> >>>>> >>>>> /Erik >>>>> >>>>> >>>>> On 2018-02-23 09:54, Lois Foltan wrote: >>>>>> Please review this small fix to set HOTSPOT_BUILD_COMPILER >>>>>> correctly for VS2017. >>>>>> >>>>>> open webrev at >>>>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >>>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >>>>>> >>>>>> Testing: hs-tier(1-3), jdk-tier(1-3) complete >>>>>> >>>>>> Thanks, >>>>>> Lois >>>>>> >>>>> >>>> >>> >> > From lois.foltan at oracle.com Fri Feb 23 19:39:12 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 14:39:12 -0500 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: References: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> Message-ID: <8994b07f-794f-a986-972b-6c8adad854fb@oracle.com> On 2/23/2018 2:31 PM, Erik Joelsson wrote: > On 2018-02-23 11:16, Lois Foltan wrote: >> On 2/23/2018 1:05 PM, Erik Joelsson wrote: >> >>> Hello Lois, >>> >>> This looks good, but I would suggest to also add 1900 for VS2015, >>> for completeness. >> Thanks for the review Erik! I have updated the webrev to add 1900, >> however, I couldn't find a release # for VS2015, since all >> documentation I could find seemed to indicated that only 2015 and >> updates 1-3 were released. If you have more info on this let me know! >> > My installation of 2015 was put in "Microsoft Visual Studio 14.0" > following the pattern of previous versions (12.0, 11.0, 10.0 etc), so > I think that would be the appropriate number here. Otherwise I think > this looks good. Got it, hopefully final webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.2/webrev/ Thanks again! 
Lois > > /Erik >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.1/webrev/ >> >> Thanks, >> Lois >> >>> >>> /Erik >>> >>> >>> On 2018-02-23 09:54, Lois Foltan wrote: >>>> Please review this small fix to set HOTSPOT_BUILD_COMPILER >>>> correctly for VS2017. >>>> >>>> open webrev at >>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >>>> >>>> Testing: hs-tier(1-3), jdk-tier(1-3) complete >>>> >>>> Thanks, >>>> Lois >>>> >>> >> > From erik.joelsson at oracle.com Fri Feb 23 19:44:34 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 23 Feb 2018 11:44:34 -0800 Subject: (11) RFR (S) JDK-8198312: VS2017: Upgrade HOTSPOT_BUILD_COMPILER in vm_version.cpp In-Reply-To: <8994b07f-794f-a986-972b-6c8adad854fb@oracle.com> References: <09b4db01-4928-ab93-0bb7-e0b07d302d33@oracle.com> <8994b07f-794f-a986-972b-6c8adad854fb@oracle.com> Message-ID: <62c9ef99-5a6f-40e8-4fb5-9f8a0e9ff5c7@oracle.com> Looks good! /Erik On 2018-02-23 11:39, Lois Foltan wrote: > On 2/23/2018 2:31 PM, Erik Joelsson wrote: > >> On 2018-02-23 11:16, Lois Foltan wrote: >>> On 2/23/2018 1:05 PM, Erik Joelsson wrote: >>> >>>> Hello Lois, >>>> >>>> This looks good, but I would suggest to also add 1900 for VS2015, >>>> for completeness. >>> Thanks for the review Erik! I have updated the webrev to add 1900, >>> however, I couldn't find a release # for VS2015, since all >>> documentation I could find seemed to indicated that only 2015 and >>> updates 1-3 were released. If you have more info on this let me know! >>> >> My installation of 2015 was put in "Microsoft Visual Studio 14.0" >> following the pattern of previous versions (12.0, 11.0, 10.0 etc), so >> I think that would be the appropriate number here. Otherwise I think >> this looks good. > > Got it, hopefully final webrev at > http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.2/webrev/ > Thanks again! 
> Lois > >> >> /Erik >>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312.1/webrev/ >>> >>> Thanks, >>> Lois >>> >>>> >>>> /Erik >>>> >>>> >>>> On 2018-02-23 09:54, Lois Foltan wrote: >>>>> Please review this small fix to set HOTSPOT_BUILD_COMPILER >>>>> correctly for VS2017. >>>>> >>>>> open webrev at >>>>> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198312/webrev/ >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8198312 >>>>> >>>>> Testing: hs-tier(1-3), jdk-tier(1-3) complete >>>>> >>>>> Thanks, >>>>> Lois >>>>> >>>> >>> >> > From lois.foltan at oracle.com Fri Feb 23 20:11:02 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 15:11:02 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization Message-ID: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000. As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. 
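For context, keeping the fixed base while silencing the new warning would look roughly like the fragment below; the variable name is hypothetical and the JDK's actual build files differ:

```make
# Hypothetical linker-flags fragment (variable name is illustrative only):
# keep the historical fixed base for jvm.dll, but suppress LNK4281,
# the VS2017 15.5+ warning about reduced ASLR entropy below the 4GB line.
JVM_LDFLAGS += -base:0x8000000 -ignore:4281
```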
open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ bug link https://bugs.openjdk.java.net/browse/JDK-8198640 Testing (hs-tier1-3 & jdk-tier1-3) in progress Thanks, Lois From christian.tornqvist at oracle.com Fri Feb 23 20:18:29 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Fri, 23 Feb 2018 15:18:29 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> Message-ID: <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> Hi Lois, Why do we link jvm.dll with -base? Thanks, Christian > On Feb 23, 2018, at 3:11 PM, Lois Foltan wrote: > > Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 to "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000. As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. 
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8198640 > > Testing (hs-tier1-3 & jdk-tier1-3) in progress > > Thanks, > Lois > > From lois.foltan at oracle.com Fri Feb 23 20:22:08 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 15:22:08 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> Message-ID: <0ec5f3ec-904d-bd58-6e1f-75243ae11c66@oracle.com> On 2/23/2018 3:18 PM, Christian Tornqvist wrote: > Hi Lois, > > Why do we link jvm.dll with -base? Hi Christian, It is not clear to me why we do, so I was going to follow up with an RFE to investigate & suggest the removal of -base if unnecessary. Lois > > Thanks, > Christian > >> On Feb 23, 2018, at 3:11 PM, Lois Foltan wrote: >> >> Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 to "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000. As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. 
>> >> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198640 >> >> Testing (hs-tier1-3 & jdk-tier1-3) in progress >> >> Thanks, >> Lois >> >> From christian.tornqvist at oracle.com Fri Feb 23 20:53:07 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Fri, 23 Feb 2018 15:53:07 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: <0ec5f3ec-904d-bd58-6e1f-75243ae11c66@oracle.com> References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> <0ec5f3ec-904d-bd58-6e1f-75243ae11c66@oracle.com> Message-ID: Sounds like a good plan :) Thanks, Christian > On Feb 23, 2018, at 3:22 PM, Lois Foltan wrote: > >> On 2/23/2018 3:18 PM, Christian Tornqvist wrote: >> >> Hi Lois, >> >> Why do we link jvm.dll with -base? > Hi Christian, > It is not clear to me why we do, so I was going to follow up with an RFE to investigate & suggest the removal of -base if unnecessary. > Lois > >> >> Thanks, >> Christian >> >>> On Feb 23, 2018, at 3:11 PM, Lois Foltan wrote: >>> >>> Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 to "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000. As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. 
>>> >>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ >>> bug link https://bugs.openjdk.java.net/browse/JDK-8198640 >>> >>> Testing (hs-tier1-3 & jdk-tier1-3) in progress >>> >>> Thanks, >>> Lois >>> >>> > From christian.tornqvist at oracle.com Fri Feb 23 20:54:10 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Fri, 23 Feb 2018 15:54:10 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> <0ec5f3ec-904d-bd58-6e1f-75243ae11c66@oracle.com> Message-ID: Forgot to say that the webrev looks good! > On Feb 23, 2018, at 3:53 PM, Christian Tornqvist wrote: > > Sounds like a good plan :) > > Thanks, > Christian > >>> On Feb 23, 2018, at 3:22 PM, Lois Foltan wrote: >>> >>> On 2/23/2018 3:18 PM, Christian Tornqvist wrote: >>> >>> Hi Lois, >>> >>> Why do we link jvm.dll with -base? >> Hi Christian, >> It is not clear to me why we do, so I was going to follow up with an RFE to investigate & suggest the removal of -base if unnecessary. >> Lois >> >>> >>> Thanks, >>> Christian >>> >>>> On Feb 23, 2018, at 3:11 PM, Lois Foltan wrote: >>>> >>>> Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 to "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000. As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. 
>>>> >>>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8198640 >>>> >>>> Testing (hs-tier1-3 & jdk-tier1-3) in progress >>>> >>>> Thanks, >>>> Lois >>>> >>>> >> From lois.foltan at oracle.com Fri Feb 23 21:05:51 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 16:05:51 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> <71C7BC17-DEE6-4B5C-9BA4-12E5FCCA8173@oracle.com> <0ec5f3ec-904d-bd58-6e1f-75243ae11c66@oracle.com> Message-ID: <1093fd16-2b6c-77d1-6b77-71bd8e0985cc@oracle.com> Thank you for the review Christian! I have gone forward and created the RFE at https://bugs.openjdk.java.net/browse/JDK-8198652. Lois On 2/23/2018 3:54 PM, Christian Tornqvist wrote: > Forgot to say that the webrev looks good! > >> On Feb 23, 2018, at 3:53 PM, Christian Tornqvist wrote: >> >> Sounds like a good plan :) >> >> Thanks, >> Christian >> >>>> On Feb 23, 2018, at 3:22 PM, Lois Foltan wrote: >>>> >>>> On 2/23/2018 3:18 PM, Christian Tornqvist wrote: >>>> >>>> Hi Lois, >>>> >>>> Why do we link jvm.dll with -base? >>> Hi Christian, >>> It is not clear to me why we do, so I was going to follow up with an RFE to investigate & suggest the removal of -base if unnecessary. >>> Lois >>> >>>> Thanks, >>>> Christian >>>> >>>>> On Feb 23, 2018, at 3:11 PM, Lois Foltan wrote: >>>>> >>>>> Please review this fix to ignore linker warning (LNK4281). This is a new linker warning generated by VS2017 v15.5 to "to point out any 64-bit image specified to link with a lower than 4GB base address doesn't get best ASLR optimization". The Hotspot jvm.dll is specifically linked with -base:0x8000000.
As recommended by https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, this linker warning can be suppressed with -ignore. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8198640 >>>>> >>>>> Testing (hs-tier1-3 & jdk-tier1-3) in progress >>>>> >>>>> Thanks, >>>>> Lois >>>>> >>>>> From erik.joelsson at oracle.com Fri Feb 23 21:08:25 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 23 Feb 2018 13:08:25 -0800 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> Message-ID: <2994afcc-cf1f-88e0-025d-52adc7ea41ba@oracle.com> Looks good. /Erik On 2018-02-23 12:11, Lois Foltan wrote: > Please review this fix to ignore linker warning (LNK4281). This is a > new linker warning generated by VS2017 v15.5 to "to point out any > 64-bit image specified to link with a lower than 4GB base address > doesn't get best ASLR optimization". The Hotspot jvm.dll is > specifically linked with -base:0x8000000. As recommended by > https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, > this linker warning can be suppressed with -ignore.
> > open webrev at http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ > bug link https://bugs.openjdk.java.net/browse/JDK-8198640 > > Testing (hs-tier1-3 & jdk-tier1-3) in progress > > Thanks, > Lois > > From lois.foltan at oracle.com Fri Feb 23 21:12:33 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Feb 2018 16:12:33 -0500 Subject: (11) RFR (S) JDK-8198640: VS2017 (LNK4281) Link Warning Against Missed ASLR Optimization In-Reply-To: <2994afcc-cf1f-88e0-025d-52adc7ea41ba@oracle.com> References: <9d08bbb6-ee4e-5092-9ffa-1c9cf2ebcd1e@oracle.com> <2994afcc-cf1f-88e0-025d-52adc7ea41ba@oracle.com> Message-ID: Thanks Erik! Lois On 2/23/2018 4:08 PM, Erik Joelsson wrote: > Looks good. > > /Erik > > > On 2018-02-23 12:11, Lois Foltan wrote: >> Please review this fix to ignore linker warning (LNK4281). This is a >> new linker warning generated by VS2017 v15.5 to "to point out any >> 64-bit image specified to link with a lower than 4GB base address >> doesn't get best ASLR optimization". The Hotspot jvm.dll is >> specifically linked with -base:0x8000000. As recommended by >> https://developercommunity.visualstudio.com/content/problem/160970/upgrading-from-154-to-155-throw-lnk4281-warning.html, >> this linker warning can be suppressed with -ignore.
>> >> open webrev at >> http://cr.openjdk.java.net/~lfoltan/bug_jdk8198640/webrev/ >> bug link https://bugs.openjdk.java.net/browse/JDK-8198640 >> >> Testing (hs-tier1-3 & jdk-tier1-3) in progress >> >> Thanks, >> Lois >> >> > From kim.barrett at oracle.com Sun Feb 25 22:37:40 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sun, 25 Feb 2018 17:37:40 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: > On Feb 22, 2018, at 8:29 AM, Thomas Stüfe wrote: > Just to voice my preference on this, I am okay with either version (c99 and returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. > > @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. > > Best Regards, Thomas Based on discussion, I've changed the new os::vsnprintf and os::snprintf to conform to C99. For POSIX platforms, this just calls ::vsnprintf. For Windows, conditionalized to call ::vsnprintf for VS2015 and later; earlier versions emulate that behavior using _vsnprintf and _vscprintf. Improved new gtest-based tests, so we should quickly find out if some platform doesn't behave as expected. I've also removed the now redundant os::log_vsnprintf, and changed callers to use os::vsnprintf. Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the earlier change. An interesting point, which I'm not intending to address with this change, is that vsnprintf only returns a negative value to indicate an encoding error. I think encoding errors can only arise when dealing with wide characters or strings, e.g. when processing a %lc or %ls directive.
I don't think HotSpot code would ever use either of those, though perhaps a call to jio_vsnprintf from outside HotSpot could. Maybe the function we want for internal HotSpot use should return unsigned (and internally error on an encoding error), as that might simplify usage. Updated webrevs: full: http://cr.openjdk.java.net/~kbarrett/8196882/open.02/ incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.02.inc/ (Ignore open.01*; open.02.inc is a delta from open.00.) From kim.barrett at oracle.com Sun Feb 25 22:42:06 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sun, 25 Feb 2018 17:42:06 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: > On Feb 25, 2018, at 5:37 PM, Kim Barrett wrote: > Based on discussion, I've changed the new os::vsnprintf and > os::snprintf to conform to C99. [...] > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > earlier change. Just to be clear, the jio_vsnprintf behavior has not been changed. It's just been reimplemented in terms of os::vsnprintf rather than directly using ::vsnprintf and trying to account for its platform variations. From gnu.andrew at redhat.com Mon Feb 26 06:01:49 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Mon, 26 Feb 2018 06:01:49 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> Message-ID: On 23 February 2018 at 06:17, David Holmes wrote: ... > > Our internal policy is that any change to a file requires we update the > copyright year.
> > If you refactor code you move it from one file to another but that still > requires a copyright update. > > Cheers, > David > > And it has been updated; to the year the changes were made. To change it to the current year would be a lie as that's not when these changes were made or published. -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From david.holmes at oracle.com Mon Feb 26 06:14:44 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 26 Feb 2018 16:14:44 +1000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> Message-ID: <83890746-491e-4692-4a26-7e831825de65@oracle.com> On 26/02/2018 4:01 PM, Andrew Hughes wrote: > On 23 February 2018 at 06:17, David Holmes wrote: > > ... > >> >> Our internal policy is that any change to a file requires we update the >> copyright year. >> >> If you refactor code you move it from one file to another but that still >> requires a copyright update. >> >> Cheers, >> David >> >> > > And it has been updated; to the year the changes were made. That's not the rule we (Oracle) have. > To change it to the current year would be a lie as that's not when > these changes were made or published. It's when they were made/published in _this_ file and the copyright is applied to the file. Always frustrating that what should be a simple set of rules easily expressed and clearly written down, never are because they are the domain of the lawyers. 
:( David From stefan.johansson at oracle.com Mon Feb 26 10:54:20 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 26 Feb 2018 11:54:20 +0100 Subject: RFR: 8198432: Remove Thread extension point Message-ID: Hi, Please review this small change to remove a now unused extension point. Links JBS: https://bugs.openjdk.java.net/browse/JDK-8198432 Webrev: http://cr.openjdk.java.net/~sjohanss/8198432/00/ Summary The Thread class extension support made it possible to add extra data to it. This is no longer needed since the internal code using it has been removed. Testing Built locally and through Mach5. Thanks, Stefan From stefan.johansson at oracle.com Mon Feb 26 11:05:11 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 26 Feb 2018 12:05:11 +0100 Subject: RFR: 8198433: Remove WhiteBox extension point Message-ID: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> Hi, Please review this small change to remove a now unused extension point. Links JBS: https://bugs.openjdk.java.net/browse/JDK-8198433 Webrev: http://cr.openjdk.java.net/~sjohanss/8198433/00/ Summary The WhiteBox testing API extension support made it possible to add specialized native code for some test methods. This is no longer needed since the internal code using it has been removed. Testing Built locally and through Mach5. Thanks, Stefan From erik.helin at oracle.com Mon Feb 26 11:13:02 2018 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 26 Feb 2018 12:13:02 +0100 Subject: RFR: 8198432: Remove Thread extension point In-Reply-To: References: Message-ID: <222d563c-a5f6-3f69-6e9d-fa2942f66dd1@oracle.com> On 02/26/2018 11:54 AM, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension point. > > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198432 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198432/00/ Looks good, Reviewed. 
Thanks, Erik > Summary > The Thread class extension support made it possible to add extra data to > it. This is no longer needed since the internal code using it has been > removed. > > Testing > Built locally and through Mach5. > > Thanks, > Stefan From erik.helin at oracle.com Mon Feb 26 11:13:43 2018 From: erik.helin at oracle.com (Erik Helin) Date: Mon, 26 Feb 2018 12:13:43 +0100 Subject: RFR: 8198433: Remove WhiteBox extension point In-Reply-To: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> References: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> Message-ID: On 02/26/2018 12:05 PM, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension point. > > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198433 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198433/00/ Looks good, Reviewed. Thanks, Erik > Summary > The WhiteBox testing API extension support made it possible to add > specialized native code for some test methods. This is no longer needed > since the internal code using it has been removed. > > Testing > Built locally and through Mach5. > > Thanks, > Stefan From david.holmes at oracle.com Mon Feb 26 11:59:14 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 26 Feb 2018 21:59:14 +1000 Subject: RFR: 8198432: Remove Thread extension point In-Reply-To: References: Message-ID: <3d720fb5-1d02-e5a7-8a51-e1ddd357aa3c@oracle.com> Hi Stefan, On 26/02/2018 8:54 PM, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension point. Unused by Oracle. While I'm glad to see this gone we need to wait and see if anyone else had utilized these "extension" points. Thanks, David > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198432 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198432/00/ > > Summary > The Thread class extension support made it possible to add extra data to > it. 
This is no longer needed since the internal code using it has been > removed. > > Testing > Built locally and through Mach5. > > Thanks, > Stefan From thomas.schatzl at oracle.com Mon Feb 26 11:59:32 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 26 Feb 2018 12:59:32 +0100 Subject: RFR: 8198432: Remove Thread extension point In-Reply-To: References: Message-ID: <1519646372.2157.3.camel@oracle.com> Hi, On Mon, 2018-02-26 at 11:54 +0100, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension > point. > > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198432 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198432/00/ > > Summary > The Thread class extension support made it possible to add extra data > to it. This is no longer needed since the internal code using it has > been removed. > looks good. Thomas From thomas.schatzl at oracle.com Mon Feb 26 11:58:45 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 26 Feb 2018 12:58:45 +0100 Subject: RFR: 8198433: Remove WhiteBox extension point In-Reply-To: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> References: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> Message-ID: <1519646325.2157.2.camel@oracle.com> Hi, On Mon, 2018-02-26 at 12:05 +0100, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension > point. > > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198433 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198433/00/ > > Summary > The WhiteBox testing API extension support made it possible to add > specialized native code for some test methods. This is no longer > needed > since the internal code using it has been removed. is good. 
Thomas From david.holmes at oracle.com Mon Feb 26 12:09:52 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 26 Feb 2018 22:09:52 +1000 Subject: RFR: 8198433: Remove WhiteBox extension point In-Reply-To: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> References: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> Message-ID: <98b347b2-d946-7fd5-034d-cc40fc4a696f@oracle.com> On 26/02/2018 9:05 PM, Stefan Johansson wrote: > Hi, > > Please review this small change to remove a now unused extension point. Didn't even realize this one existed! Glad to see it gone. Probably less chance of this being used than the thread extension. Thanks, David > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8198433 > Webrev: http://cr.openjdk.java.net/~sjohanss/8198433/00/ > > Summary > The WhiteBox testing API extension support made it possible to add > specialized native code for some test methods. This is no longer needed > since the internal code using it has been removed. > > Testing > Built locally and through Mach5. > > Thanks, > Stefan From stefan.johansson at oracle.com Mon Feb 26 13:21:41 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 26 Feb 2018 14:21:41 +0100 Subject: RFR: 8198432: Remove Thread extension point In-Reply-To: <3d720fb5-1d02-e5a7-8a51-e1ddd357aa3c@oracle.com> References: <3d720fb5-1d02-e5a7-8a51-e1ddd357aa3c@oracle.com> Message-ID: <7d0f3a77-3c36-3ce4-530f-36659ba77339@oracle.com> On 2018-02-26 12:59, David Holmes wrote: > Hi Stefan, > > On 26/02/2018 8:54 PM, Stefan Johansson wrote: >> Hi, >> >> Please review this small change to remove a now unused extension point. > > Unused by Oracle. While I'm glad to see this gone we need to wait and > see if anyone else had utilized these "extension" points. > True and I agree. I now have reviews for all these extension point removals, but to let others react I plan to wait a bit longer than the usual 24h before pushing them out. 
Thanks, Stefan > Thanks, > David > >> Links >> JBS: https://bugs.openjdk.java.net/browse/JDK-8198432 >> Webrev: http://cr.openjdk.java.net/~sjohanss/8198432/00/ >> >> Summary >> The Thread class extension support made it possible to add extra data >> to it. This is no longer needed since the internal code using it has >> been removed. >> >> Testing >> Built locally and through Mach5. >> >> Thanks, >> Stefan From stefan.johansson at oracle.com Mon Feb 26 13:22:41 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 26 Feb 2018 14:22:41 +0100 Subject: RFR: 8198433: Remove WhiteBox extension point In-Reply-To: <98b347b2-d946-7fd5-034d-cc40fc4a696f@oracle.com> References: <8a324b2b-7800-3174-7158-1b12fec290f2@oracle.com> <98b347b2-d946-7fd5-034d-cc40fc4a696f@oracle.com> Message-ID: <12a7f26e-1586-a0b2-f621-c9312e9fc50f@oracle.com> Thanks for reviewing Thomas, Erik and David. Cheers, Stefan On 2018-02-26 13:09, David Holmes wrote: > On 26/02/2018 9:05 PM, Stefan Johansson wrote: >> Hi, >> >> Please review this small change to remove a now unused extension point. > > Didn't even realize this one existed! Glad to see it gone. > > Probably less chance of this being used than the thread extension. > > Thanks, > David > >> Links >> JBS: https://bugs.openjdk.java.net/browse/JDK-8198433 >> Webrev: http://cr.openjdk.java.net/~sjohanss/8198433/00/ >> >> Summary >> The WhiteBox testing API extension support made it possible to add >> specialized native code for some test methods. This is no longer >> needed since the internal code using it has been removed. >> >> Testing >> Built locally and through Mach5. 
>> >> Thanks, >> Stefan From erik.osterlund at oracle.com Mon Feb 26 13:32:52 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Feb 2018 14:32:52 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type Message-ID: <5A940C84.7040508@oracle.com> Hi, Making oop sometimes map to class types and sometimes to primitives comes with some unfortunate problems. Advantages of making them always have their own type include: 1) Not getting compilation errors in configuration X but not Y 2) Making it easier to adopt existing code to use Shenandoah equals barriers 3) Recognizing oops and narrowOops safely in templates Therefore, I would like to make both oop and narrowOop always map to a class type consistently. Webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8198561 Thanks, /Erik From erik.osterlund at oracle.com Mon Feb 26 13:38:40 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Feb 2018 14:38:40 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet Message-ID: <5A940DE0.7040108@oracle.com> Hi, G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set that is incomplete and you can't use, and a concrete G1SATBCardTableLoggingModRefBS barrier set that is the one actually used all over the place. The inheritance makes this code more difficult to understand than it needs to be. There should really not be an abstract G1 barrier set that is not used - it serves no purpose. There should be a single G1BarrierSet instead reflecting the actual G1 barriers used.
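[Archive editor's note: the "own class type" idea in the oop/narrowOop RFR above can be illustrated with a small, self-contained C++ sketch. This is hypothetical illustration code, not HotSpot's actual oopsHierarchy.hpp; the names is_oop_type and the members shown here are invented for the example.]

```cpp
#include <cstdint>
#include <type_traits>

// Sketch: wrapping the raw value in a thin class gives "oop" a distinct type
// in *every* build configuration, so accidental integer conversions fail to
// compile and templates can recognize oop types reliably.
class oopDesc;               // opaque object layout, declaration only

class oop {
  oopDesc* _o;
 public:
  oop() : _o(nullptr) {}
  explicit oop(oopDesc* o) : _o(o) {}
  oopDesc* obj() const { return _o; }
  bool operator==(const oop& other) const { return _o == other._o; }
  bool operator!=(const oop& other) const { return _o != other._o; }
};

class narrowOop {
  uint32_t _v;               // compressed form of an oop
 public:
  explicit narrowOop(uint32_t v) : _v(v) {}
  uint32_t value() const { return _v; }
};

// Templates can now dispatch on oop-ness, which a raw-pointer typedef for
// oop in some configurations would not allow:
template <typename T> struct is_oop_type : std::false_type {};
template <> struct is_oop_type<oop> : std::true_type {};
template <> struct is_oop_type<narrowOop> : std::true_type {};
```

With a plain `typedef oopDesc* oop`, the specialization above would be ambiguous with ordinary pointers, which is one of the configuration-dependent compile problems the RFR mentions.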
Webrev: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8195148 Thanks, /Erik From goetz.lindenmaier at sap.com Mon Feb 26 14:07:26 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 26 Feb 2018 14:07:26 +0000 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <358f1734-6288-e482-07a5-b048599c73e2@oracle.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <7c032434-d10a-3e39-2e89-1ea698b4563e@oracle.com> <358f1734-6288-e482-07a5-b048599c73e2@oracle.com> Message-ID: <1d42efa3b84b4c468d9b1c3759c2a923@sap.com> Hi Erik, your change seems to break the build without precompiled headers: opto/graphKit.hpp:760:32: error: CardTableModRefBS was not declared in this scope && barrier_set_cast(bs)->can_elide_tlab_store_barriers() This fixes it: --- a/src/hotspot/share/opto/graphKit.hpp Mon Feb 26 14:36:14 2018 +0100 +++ b/src/hotspot/share/opto/graphKit.hpp Mon Feb 26 15:06:23 2018 +0100 @@ -27,6 +27,7 @@ #include "ci/ciEnv.hpp" #include "ci/ciMethodData.hpp" +#include "gc/shared/cardTableModRefBS.hpp" #include "opto/addnode.hpp" #include "opto/callnode.hpp" #include "opto/cfgnode.hpp" Best regards, Goetz. > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf > Of Erik Österlund > Sent: Friday, 23 February 2018 11:22 > To: Erik Helin ; hotspot-dev developers dev at openjdk.java.net> > Subject: Re: RFR(L): 8195142: Refactor out card table from CardTableModRefBS > to flatten the BarrierSet hierarchy > > Hi Erik, > > Thank you for the review. I will apply your proposed tweaks before pushing. > > Thanks, > /Erik > > On 2018-02-23 11:15, Erik Helin wrote: > > > > > > On 02/21/2018 12:33 PM, Erik Österlund wrote: > >> Hi Erik, > >> > >> Thank you for reviewing this.
> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >> >> New incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ > > The changes look good, just a few very minor nits: > > - g1CollectedHeap.hpp: > please make the method card_table in G1CollectedHeap const, as in: > G1CardTable* card_table() const { > return _card_table; > } > > - g1CollectedHeap.cpp: > when you are changing methods in G1CollectedHeap, and have access to a > private field, please use the field instead of the getter. For > example: > > + _card_table->initialize(cardtable_storage); > > instead of: > > + card_table()->initialize(cardtable_storage); > > - stubGenerator_ppc.cpp > maybe add a space before the const qualifier? > > + CardTableModRefBS*const ctbs = > + CardTable*const ct = > > That is, change the above to: > > + CardTableModRefBS* const ctbs = > + CardTable* const ct = > > This is just my personal preference, but the code gets a bit dense > otherwise IMHO :) > > I don't need to see a new webrev for the above changes, just please do > these changes before you push. I also had a look at the patch after > the comments from Coleen and Vladimir, and it looks good. Reviewed > from my part. > > Thanks, > Erik > >> On 2018-02-21 09:18, Erik Helin wrote: > >>> Hi Erik, > >>> > >>> this is a very nice improvement, thanks for working on this! > >>> > >>> A few minor comments thus far: > >>> - in stubGenerator_ppc.cpp: > >>> you seem to have lost a `const` in the refactoring > >> > >> Fixed. > >> > >>> - in psCardTable.hpp: > >>> I don't think card_mark_must_follow_store() is needed, since > >>> PSCardTable passes `false` for `conc_scan` to the CardTable > >>> constructor > >> > >> Fixed.
I took the liberty of also making the condition for >> card_mark_must_follow_store() more precise on CMS by making the >> condition for scanned_concurrently consider whether >> CMSPrecleaningEnabled is set or not (like other generated code does). >> >>> - in g1CollectedHeap.hpp: >>> could you store the G1CardTable as a field in G1CollectedHeap? Also, >>> could you name the "getter" just card_table()? (I see that >>> g1_hot_card_cache method above, but that one should also be >>> renamed to >>> just hot_card_cache, but in another patch) >> >> Fixed. >> >>> - in cardTable.hpp and cardTable.cpp: >>> could you use `hg cp` when constructing these files from >>> cardTableModRefBS.{hpp,cpp} so the history is preserved? >> >> Yes, I will do this before pushing to make sure the history is >> preserved. >> >> Thanks, >> /Erik >> >>> >>> Thanks, >>> Erik >>> >>> On 02/15/2018 10:31 AM, Erik Österlund wrote: >>>> Hi, >>>> >>>> Here is an updated revision of this webrev after internal feedback >>>> from StefanK who helped looking through my changes - thanks a lot >>>> for the help with that. >>>> >>>> The changes to the new revision are a bunch of minor clean up >>>> changes, e.g. copyright headers, indentation issues, sorting >>>> includes, adding/removing newlines, reverting an assert error >>>> message, fixing constructor initialization orders, and things like >>>> that. >>>> >>>> The problem I mentioned last time about the version number of our >>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>> has been resolved by simply waiting. So now I changed the JVMCI >>>> logic to get the card values from the new location in the >>>> corresponding card tables when observing JDK version 11 or above.
>>>> New full webrev (rebased onto a month fresher jdk-hs): > >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ > >>>> > >>>> Incremental webrev (over the rebase): > >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ > >>>> > >>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without > >>>> any issues. > >>>> > >>>> Thanks, > >>>> /Erik > >>>> > >>>> On 2018-01-17 13:54, Erik Österlund wrote: > >>>>> Hi, > >>>>> > >>>>> Today, both Parallel, CMS and Serial share the same code for its > >>>>> card marking barrier. However, they have different requirements > >>>>> how to manage its card tables by the GC. And as the card table > >>>>> itself is embedded as a part of the CardTableModRefBS barrier set, > >>>>> this has led to an unnecessary inheritance hierarchy for > >>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS and > >>>>> CardTableExtension are CardTableModRefBS subclasses that do not > >>>>> change anything to do with the barriers. > >>>>> > >>>>> To clean up the code, there should really be a separate CardTable > >>>>> hierarchy that contains the differences how to manage the card > >>>>> table from the GC point of view, and simply let CardTableModRefBS > >>>>> have a CardTable. This would allow removing > >>>>> CardTableModRefBSForCTRS and CardTableExtension and their > >>>>> references from shared code (that really have nothing to do with > >>>>> the barriers, despite being barrier sets), and significantly > >>>>> simplify the barrier set code.
> >>>>> > This touches a lot of platform specific code, so would be > >>>>> fantastic if port maintainers could have a look that I have not > >>>>> broken anything. > >>>>> > >>>>> There is a slight problem that should be pointed out. There is an > >>>>> unfortunate interaction between Graal and hotspot. Graal needs to > >>>>> know the values of g1 young cards and dirty cards. This is queried > >>>>> in different ways in different versions of the JDK in the > >>>>> GraalHotSpotVMConfig.java file. Now these values will move from > >>>>> their barrier set class to their card table class. That means we > >>>>> have at least three cases how to find the correct values. There is > >>>>> one for JDK8, one for JDK9, and now a new one for JDK11. Except, > >>>>> we have not yet bumped the version number to 11 in the repo, and > >>>>> therefore it has to be from JDK10 - 11 for now and updated after > >>>>> incrementing the version number. But that means that it will be > >>>>> temporarily incompatible with JDK10. That is okay for our own copy > >>>>> of Graal, but can not be used by upstream Graal as they are given > >>>>> the choice whether to support the public JDK10 or the JDK11 that > >>>>> does not quite admit to being 11 yet. I chose the solution that > >>>>> works in our repository. I will notify Graal folks of this issue. > >>>>> In the long run, it would be nice if we could have a more solid > >>>>> interface here. > >>>>> > >>>>> However, as an added benefit, this changeset brings about a > >>>>> hundred copyright headers up to date, so others do not have to > >>>>> update them for a while. > >>>>> > >>>>> Bug: > >>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 > >>>>> > >>>>> Webrev: > >>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ > >>>>> > >>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. 
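The shape of the refactoring under review - pulling the card table out of the barrier set inheritance chain and making the barrier set hold one - can be sketched roughly as follows. The class names follow the patch, but the members, values and bodies shown here are simplified assumptions for illustration, not the actual HotSpot code:

```cpp
#include <cstdint>

// Sketch only: a standalone CardTable hierarchy carrying the GC-specific
// card-table policy, instead of baking it into barrier set subclasses.
class CardTable {
public:
  typedef uint8_t CardValue;
  virtual ~CardTable() {}
  // GC-specific policy lives here now, not in the barrier set.
  virtual bool scanned_concurrently() const { return false; }
};

class PSCardTable : public CardTable {};   // Parallel
class CardTableRS : public CardTable {};   // Serial / CMS
class G1CardTable : public CardTable {
public:
  virtual bool scanned_concurrently() const { return true; }
};

// The barrier set no longer *is* a card table; it *has* one. This is what
// makes CardTableModRefBSForCTRS / CardTableExtension unnecessary.
class CardTableModRefBS {
  CardTable* _card_table;
public:
  explicit CardTableModRefBS(CardTable* ct) : _card_table(ct) {}
  CardTable* card_table() const { return _card_table; }
};
```

With this composition-over-inheritance shape, shared code holds a single barrier set type and delegates all card-table management questions to the contained CardTable subclass.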
> >>>>> > >>>>> Thanks, > >>>>> /Erik > >>>> > >> From thomas.stuefe at gmail.com Mon Feb 26 14:20:58 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Feb 2018 15:20:58 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) Message-ID: Hi all, I know this patch is a bit larger, but may I please have reviews and/or other input? Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 Latest version: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ For those who followed the mail thread, this is the incremental diff to the last changes (included feedback Goetz gave me on- and off-list): http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ Thank you! Kind Regards, Thomas Stuefe On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe wrote: > Hi, > > We would like to contribute a patch developed at SAP which has been live > in our VM for some time. It improves the metaspace chunk allocation: > reduces fragmentation and raises the chance of reusing free metaspace > chunks. > > The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc > ation/2018-02-05--2/webrev/ > > In very short, this patch helps with a number of pathological cases where > metaspace chunks are free but cannot be reused because they are of the > wrong size. For example, the metaspace freelist could be full of small > chunks, which would not be reusable if we need larger chunks. So, we could > get metaspace OOMs even in situations where the metaspace was far from > exhausted. Our patch adds the ability to split and merge metaspace chunks > dynamically and thus remove the "size-lock-in" problem. > > Note that there have been other attempts to get a grip on this problem, > see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably > our patch attempts a more complete solution. 
> > In 2016 I discussed the idea for this patch with some folks off-list, > among them Jon Matsimutso. He advised me to create a JEP. So I did: > [1]. However, meanwhile changes to the JEP process were discussed [2], and > I am not sure anymore this patch even needs a JEP. It may be > moderately complex and hence carries the risk inherent in any patch, but > its effects would not be externally visible (if you discount seeing fewer > metaspace OOMs). So, I'd prefer to handle this as a simple RFE. > > -- > > How this patch works: > > 1) When a class loader dies, its metaspace chunks are freed and returned > to the freelist for reuse by the next class loader. With the patch, upon > returning a chunk to the freelist, an attempt is made to merge it with its > neighboring chunks - should they happen to be free too - to form a larger > chunk, which is then placed in the free list. > > As a result, the freelist should be populated by larger chunks at the > expense of smaller chunks. In other words, all free chunks should always be > as "coalesced as possible". > > 2) When a class loader needs a new chunk and a chunk of the requested size > cannot be found in the free list, before carving out a new chunk from the > virtual space, we first check if there is a larger chunk in the free list. > If there is, that larger chunk is chopped up into n smaller chunks. One of > them is returned to the caller, the others are re-added to the freelist. > > (1) and (2) together have the effect of removing the size-lock-in for > chunks. If fragmentation allows it, small chunks are dynamically combined > to form larger chunks, and larger chunks are split on demand. > > -- > > What this patch does not do: > > This is not a rewrite of the chunk allocator - most of the mechanisms stay > intact. Specifically, chunk sizes remain unchanged, and so do the chunk > allocation policies (when which class loaders get handed which chunk > sizes). 
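Mechanisms (1) and (2) can be modeled in miniature. This is a deliberately toy model of the free list - real chunks are contiguous memory with headers, and merging additionally needs the neighbor lookups described later in the mail - but it shows the split-on-demand bookkeeping:

```cpp
#include <cstddef>
#include <map>

// Hypothetical model: the free list as "number of free chunks per size".
// Sizes are in words; a larger size is a clean multiple of a smaller one.
typedef std::map<size_t, int> FreeList;

// (2) Split on demand: if no chunk of `wanted` words is free but a larger
// one is, take the larger chunk, hand one wanted-sized piece to the
// caller, and return the remaining pieces to the free list.
// Returns true if a chunk could be handed out.
bool get_chunk(FreeList& fl, size_t wanted) {
  if (fl[wanted] > 0) {            // exact fit available
    fl[wanted]--;
    return true;
  }
  for (FreeList::iterator it = fl.upper_bound(wanted); it != fl.end(); ++it) {
    if (it->second > 0 && it->first % wanted == 0) {
      it->second--;                               // consume one larger chunk
      fl[wanted] += (int)(it->first / wanted) - 1; // n - 1 pieces stay free
      return true;                                // one piece goes to caller
    }
  }
  return false;                    // would have to carve from virtual space
}
```

For example, with one free 4096-word chunk, a request for 256 words splits it into 16 pieces: one is handed out and 15 land back on the free list - the inverse of the merge-on-free step in (1).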
Almost everything this patch does affects only the internal workings of > the ChunkManager. > > Also note that I refrained from doing any cleanups, since I wanted > reviewers to be able to gauge this patch without filtering noise. > Unfortunately this patch adds some complexity. But there are many future > opportunities for code cleanup and simplification, some of which we already > discussed in existing RFEs ([3], [4]). All of them are out of scope for > this particular patch. > > -- > > Details: > > Before the patch, the following rules held: > - All chunk sizes are multiples of the smallest chunk size ("specialized > chunks") > - All chunk sizes of larger chunks are also clean multiples of the next > smaller chunk size (e.g. for class space, the ratio of > specialized/small/medium chunks is 1:2:32) > - All chunk start addresses are aligned to the smallest chunk size (more > or less accidentally, see metaspace_reserve_alignment). > The patch makes the last rule explicit and more strict: > - All (non-humongous) chunk start addresses are now aligned to their own > chunk size. So, e.g. medium chunks are allocated at addresses which are a > multiple of the medium chunk size. This rule is not extended to humongous > chunks, whose start addresses continue to be aligned to the smallest chunk > size. > > The reason for this new alignment rule is that it makes it cheap both to > find the predecessors of a chunk and to check which chunks are free. > > When a class loader dies and its chunk is returned to the freelist, all we > have is its address. In order to merge it with its neighbors to form a > larger chunk, we need to find those neighbors, including those preceding > the returned chunk. Prior to this patch that was not easy - one would have > to iterate chunks starting at the beginning of the VirtualSpaceNode. But > due to the new alignment rule, we now know where the prospective larger > chunk must start - at the next lower larger-chunk-size-aligned boundary. 
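Assuming the chunk sizes involved are powers of two (true of the class-space sizes quoted in this thread), that "next lower larger-chunk-size-aligned boundary" is a single mask operation rather than a walk from the start of the VirtualSpaceNode. A hypothetical sketch, with illustrative sizes:

```cpp
#include <cstddef>
#include <cstdint>

// Round an address down to the given power-of-two alignment.
inline uintptr_t align_down(uintptr_t addr, size_t alignment) {
  // assumes alignment is a power of two
  return addr & ~(uintptr_t)(alignment - 1);
}

// Given the address of a just-returned smaller chunk, where must the
// enclosing merge candidate of `larger_chunk_bytes` start? Thanks to the
// new rule (chunks aligned to their own size), this is the answer.
inline uintptr_t merge_candidate_start(uintptr_t chunk_addr,
                                       size_t larger_chunk_bytes) {
  return align_down(chunk_addr, larger_chunk_bytes);
}
```

So a chunk returned at, say, 0x10A00 inside a hypothetical 0x1000-byte merge granule implies the candidate larger chunk must start at 0x10000, and only the chunks between those two boundaries need to be examined.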
We > also know that currently a smaller chunk must start there (*). > > In order to check the free-ness of chunks quickly, each VirtualSpaceNode > now keeps a bitmap which describes its occupancy. One bit in this bitmap > corresponds to a range the size of the smallest chunk size and starting at > an address aligned to the smallest chunk size. Because of the alignment > rules above, such a range belongs to one single chunk. The bit is 1 if the > associated chunk is in use by a class loader, 0 if it is free. > > When we have calculated the address range a prospective larger chunk would > span, we now need to check if all chunks in that range are free. Only then > we can merge them. We do that by querying the bitmap. Note that the most > common use case here is forming medium chunks from smaller chunks. With the > new alignment rules, the bitmap portion covering a medium chunk now always > happens to be 16- or 32bit in size and is 16- or 32bit aligned, so reading > the bitmap in many cases becomes a simple 16- or 32bit load. > > If the range is free, only then we need to iterate the chunks in that > range: pull them from the freelist, combine them to one new larger chunk, > re-add that one to the freelist. > > (*) Humongous chunks make this a bit more complicated. Since the new > alignment rule does not extend to them, a humongous chunk could still > straddle the lower or upper boundary of the prospective larger chunk. So I > gave the occupancy map a second layer, which is used to mark the start of > chunks. > An alternative approach could have been to make humongous chunks size and > start address always a multiple of the largest non-humongous chunk size > (medium chunks). That would have caused a bit of waste per humongous chunk > (<64K) in exchange for simpler coding and a simpler occupancy map. > > -- > > The patch shows its best results in scenarios where a lot of smallish > class loaders are alive simultaneously. 
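The occupancy map described above can be approximated with a simple bit-per-slot structure. This sketch is an assumption-laden simplification: the real OccupancyMap operates on raw bit words (which is what makes the aligned 16/32-bit load possible) and carries the second layer marking chunk starts for the humongous case.

```cpp
#include <cstddef>
#include <vector>

// Toy occupancy map: one bit per smallest-chunk-sized slot of the
// VirtualSpaceNode. 1 = slot in use by a class loader, 0 = free.
class ToyOccupancyMap {
  std::vector<bool> _bits;
  size_t _slot_words;  // smallest chunk size, in words
public:
  ToyOccupancyMap(size_t num_slots, size_t slot_words)
    : _bits(num_slots, false), _slot_words(slot_words) {}

  // Mark a word range (offset/size multiples of the slot size) in use/free.
  void set_in_use(size_t word_offset, size_t word_size, bool in_use) {
    for (size_t i = word_offset / _slot_words;
         i < (word_offset + word_size) / _slot_words; i++) {
      _bits[i] = in_use;
    }
  }

  // Free-ness check for a prospective merged chunk: merging is allowed
  // only if every bit in the candidate range is clear.
  bool range_is_free(size_t word_offset, size_t word_size) const {
    for (size_t i = word_offset / _slot_words;
         i < (word_offset + word_size) / _slot_words; i++) {
      if (_bits[i]) return false;
    }
    return true;
  }
};
```

Because of the alignment rules, the range queried for a medium-chunk merge always maps to a small, aligned group of bits, which is why the real implementation can often answer with one 16- or 32-bit load.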
When dying, they leave continuous > expanses of metaspace covered in small chunks, which can be merged nicely. > However, if class loader lifetimes vary more, we have more interleaving of > dead and alive small chunks, and hence chunk merging does not work as well > as it could. > > For an example of a pathological case like this, see the example program: [5] > > Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 > test3.Example2", the test will load 3000 small classes in separate class > loaders, then throw them away and start loading large classes. The small > classes will have flooded the metaspace with small chunks, which are > unusable for the large classes. When executing with the rather limited > CompressedClassSpaceSize=10M, we will run into an OOM after loading about > 800 large classes, having used only 40% of the class space; the rest is > wasted on unused small chunks. However, with our patch the example program > will manage to allocate ~2900 large classes before running into an OOM, and > class space will show almost no waste. > > To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into > an OOM, statistics and an ASCII representation of the class space will be > shown. The unpatched version will show large expanses of unused small > chunks; the patched variant will show almost no waste. > > Note that the patch could be made more effective with a different size > ratio between small and medium chunks: in class space, that ratio is 1:16, > so 16 small chunks must happen to be free to form one larger chunk. With a > smaller ratio the chance of coalescing would be larger. So there may be > room for future improvement here: since we can now merge and split chunks > on demand, we could introduce more chunk sizes. 
Potentially arriving at a > buddy-ish allocator style where we drop hard-wired chunk sizes for a > dynamic model where the ratio between chunk sizes is always 1:2 and we > could in theory have no limit to the chunk size? But this is just a thought > and well out of the scope of this patch. > > -- > > What does this patch cost (memory): > > - the occupancy bitmap adds 1 byte per 4K of metaspace. > - MetaChunk headers get larger, since we add an enum and two bools to it. > Depending on what the C++ compiler does with that, chunk headers grow by > one or two MetaWords, reducing the payload size by that amount. > - The new alignment rules mean we may need to create padding chunks to > precede larger chunks. But since these padding chunks are added to the > freelist, they should be used up before the need for new padding chunks > arises. So, the maximum possible number of unused padding chunks should > be limited by design to about 64K. > > The expectation is that the memory savings of this patch far outweigh its > added memory costs. > > .. (performance): > > We did not see measurable drops in standard benchmarks rising above the > normal noise. I also measured times for a program which stresses metaspace > chunk coalescing, with the same result. > > I am open to suggestions on what else I should measure, and/or independent > measurements. > > -- > > Other details: > > I removed SpaceManager::get_small_chunk_and_allocate() to reduce > complexity somewhat, because it was made mostly obsolete by this patch: > since small chunks are combined into larger chunks upon return to the > freelist, in theory we should not have that many free small chunks anymore > anyway. However, there may still be cases where we could benefit from this > workaround, so I am asking your opinion on this one. 
> > About tests: There were two native tests - ChunkManagerReturnTest and > TestVirtualSpaceNode (the former was added by me last year) - which did not > make much sense anymore, since they relied heavily on internal behavior > which was made unpredictable by this patch. > To make up for these lost tests, I added a new gtest which attempts to > stress the many combinations of allocation patterns but does so from a layer > above the old tests. It now uses Metaspace::allocate() and friends. By > using that point as the entry for tests, I am less dependent on implementation > internals and still cover a lot of scenarios. > > -- > > Review pointers: > > Good places to start are > - ChunkManager::return_single_chunk() - specifically, > ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks > upon return to the free list > - ChunkManager::free_chunks_get(): here we now split large chunks into > smaller chunks on demand > - VirtualSpaceNode::take_from_committed(): chunks are allocated > according to the alignment rules now, and padding chunks are handled > - The OccupancyMap class is the helper class implementing the new > occupancy bitmap > > The rest is mostly chaff: helper functions, added tests and verifications. 
> > -- > > Thanks and Best Regards, Thomas > > [1] https://bugs.openjdk.java.net/browse/JDK-8166690 > [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November > /000128.html > [3] https://bugs.openjdk.java.net/browse/JDK-8185034 > [4] https://bugs.openjdk.java.net/browse/JDK-8176808 > [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip > > > From erik.osterlund at oracle.com Mon Feb 26 15:54:36 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Feb 2018 16:54:36 +0100 Subject: RFR(L): 8195142: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy In-Reply-To: <1d42efa3b84b4c468d9b1c3759c2a923@sap.com> References: <5A5F476E.6080000@oracle.com> <5A855376.5090203@oracle.com> <79f7fbf8-b0cc-1999-45a9-aec8f8879f77@oracle.com> <5A8D58FC.10603@oracle.com> <7c032434-d10a-3e39-2e89-1ea698b4563e@oracle.com> <358f1734-6288-e482-07a5-b048599c73e2@oracle.com> <1d42efa3b84b4c468d9b1c3759c2a923@sap.com> Message-ID: <5A942DBC.7020003@oracle.com> Hi Goetz, Thank you for letting me know. This builds with and without precompiled headers in our builds and I did not make any changes to graphKit.hpp and the affected code seemingly has nothing to do with the new CardTable class. Yet it does indeed not seem right and looks like a pre-existing include dependency problem. I will file a bug and fix this. 
Thanks, /Erik On 2018-02-26 15:07, Lindenmaier, Goetz wrote: > Hi Eric, > > your change seems to break the build without precompiled headers: > opto/graphKit.hpp:760:32: error: CardTableModRefBS was not declared in this scope > && barrier_set_cast(bs)->can_elide_tlab_store_barriers() > > This fixes it: > > --- a/src/hotspot/share/opto/graphKit.hpp Mon Feb 26 14:36:14 2018 +0100 > +++ b/src/hotspot/share/opto/graphKit.hpp Mon Feb 26 15:06:23 2018 +0100 > @@ -27,6 +27,7 @@ > > #include "ci/ciEnv.hpp" > #include "ci/ciMethodData.hpp" > +#include "gc/shared/cardTableModRefBS.hpp" > #include "opto/addnode.hpp" > #include "opto/callnode.hpp" > #include "opto/cfgnode.hpp" > > Best regards, > Goetz. > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf >> Of Erik ?sterlund >> Sent: Freitag, 23. Februar 2018 11:22 >> To: Erik Helin ; hotspot-dev developers > dev at openjdk.java.net> >> Subject: Re: RFR(L): 8195142: Refactor out card table from CardTableModRefBS >> to flatten the BarrierSet hierarchy >> >> Hi Erik, >> >> Thank you for the review. I will apply your proposed tweaks before pushing. >> >> Thanks, >> /Erik >> >> On 2018-02-23 11:15, Erik Helin wrote: >>> >>> On 02/21/2018 12:33 PM, Erik ?sterlund wrote: >>>> Hi Erik, >>>> >>>> Thank you for reviewing this. >>>> >>>> New full webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.02/ >>>> >>>> New incremental webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01_02/ >>> The changes looks good, just a few very minor nits: >>> >>> - g1CollectedHeap.hpp: >>> please make the method card_table in G1CollectedHeap const, as in: >>> G1CardTable* card_table() const { >>> return _card_table; >>> } >>> >>> - g1CollectedHeap.cpp: >>> when you are changing methods in G1CollectedHeap, and have access to a >>> private field, please use the field instead of the getter. 
For >>> example: >>> >>> + _card_table->initialize(cardtable_storage); >>> >>> instead of: >>> >>> + card_table()->initialize(cardtable_storage); >>> >>> - stubGenerator_ppc.cpp >>> maybe add a space before the const qualifier? >>> >>> + CardTableModRefBS*const ctbs = >>> + CardTable*const ct = >>> >>> That is, change the above to: >>> >>> + CardTableModRefBS* const ctbs = >>> + CardTable* const ct = >>> >>> This is just my personal preference, but the code gets a bit dense >>> otherwise IMHO :) >>> >>> I don't need to see a new webrev for the above changes, just please do >>> these changes before you push. I also had a look at the patch after >>> the comments from Coleen and Vladimir, and it looks good. Reviewed >>> from my part. >>> >>> Thanks, >>> Erik >>> >>>> On 2018-02-21 09:18, Erik Helin wrote: >>>>> Hi Erik, >>>>> >>>>> this is a very nice improvement, thanks for working on this! >>>>> >>>>> A few minor comments thus far: >>>>> - in stubGenerator_ppc.cpp: >>>>> you seem to have lost a `const` in the refactoring >>>> Fixed. >>>> >>>>> - in psCardTable.hpp: >>>>> I don't think card_mark_must_follow_store() is needed, since >>>>> PSCardTable passes `false` for `conc_scan` to the CardTable >>>>> constructor >>>> Fixed. I took the liberty of also making the condition for >>>> card_mark_must_follow_store() more precise on CMS by making the >>>> condition for scanned_concurrently consider whether >>>> CMSPrecleaningEnabled is set or not (like other generated code does). >>>> >>>>> - in g1CollectedHeap.hpp: >>>>> could you store the G1CardTable as a field in G1CollectedHeap? Also, >>>>> could you name the "getter" just card_table()? (I see that >>>>> g1_hot_card_cache method above, but that one should also be >>>>> renamed to >>>>> just hot_card_cache, but in another patch) >>>> Fixed. >>>> >>>>> - in cardTable.hpp and cardTable.cpp: >>>>> could you use `hg cp` when constructing these files from >>>>> cardTableModRefBS.{hpp,cpp} so the history is preserved? 
>>>> Yes, I will do this before pushing to make sure the history is >>>> preserved. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> Thanks, >>>>> Erik >>>>> >>>>> On 02/15/2018 10:31 AM, Erik ?sterlund wrote: >>>>>> Hi, >>>>>> >>>>>> Here is an updated revision of this webrev after internal feedback >>>>>> from StefanK who helped looking through my changes - thanks a lot >>>>>> for the help with that. >>>>>> >>>>>> The changes to the new revision are a bunch of minor clean up >>>>>> changes, e.g. copy right headers, indentation issues, sorting >>>>>> includes, adding/removing newlines, reverting an assert error >>>>>> message, fixing constructor initialization orders, and things like >>>>>> that. >>>>>> >>>>>> The problem I mentioned last time about the version number of our >>>>>> repo not yet being bumped to 11 and resulting awkwardness in JVMCI >>>>>> has been resolved by simply waiting. So now I changed the JVMCI >>>>>> logic to get the card values from the new location in the >>>>>> corresponding card tables when observing JDK version 11 or above. >>>>>> >>>>>> New full webrev (rebased onto a month fresher jdk-hs): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.01/ >>>>>> >>>>>> Incremental webrev (over the rebase): >>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00_01/ >>>>>> >>>>>> This new version has run through hs-tier1-5 and jdk-tier1-3 without >>>>>> any issues. >>>>>> >>>>>> Thanks, >>>>>> /Erik >>>>>> >>>>>> On 2018-01-17 13:54, Erik ?sterlund wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Today, both Parallel, CMS and Serial share the same code for its >>>>>>> card marking barrier. However, they have different requirements >>>>>>> how to manage its card tables by the GC. 
And as the card table >>>>>>> itself is embedded as a part of the CardTableModRefBS barrier set, >>>>>>> this has led to an unnecessary inheritance hierarchy for >>>>>>> CardTableModRefBS, where for example CardTableModRefBSForCTRS >> and >>>>>>> CardTableExtension are CardTableModRefBS subclasses that do not >>>>>>> change anything to do with the barriers. >>>>>>> >>>>>>> To clean up the code, there should really be a separate CardTable >>>>>>> hierarchy that contains the differences how to manage the card >>>>>>> table from the GC point of view, and simply let CardTableModRefBS >>>>>>> have a CardTable. This would allow removing >>>>>>> CardTableModRefBSForCTRS and CardTableExtension and their >>>>>>> references from shared code (that really have nothing to do with >>>>>>> the barriers, despite being barrier sets), and significantly >>>>>>> simplify the barrier set code. >>>>>>> >>>>>>> This patch mechanically performs this refactoring. A new CardTable >>>>>>> class has been created with a PSCardTable subclass for Parallel, a >>>>>>> CardTableRS for CMS and Serial, and a G1CardTable for G1. All >>>>>>> references to card tables and their values have been updated >>>>>>> accordingly. >>>>>>> >>>>>>> This touches a lot of platform specific code, so would be >>>>>>> fantastic if port maintainers could have a look that I have not >>>>>>> broken anything. >>>>>>> >>>>>>> There is a slight problem that should be pointed out. There is an >>>>>>> unfortunate interaction between Graal and hotspot. Graal needs to >>>>>>> know the values of g1 young cards and dirty cards. This is queried >>>>>>> in different ways in different versions of the JDK in the >>>>>>> ||GraalHotSpotVMConfig.java file. Now these values will move from >>>>>>> their barrier set class to their card table class. That means we >>>>>>> have at least three cases how to find the correct values. There is >>>>>>> one for JDK8, one for JDK9, and now a new one for JDK11. 
Except, >>>>>>> we have not yet bumped the version number to 11 in the repo, and >>>>>>> therefore it has to be from JDK10 - 11 for now and updated after >>>>>>> incrementing the version number. But that means that it will be >>>>>>> temporarily incompatible with JDK10. That is okay for our own copy >>>>>>> of Graal, but can not be used by upstream Graal as they are given >>>>>>> the choice whether to support the public JDK10 or the JDK11 that >>>>>>> does not quite admit to being 11 yet. I chose the solution that >>>>>>> works in our repository. I will notify Graal folks of this issue. >>>>>>> In the long run, it would be nice if we could have a more solid >>>>>>> interface here. >>>>>>> >>>>>>> However, as an added benefit, this changeset brings about a >>>>>>> hundred copyright headers up to date, so others do not have to >>>>>>> update them for a while. >>>>>>> >>>>>>> Bug: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8195142 >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8195142/webrev.00/ >>>>>>> >>>>>>> Testing: mach5 hs-tier1-5 plus local AoT testing. >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik From thomas.stuefe at gmail.com Mon Feb 26 15:55:24 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Feb 2018 16:55:24 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: src/hotspot/os/windows/os_windows.cpp Very minor nit: There is the theoretical possibility of _vsnprintf returning -1 for some reason other than errno. Documentation states "A return value of -1 indicates that an encoding error has occurred.". 
Since it does not say what the state of the output buffer is in that case, it may have been left unchanged, in which case we would return undefined buffer content. To prevent this, maybe we could set the first buffer byte to \0 before invoking vsnprintf (if len > 0). However, I admit this is very far-fetched. It will probably never happen; at least, I have never seen it. So, I leave it to you whether you do this or not. --- test/hotspot/gtest/runtime/test_os.cpp - check_buffer is used to check the prefix and suffix range for stray writes? I think this may be overthinking it a bit; I would not expect strays beyond buf - 1 and buf + len, in which case you would not need the check_buffer. - By initializing the buffer with \0 you will miss a faulty os::snprintf() failing to write the terminating zero, no? I would use a different value. Otherwise, looks good to me. Best Regards, Thomas On Sun, Feb 25, 2018 at 11:42 PM, Kim Barrett wrote: > > On Feb 25, 2018, at 5:37 PM, Kim Barrett wrote: > > Based on discussion, I've changed the new os::vsnprintf and > > os::snprintf to conform to C99. [...] > > > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > > earlier change. > > Just to be clear, the jio_vsnprintf behavior has not been changed. It's > just been > reimplemented in terms of os::vsnprintf rather than directly using > ::vsnprintf and > trying to account for its platform variations. > > From thomas.stuefe at gmail.com Mon Feb 26 17:01:43 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Feb 2018 18:01:43 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: > > Very minor nit: There is the theoretical possibility of _vsnprintf > returning -1 for some reason other than errno. 
> I meant to say, "for some reason other than truncation". Sorry, slip of mind :) Thanks, Thomas From lois.foltan at oracle.com Mon Feb 26 18:41:44 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 26 Feb 2018 13:41:44 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: <3759784e-02c2-b940-52fb-5b4d6e71278d@oracle.com> Looks good Kim. Thank you for handling this! Lois On 2/25/2018 5:37 PM, Kim Barrett wrote: >> On Feb 22, 2018, at 8:29 AM, Thomas Stüfe wrote: >> Just to voice my preference on this, I am okay with either version (C99 and returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. >> >> @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. >> >> Best Regards, Thomas > Based on discussion, I've changed the new os::vsnprintf and > os::snprintf to conform to C99. For POSIX platforms, this just calls > ::vsnprintf. For Windows, it is conditionalized to call ::vsnprintf for > VS2015 and later; earlier versions emulate that behavior using > _vsnprintf and _vscprintf. Improved new gtest-based tests, so we > should quickly find out if some platform doesn't behave as expected. > > I've also removed the now redundant os::log_vsnprintf, and changed > callers to use os::vsnprintf. > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > earlier change. > > An interesting point, which I'm not intending to address with this > change, is that vsnprintf only returns a negative value to indicate an > encoding error. I think encoding errors can only arise when dealing > with wide characters or strings, e.g. when processing a %lc or %ls > directive. 
I don't think HotSpot code would ever use either of those, > though perhaps a call to jio_vsnprintf from outside HotSpot could. > Maybe the function we want for internal HotSpot use should return > unsigned (and internally error on an encoding error), as that might > simplify usage. > > Updated webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.02/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.02.inc/ > > (Ignore open.01*; open.02.inc is a delta from open.00.) > > From gnu.andrew at redhat.com Mon Feb 26 18:58:37 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Mon, 26 Feb 2018 18:58:37 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: <83890746-491e-4692-4a26-7e831825de65@oracle.com> References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> <83890746-491e-4692-4a26-7e831825de65@oracle.com> Message-ID: On 26 February 2018 at 06:14, David Holmes wrote: > On 26/02/2018 4:01 PM, Andrew Hughes wrote: >> >> On 23 February 2018 at 06:17, David Holmes >> wrote: >> >> ... >> >>> >>> Our internal policy is that any change to a file requires we update the >>> copyright year. >>> >>> If you refactor code you move it from one file to another but that still >>> requires a copyright update. >>> >>> Cheers, >>> David >>> >>> >> >> And it has been updated; to the year the changes were made. > > > That's not the rule we (Oracle) have. > >> To change it to the current year would be a lie as that's not when >> these changes were made or published. > > > It's when they were made/published in _this_ file and the copyright is > applied to the file. > > Always frustrating that what should be a simple set of rules easily > expressed and clearly written down, never are because they are the domain of > the lawyers. :( > > David I guess it was a mistake to apply logic to legal reasoning. It seldom works out happily. 
Here's a revised version with the current year used. thread.hpp already has the current year, thanks to 8189170. I also dropped the guard changes in src/cpu/zero/vm/methodHandles_zero.hpp as it makes more sense to do that under a separate bug and patch, which includes the newer versions too. http://cr.openjdk.java.net/~andrew/openjdk8/8078628/webrev.02 Thanks, -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From coleen.phillimore at oracle.com Mon Feb 26 19:55:24 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 26 Feb 2018 14:55:24 -0500 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A940C84.7040508@oracle.com> References: <5A940C84.7040508@oracle.com> Message-ID: <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> Hi Erik, This looks great. I assume that the generated code (for these classes vs. oopDesc* and juint) comes out the same? thanks, Coleen On 2/26/18 8:32 AM, Erik Österlund wrote: > Hi, > > Making oop sometimes map to class types and sometimes to primitives > comes with some unfortunate problems. Advantages of making them always > have their own type include: > > 1) Not getting compilation errors in configuration X but not Y > 2) Making it easier to adopt existing code to use Shenandoah equals > barriers > 3) Recognize oops and narrowOops safely in template > > Therefore, I would like to make both oop and narrowOop always map to a > class type consistently. 
> > Webrev: > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8198561 > > Thanks, > /Erik From rkennke at redhat.com Mon Feb 26 20:11:31 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 26 Feb 2018 21:11:31 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> Message-ID: This is a very welcome change! Changeset looks good to me (except I've no idea what the sparc part does). Same question as Colleen though. Thanks, Roman On Mon, Feb 26, 2018 at 8:55 PM, wrote: > > Hi Erik, > > This looks great. I assume that the generated code (for these classes vs. > oopDesc* and juint) comes out the same? > > thanks, > Coleen > > > On 2/26/18 8:32 AM, Erik ?sterlund wrote: >> >> Hi, >> >> Making oop sometimes map to class types and sometimes to primitives comes >> with some unfortunate problems. Advantages of making them always have their >> own type include: >> >> 1) Not getting compilation errors in configuration X but not Y >> 2) Making it easier to adopt existing code to use Shenandoah equals >> barriers >> 3) Recognize oops and narrowOops safely in template >> >> Therefore, I would like to make both oop and narrowOop always map to a >> class type consistently. 
>> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8198561 >> >> Thanks, >> /Erik > > From coleen.phillimore at oracle.com Mon Feb 26 20:27:11 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 26 Feb 2018 15:27:11 -0500 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> Message-ID: <80872bba-322a-bb3f-ca25-5067c7656b15@oracle.com> Yeah I forgot to ask for a comment why this is: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/src/hotspot/share/oops/oopsHierarchy.hpp.udiff.html +#ifndef SOLARIS + operator PrimitiveType () const { return _value; } +#endif + operator PrimitiveType () const volatile { return _value; } Thanks, Coleen On 2/26/18 3:11 PM, Roman Kennke wrote: > This is a very welcome change! > Changeset looks good to me (except I've no idea what the sparc part > does). Same question as Colleen though. > > Thanks, > Roman > > On Mon, Feb 26, 2018 at 8:55 PM, wrote: >> Hi Erik, >> >> This looks great. I assume that the generated code (for these classes vs. >> oopDesc* and juint) comes out the same? >> >> thanks, >> Coleen >> >> >> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>> Hi, >>> >>> Making oop sometimes map to class types and sometimes to primitives comes >>> with some unfortunate problems. Advantages of making them always have their >>> own type include: >>> >>> 1) Not getting compilation errors in configuration X but not Y >>> 2) Making it easier to adopt existing code to use Shenandoah equals >>> barriers >>> 3) Recognize oops and narrowOops safely in template >>> >>> Therefore, I would like to make both oop and narrowOop always map to a >>> class type consistently. 
>>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>> >>> Thanks, >>> /Erik >> From kim.barrett at oracle.com Mon Feb 26 23:21:46 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 26 Feb 2018 18:21:46 -0500 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A940C84.7040508@oracle.com> References: <5A940C84.7040508@oracle.com> Message-ID: <757B32C2-615A-4635-A84B-00460CC19522@oracle.com> > On Feb 26, 2018, at 8:32 AM, Erik Österlund wrote: > > Hi, > > Making oop sometimes map to class types and sometimes to primitives comes with some unfortunate problems. Advantages of making them always have their own type include: > > 1) Not getting compilation errors in configuration X but not Y > 2) Making it easier to adopt existing code to use Shenandoah equals barriers > 3) Recognize oops and narrowOops safely in template > > Therefore, I would like to make both oop and narrowOop always map to a class type consistently. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8198561 > > Thanks, > /Erik ----- Why is narrowOop::_value public? ----- 54 narrowOop& operator=(const narrowOop o) { _value = o._value; return *this; } 55 narrowOop& operator=(const PrimitiveType value) { _value = value; return *this; } Given we have the conversion from PrimitiveType, do we need the assignment from PrimitiveType? That wouldn't permit direct assignment if the conversion were made explicit (which I think it should), but then, I'm not sure that assignment from a bare PrimitiveType is really a good idea either. ----- 45 narrowOop(const PrimitiveType value) : _value(value) {} Should this conversion be explicit? And should it permit implicit narrowing integral conversions?
The narrowing conversions could be poisoned, though that's a bit uglier for a constructor than for the other operations (see below). ----- All the narrowOop operations on PrimitiveType will permit implicit narrowing integer conversions to the PrimitiveType. That doesn't seem like such a good idea. The narrowing conversions could be poisoned. ----- src/hotspot/cpu/sparc/relocInfo_sparc.cpp 99 uint32_t np = type() == relocInfo::oop_type ? (uint32_t)oopDesc::encode_heap_oop((oop)x) : Klass::encode_klass((Klass 100 inst &= ~Assembler::hi22(-1); 101 inst |= Assembler::hi22((intptr_t)(uintptr_t)np); (1) Some text seems to have been lost at the end of line 99. I suspect this doesn't compile. (2) In old code, np was of type jint, and was just cast to intptr_t. Both value clauses in the initializer return 32bit unsigned values. If the high bit of the value can be set, then the value passed to hi22 will differ between the old code and the new. From igor.ignatyev at oracle.com Tue Feb 27 01:25:26 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 26 Feb 2018 17:25:26 -0800 Subject: RFR(XXS) : 8190679 : java/util/Arrays/TimSortStackSize2.java fails with "Initial heap size set to a larger value than the maximum heap size" Message-ID: <02AFE3F4-5D26-49B6-8787-B04817EDE6C1@oracle.com> http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html > 9 lines changed: 2 ins; 0 del; 7 mod; Hi all, could you please review the patch for TimSortStackSize2 test? the test failed when externally passed (via -javaoption or -vmoption) -Xmx value is less than 770m or 385m, depending on UseCompressedOops. it happened because the test explicitly set Xms value, but didn't set Xmx. now, the test sets Xmx as Xms times 2.
webrev: http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html testing: java/util/Arrays/TimSortStackSize2.java w/ and w/o externally provided Xmx value JBS: https://bugs.openjdk.java.net/browse/JDK-8190679 Thanks, -- Igor From david.holmes at oracle.com Tue Feb 27 04:04:37 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 27 Feb 2018 14:04:37 +1000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> <83890746-491e-4692-4a26-7e831825de65@oracle.com> Message-ID: <38fc8a5d-c1ec-264f-f381-8778acd81528@oracle.com> Thanks Andrew! Reviewed. David On 27/02/2018 4:58 AM, Andrew Hughes wrote: > On 26 February 2018 at 06:14, David Holmes wrote: >> On 26/02/2018 4:01 PM, Andrew Hughes wrote: >>> >>> On 23 February 2018 at 06:17, David Holmes >>> wrote: >>> >>> ... >>> >>>> >>>> Our internal policy is that any change to a file requires we update the >>>> copyright year. >>>> >>>> If you refactor code you move it from one file to another but that still >>>> requires a copyright update. >>>> >>>> Cheers, >>>> David >>>> >>>> >>> >>> And it has been updated; to the year the changes were made. >> >> >> That's not the rule we (Oracle) have. >> >>> To change it to the current year would be a lie as that's not when >>> these changes were made or published. >> >> >> It's when they were made/published in _this_ file and the copyright is >> applied to the file. >> >> Always frustrating that what should be a simple set of rules easily >> expressed and clearly written down, never are because they are the domain of >> the lawyers. :( >> >> David > > I guess it was a mistake to apply logic to legal reasoning. It seldom > works out happily. > > Here's a revised version with the current year used. thread.hpp already has > the current year, thanks to 8189170. 
I also dropped the guard changes in > src/cpu/zero/vm/methodHandles_zero.hpp as it makes more sense to do > that under a separate bug and patch, which includes the newer versions too. > > http://cr.openjdk.java.net/~andrew/openjdk8/8078628/webrev.02 > > Thanks, > From gnu.andrew at redhat.com Tue Feb 27 04:54:37 2018 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Tue, 27 Feb 2018 04:54:37 +0000 Subject: [8u] [RFR] Request for Review of Backport of JDK-8078628: linux-zero does not build without precompiled header In-Reply-To: <38fc8a5d-c1ec-264f-f381-8778acd81528@oracle.com> References: <22c8ef87-9dd8-e488-23c7-631caa8683b5@oracle.com> <633c1920-f5cf-115a-2b5b-6a9c96ace131@oracle.com> <83890746-491e-4692-4a26-7e831825de65@oracle.com> <38fc8a5d-c1ec-264f-f381-8778acd81528@oracle.com> Message-ID: On 27 February 2018 at 04:04, David Holmes wrote: > Thanks Andrew! Reviewed. > > David > Thanks :) Approval request sent: http://mail.openjdk.java.net/pipermail/jdk8u-dev/2018-February/007272.html -- Andrew :) Senior Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Web Site: http://fuseyism.com Twitter: https://twitter.com/gnu_andrew_java PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net) Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222 From david.holmes at oracle.com Tue Feb 27 05:00:45 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 27 Feb 2018 15:00:45 +1000 Subject: RFR(XXS) : 8190679 : java/util/Arrays/TimSortStackSize2.java fails with "Initial heap size set to a larger value than the maximum heap size" In-Reply-To: <02AFE3F4-5D26-49B6-8787-B04817EDE6C1@oracle.com> References: <02AFE3F4-5D26-49B6-8787-B04817EDE6C1@oracle.com> Message-ID: Hi Igor, On 27/02/2018 11:25 AM, Igor Ignatyev wrote: > http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html >> 9 lines changed: 2 ins; 0 del; 7 mod; > > Hi all, > > could you please review the patch for TimSortStackSize2 test? 
> > the test failed when externally passed (via -javaoption or -vmoption) -Xmx value is less than 770m or 385m, depending on UseCompressedOops. it happened because the test explicitly set Xms value, but didn't set Xmx. > now, the test sets Xmx as Xms times 2. I'm not happy with setting Xmx at 2 times Xms - that seems to be setting ourselves up for another case where we can't set -Xmx at startup. This test has encountered problems in the past with external flag settings - see in particular the review thread for JDK-8075071: http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-March/032316.html Will the test pass if we simply set -Xmx and -Xms to the same? Or (equivalently based on on previous review discussions) just set -Xmx instead of -Xms? Thanks, David > PS as it mostly affects hotspot testing, the patch will be pushed to jdk/hs. > > webrev: http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html > testing: java/util/Arrays/TimSortStackSize2.java w/ and w/o externally provided Xmx value > JBS: https://bugs.openjdk.java.net/browse/JDK-8190679 > > Thanks, > -- Igor > From kim.barrett at oracle.com Tue Feb 27 07:50:23 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 02:50:23 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <3759784e-02c2-b940-52fb-5b4d6e71278d@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <3759784e-02c2-b940-52fb-5b4d6e71278d@oracle.com> Message-ID: <4630098E-0497-4F59-BEC1-4CA5DD53D55A@oracle.com> > On Feb 26, 2018, at 1:41 PM, Lois Foltan wrote: > > Looks good Kim. Thank you for handling this! > Lois Thanks. 
> > On 2/25/2018 5:37 PM, Kim Barrett wrote: >>> On Feb 22, 2018, at 8:29 AM, Thomas Stüfe wrote: >>> Just to voice my preference on this, I am okay with either version (c99 and returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. >>> >>> @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. >>> >>> Best Regards, Thomas >> Based on discussion, I've changed the new os::vsnprintf and >> os::snprintf to conform to C99. For POSIX platforms, this just calls >> ::vsnprintf. For Windows, conditionalized to call ::vsnprintf for >> VS2015 and later; earlier versions emulate that behavior using >> _vsnprintf and _vscprintf. Improved new gtest-based tests, so we >> should quickly find out if some platform doesn't behave as expected. >> >> I've also removed the now redundant os::log_vsnprintf, and changed >> callers to use os::vsnprintf. >> >> Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the >> earlier change. >> >> An interesting point, which I'm not intending to address with this >> change, is that vsnprintf only returns a negative value to indicate an >> encoding error. I think encoding errors can only arise when dealing >> with wide characters or strings, e.g. when processing a %lc or %ls >> directive. I don't think HotSpot code would ever use either of those, >> though perhaps call to jio_vsnprintf from outside HotSpot could. >> Maybe the function we want for internal HotSpot use should return >> unsigned (and internally error on an encoding error), as that might >> simplify usage. >> >> Updated webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8196882/open.02/ >> incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.02.inc/ >> >> (Ignore open.01*; open.02.inc is delta from open.00.)
From marcus.larsson at oracle.com Tue Feb 27 08:01:00 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Tue, 27 Feb 2018 09:01:00 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: Hi Kim, On 2018-02-25 23:37, Kim Barrett wrote: >> On Feb 22, 2018, at 8:29 AM, Thomas St?fe wrote: >> Just to voice my preference on this, I am okay with either version (c99 and returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. >> >> @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. >> >> Best Regards, Thomas > Based on discussion, I've changed the new os::vsnprintf and > os::snprintf to conform to C99. For POSIX platforms, this just calls > ::vsnprintf. For Windows, conditionalized to call ::vsnprintf for > VS2015 and later; earlier versions emulate that behavior using > _vsnprintf and _vscprintf. Improved new gtest-based tests, so we > should quickly find out if some platform doesn't behave as expected. > > I've also removed the now redundant os::log_vsnprintf, and changed > callers to use os::vsnprintf. > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > ealier change. > > An interesting point, which I'm not intending to address with this > change, is that vsnprintf only returns a negative value to indicate an > encoding error. I think encoding errors can only arise when dealing > with wide characters or strings, e.g. when processing a %lc or %ls > directive. I don't think HotSpot code would ever use either of those, > though perhaps call to jio_vsnprintf from outside HotSpot could. 
> Maybe the function we want for internal HotSpot use should return > unsigned (and internally error on an encoding error), as that might > simplify usage. > > Updated webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.02/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.02.inc/ > > (Ignore open.01*; open.02.inc is delta from open.00.) Looks good! Many thanks for fixing. Marcus From kim.barrett at oracle.com Tue Feb 27 07:52:20 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 02:52:20 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> > On Feb 26, 2018, at 10:55 AM, Thomas Stüfe wrote: > > > src/hotspot/os/windows/os_windows.cpp > > Very minor nit: There is the theoretical possibility of _vsnprintf returning -1 for some reason other than errno. Documentation states "A return value of -1 indicates that an encoding error has occurred.". Since it does not say what the state of the output buffer is in that case, it may have been unchanged, in which case we would return undefined buffer content. To prevent this, maybe we could set the first buffer byte to \0 before invoking vsnprintf (if len > 0). > > However, I admit this is very far fetched. Will probably never happen, at least, I have never seen it. So, I leave it to you if you do this or not. This is related to my earlier "interesting point"; encoding errors only appear to be possible when dealing with wide characters or strings, which I don't think would ever happen for HotSpot usage. I agree the state of the output buffer does not seem to be well defined in the case where an encoding error occurred.
The C99 description of snprintf looks to me like it might be trying to say the output is NUL terminated even in that case, but I don't think it unambiguously succeeds. However, pre-setting the first byte to NUL doesn't really help; that byte may have been overwritten before the encoding error is detected. We can instead set the last byte of the buffer to NUL when len > 0 and result < 0. (We could do so without checking result, but that makes the gtest's test for stray writes a little more complicated.) I'm making that change, mostly in case we get here via the jio_ functions. Note that the documentation for _vsnprintf (or vsnprintf, prior to VS2015) makes no mention (that I could find) of encoding errors as a possible reason for a negative return value. (Interestingly, the Java Access Bridge (jdk.accessibility) native windows code uses wide char/string format directives, and appears to in at least some cases write them using bare vsnprintf, and that's irrespective of which VS version is being used.) > > --- > > test/hotspot/gtest/runtime/test_os.cpp > > - check_buffer is used to check prefix and suffix range for stray writes? I think this may be overthinking it a bit, I would not expect strays beyond buf - 1 and buf + len, in which case you would not need the check_buffer. Using check_buffer demonstrated behavior differences between some versions of these changes (because of NULing out buf[len-1]). I'm inclined to keep it. > - By initializing buffer with \0 you will miss a faulty os::snprintf() failing to write the terminating zero, no? I would use a different value. The fill value for checking is not '\0', it's '0'. I'll change that to make it more obviously different. > Otherwise, looks good to me. It seems I sent out the open.02 webrev with a few final edits in shared code that were only tested locally, and failed to build on some platforms. Windows complained about an implicit narrowing conversion in ostream.cpp.
For Solaris, a C linkage function has a different type than a C++ linkage function with the same signature, so for testing jio_snprintf I changed test_sprintf to a function template with the print function type deduced. New webrevs: full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ > Best Regards, Thomas > > On Sun, Feb 25, 2018 at 11:42 PM, Kim Barrett wrote: > > On Feb 25, 2018, at 5:37 PM, Kim Barrett wrote: > > Based on discussion, I've changed the new os::vsnprintf and > > os::snprintf to conform to C99. [?] > > > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > > ealier change. > > Just to be clear, the jio_vsnprintf behavior has not been changed. It?s just been > reimplemented in terms of os::vsnprintf rather than directly using ::vsnprintf and > trying to account for its platform variations. From kim.barrett at oracle.com Tue Feb 27 08:20:01 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 03:20:01 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> Message-ID: <18D9D625-327F-42AB-9455-2824FD56ADD6@oracle.com> > On Feb 27, 2018, at 3:01 AM, Marcus Larsson wrote: > > Hi Kim, > > > On 2018-02-25 23:37, Kim Barrett wrote: >>> On Feb 22, 2018, at 8:29 AM, Thomas St?fe wrote: >>> Just to voice my preference on this, I am okay with either version (c99 and returning -1 on truncation) but would prefer having only one global function, not two. Especially not a function which exists for the sole purpose of another component. >>> >>> @Kim: thanks for taking my suggestions. I'll take another look when you post a new webrev. 
>>> >>> Best Regards, Thomas >> Based on discussion, I've changed the new os::vsnprintf and >> os::snprintf to conform to C99. For POSIX platforms, this just calls >> ::vsnprintf. For Windows, conditionalized to call ::vsnprintf for >> VS2015 and later; earlier versions emulate that behavior using >> _vsnprintf and _vscprintf. Improved new gtest-based tests, so we >> should quickly find out if some platform doesn't behave as expected. >> >> I've also removed the now redundant os::log_vsnprintf, and changed >> callers to use os::vsnprintf. >> >> Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the >> ealier change. >> >> An interesting point, which I'm not intending to address with this >> change, is that vsnprintf only returns a negative value to indicate an >> encoding error. I think encoding errors can only arise when dealing >> with wide characters or strings, e.g. when processing a %lc or %ls >> directive. I don't think HotSpot code would ever use either of those, >> though perhaps call to jio_vsnprintf from outside HotSpot could. >> Maybe the function we want for internal HotSpot use should return >> unsigned (and internally error on an encoding error), as that might >> simplify usage. >> >> Updated webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8196882/open.02/ >> incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.02.inc/ >> >> (Ignore open.01*; open.02.inc is delat from open.00.) > > Looks good! Many thanks for fixing. > > Marcus Thanks. You might have missed open.03 though. From thomas.stuefe at gmail.com Tue Feb 27 08:32:13 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Feb 2018 09:32:13 +0100 Subject: Metaspace "capacity" reporting Message-ID: Hi all, I am looking into cleaning up metaspace.cpp a tiny bit (the logging and reporting part), see https://bugs.openjdk.java.net/browse/JDK-8185034. 
While examining the various flavors of MetaspaceAux::print_on(), I notice that we often include "metaspace capacity" in our reports. I wonder what the point of this number is. I can see "used", "committed" and "reserved" being useful. "capacity" however is the sum of space allocated to class loaders in the form of chunks. It includes both the used portion of the chunks and the unused portion. The latter includes both waste (unused in not-current chunks) and "free" (kinda) - unused in the current chunk. IMHO both "free" (unused space in the current chunk) and "capacity" (all space assigned to a class loader) are only useful when looking at a single class loader. But as a sum over all class loaders? It cannot be used to gauge how much space we currently take from the system, because it does not include neither free chunks (in freelist) nor the committed-but-not-yet-handed-out portion of memory. For that, "committed" is more useful. It cannot be used to estimate when the next metaspace-induced GC will happen. Can someone enlighten me please? Can we maybe remove MetaspaceAux::xxx_capacity() completely? Thanks! Thomas From marcus.larsson at oracle.com Tue Feb 27 08:54:38 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Tue, 27 Feb 2018 09:54:38 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: <72ef1993-c72c-a6f2-1e9c-9c7f28dc173e@oracle.com> Hi, On 2018-02-27 08:52, Kim Barrett wrote: >> On Feb 26, 2018, at 10:55 AM, Thomas St?fe wrote: >> >> >> src/hotspot/os/windows/os_windows.cpp >> >> Very minor nit: There is the theoretical possibility of _vsnprintf returning -1 for some reason other than errno. 
Documentation states "A return value of -1 indicates that an encoding error has occurred.". Since it does not say what the state of the output buffer is in that case, it may have been unchanged, in which case we would return undefined buffer content. To prevent this, maybe we could set the first buffer byte to \0 before invoking vsnprintf (if len > 0). >> >> However, I admit this is very far fetched. Will probably never happen, at least, I have never seen it. So, I leave it to you if you do this or not. > This is related to my earlier "interesting point"; encoding errors > only appear to be possible when dealing with wide characters or > strings, which I don't think would ever happen for HotSpot usage. I > agree the state of the output buffer does not seem to be well defined > in the case where an encoding error occured. The C99 description of > snprintf looks to me like it might be trying to say the output is NUL > terminated even in that case, but I don't think it unambiguously > succeeds. > > However, pre-setting the first byte to NUL doesn't really help; that > byte may have been overwritten before the encoding error is detected. > We can instead set the last byte of the buffer to NUL when len > 0 and > result < 0. (We could do so without checking result, but that makes > the gtest's test for stray writes a little more complicated.) I'm > making that change, mostly in case we get here via the jio_ functions. > > Note that the documentation for _vsnprintf (or vsnprintf, prior to > VS2015) makes no mention (that I could find) of encoding errors as a > possible reason for a negative return value. > > (Interestingly, the Java Access Bridge (jdk.accessibility) native > windows code uses wide char/string format directives, and appears to > in at least some cases write them using bare vsnprintf, and that's > irrespective of which VS version is being used.) 
> >> --- >> >> test/hotspot/gtest/runtime/test_os.cpp >> >> - check_buffer is used to check prefix and suffix range for stray >> writes? I think this may be overthinking it a bit, I would not expect >> strays beyond buf - 1 and buf + len, in which case you would not need the >> check_buffer. > Using check_buffer demonstrated behavior differences between some > versions of these changes (because of NULing out buf[len-1]). I'm > inclined to keep it. > >> - By initializing buffer with \0 you will miss a faulty os::snprintf() >> failing to write the terminating zero, no? I would use a different value. > The fill value for checking is not '\0', it's '0'. I'll change that > to make it more obviously different. > >> Otherwise, looks good to me. > It seems I sent out the open.02 webrev with a few final edits in > shared code that were only tested locally, and failed to build on some > platforms. Windows complained about an implicit narrowing conversion > in ostream.cpp. For Solaris, a C linkage function has a different > type than a C++ linkage function with the same signature, so for testing > jio_snprintf I changed test_sprintf to a function template with the > print function type deduced. > > New webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ Still looks good to me! Marcus > >> Best Regards, Thomas >> >> On Sun, Feb 25, 2018 at 11:42 PM, Kim Barrett wrote: >>> On Feb 25, 2018, at 5:37 PM, Kim Barrett wrote: >>> Based on discussion, I've changed the new os::vsnprintf and >>> os::snprintf to conform to C99. […] >>> >>> Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the >>> earlier change. >> Just to be clear, the jio_vsnprintf behavior has not been changed. It's just been >> reimplemented in terms of os::vsnprintf rather than directly using ::vsnprintf and >> trying to account for its platform variations.
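[Editor's note] The check_buffer idea debated in this thread — fill the whole buffer with a visible character, print into an inner window, then verify the guard zones were untouched — can be sketched stand-alone like this. This is hypothetical code, not the actual test_os.cpp; note the fill value is the printable '0', deliberately distinct from the terminating '\0':

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>

const char FILL = '0';  // printable fill, deliberately not '\0'

// Print 'value' with 'fmt' into an inner window of 'storage' and report
// whether every byte outside [window_off, window_off + len) kept its
// fill value, i.e. whether the printing function made any stray writes.
bool printf_stays_in_window(char* storage, size_t total,
                            size_t window_off, size_t len,
                            const char* fmt, int value) {
  memset(storage, FILL, total);
  snprintf(storage + window_off, len, fmt, value);
  for (size_t i = 0; i < total; ++i) {
    if ((i < window_off || i >= window_off + len) && storage[i] != FILL) {
      return false;  // stray write outside the window
    }
  }
  return true;
}
```

With a standard-conforming snprintf, a too-long "%d" is truncated inside the window, NUL-terminated, and the guard zones on both sides stay untouched.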
> From thomas.stuefe at gmail.com Tue Feb 27 10:14:08 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Feb 2018 11:14:08 +0100 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: Hi Kim, On Tue, Feb 27, 2018 at 8:52 AM, Kim Barrett wrote: > > On Feb 26, 2018, at 10:55 AM, Thomas St?fe > wrote: > > > > > > src/hotspot/os/windows/os_windows.cpp > > > > Very minor nit: There is the theoretical possibility of _vsnprintf > returning -1 for some reason other than errno. Documentation states "A > return value of -1 indicates that an encoding error has occurred.". Since > it does not say what the state of the output buffer is in that case, it may > have been unchanged, in which case we would return undefined buffer > content. To prevent this, maybe we could set the first buffer byte to \0 > before invoking vsnprintf (if len > 0). > > > > However, I admit this is very far fetched. Will probably never happen, > at least, I have never seen it. So, I leave it to you if you do this or not. > > This is related to my earlier "interesting point"; encoding errors > only appear to be possible when dealing with wide characters or > strings, which I don't think would ever happen for HotSpot usage. I > agree the state of the output buffer does not seem to be well defined > in the case where an encoding error occured. The C99 description of > snprintf looks to me like it might be trying to say the output is NUL > terminated even in that case, but I don't think it unambiguously > succeeds. 
> > However, pre-setting the first byte to NUL doesn't really help; that > byte may have been overwritten before the encoding error is detected. > We can instead set the last byte of the buffer to NUL when len > 0 and > result < 0. (We could do so without checking result, but that makes > the gtest's test for stray writes a little more complicated.) I'm > making that change, mostly in case we get here via the jio_ functions. > > That makes sense. > Note that the documentation for _vsnprintf (or vsnprintf, prior to > VS2015) makes no mention (that I could find) of encoding errors as a > possible reason for a negative return value. > > (Interestingly, the Java Access Bridge (jdk.accessibility) native > windows code uses wide char/string format directives, and appears to > in at least some cases write them using bare vsnprintf, and that's > irrespective of which VS version is being used.) > > Where? I took a short look and did not find it. However, I found a number of wcsncpy() with no truncation handling, so, no zero-termination upon truncation (I think, but I may be mistaken, just had a very quick look). > > > > --- > > > > test/hotspot/gtest/runtime/test_os.cpp > > > > - check_buffer is used to check prefix and suffix range for stray > writes? I think this may be overthinking it a bit, I would not expect > strays beyond buf - 1 and buf + len, in which case you would not need the > check_buffer. > > Using check_buffer demonstrated behavior differences between some > versions of these changes (because of NULing out buf[len-1]). I'm > inclinded to keep it. > > Ok > > - By initializing buffer with \0 you will miss a faulty os::snprintf() > failing to write the terminating zero, no? I would use a different value. > > The fill value for checking is not '\0', it's '0'. I'll change that > to make it more obviously different. > > Yes, that is clearer, thanks. > > Otherwise, looks good to me. 
> > It seems I sent out the open.02 webrev with a few final edits in > shared code that were only tested locally, and failed to build on some > platforms. Windows complained about an implicit narrowing conversion > in ostream.cpp. For Solaris, a C linkage function has a different > type than a C++ linkage function with the same signature, so for testing > jio_snprintf I changed test_sprintf to a function template with the > print function type deduced. > > New webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ > > > All looks well. Thank you. Best Regards, Thomas > > Best Regards, Thomas > > > > On Sun, Feb 25, 2018 at 11:42 PM, Kim Barrett > wrote: > > > On Feb 25, 2018, at 5:37 PM, Kim Barrett > wrote: > > > Based on discussion, I've changed the new os::vsnprintf and > > > os::snprintf to conform to C99. [...] > > > > > > Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the > > > earlier change. > > > > Just to be clear, the jio_vsnprintf behavior has not been changed. It's > just been > > reimplemented in terms of os::vsnprintf rather than directly using > ::vsnprintf and > > trying to account for its platform variations. > > > From thomas.stuefe at gmail.com Tue Feb 27 11:04:44 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Feb 2018 12:04:44 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: Message-ID: Note: you can also test this patch in the jdk-submit repository by switching to branch JDK-8166690. It passes all tests (run as part of the jdk-submit push) successfully. Thanks, Thomas On Mon, Feb 26, 2018 at 3:20 PM, Thomas Stüfe wrote: > Hi all, > > I know this patch is a bit larger, but may I please have reviews and/or > other input? 
> > Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 > Latest version: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ > > For those who followed the mail thread, this is the incremental diff to > the last changes (includes feedback Goetz gave me on- and off-list): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ > > Thank you! > > Kind Regards, Thomas Stuefe > > > > On Thu, Feb 8, 2018 at 12:58 PM, Thomas Stüfe > wrote: > >> Hi, >> >> We would like to contribute a patch developed at SAP which has been live >> in our VM for some time. It improves the metaspace chunk allocation: it >> reduces fragmentation and raises the chance of reusing free metaspace >> chunks. >> >> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-05--2/webrev/ >> >> In short, this patch helps with a number of pathological cases where >> metaspace chunks are free but cannot be reused because they are of the >> wrong size. For example, the metaspace freelist could be full of small >> chunks, which would not be reusable if we need larger chunks. So, we could >> get metaspace OOMs even in situations where the metaspace was far from >> exhausted. Our patch adds the ability to split and merge metaspace chunks >> dynamically and thus removes the "size-lock-in" problem. >> >> Note that there have been other attempts to get a grip on this problem, >> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >> our patch attempts a more complete solution. >> >> In 2016 I discussed the idea for this patch with some folks off-list, >> among them Jon Matsimutso. He then advised me to create a JEP. So I did: >> [1]. However, meanwhile changes to the JEP process were discussed [2], and >> I am not sure anymore this patch even needs a JEP. 
It may be >> moderately complex and hence carries the risk inherent in any patch, but >> its effects would not be externally visible (if you discount seeing fewer >> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >> >> -- >> >> How this patch works: >> >> 1) When a class loader dies, its metaspace chunks are freed and returned >> to the freelist for reuse by the next class loader. With the patch, upon >> returning a chunk to the freelist, an attempt is made to merge it with its >> neighboring chunks - should they happen to be free too - to form a larger >> chunk, which is then placed in the free list. >> >> As a result, the freelist should be populated by larger chunks at the >> expense of smaller chunks. In other words, all free chunks should always be >> as "coalesced as possible". >> >> 2) When a class loader needs a new chunk and a chunk of the requested >> size cannot be found in the free list, before carving out a new chunk from >> the virtual space, we first check if there is a larger chunk in the free >> list. If there is, that larger chunk is chopped up into n smaller chunks. >> One of them is returned to the caller, the others are re-added to the >> freelist. >> >> (1) and (2) together have the effect of removing the size-lock-in for >> chunks. If fragmentation allows it, small chunks are dynamically combined >> to form larger chunks, and larger chunks are split on demand. >> >> -- >> >> What this patch does not: >> >> This is not a rewrite of the chunk allocator - most of the mechanisms >> stay intact. Specifically, chunk sizes remain unchanged, and so do chunk >> allocation policies (when which class loaders get handed which chunk >> size). Almost everything this patch does affects only internal workings of >> the ChunkManager. >> >> Also note that I refrained from doing any cleanups, since I wanted >> reviewers to be able to gauge this patch without filtering noise. >> Unfortunately this patch adds some complexity. 
But there are many future >> opportunities for code cleanup and simplification, some of which we already >> discussed in existing RFEs ([3], [4]). All of them are out of the scope for >> this particular patch. >> >> -- >> >> Details: >> >> Before the patch, the following rules held: >> - All chunk sizes are multiples of the smallest chunk size ("specialized >> chunks") >> - All chunk sizes of larger chunks are also clean multiples of the next >> smaller chunk size (e.g. for class space, the ratio of >> specialized/small/medium chunks is 1:2:32) >> - All chunk start addresses are aligned to the smallest chunk size (more >> or less accidentally, see metaspace_reserve_alignment). >> The patch makes the last rule explicit and more strict: >> - All (non-humongous) chunk start addresses are now aligned to their own >> chunk size. So, e.g. medium chunks are allocated at addresses which are a >> multiple of medium chunk size. This rule is not extended to humongous >> chunks, whose start addresses continue to be aligned to the smallest chunk >> size. >> >> The reason for this new alignment rule is that it makes it cheap both to >> find chunk predecessors of a chunk and to check which chunks are free. >> >> When a class loader dies and its chunk is returned to the freelist, all >> we have is its address. In order to merge it with its neighbors to form a >> larger chunk, we need to find those neighbors, including those preceding >> the returned chunk. Prior to this patch that was not easy - one would have >> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >> due to the new alignment rule, we now know where the prospective larger >> chunk must start - at the next lower larger-chunk-size-aligned boundary. We >> also know that currently a smaller chunk must start there (*). >> >> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >> now keeps a bitmap which describes its occupancy. 
One bit in this bitmap >> corresponds to a range the size of the smallest chunk size and starting at >> an address aligned to the smallest chunk size. Because of the alignment >> rules above, such a range belongs to one single chunk. The bit is 1 if the >> associated chunk is in use by a class loader, 0 if it is free. >> >> When we have calculated the address range a prospective larger chunk >> would span, we now need to check if all chunks in that range are free. Only >> then we can merge them. We do that by querying the bitmap. Note that the >> most common use case here is forming medium chunks from smaller chunks. >> With the new alignment rules, the bitmap portion covering a medium chunk >> now always happens to be 16- or 32bit in size and is 16- or 32bit aligned, >> so reading the bitmap in many cases becomes a simple 16- or 32bit load. >> >> If the range is free, only then we need to iterate the chunks in that >> range: pull them from the freelist, combine them to one new larger chunk, >> re-add that one to the freelist. >> >> (*) Humongous chunks make this a bit more complicated. Since the new >> alignment rule does not extend to them, a humongous chunk could still >> straddle the lower or upper boundary of the prospective larger chunk. So I >> gave the occupancy map a second layer, which is used to mark the start of >> chunks. >> An alternative approach could have been to make humongous chunks size and >> start address always a multiple of the largest non-humongous chunk size >> (medium chunks). That would have caused a bit of waste per humongous chunk >> (<64K) in exchange for simpler coding and a simpler occupancy map. >> >> -- >> >> The patch shows its best results in scenarios where a lot of smallish >> class loaders are alive simultaneously. When dying, they leave continuous >> expanses of metaspace covered in small chunks, which can be merged nicely. 
>> However, if class loader lifetimes vary more, we have more interleaving of >> dead and alive small chunks, and hence chunk merging does not work as well >> as it could. >> >> For an example of a pathological case like this, see the example program: [5] >> >> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >> test3.Example2" the test will load 3000 small classes in separate class >> loaders, then throw them away and start loading large classes. The small >> classes will have flooded the metaspace with small chunks, which are >> unusable for the large classes. When executing with the rather limited >> CompressedClassSpaceSize=10M, we will run into an OOM after loading about >> 800 large classes, having used only 40% of the class space; the rest is >> wasted on unused small chunks. However, with our patch the example program >> will manage to allocate ~2900 large classes before running into an OOM, and >> class space will show almost no waste. >> >> To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >> an OOM, statistics and an ASCII representation of the class space will be >> shown. The unpatched version will show large expanses of unused small >> chunks, the patched variant will show almost no waste. >> >> Note that the patch could be made more effective with a different size >> ratio between small and medium chunks: in class space, that ratio is 1:16, >> so 16 small chunks must happen to be free to form one larger chunk. With a >> smaller ratio the chance for coalescation would be larger. So there may be >> room for future improvement here: since we now can merge and split chunks >> on demand, we could introduce more chunk sizes. Potentially arriving at a >> buddy-ish allocator style where we drop hard-wired chunk sizes for a >> dynamic model where the ratio between chunk sizes is always 1:2 and we >> could in theory have no limit to the chunk size? But this is just a thought >> and well out of the scope of this patch. 
>> >> -- >> >> What does this patch cost (memory): >> >> - the occupancy bitmap adds 1 byte per 4K metaspace. >> - MetaChunk headers get larger, since we add an enum and two bools to >> it. Depending on what the c++ compiler does with that, chunk headers grow >> by one or two MetaWords, reducing the payload size by that amount. >> - The new alignment rules mean we may need to create padding chunks to >> precede larger chunks. But since these padding chunks are added to the >> freelist, they should be used up before the need for new padding chunks >> arises. So, the maximally possible number of unused padding chunks should >> be limited by design to about 64K. >> >> The expectation is that the memory savings by this patch far outweighs >> its added memory costs. >> >> .. (performance): >> >> We did not see measurable drops in standard benchmarks raising over the >> normal noise. I also measured times for a program which stresses metaspace >> chunk coalescation, with the same result. >> >> I am open to suggestions what else I should measure, and/or independent >> measurements. >> >> -- >> >> Other details: >> >> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >> complexity somewhat, because it was made mostly obsolete by this patch: >> since small chunks are combined to larger chunks upon return to the >> freelist, in theory we should not have that many free small chunks anymore >> anyway. However, there may be still cases where we could benefit from this >> workaround, so I am asking your opinion on this one. >> >> About tests: There were two native tests - ChunkManagerReturnTest and >> TestVirtualSpaceNode (the former was added by me last year) - which did not >> make much sense anymore, since they relied heavily on internal behavior >> which was made unpredictable with this patch. 
>> To make up for these lost tests, I added a new gtest which attempts to >> stress the many combinations of allocation patterns but does so from a layer >> above the old tests. It now uses Metaspace::allocate() and friends. By >> using that point as entry for tests, I am less dependent on implementation >> internals and still cover a lot of scenarios. >> >> -- >> >> Review pointers: >> >> Good points to start are >> - ChunkManager::return_single_chunk() - specifically, >> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >> upon return to the free list >> - ChunkManager::free_chunks_get(): here we now split large chunks into >> smaller chunks on demand >> - VirtualSpaceNode::take_from_committed(): chunks are allocated >> according to alignment rules now, padding chunks are handled >> - The OccupancyMap class is the helper class implementing the new >> occupancy bitmap >> >> The rest is mostly chaff: helper functions, added tests and verifications. >> >> -- >> >> Thanks and Best Regards, Thomas >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November/000128.html >> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >> >> >> > From goetz.lindenmaier at sap.com Tue Feb 27 11:29:23 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 27 Feb 2018 11:29:23 +0000 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: Message-ID: <7cee371a5888407e9e96382992dd37c9@sap.com> Hi Thomas, Thanks for posting this and for incorporating my comments. You missed two occurrences of PRODUCT, see below. No new webrev needed. Looks fine now, Reviewed. Cross-posting this to runtime-dev because I think that's the proper list for this change. 
I copied some text from http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-February/030231.html to comment on it here, so the discussion is continued in this thread, see below. Best regards, Goetz. > > If not, for simplicity, I would have implemented this by just taking > > the next medium chunk (which would always be aligned) and > > split it into the needed size and add all the rest to > > the corresponding free lists. But no change needed here, > > I just want to understand. (Probably this is not feasible > > because the humongous ones are not aligned to medium chunk size...) > > > > You understand everything correctly. > > As for your proposal, I am not sure it would make matters much simpler. > Maybe I do not fully understand: > > Now, we do: > - is watermark aligned to chunk size? No -> carve out padding chunks, add > them to the freelist, then - with the watermark now properly aligned - carve > out the desired chunk we wanted in the first place. > > After your proposal: > - the watermark should always be correctly aligned. So, first, carve out > the desired chunk. Then, if it is smaller than a medium chunk, carve out n > padding chunks until the watermark is properly aligned again. > > Not sure this is better. Only the order of operations is reversed. > > Also, yes, the one thorn is that humongous chunks are still unaligned, but > we could change the alignment rules for humongous chunks - that would not be > difficult. Right now, you have to consider two sizes, the current alignment and the requested alignment. In my proposal you know the current alignment (the largest one), thus you have one fewer dimension of checks. But with the humongous chunks not being aligned this is pointless, so just leave it as-is. > > I think the naming "padding chunks" is a bit misleading. > > It sounds as if the chunks would be wasted, but as they > > are added to the free lists they are not lost. > > dict.leo gives "offcut" for "Verschnitt" ... 
not a word > > common to me, but at least the German translation and the > > word-wise translation better fit the situation, I think. > > Feel free to keep it as is, though. > > I agree. "Alignment chunks"? Sounds better, too. > > TestVirtualSpaceNode_test() is empty. Maybe remove it altogether? > Makes sense. Thanks. > > A lot of the methods are passed 'true' or 'false' to indicate > > whether it is for the class or metaspace manager. Maybe you > > could define enum is_class and is_metaspace or the like, to > > make these calls more expressive? > > > There is already one, "MetadataType". One could use that throughout the > code. > > However, there already was a mixture of "MetadataType" and "bool is_class" > predating this patch - so, my patch did not add to the confusion, I just > chose one of the prevalent forms. Unifying those two forms makes sense and > can be done in a later cleanup (or? Opinions?). Well, as there are so many occurrences you could clean up right away, but feel free to leave it as-is. > > Minor nit: as you anyway normalize #defines to ASSERT, you > > might want to fix the remaining two or three #defines in metaspace.cpp > > from PRODUCT to ASSERT/DEBUG, too. > Sure! You missed some: 3813 #ifdef PRODUCT 4954 #ifndef PRODUCT 5212 #endif // !PRODUCT 
From adam.farley at uk.ibm.com Tue Feb 27 11:53:18 2018 From: adam.farley at uk.ibm.com (Adam Farley8) Date: Tue, 27 Feb 2018 11:53:18 +0000 Subject: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers In-Reply-To: <712770ae-72ee-10b8-cd93-fb1fc34bd5b3@oracle.com> References: <39D8F43A-06BD-483B-8901-6F4444A8235F@oracle.com> <4B429F1D-5727-4B20-A051-E39E1E8C69AA@oracle.com> <712770ae-72ee-10b8-cd93-fb1fc34bd5b3@oracle.com> Message-ID: Hi Alan, Peter, The main bit of data I wanted out of this was the sum total of native memory being used to store DirectByteBuffers, and to have that information printed in diagnostic cores. Thanks for pointing out Bits. I will investigate if there is a way to make that data available to the VM when a diagnostic core is generated (I'm poking through SharedSecrets and JavaNioAccess now) without running Java code. 
Worst case scenario, we don't get this feature, and at least we can retrieve this information from the core by using, as Alan suggests, an SA- based tool to retrieve the state of the Bits variables at crash-time. Best Regards Adam Farley From: Alan Bateman To: Peter Levart , Adam Farley8 Cc: "hotspot-dev at openjdk.java.net developers" , core-libs-dev Date: 23/02/2018 17:52 Subject: Re: [PATCH] RFR Bug-pending: Enable Hotspot to Track Native Memory Usage for Direct Byte Buffers On 23/02/2018 15:28, Peter Levart wrote: > Hi Adam, > > Did you know that native memory is already tracked on the Java side for > direct ByteBuffers? See class java.nio.Bits. Could you make use of it? > Right, these are the fields that are exposed at runtime via BufferPoolMXBean. A SA based tool could read from a core file. I can't tell if this is enough for Adam, it may be that the his tool reveals more details on the buffers in the pools. -Alan Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From erik.helin at oracle.com Tue Feb 27 14:30:55 2018 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 27 Feb 2018 15:30:55 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <5A940DE0.7040108@oracle.com> References: <5A940DE0.7040108@oracle.com> Message-ID: <44b5fb10-b579-428a-c77d-62d063997a7c@oracle.com> On 02/26/2018 02:38 PM, Erik ?sterlund wrote: > Hi, > > G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set > that is incomplete and you can't use, and a concrete > G1SATBCardTableLoggingModRefBS barrier set is what is the one actually > used all over the place. The inheritance makes this code more difficult > to understand than it needs to be. > > There should really not be an abstract G1 barrier set that is not used - > it serves no purpose. 
There should be a single G1BarrierSet instead > reflecting the actual G1 barriers used. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ Looks good, Reviewed. Just one very minor nit: +G1BarrierSet::G1BarrierSet( + G1CardTable* card_table) : + CardTableModRefBS(card_table, BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), + _dcqs(JavaThread::dirty_card_queue_set()) Maybe put the constructor parameters on the same line as the constructor name (now that it fits), as in: +G1BarrierSet::G1BarrierSet(G1CardTable* card_table) : + CardTableModRefBS(card_table, BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), + _dcqs(JavaThread::dirty_card_queue_set()) This looks a bit better IMO and seems to be a more commonly used style in HotSpot. No need for a new patch, just remember to fix this before pushing :) Thanks, Erik > Bug: > https://bugs.openjdk.java.net/browse/JDK-8195148 > > Thanks, > /Erik From erik.osterlund at oracle.com Tue Feb 27 14:37:17 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:37:17 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <44b5fb10-b579-428a-c77d-62d063997a7c@oracle.com> References: <5A940DE0.7040108@oracle.com> <44b5fb10-b579-428a-c77d-62d063997a7c@oracle.com> Message-ID: <5A956D1D.70909@oracle.com> Hi Erik, Thanks for the quick review. I will remove that newline before pushing. /Erik On 2018-02-27 15:30, Erik Helin wrote: > On 02/26/2018 02:38 PM, Erik ?sterlund wrote: >> Hi, >> >> G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier >> set that is incomplete and you can't use, and a concrete >> G1SATBCardTableLoggingModRefBS barrier set is what is the one >> actually used all over the place. The inheritance makes this code >> more difficult to understand than it needs to be. >> >> There should really not be an abstract G1 barrier set that is not >> used - it serves no purpose. 
There should be a single G1BarrierSet >> instead reflecting the actual G1 barriers used. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ > > Looks good, Reviewed. Just one very minor nit: > > +G1BarrierSet::G1BarrierSet( > + G1CardTable* card_table) : > + CardTableModRefBS(card_table, > BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), > + _dcqs(JavaThread::dirty_card_queue_set()) > > Maybe put the constructor parameters on the same line as the > constructor name (now that it fits), as in: > > +G1BarrierSet::G1BarrierSet(G1CardTable* card_table) : > + CardTableModRefBS(card_table, > BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), > + _dcqs(JavaThread::dirty_card_queue_set()) > > This looks a bit better IMO and seems to be a more commonly used style > in HotSpot. No need for a new patch, just remember to fix this before > pushing :) > > Thanks, > Erik > >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8195148 >> >> Thanks, >> /Erik From erik.osterlund at oracle.com Tue Feb 27 14:45:52 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:45:52 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <757B32C2-615A-4635-A84B-00460CC19522@oracle.com> References: <5A940C84.7040508@oracle.com> <757B32C2-615A-4635-A84B-00460CC19522@oracle.com> Message-ID: <5A956F20.5060605@oracle.com> Hi Kim, Thank you for looking at this. New full webrev covering all comments so far (hopefully): http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01/ Incremental webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00_01/ On 2018-02-27 00:21, Kim Barrett wrote: >> On Feb 26, 2018, at 8:32 AM, Erik ?sterlund wrote: >> >> Hi, >> >> Making oop sometimes map to class types and sometimes to primitives comes with some unfortunate problems. 
Advantages of making them always have their own type include: >> >> 1) Not getting compilation errors in configuration X but not Y >> 2) Making it easier to adopt existing code to use Shenandoah equals barriers >> 3) Recognize oops and narrowOops safely in template >> >> Therefore, I would like to make both oop and narrowOop always map to a class type consistently. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8198561 >> >> Thanks, >> /Erik > ----- > Why is narrowOop::_value public? That was simply a mistake. I have corrected it to be private now. Thank you for catching that. > ----- > 54 narrowOop& operator=(const narrowOop o) { _value = o._value; return *this; } > 55 narrowOop& operator=(const PrimitiveType value) { _value = value; return *this; } > > Given we have the conversion from PrimitiveType, do we need the assignment from PrimitiveType? > That wouldn't permit direct assignment if the conversion were made explicit (which I think it should), > but then, I'm not sure that assignment from a bare PrimitiveType is really a good idea either. You are right - we seem to be fine without assignment from PrimitiveType. I removed it. > ----- > 45 narrowOop(const PrimitiveType value) : _value(value) {} > > Should this conversion be explicit? And should it permit implicit narrowing integral conversions? > The narrowing conversions could be poisoned, though that's a bit uglier for a constructor than for > the other operations (see below). Since narrowOop used to be a juint, there is a bunch of code that relies on this conversion being implicit for now. So I think I would prefer to keep this implicit to reduce fanout. I tried making it explicit, and unfortunately changes required for that propagate surprisingly far. > ----- > All the narrowOop operations on PrimitiveType will permit implicit narrowing integer conversions > to the PrimitiveType. That doesn't seem like such a good idea.
The narrowing conversions could > be poisoned. That is true. I managed to get rid of that in the latest revision. > ----- > > src/hotspot/cpu/sparc/relocInfo_sparc.cpp > 99 uint32_t np = type() == relocInfo::oop_type ? (uint32_t)oopDesc::encode_heap_oop((oop)x) : Klass::encode_klass((Klass > 100 inst &= ~Assembler::hi22(-1); > 101 inst |= Assembler::hi22((intptr_t)(uintptr_t)np); > > (1) Some text seems to have been lost at the end of line 99. I suspect this doesn?t compile. This seems to be a tooling problem that my webrev tool cuts the text after 125 characters. Sorry about that. This seems to indicate though that perhaps that line is too long anyway, so I split it into two lines. > (2) In old code, np was of type jint, and was just cast to intptr_t. Both value clauses in the initializer > return 32bit unsigned values. If the high bit of of the value can be set, then the value passed to hi22 > will differ between the old code and the new. Yes you are right. I thought sign extending a narrowOop to an int64_t value seemed like a bad idea and that if it produced different values, I would never want to sign extend the narrowOop. After looking closer though, the expected argument to hi22 is an int. So the whole sign extending cast to intptr_t thing seems pointless anyway and would result in the same int you already had. I changed it to an int and removed subsequent explicit casts to intptr_t that then implicitly got converted back to int (that it already was declared as). Thanks, /Erik From erik.osterlund at oracle.com Tue Feb 27 14:47:26 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:47:26 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> Message-ID: <5A956F7E.5090205@oracle.com> Hi Coleen, Thanks for the review. 
On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: > > Hi Erik, > > This looks great. I assume that the generated code (for these > classes vs. oopDesc* and juint) comes out the same? I assume so too. Or at least that the performance does not regress. Maybe I run some benchmarks to be sure since the question has been asked. Thanks, /Erik > thanks, > Coleen > > On 2/26/18 8:32 AM, Erik ?sterlund wrote: >> Hi, >> >> Making oop sometimes map to class types and sometimes to primitives >> comes with some unfortunate problems. Advantages of making them >> always have their own type include: >> >> 1) Not getting compilation errors in configuration X but not Y >> 2) Making it easier to adopt existing code to use Shenandoah equals >> barriers >> 3) Recognize oops and narrowOops safely in template >> >> Therefore, I would like to make both oop and narrowOop always map to >> a class type consistently. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8198561 >> >> Thanks, >> /Erik > From erik.osterlund at oracle.com Tue Feb 27 14:50:30 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:50:30 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <80872bba-322a-bb3f-ca25-5067c7656b15@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <80872bba-322a-bb3f-ca25-5067c7656b15@oracle.com> Message-ID: <5A957036.7030306@oracle.com> Hi Coleen, As I said to Roman: the studio compiler finds it ambiguous to have both const and const volatile implicit conversion overloads - you gotta pick your favourite. Seemingly same issue as oopDesc* -> void* where solaris is forced to pick just one. 
But my latest revision http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01/ managed to get rid of it by introducing a new constructor accepting const volatile narrowOop& instead. Thanks, /Erik On 2018-02-26 21:27, coleen.phillimore at oracle.com wrote: > Yeah I forgot to ask for a comment why this is: > > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/src/hotspot/share/oops/oopsHierarchy.hpp.udiff.html > > > +#ifndef SOLARIS > + operator PrimitiveType () const { return _value; } > +#endif > + operator PrimitiveType () const volatile { return _value; } > > Thanks, > Coleen > > On 2/26/18 3:11 PM, Roman Kennke wrote: >> This is a very welcome change! >> Changeset looks good to me (except I've no idea what the sparc part >> does). Same question as Colleen though. >> >> Thanks, >> Roman >> >> On Mon, Feb 26, 2018 at 8:55 PM, wrote: >>> Hi Erik, >>> >>> This looks great. I assume that the generated code (for these >>> classes vs. >>> oopDesc* and juint) comes out the same? >>> >>> thanks, >>> Coleen >>> >>> >>> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> Making oop sometimes map to class types and sometimes to primitives >>>> comes >>>> with some unfortunate problems. Advantages of making them always >>>> have their >>>> own type include: >>>> >>>> 1) Not getting compilation errors in configuration X but not Y >>>> 2) Making it easier to adopt existing code to use Shenandoah equals >>>> barriers >>>> 3) Recognize oops and narrowOops safely in template >>>> >>>> Therefore, I would like to make both oop and narrowOop always map to a >>>> class type consistently. 
>>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>>> >>>> Thanks, >>>> /Erik >>> > From erik.osterlund at oracle.com Tue Feb 27 14:47:11 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:47:11 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> Message-ID: <5A956F6F.80104@oracle.com> Hi Roman, Thank you for the review. The solaris ifdef stuff in oopsHierarchy.hpp (if that was what you were referring to) is there because the studio compiler incorrectly thinks having both a const implicit conversion operator and a const volatile implicit conversion operator is ambiguous. The same problem occurred in oopDesc to void* conversion. The good news is that my latest revision manages to get rid of this by adding a narrowOop(const volatile narrowOop& o) constructor instead. Thanks, /Erik On 2018-02-26 21:11, Roman Kennke wrote: > This is a very welcome change! > Changeset looks good to me (except I've no idea what the sparc part > does). Same question as Colleen though. > > Thanks, > Roman > > On Mon, Feb 26, 2018 at 8:55 PM, wrote: >> Hi Erik, >> >> This looks great. I assume that the generated code (for these classes vs. >> oopDesc* and juint) comes out the same? >> >> thanks, >> Coleen >> >> >> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>> Hi, >>> >>> Making oop sometimes map to class types and sometimes to primitives comes >>> with some unfortunate problems. 
Advantages of making them always have their >>> own type include: >>> >>> 1) Not getting compilation errors in configuration X but not Y >>> 2) Making it easier to adopt existing code to use Shenandoah equals >>> barriers >>> 3) Recognize oops and narrowOops safely in template >>> >>> Therefore, I would like to make both oop and narrowOop always map to a >>> class type consistently. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>> >>> Thanks, >>> /Erik >> From erik.helin at oracle.com Tue Feb 27 14:42:01 2018 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 27 Feb 2018 15:42:01 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext Message-ID: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> Hi all, this small patch removes an unused extension point, Universe::create_heap_ext. Since the definition of Universe::create_heap_ext is the only code in the file src/hotspot/share/memory/universe_ext.cpp, I also removed that file. Issue: https://bugs.openjdk.java.net/browse/JDK-8197841 Patch: http://cr.openjdk.java.net/~ehelin/8197841/00/ Testing: - `make run-test-tier1` on Linux x86-64 Thanks, Erik From coleen.phillimore at oracle.com Tue Feb 27 14:51:08 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Feb 2018 09:51:08 -0500 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A957036.7030306@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <80872bba-322a-bb3f-ca25-5067c7656b15@oracle.com> <5A957036.7030306@oracle.com> Message-ID: On 2/27/18 9:50 AM, Erik ?sterlund wrote: > Hi Coleen, > > As I said to Roman: the studio compiler finds it ambiguous to have > both const and const volatile implicit conversion overloads - you > gotta pick your favourite. 
Seemingly same issue as oopDesc* -> void* > where solaris is forced to pick just one. But my latest revision > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01/ managed to > get rid of it by introducing a new constructor accepting const > volatile narrowOop& instead. That's great. I just wanted a comment but this is better now. This came out very nicely. Coleen > > Thanks, > /Erik > > On 2018-02-26 21:27, coleen.phillimore at oracle.com wrote: >> Yeah I forgot to ask for a comment why this is: >> >> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/src/hotspot/share/oops/oopsHierarchy.hpp.udiff.html >> >> >> +#ifndef SOLARIS >> + operator PrimitiveType () const { return _value; } >> +#endif >> + operator PrimitiveType () const volatile { return _value; } >> >> Thanks, >> Coleen >> >> On 2/26/18 3:11 PM, Roman Kennke wrote: >>> This is a very welcome change! >>> Changeset looks good to me (except I've no idea what the sparc part >>> does). Same question as Coleen though. >>> >>> Thanks, >>> Roman >>> >>> On Mon, Feb 26, 2018 at 8:55 PM, wrote: >>>> Hi Erik, >>>> >>>> This looks great. I assume that the generated code (for these >>>> classes vs. >>>> oopDesc* and juint) comes out the same? >>>> >>>> thanks, >>>> Coleen >>>> >>>> >>>> On 2/26/18 8:32 AM, Erik Österlund wrote: >>>>> Hi, >>>>> >>>>> Making oop sometimes map to class types and sometimes to >>>>> primitives comes >>>>> with some unfortunate problems. Advantages of making them always >>>>> have their >>>>> own type include: >>>>> >>>>> 1) Not getting compilation errors in configuration X but not Y >>>>> 2) Making it easier to adopt existing code to use Shenandoah equals >>>>> barriers >>>>> 3) Recognize oops and narrowOops safely in template >>>>> >>>>> Therefore, I would like to make both oop and narrowOop always map >>>>> to a >>>>> class type consistently.
>>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>>>> >>>>> Bug: >>>>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>>>> >>>>> Thanks, >>>>> /Erik >>>> >> > From harold.seigel at oracle.com Tue Feb 27 14:52:22 2018 From: harold.seigel at oracle.com (harold seigel) Date: Tue, 27 Feb 2018 09:52:22 -0500 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> Message-ID: <9225a33d-0afb-6363-9b72-5d20dac31640@oracle.com> Hi Erik, This looks good! Thanks, Harold On 2/27/2018 9:42 AM, Erik Helin wrote: > Hi all, > > this small patch removes an unused extension point, > Universe::create_heap_ext. Since the definition of > Universe::create_heap_ext is the only code in the file > src/hotspot/share/memory/universe_ext.cpp, I also removed that file. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8197841 > > Patch: > http://cr.openjdk.java.net/~ehelin/8197841/00/ > > Testing: > - `make run-test-tier1` on Linux x86-64 > > Thanks, > Erik From erik.osterlund at oracle.com Tue Feb 27 14:53:39 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 27 Feb 2018 15:53:39 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> Message-ID: <5A9570F3.9050301@oracle.com> Hi Erik, Looks fantastic. Thanks, /Erik On 2018-02-27 15:42, Erik Helin wrote: > Hi all, > > this small patch removes an unused extension point, > Universe::create_heap_ext. Since the definition of > Universe::create_heap_ext is the only code in the file > src/hotspot/share/memory/universe_ext.cpp, I also removed that file. 
> > Issue: > https://bugs.openjdk.java.net/browse/JDK-8197841 > > Patch: > http://cr.openjdk.java.net/~ehelin/8197841/00/ > > Testing: > - `make run-test-tier1` on Linux x86-64 > > Thanks, > Erik From adinn at redhat.com Tue Feb 27 15:06:44 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 27 Feb 2018 15:06:44 +0000 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A956F7E.5090205@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> Message-ID: <75bf913c-8fc6-d80a-132f-38da505b5210@redhat.com> On 27/02/18 14:47, Erik ?sterlund wrote: > On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: >> >> Hi Erik, >> >> This looks great.?? I assume that the generated code (for these >> classes vs. oopDesc* and juint) comes out the same? > > I assume so too. Or at least that the performance does not regress. > Maybe I run some benchmarks to be sure since the question has been asked. Surely it would be better to disassemble some (before and after) compiled code which uses these narrow oop definitions? Will the compiler generate different machine code with this change? regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From kim.barrett at oracle.com Tue Feb 27 15:08:10 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 10:08:10 -0500 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A956F20.5060605@oracle.com> References: <5A940C84.7040508@oracle.com> <757B32C2-615A-4635-A84B-00460CC19522@oracle.com> <5A956F20.5060605@oracle.com> Message-ID: <3FB5D868-6460-4FE3-AC12-D1460F694567@oracle.com> > On Feb 27, 2018, at 9:45 AM, Erik ?sterlund wrote: > > Hi Kim, > > Thank you for looking at this. 
> > New full webrev covering all comments so far (hopefully): > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01/ > > Incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00_01/ So we have implicit conversions from PrimitiveType to narrowOop, and that allows implicit narrowing conversions from other integral types to PrimitiveType. It would be nice to poison the implicit narrowing conversions, but maybe VS implicit narrowing warnings are sufficient for that? And eventual -Wconversion for gcc; see JDK-8135181. Looks good. From rkennke at redhat.com Tue Feb 27 15:41:38 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 27 Feb 2018 16:41:38 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> Message-ID: This is a welcome change. Patch looks good. Thanks, Roman On Tue, Feb 27, 2018 at 3:42 PM, Erik Helin wrote: > Hi all, > > this small patch removes an unused extension point, > Universe::create_heap_ext. Since the definition of Universe::create_heap_ext > is the only code in the file > src/hotspot/share/memory/universe_ext.cpp, I also removed that file. 
> > Issue: > https://bugs.openjdk.java.net/browse/JDK-8197841 > > Patch: > http://cr.openjdk.java.net/~ehelin/8197841/00/ > > Testing: > - `make run-test-tier1` on Linux x86-64 > > Thanks, > Erik From lois.foltan at oracle.com Tue Feb 27 19:09:30 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 27 Feb 2018 14:09:30 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: <5bb27dc7-6117-1046-85fe-0f069acac276@oracle.com> On 2/27/2018 2:52 AM, Kim Barrett wrote: >> On Feb 26, 2018, at 10:55 AM, Thomas Stüfe wrote: >> >> >> src/hotspot/os/windows/os_windows.cpp >> >> Very minor nit: There is the theoretical possibility of _vsnprintf returning -1 for some reason other than errno. Documentation states "A return value of -1 indicates that an encoding error has occurred.". Since it does not say what the state of the output buffer is in that case, it may have been unchanged, in which case we would return undefined buffer content. To prevent this, maybe we could set the first buffer byte to \0 before invoking vsnprintf (if len > 0). >> >> However, I admit this is very far fetched. Will probably never happen, at least, I have never seen it. So, I leave it to you if you do this or not. > This is related to my earlier "interesting point"; encoding errors > only appear to be possible when dealing with wide characters or > strings, which I don't think would ever happen for HotSpot usage. I > agree the state of the output buffer does not seem to be well defined > in the case where an encoding error occurred.
The C99 description of > snprintf looks to me like it might be trying to say the output is NUL > terminated even in that case, but I don't think it unambiguously > succeeds. > > However, pre-setting the first byte to NUL doesn't really help; that > byte may have been overwritten before the encoding error is detected. > We can instead set the last byte of the buffer to NUL when len > 0 and > result < 0. (We could do so without checking result, but that makes > the gtest's test for stray writes a little more complicated.) I'm > making that change, mostly in case we get here via the jio_ functions. > > Note that the documentation for _vsnprintf (or vsnprintf, prior to > VS2015) makes no mention (that I could find) of encoding errors as a > possible reason for a negative return value. > > (Interestingly, the Java Access Bridge (jdk.accessibility) native > windows code uses wide char/string format directives, and appears to > in at least some cases write them using bare vsnprintf, and that's > irrespective of which VS version is being used.) > >> --- >> >> test/hotspot/gtest/runtime/test_os.cpp >> >> - check_buffer is used to check prefix and suffix range for stray writes? I think this may be overthinking it a bit, I would not expect strays beyond buf - 1 and buf + len, in which case you would not need the check_buffer. > Using check_buffer demonstrated behavior differences between some > versions of these changes (because of NULing out buf[len-1]). I'm > inclined to keep it. > >> - By initializing buffer with \0 you will miss a faulty os::snprintf() failing to write the terminating zero, no? I would use a different value. > The fill value for checking is not '\0', it's '0'. I'll change that > to make it more obviously different. > >> Otherwise, looks good to me. > It seems I sent out the open.02 webrev with a few final edits in > shared code that were only tested locally, and failed to build on some > platforms.
Windows complained about an implicit narrowing conversion > in ostream.cpp. For Solaris, a C linkage function has a different > type than a C++ linkage function with the same signature, so for testing > jio_snprintf I changed test_sprintf to a function template with the print function type deduced. > > New webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ New .03 webrevs look good. Lois > > >> Best Regards, Thomas >> >> On Sun, Feb 25, 2018 at 11:42 PM, Kim Barrett wrote: >>> On Feb 25, 2018, at 5:37 PM, Kim Barrett wrote: >>> Based on discussion, I've changed the new os::vsnprintf and >>> os::snprintf to conform to C99. [...] >>> >>> Also changed jio_vsnprintf to use os::vsnprintf, reverting some of the >>> earlier change. >> Just to be clear, the jio_vsnprintf behavior has not been changed. It's just been >> reimplemented in terms of os::vsnprintf rather than directly using ::vsnprintf and >> trying to account for its platform variations. > From kim.barrett at oracle.com Tue Feb 27 21:28:49 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 16:28:49 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: > On Feb 27, 2018, at 2:09 PM, Lois Foltan wrote: > On 2/27/2018 2:52 AM, Kim Barrett wrote: >> >> New webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ >> incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ > New .03 webrevs look good. > Lois Thanks.
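The termination guarantee discussed in this thread can be summarized in a small stand-alone sketch. This is an illustrative stand-in, not the actual os::snprintf code, and checked_snprintf is an invented name: per C99, vsnprintf already NUL-terminates on truncation when len > 0, so the only case needing defensive handling is a negative return (encoding error), where the buffer contents are unspecified.

```cpp
#include <cassert>
#include <cstdarg>
#include <cstddef>
#include <cstdio>

// Sketch of an snprintf wrapper with a hard termination guarantee: whatever
// vsnprintf did, the buffer ends up NUL-terminated (when len > 0).
int checked_snprintf(char* buf, std::size_t len, const char* fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    int result = std::vsnprintf(buf, len, fmt, ap);
    va_end(ap); // always pair va_start with va_end
    if (result < 0 && len > 0) {
        // After an encoding error the buffer state is unspecified; force a
        // terminator into the last byte so callers can still treat it as a
        // C string.
        buf[len - 1] = '\0';
    }
    return result;
}
```

Writing only the last byte (rather than pre-clearing the first byte) sidesteps the problem noted above: an early byte may already have been overwritten before the error was detected.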
From igor.ignatyev at oracle.com Tue Feb 27 21:17:12 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 27 Feb 2018 13:17:12 -0800 Subject: RFR(XXS) : 8190679 : java/util/Arrays/TimSortStackSize2.java fails with "Initial heap size set to a larger value than the maximum heap size" In-Reply-To: References: <02AFE3F4-5D26-49B6-8787-B04817EDE6C1@oracle.com> Message-ID: Hi David, I have set Xmx equal to Xms, the test passes w/ different externally passed combinations of Xmx, Xms and UseCompressedOops. http://cr.openjdk.java.net/~iignatyev//8190679/webrev.01/index.html Thanks, -- Igor > On Feb 26, 2018, at 9:00 PM, David Holmes wrote: > > Hi Igor, > > On 27/02/2018 11:25 AM, Igor Ignatyev wrote: >> http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html >>> 9 lines changed: 2 ins; 0 del; 7 mod; >> Hi all, >> could you please review the patch for TimSortStackSize2 test? >> the test failed when externally passed (via -javaoption or -vmoption) -Xmx value is less than 770m or 385m, depending on UseCompressedOops. it happened because the test explicitly set Xms value, but didn't set Xmx. >> now, the test sets Xmx as Xms times 2. > > I'm not happy with setting Xmx at 2 times Xms - that seems to be setting ourselves up for another case where we can't set -Xmx at startup. This test has encountered problems in the past with external flag settings - see in particular the review thread for JDK-8075071: > > http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-March/032316.html > > Will the test pass if we simply set -Xmx and -Xms to the same? Or (equivalently based on on previous review discussions) just set -Xmx instead of -Xms? > > Thanks, > David > >> PS as it mostly affects hotspot testing, the patch will be pushed to jdk/hs. 
>> webrev: http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html >> testing: java/util/Arrays/TimSortStackSize2.java w/ and w/o externally provided Xmx value >> JBS: https://bugs.openjdk.java.net/browse/JDK-8190679 >> Thanks, >> -- Igor From kim.barrett at oracle.com Tue Feb 27 21:28:49 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Feb 2018 16:28:49 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: > On Feb 27, 2018, at 5:14 AM, Thomas St?fe wrote: > (Interestingly, the Java Access Bridge (jdk.accessibility) native > windows code uses wide char/string format directives, and appears to > in at least some cases write them using bare vsnprintf, and that's > irrespective of which VS version is being used.) > > > Where? I took a short look and did not find it. > > However, I found a number of wcsncpy() with no truncation handling, so, no zero-termination upon truncation (I think, but I may be mistaken, just had a very quick look). For example, in src/jdk.accessibility/windows/native/toolscommon/AccessInfo.cpp there are calls to appendToBuffer and PrintDebugString that have format strings containing %ls. appendToBuffer is defined in that file, and calls vsnprintf. It presumes the buffer will be NUL-terminated; see its call to strlen(buf). PrintDebugString is in src/jdk.accessibility/windows/native/common/AccessBridgeDebug.cpp It also calls vsnprintf, and assumes the buffer will be NUL-terminated, since it is passed to OutputDebugString and/or printf. For the call to printf, better hope there's no "%" in the output of the vsnprintf. And all the variadic functions in this file seem to be missing the va_end associated with va_start. 
Maybe that works on Windows... > > New webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8196882/open.03/ > incr: http://cr.openjdk.java.net/~kbarrett/8196882/open.03.inc/ > > > > All looks well. Thank you. > > Best Regards, Thomas > Thanks. From coleen.phillimore at oracle.com Tue Feb 27 22:22:03 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Feb 2018 17:22:03 -0500 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: Message-ID: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> Thomas, review comments: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types It's really both, isn't it? The type is the index into the free list or in use lists. The name seems fine. Can you add comments on the #endifs if the #ifdef is more than 2-3 lines above (it's a nit that bothers me). +#ifdef ASSERT + // A 32bit sentinel for debugging purposes. +#define CHUNK_SENTINEL 0x4d4554EF // "MET" +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF + uint32_t _sentinel; +#endif + const ChunkIndex _chunk_type; + const bool _is_class; + // Whether the chunk is free (in freelist) or in use by some class loader. bool _is_tagged_free; +#ifdef ASSERT + ChunkOrigin _origin; + int _use_count; +#endif + It seems you could move _origin and _use_count into the ASSERT block above (maybe putting _use_count before _origin). http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html In take_from_committed, can the allocation of padding chunks be its own function like add_chunks_to_alignment() lines 1574-1615? The function is too long now. I don't think coalescation is a word in English, at least my dictionary cannot find it. 
Although it makes sense in the context, just distracting. + // Now check if in the coalescation area there are still life chunks. "live" chunks, I guess. A sentence you won't read often :). In free_chunks_get() can you handle the Humongous case first? The else for humongous chunk size is buried tons of lines below. Otherwise it might be helpful to the logic to make your addition to this function be a function you call like chunk = split_from_larger_free_chunk(); You might want to keep the origin in product mode if it doesn't add to the chunk footprint. Might help with customer debugging. Awesome looking test... I've read through most of this and thank you for adding this to at least partially solve the fragmentation problem. The irony is that we templatized the Dictionary from CMS so that we could use it for Metaspace and that has splitting and coalescing but it seems this code makes more sense than adapting that code (if it's even possible). Thank you for working on this. I'll sponsor this for you. Coleen On 2/26/18 9:20 AM, Thomas Stüfe wrote: > Hi all, > > I know this patch is a bit larger, but may I please have reviews and/or > other input? > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 > Latest version: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ > > For those who followed the mail thread, this is the incremental diff to the > last changes (included feedback Goetz gave me on- and off-list): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ > > Thank you! > > Kind Regards, Thomas Stuefe > > > > On Thu, Feb 8, 2018 at 12:58 PM, Thomas Stüfe > wrote: > >> Hi, >> >> We would like to contribute a patch developed at SAP which has been live >> in our VM for some time. It improves the metaspace chunk allocation: >> reduces fragmentation and raises the chance of reusing free metaspace >> chunks. 
>> >> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-05--2/webrev/ >> >> In very short, this patch helps with a number of pathological cases where >> metaspace chunks are free but cannot be reused because they are of the >> wrong size. For example, the metaspace freelist could be full of small >> chunks, which would not be reusable if we need larger chunks. So, we could >> get metaspace OOMs even in situations where the metaspace was far from >> exhausted. Our patch adds the ability to split and merge metaspace chunks >> dynamically and thus remove the "size-lock-in" problem. >> >> Note that there have been other attempts to get a grip on this problem, >> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >> our patch attempts a more complete solution. >> >> In 2016 I discussed the idea for this patch with some folks off-list, >> among them Jon Matsimutso. He then did advise me to create a JEP. So I did: >> [1]. However, meanwhile changes to the JEP process were discussed [2], and >> I am not sure anymore this patch even needs a JEP. It may be >> moderately complex and hence carries the risk inherent in any patch, but >> its effects would not be externally visible (if you discount seeing fewer >> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >> >> -- >> >> How this patch works: >> >> 1) When a class loader dies, its metaspace chunks are freed and returned >> to the freelist for reuse by the next class loader. With the patch, upon >> returning a chunk to the freelist, an attempt is made to merge it with its >> neighboring chunks - should they happen to be free too - to form a larger >> chunk. Which then is placed in the free list. >> >> As a result, the freelist should be populated by larger chunks at the >> expense of smaller chunks. In other words, all free chunks should always be >> as "coalesced as possible". 
>> >> 2) When a class loader needs a new chunk and a chunk of the requested size >> cannot be found in the free list, before carving out a new chunk from the >> virtual space, we first check if there is a larger chunk in the free list. >> If there is, that larger chunk is chopped up into n smaller chunks. One of >> them is returned to the caller, the others are re-added to the freelist. >> >> (1) and (2) together have the effect of removing the size-lock-in for >> chunks. If fragmentation allows it, small chunks are dynamically combined >> to form larger chunks, and larger chunks are split on demand. >> >> -- >> >> What this patch does not: >> >> This is not a rewrite of the chunk allocator - most of the mechanisms stay >> intact. Specifically, chunk sizes remain unchanged, and so do chunk >> allocation processes (when do which class loaders get handed which chunk >> size). Almost everything this patch does affects only internal workings of >> the ChunkManager. >> >> Also note that I refrained from doing any cleanups, since I wanted >> reviewers to be able to gauge this patch without filtering noise. >> Unfortunately this patch adds some complexity. But there are many future >> opportunities for code cleanup and simplification, some of which we already >> discussed in existing RFEs ([3], [4]). All of them are out of the scope for >> this particular patch. >> >> -- >> >> Details: >> >> Before the patch, the following rules held: >> - All chunk sizes are multiples of the smallest chunk size ("specialized >> chunks") >> - All chunk sizes of larger chunks are also clean multiples of the next >> smaller chunk size (e.g. for class space, the ratio of >> specialized/small/medium chunks is 1:2:32) >> - All chunk start addresses are aligned to the smallest chunk size (more >> or less accidentally, see metaspace_reserve_alignment). 
>> The patch makes the last rule explicit and more strict: >> - All (non-humongous) chunk start addresses are now aligned to their own >> chunk size. So, e.g. medium chunks are allocated at addresses which are a >> multiple of medium chunk size. This rule is not extended to humongous >> chunks, whose start addresses continue to be aligned to the smallest chunk >> size. >> >> The reason for this new alignment rule is that it makes it cheap both to >> find chunk predecessors of a chunk and to check which chunks are free. >> >> When a class loader dies and its chunk is returned to the freelist, all we >> have is its address. In order to merge it with its neighbors to form a >> larger chunk, we need to find those neighbors, including those preceding >> the returned chunk. Prior to this patch that was not easy - one would have >> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >> due to the new alignment rule, we now know where the prospective larger >> chunk must start - at the next lower larger-chunk-size-aligned boundary. We >> also know that currently a smaller chunk must start there (*). >> >> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >> now keeps a bitmap which describes its occupancy. One bit in this bitmap >> corresponds to a range the size of the smallest chunk size and starting at >> an address aligned to the smallest chunk size. Because of the alignment >> rules above, such a range belongs to one single chunk. The bit is 1 if the >> associated chunk is in use by a class loader, 0 if it is free. >> >> When we have calculated the address range a prospective larger chunk would >> span, we now need to check if all chunks in that range are free. Only then >> we can merge them. We do that by querying the bitmap. Note that the most >> common use case here is forming medium chunks from smaller chunks. 
With the >> new alignment rules, the bitmap portion covering a medium chunk now always >> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so reading >> the bitmap in many cases becomes a simple 16- or 32bit load. >> >> If the range is free, only then we need to iterate the chunks in that >> range: pull them from the freelist, combine them to one new larger chunk, >> re-add that one to the freelist. >> >> (*) Humongous chunks make this a bit more complicated. Since the new >> alignment rule does not extend to them, a humongous chunk could still >> straddle the lower or upper boundary of the prospective larger chunk. So I >> gave the occupancy map a second layer, which is used to mark the start of >> chunks. >> An alternative approach could have been to make humongous chunks size and >> start address always a multiple of the largest non-humongous chunk size >> (medium chunks). That would have caused a bit of waste per humongous chunk >> (<64K) in exchange for simpler coding and a simpler occupancy map. >> >> -- >> >> The patch shows its best results in scenarios where a lot of smallish >> class loaders are alive simultaneously. When dying, they leave continuous >> expanses of metaspace covered in small chunks, which can be merged nicely. >> However, if class loader life times vary more, we have more interleaving of >> dead and alive small chunks, and hence chunk merging does not work as well >> as it could. >> >> For an example of a pathological case like this see example program: [5] >> >> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >> test3.Example2" the test will load 3000 small classes in separate class >> loaders, then throw them away and start loading large classes. The small >> classes will have flooded the metaspace with small chunks, which are >> unusable for the large classes. 
When executing with the rather limited >> CompressedClassSpaceSize=10M, we will run into an OOM after loading about >> 800 large classes, having used only 40% of the class space, the rest is >> wasted to unused small chunks. However, with our patch the example program >> will manage to allocate ~2900 large classes before running into an OOM, and >> class space will show almost no waste. >> >> To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >> an OOM, statistics and an ASCII representation of the class space will be >> shown. The unpatched version will show large expanses of unused small >> chunks, the patched variant will show almost no waste. >> >> Note that the patch could be made more effective with a different size >> ratio between small and medium chunks: in class space, that ratio is 1:16, >> so 16 small chunks must happen to be free to form one larger chunk. With a >> smaller ratio the chance for coalescation would be larger. So there may be >> room for future improvement here: Since we now can merge and split chunks >> on demand, we could introduce more chunk sizes. Potentially arriving at a >> buddy-ish allocator style where we drop hard-wired chunk sizes for a >> dynamic model where the ratio between chunk sizes is always 1:2 and we >> could in theory have no limit to the chunk size? But this is just a thought >> and well out of the scope of this patch. 
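[Editorial sketch] The alignment-plus-bitmap mechanism described above can be sketched in a few lines. This is a hedged illustration only — not HotSpot's actual OccupancyMap; the names and the smallest-chunk constant are invented. Because every chunk starts at an address aligned to its own power-of-two size, the start of the prospective larger chunk is a plain align-down, and free-ness of the covered range is a contiguous bit-range check:

```cpp
#include <cstdint>

// Assumed smallest chunk size, in words; one bitmap bit covers one such range.
const uintptr_t SMALLEST_CHUNK_WORDS = 128;

// Align an address (measured in words here) down to a power-of-two chunk
// size, yielding the start of the prospective larger chunk.
uintptr_t chunk_start(uintptr_t addr, uintptr_t chunk_size) {
  return addr & ~(chunk_size - 1);
}

// Bitmap: 1 = range in use by a class loader, 0 = free. Returns true if
// every smallest-chunk slot in [start, start + size) is free, i.e. the
// slots could be merged into a single larger chunk.
bool range_is_free(const uint8_t* bitmap, uintptr_t base,
                   uintptr_t start, uintptr_t size) {
  uintptr_t first = (start - base) / SMALLEST_CHUNK_WORDS;
  uintptr_t count = size / SMALLEST_CHUNK_WORDS;
  for (uintptr_t bit = first; bit < first + count; bit++) {
    if (bitmap[bit / 8] & (1u << (bit % 8))) {
      return false;  // some chunk in the range is still live
    }
  }
  return true;
}
```

With medium chunks covering 16 or 32 aligned bits, a real implementation can replace the loop by a single 16- or 32-bit load, which is what the patch does.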
So, the maximally possible number of unused padding chunks should >> be limited by design to about 64K. >> >> The expectation is that the memory savings by this patch far outweighs its >> added memory costs. >> >> .. (performance): >> >> We did not see measurable drops in standard benchmarks raising over the >> normal noise. I also measured times for a program which stresses metaspace >> chunk coalescation, with the same result. >> >> I am open to suggestions what else I should measure, and/or independent >> measurements. >> >> -- >> >> Other details: >> >> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >> complexity somewhat, because it was made mostly obsolete by this patch: >> since small chunks are combined to larger chunks upon return to the >> freelist, in theory we should not have that many free small chunks anymore >> anyway. However, there may be still cases where we could benefit from this >> workaround, so I am asking your opinion on this one. >> >> About tests: There were two native tests - ChunkManagerReturnTest and >> TestVirtualSpaceNode (the former was added by me last year) - which did not >> make much sense anymore, since they relied heavily on internal behavior >> which was made unpredictable with this patch. >> To make up for these lost tests, I added a new gtest which attempts to >> stress the many combinations of allocation pattern but does so from a layer >> above the old tests. It now uses Metaspace::allocate() and friends. By >> using that point as entry for tests, I am less dependent on implementation >> internals and still cover a lot of scenarios. 
>> >> -- >> >> Review pointers: >> >> Good points to start are >> - ChunkManager::return_single_chunk() - specifically, >> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >> upon return to the free list >> - ChunkManager::free_chunks_get(): Here we now split large chunks into >> smaller chunks on demand >> - VirtualSpaceNode::take_from_committed() : chunks are allocated >> according to align rules now, padding chunks are handled >> - The OccupancyMap class is the helper class implementing the new >> occupancy bitmap >> >> The rest is mostly chaff: helper functions, added tests and verifications. >> >> -- >> >> Thanks and Best Regards, Thomas >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >> /000128.html >> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >> >> >> From david.holmes at oracle.com Wed Feb 28 04:33:29 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 28 Feb 2018 14:33:29 +1000 Subject: RFR(XXS) : 8190679 : java/util/Arrays/TimSortStackSize2.java fails with "Initial heap size set to a larger value than the maximum heap size" In-Reply-To: References: <02AFE3F4-5D26-49B6-8787-B04817EDE6C1@oracle.com> Message-ID: On 28/02/2018 7:17 AM, Igor Ignatyev wrote: > Hi David, > > I have set Xmx equal to Xms, the test passes w/ different externally > passed combinations of Xmx, Xms and UseCompressedOops. > > http://cr.openjdk.java.net/~iignatyev//8190679/webrev.01/index.html Looks good! 
Thanks, David > Thanks, > -- Igor > >> On Feb 26, 2018, at 9:00 PM, David Holmes > > wrote: >> >> Hi Igor, >> >> On 27/02/2018 11:25 AM, Igor Ignatyev wrote: >>> http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html >>>> 9 lines changed: 2 ins; 0 del; 7 mod; >>> Hi all, >>> could you please review the patch for TimSortStackSize2 test? >>> the test failed when externally passed (via -javaoption or -vmoption) >>> -Xmx value is less than 770m or 385m, depending on UseCompressedOops. >>> it happened because the test explicitly set Xms value, but didn't set >>> Xmx. >>> now, the test sets Xmx as Xms times 2. >> >> I'm not happy with setting Xmx at 2 times Xms - that seems to be >> setting ourselves up for another case where we can't set -Xmx at >> startup. This test has encountered problems in the past with external >> flag settings - see in particular the review thread for JDK-8075071: >> >> http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-March/032316.html >> >> Will the test pass if we simply set -Xmx and -Xms to the same? Or >> (equivalently based on previous review discussions) just set -Xmx >> instead of -Xms? >> >> Thanks, >> David >> >>> PS as it mostly affects hotspot testing, the patch will be pushed to >>> jdk/hs. 
>>> webrev: >>> http://cr.openjdk.java.net/~iignatyev//8190679/webrev.00/index.html >>> testing: java/util/Arrays/TimSortStackSize2.java w/ and w/o >>> externally provided Xmx value >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8190679 >>> Thanks, >>> -- Igor > From tobias.hartmann at oracle.com Wed Feb 28 13:25:42 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 28 Feb 2018 14:25:42 +0100 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point Message-ID: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8148871 http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ The problem is that the stack verification code uses the interpreter oop map to get the stack size of the next instruction. However, for calls, the oop map contains the state *after* the instruction. With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my comment in the bug [1]. The fix is to add a special case for invoke bytecodes and use the parameter size instead of the oop map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix with another patch). 
Thanks, Tobias [1] https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 From tobias.hartmann at oracle.com Wed Feb 28 14:21:19 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 28 Feb 2018 15:21:19 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions Message-ID: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8198826 http://cr.openjdk.java.net/~thartmann/8198826/webrev.00/ If an OutOfMemoryError is thrown during reallocation of scalar replaced objects, stack verification crashes after calling OopMapCache::compute_one_oop_map because that code does not expect pending exceptions. Please note that the exception is not thrown in that method but earlier in Deoptimization::realloc_objects() and then propagated through the deoptimization blob. I propose to skip stack verification in this exceptional case. Thanks, Tobias From erik.helin at oracle.com Wed Feb 28 15:28:13 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 16:28:13 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: Message-ID: <633ac65d-83af-68e0-ea84-d0e7da181871@oracle.com> Hi Thomas, I will take a look at this, I just have been a bit busy lately (sorry for not responding earlier). Thanks, Erik On 02/26/2018 03:20 PM, Thomas Stüfe wrote: > Hi all, > > I know this patch is a bit larger, but may I please have reviews and/or > other input? 
> > Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 > Latest version: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ > > For those who followed the mail thread, this is the incremental diff to the > last changes (included feedback Goetz gave me on- and off-list): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ > > Thank you! > > Kind Regards, Thomas Stuefe > > > > On Thu, Feb 8, 2018 at 12:58 PM, Thomas Stüfe > wrote: > >> Hi, >> >> We would like to contribute a patch developed at SAP which has been live >> in our VM for some time. It improves the metaspace chunk allocation: >> reduces fragmentation and raises the chance of reusing free metaspace >> chunks. >> >> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-05--2/webrev/ >> >> In very short, this patch helps with a number of pathological cases where >> metaspace chunks are free but cannot be reused because they are of the >> wrong size. For example, the metaspace freelist could be full of small >> chunks, which would not be reusable if we need larger chunks. So, we could >> get metaspace OOMs even in situations where the metaspace was far from >> exhausted. Our patch adds the ability to split and merge metaspace chunks >> dynamically and thus remove the "size-lock-in" problem. >> >> Note that there have been other attempts to get a grip on this problem, >> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >> our patch attempts a more complete solution. >> >> In 2016 I discussed the idea for this patch with some folks off-list, >> among them Jon Matsimutso. He then did advise me to create a JEP. So I did: >> [1]. However, meanwhile changes to the JEP process were discussed [2], and >> I am not sure anymore this patch even needs a JEP. 
It may be >> moderately complex and hence carries the risk inherent in any patch, but >> its effects would not be externally visible (if you discount seeing fewer >> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >> >> -- >> >> How this patch works: >> >> 1) When a class loader dies, its metaspace chunks are freed and returned >> to the freelist for reuse by the next class loader. With the patch, upon >> returning a chunk to the freelist, an attempt is made to merge it with its >> neighboring chunks - should they happen to be free too - to form a larger >> chunk. Which then is placed in the free list. >> >> As a result, the freelist should be populated by larger chunks at the >> expense of smaller chunks. In other words, all free chunks should always be >> as "coalesced as possible". >> >> 2) When a class loader needs a new chunk and a chunk of the requested size >> cannot be found in the free list, before carving out a new chunk from the >> virtual space, we first check if there is a larger chunk in the free list. >> If there is, that larger chunk is chopped up into n smaller chunks. One of >> them is returned to the caller, the others are re-added to the freelist. >> >> (1) and (2) together have the effect of removing the size-lock-in for >> chunks. If fragmentation allows it, small chunks are dynamically combined >> to form larger chunks, and larger chunks are split on demand. >> >> -- >> >> What this patch does not: >> >> This is not a rewrite of the chunk allocator - most of the mechanisms stay >> intact. Specifically, chunk sizes remain unchanged, and so do chunk >> allocation processes (when do which class loaders get handed which chunk >> size). Almost everything this patch does affects only internal workings of >> the ChunkManager. >> >> Also note that I refrained from doing any cleanups, since I wanted >> reviewers to be able to gauge this patch without filtering noise. >> Unfortunately this patch adds some complexity. 
But there are many future >> opportunities for code cleanup and simplification, some of which we already >> discussed in existing RFEs ([3], [4]). All of them are out of the scope for >> this particular patch. >> >> -- >> >> Details: >> >> Before the patch, the following rules held: >> - All chunk sizes are multiples of the smallest chunk size ("specialized >> chunks") >> - All chunk sizes of larger chunks are also clean multiples of the next >> smaller chunk size (e.g. for class space, the ratio of >> specialized/small/medium chunks is 1:2:32) >> - All chunk start addresses are aligned to the smallest chunk size (more >> or less accidentally, see metaspace_reserve_alignment). >> The patch makes the last rule explicit and more strict: >> - All (non-humongous) chunk start addresses are now aligned to their own >> chunk size. So, e.g. medium chunks are allocated at addresses which are a >> multiple of medium chunk size. This rule is not extended to humongous >> chunks, whose start addresses continue to be aligned to the smallest chunk >> size. >> >> The reason for this new alignment rule is that it makes it cheap both to >> find chunk predecessors of a chunk and to check which chunks are free. >> >> When a class loader dies and its chunk is returned to the freelist, all we >> have is its address. In order to merge it with its neighbors to form a >> larger chunk, we need to find those neighbors, including those preceding >> the returned chunk. Prior to this patch that was not easy - one would have >> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >> due to the new alignment rule, we now know where the prospective larger >> chunk must start - at the next lower larger-chunk-size-aligned boundary. We >> also know that currently a smaller chunk must start there (*). >> >> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >> now keeps a bitmap which describes its occupancy. 
One bit in this bitmap >> corresponds to a range the size of the smallest chunk size and starting at >> an address aligned to the smallest chunk size. Because of the alignment >> rules above, such a range belongs to one single chunk. The bit is 1 if the >> associated chunk is in use by a class loader, 0 if it is free. >> >> When we have calculated the address range a prospective larger chunk would >> span, we now need to check if all chunks in that range are free. Only then >> we can merge them. We do that by querying the bitmap. Note that the most >> common use case here is forming medium chunks from smaller chunks. With the >> new alignment rules, the bitmap portion covering a medium chunk now always >> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so reading >> the bitmap in many cases becomes a simple 16- or 32bit load. >> >> If the range is free, only then we need to iterate the chunks in that >> range: pull them from the freelist, combine them to one new larger chunk, >> re-add that one to the freelist. >> >> (*) Humongous chunks make this a bit more complicated. Since the new >> alignment rule does not extend to them, a humongous chunk could still >> straddle the lower or upper boundary of the prospective larger chunk. So I >> gave the occupancy map a second layer, which is used to mark the start of >> chunks. >> An alternative approach could have been to make humongous chunks size and >> start address always a multiple of the largest non-humongous chunk size >> (medium chunks). That would have caused a bit of waste per humongous chunk >> (<64K) in exchange for simpler coding and a simpler occupancy map. >> >> -- >> >> The patch shows its best results in scenarios where a lot of smallish >> class loaders are alive simultaneously. When dying, they leave continuous >> expanses of metaspace covered in small chunks, which can be merged nicely. 
>> However, if class loader life times vary more, we have more interleaving of >> dead and alive small chunks, and hence chunk merging does not work as well >> as it could. >> >> For an example of a pathological case like this see example program: [5] >> >> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >> test3.Example2" the test will load 3000 small classes in separate class >> loaders, then throw them away and start loading large classes. The small >> classes will have flooded the metaspace with small chunks, which are >> unusable for the large classes. When executing with the rather limited >> CompressedClassSpaceSize=10M, we will run into an OOM after loading about >> 800 large classes, having used only 40% of the class space, the rest is >> wasted to unused small chunks. However, with our patch the example program >> will manage to allocate ~2900 large classes before running into an OOM, and >> class space will show almost no waste. >> >> To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >> an OOM, statistics and an ASCII representation of the class space will be >> shown. The unpatched version will show large expanses of unused small >> chunks, the patched variant will show almost no waste. >> >> Note that the patch could be made more effective with a different size >> ratio between small and medium chunks: in class space, that ratio is 1:16, >> so 16 small chunks must happen to be free to form one larger chunk. With a >> smaller ratio the chance for coalescation would be larger. So there may be >> room for future improvement here: Since we now can merge and split chunks >> on demand, we could introduce more chunk sizes. Potentially arriving at a >> buddy-ish allocator style where we drop hard-wired chunk sizes for a >> dynamic model where the ratio between chunk sizes is always 1:2 and we >> could in theory have no limit to the chunk size? But this is just a thought >> and well out of the scope of this patch. 
>> >> -- >> >> What does this patch cost (memory): >> >> - the occupancy bitmap adds 1 byte per 4K metaspace. >> - MetaChunk headers get larger, since we add an enum and two bools to it. >> Depending on what the c++ compiler does with that, chunk headers grow by >> one or two MetaWords, reducing the payload size by that amount. >> - The new alignment rules mean we may need to create padding chunks to >> precede larger chunks. But since these padding chunks are added to the >> freelist, they should be used up before the need for new padding chunks >> arises. So, the maximally possible number of unused padding chunks should >> be limited by design to about 64K. >> >> The expectation is that the memory savings by this patch far outweighs its >> added memory costs. >> >> .. (performance): >> >> We did not see measurable drops in standard benchmarks raising over the >> normal noise. I also measured times for a program which stresses metaspace >> chunk coalescation, with the same result. >> >> I am open to suggestions what else I should measure, and/or independent >> measurements. >> >> -- >> >> Other details: >> >> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >> complexity somewhat, because it was made mostly obsolete by this patch: >> since small chunks are combined to larger chunks upon return to the >> freelist, in theory we should not have that many free small chunks anymore >> anyway. However, there may be still cases where we could benefit from this >> workaround, so I am asking your opinion on this one. >> >> About tests: There were two native tests - ChunkManagerReturnTest and >> TestVirtualSpaceNode (the former was added by me last year) - which did not >> make much sense anymore, since they relied heavily on internal behavior >> which was made unpredictable with this patch. 
>> To make up for these lost tests, I added a new gtest which attempts to >> stress the many combinations of allocation patterns but does so from a layer >> above the old tests. It now uses Metaspace::allocate() and friends. By >> using that point as an entry point for tests, I am less dependent on implementation >> internals and still cover a lot of scenarios. >> >> -- >> >> Review pointers: >> >> Good points to start are >> - ChunkManager::return_single_chunk() - specifically, >> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >> upon return to the free list >> - ChunkManager::free_chunks_get(): Here we now split large chunks into >> smaller chunks on demand >> - VirtualSpaceNode::take_from_committed() : chunks are allocated >> according to alignment rules now, padding chunks are handled >> - The OccupancyMap class is the helper class implementing the new >> occupancy bitmap >> >> The rest is mostly chaff: helper functions, added tests and verifications. >> >> -- >> >> Thanks and Best Regards, Thomas >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >> /000128.html >> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >> >> >> From erik.helin at oracle.com Wed Feb 28 15:49:48 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 16:49:48 +0100 Subject: RFR: 8197842: Remove unused macros VM_STRUCTS_EXT and VM_TYPES_EXT Message-ID: Hi all, this patch removes the unused extension macros VM_STRUCTS_EXT and VM_TYPES_EXT. Since these macros are the only content of vmStructs_ext.hpp, this patch also removes the file vmStructs_ext.hpp. 
Issue: https://bugs.openjdk.java.net/browse/JDK-8197842 Webrev: http://cr.openjdk.java.net/~ehelin/8197842/00/ Testing: - `make run-test-tier1` on Linux x86-64 Thanks, Erik From erik.helin at oracle.com Wed Feb 28 15:50:30 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 16:50:30 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: <5A9570F3.9050301@oracle.com> References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> <5A9570F3.9050301@oracle.com> Message-ID: <9f675107-f538-a5cd-eae7-1dba74ab9256@oracle.com> On 02/27/2018 03:53 PM, Erik Österlund wrote: > Hi Erik, > > Looks fantastic. Thanks :) Erik > Thanks, > /Erik > > On 2018-02-27 15:42, Erik Helin wrote: >> Hi all, >> >> this small patch removes an unused extension point, >> Universe::create_heap_ext. Since the definition of >> Universe::create_heap_ext is the only code in the file >> src/hotspot/share/memory/universe_ext.cpp, I also removed that file. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8197841 >> >> Patch: >> http://cr.openjdk.java.net/~ehelin/8197841/00/ >> >> Testing: >> - `make run-test-tier1` on Linux x86-64 >> >> Thanks, >> Erik > From erik.helin at oracle.com Wed Feb 28 15:50:54 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 16:50:54 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> Message-ID: On 02/27/2018 04:41 PM, Roman Kennke wrote: > This is a welcome change. Patch looks good. Thanks for taking your time and reviewing! Erik > Thanks, Roman > > On Tue, Feb 27, 2018 at 3:42 PM, Erik Helin wrote: >> Hi all, >> >> this small patch removes an unused extension point, >> Universe::create_heap_ext. Since the definition of Universe::create_heap_ext >> is the only code in the file >> src/hotspot/share/memory/universe_ext.cpp, I also removed that file. 
>> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8197841 >> >> Patch: >> http://cr.openjdk.java.net/~ehelin/8197841/00/ >> >> Testing: >> - `make run-test-tier1` on Linux x86-64 >> >> Thanks, >> Erik From thomas.stuefe at gmail.com Wed Feb 28 16:17:29 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 28 Feb 2018 17:17:29 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: <633ac65d-83af-68e0-ea84-d0e7da181871@oracle.com> References: <633ac65d-83af-68e0-ea84-d0e7da181871@oracle.com> Message-ID: Hi Erik, no problem! Thanks, Thomas On Wed, Feb 28, 2018 at 4:28 PM, Erik Helin wrote: > Hi Thomas, > > I will take a look at this, I just have been a bit busy lately (sorry for > not responding earlier). > > Thanks, > Erik > > > On 02/26/2018 03:20 PM, Thomas Stüfe wrote: > >> Hi all, >> >> I know this patch is a bit larger, but may I please have reviews and/or >> other input? >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >> Latest version: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/ >> >> For those who followed the mail thread, this is the incremental diff to >> the >> last changes (included feedback Goetz gave me on- and off-list): >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev-incr/webrev/ >> >> Thank you! >> >> Kind Regards, Thomas Stuefe >> >> >> >> On Thu, Feb 8, 2018 at 12:58 PM, Thomas Stüfe >> wrote: >> >> Hi, >>> >>> We would like to contribute a patch developed at SAP which has been live >>> in our VM for some time. It improves the metaspace chunk allocation: >>> reduces fragmentation and raises the chance of reusing free metaspace >>> chunks. 
>>> >>> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-05--2/webrev/ >>> >>> In short, this patch helps with a number of pathological cases where >>> metaspace chunks are free but cannot be reused because they are of the >>> wrong size. For example, the metaspace freelist could be full of small >>> chunks, which would not be reusable if we need larger chunks. So, we >>> could >>> get metaspace OOMs even in situations where the metaspace was far from >>> exhausted. Our patch adds the ability to split and merge metaspace chunks >>> dynamically and thus removes the "size-lock-in" problem. >>> >>> Note that there have been other attempts to get a grip on this problem, >>> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >>> our patch attempts a more complete solution. >>> >>> In 2016 I discussed the idea for this patch with some folks off-list, >>> among them Jon Matsimutso. He then advised me to create a JEP. So I did: >>> [1]. However, meanwhile changes to the JEP process were discussed [2], >>> and >>> I am not sure anymore that this patch even needs a JEP. It may be >>> moderately complex and hence carries the risk inherent in any patch, but >>> its effects would not be externally visible (if you discount seeing fewer >>> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >>> >>> -- >>> >>> How this patch works: >>> >>> 1) When a class loader dies, its metaspace chunks are freed and returned >>> to the freelist for reuse by the next class loader. With the patch, upon >>> returning a chunk to the freelist, an attempt is made to merge it with >>> its >>> neighboring chunks - should they happen to be free too - to form a larger >>> chunk. Which then is placed in the free list. >>> >>> As a result, the freelist should be populated by larger chunks at the >>> expense of smaller chunks. In other words, all free chunks should always >>> be >>> as "coalesced as possible". 
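[Editor's note: a toy model of the merge-on-free step in (1) might look like the following. This is an illustrative sketch only - the real patch works on MetaChunk lists and an occupancy bitmap inside VirtualSpaceNode; the slot-vector model and the function name are invented for the example.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model: metaspace as a row of smallest-size slots, where
// in_use[i] is true iff slot i belongs to a live class loader.
// A freed chunk can be merged into the enclosing aligned range
// only if every slot in that range is currently free.
bool can_merge_range(const std::vector<bool>& in_use,
                     size_t start, size_t len) {
  for (size_t i = start; i < start + len; i++) {
    if (in_use[i]) {
      return false;  // a live chunk in the range blocks the merge
    }
  }
  return true;
}
```

If the check succeeds, the model's next step would mirror the patch: pull the free chunks in the range off the freelist, replace them with one larger chunk, and re-add that chunk to the freelist.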
>>> >>> 2) When a class loader needs a new chunk and a chunk of the requested >>> size >>> cannot be found in the free list, before carving out a new chunk from the >>> virtual space, we first check if there is a larger chunk in the free >>> list. >>> If there is, that larger chunk is chopped up into n smaller chunks. One >>> of >>> them is returned to the caller, the others are re-added to the freelist. >>> >>> (1) and (2) together have the effect of removing the size-lock-in for >>> chunks. If fragmentation allows it, small chunks are dynamically combined >>> to form larger chunks, and larger chunks are split on demand. >>> >>> -- >>> >>> What this patch does not do: >>> >>> This is not a rewrite of the chunk allocator - most of the mechanisms >>> stay >>> intact. Specifically, chunk sizes remain unchanged, and so do chunk >>> allocation processes (when which class loaders get handed which chunk >>> size). Almost everything this patch does affects only internal workings of >>> the ChunkManager. >>> >>> Also note that I refrained from doing any cleanups, since I wanted >>> reviewers to be able to gauge this patch without filtering noise. >>> Unfortunately this patch adds some complexity. But there are many future >>> opportunities for code cleanup and simplification, some of which we >>> already >>> discussed in existing RFEs ([3], [4]). All of them are out of the scope >>> for >>> this particular patch. >>> >>> -- >>> >>> Details: >>> >>> Before the patch, the following rules held: >>> - All chunk sizes are multiples of the smallest chunk size ("specialized >>> chunks") >>> - All chunk sizes of larger chunks are also clean multiples of the next >>> smaller chunk size (e.g. for class space, the ratio of >>> specialized/small/medium chunks is 1:2:32) >>> - All chunk start addresses are aligned to the smallest chunk size (more >>> or less accidentally, see metaspace_reserve_alignment). 
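[Editor's note: the alignment arithmetic behind rules like these can be sketched as follows. A hypothetical helper (not HotSpot code) that rounds an address down to a power-of-two boundary - with chunk starts aligned to their own size, this is the operation that locates the start of the enclosing, prospectively larger chunk.]

```cpp
#include <cassert>
#include <cstdint>

// Round an address down to the next lower boundary aligned to
// 'alignment', which is assumed to be a power of two. If chunk start
// addresses are aligned to their own chunk size, this finds the start
// of the prospective larger chunk containing 'addr'.
inline uintptr_t align_down(uintptr_t addr, uintptr_t alignment) {
  return addr & ~(alignment - 1);
}
```

For example, with a (hypothetical) larger chunk size of 0x1000 words, a freed chunk at 0x1234 would belong to a prospective larger chunk starting at 0x1000.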
>>> The patch makes the last rule explicit and more strict: >>> - All (non-humongous) chunk start addresses are now aligned to their own >>> chunk size. So, e.g. medium chunks are allocated at addresses which are a >>> multiple of medium chunk size. This rule is not extended to humongous >>> chunks, whose start addresses continue to be aligned to the smallest >>> chunk >>> size. >>> >>> The reason for this new alignment rule is that it makes it cheap both to >>> find chunk predecessors of a chunk and to check which chunks are free. >>> >>> When a class loader dies and its chunk is returned to the freelist, all >>> we >>> have is its address. In order to merge it with its neighbors to form a >>> larger chunk, we need to find those neighbors, including those preceding >>> the returned chunk. Prior to this patch that was not easy - one would >>> have >>> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >>> due to the new alignment rule, we now know where the prospective larger >>> chunk must start - at the next lower larger-chunk-size-aligned boundary. >>> We >>> also know that currently a smaller chunk must start there (*). >>> >>> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >>> now keeps a bitmap which describes its occupancy. One bit in this bitmap >>> corresponds to a range the size of the smallest chunk size and starting >>> at >>> an address aligned to the smallest chunk size. Because of the alignment >>> rules above, such a range belongs to one single chunk. The bit is 1 if >>> the >>> associated chunk is in use by a class loader, 0 if it is free. >>> >>> When we have calculated the address range a prospective larger chunk >>> would >>> span, we now need to check if all chunks in that range are free. Only >>> then >>> we can merge them. We do that by querying the bitmap. Note that the most >>> common use case here is forming medium chunks from smaller chunks. 
With >>> the >>> new alignment rules, the bitmap portion covering a medium chunk now >>> always >>> happens to be 16- or 32-bit in size and is 16- or 32-bit aligned, so >>> reading >>> the bitmap in many cases becomes a simple 16- or 32-bit load. >>> >>> If the range is free, only then we need to iterate the chunks in that >>> range: pull them from the freelist, combine them to one new larger chunk, >>> re-add that one to the freelist. >>> >>> (*) Humongous chunks make this a bit more complicated. Since the new >>> alignment rule does not extend to them, a humongous chunk could still >>> straddle the lower or upper boundary of the prospective larger chunk. So >>> I >>> gave the occupancy map a second layer, which is used to mark the start of >>> chunks. >>> An alternative approach could have been to make humongous chunk size and >>> start address always a multiple of the largest non-humongous chunk size >>> (medium chunks). That would have caused a bit of waste per humongous >>> chunk >>> (<64K) in exchange for simpler coding and a simpler occupancy map. >>> >>> -- >>> >>> The patch shows its best results in scenarios where a lot of smallish >>> class loaders are alive simultaneously. When dying, they leave continuous >>> expanses of metaspace covered in small chunks, which can be merged >>> nicely. >>> However, if class loader lifetimes vary more, we have more interleaving >>> of >>> dead and alive small chunks, and hence chunk merging does not work as >>> well >>> as it could. >>> >>> For an example of a pathological case like this see example program: [5] >>> >>> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >>> test3.Example2" the test will load 3000 small classes in separate class >>> loaders, then throw them away and start loading large classes. The small >>> classes will have flooded the metaspace with small chunks, which are >>> unusable for the large classes. 
When executing with the rather limited >>> CompressedClassSpaceSize=10M, we will run into an OOM after loading about >>> 800 large classes, having used only 40% of the class space, the rest is >>> wasted on unused small chunks. However, with our patch the example >>> program >>> will manage to allocate ~2900 large classes before running into an OOM, >>> and >>> class space will show almost no waste. >>> >>> To demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >>> an OOM, statistics and an ASCII representation of the class space will be >>> shown. The unpatched version will show large expanses of unused small >>> chunks, the patched variant will show almost no waste. >>> >>> Note that the patch could be made more effective with a different size >>> ratio between small and medium chunks: in class space, that ratio is >>> 1:16, >>> so 16 small chunks must happen to be free to form one larger chunk. With >>> a >>> smaller ratio the chance of coalescing would be larger. So there may >>> be >>> room for future improvement here: Since we can now merge and split chunks >>> on demand, we could introduce more chunk sizes. Potentially arriving at a >>> buddy-ish allocator style where we drop hard-wired chunk sizes for a >>> dynamic model where the ratio between chunk sizes is always 1:2 and we >>> could in theory have no limit to the chunk size? But this is just a >>> thought >>> and well out of the scope of this patch. >>> >>> -- >>> >>> What does this patch cost (memory): >>> >>> - the occupancy bitmap adds 1 byte per 4K metaspace. >>> - MetaChunk headers get larger, since we add an enum and two bools to >>> it. >>> Depending on what the C++ compiler does with that, chunk headers grow by >>> one or two MetaWords, reducing the payload size by that amount. >>> - The new alignment rules mean we may need to create padding chunks to >>> precede larger chunks. 
But since these padding chunks are added to the >>> freelist, they should be used up before the need for new padding chunks >>> arises. So, the maximum possible number of unused padding chunks should >>> be limited by design to about 64K. >>> >>> The expectation is that the memory savings by this patch far outweigh >>> its >>> added memory costs. >>> >>> .. (performance): >>> >>> We did not see measurable drops in standard benchmarks rising above the >>> normal noise. I also measured times for a program which stresses >>> metaspace >>> chunk coalescing, with the same result. >>> >>> I am open to suggestions on what else I should measure, and/or independent >>> measurements. >>> >>> -- >>> >>> Other details: >>> >>> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >>> complexity somewhat, because it was made mostly obsolete by this patch: >>> since small chunks are combined into larger chunks upon return to the >>> freelist, in theory we should not have that many free small chunks >>> anymore >>> anyway. However, there may still be cases where we could benefit from >>> this >>> workaround, so I am asking your opinion on this one. >>> >>> About tests: There were two native tests - ChunkManagerReturnTest and >>> TestVirtualSpaceNode (the former was added by me last year) - which did >>> not >>> make much sense anymore, since they relied heavily on internal behavior >>> which was made unpredictable with this patch. >>> To make up for these lost tests, I added a new gtest which attempts to >>> stress the many combinations of allocation patterns but does so from a >>> layer >>> above the old tests. It now uses Metaspace::allocate() and friends. By >>> using that point as an entry point for tests, I am less dependent on >>> implementation >>> internals and still cover a lot of scenarios. 
>>> >>> -- >>> >>> Review pointers: >>> >>> Good points to start are >>> - ChunkManager::return_single_chunk() - specifically, >>> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >>> upon return to the free list >>> - ChunkManager::free_chunks_get(): Here we now split large chunks into >>> smaller chunks on demand >>> - VirtualSpaceNode::take_from_committed() : chunks are allocated >>> according to alignment rules now, padding chunks are handled >>> - The OccupancyMap class is the helper class implementing the new >>> occupancy bitmap >>> >>> The rest is mostly chaff: helper functions, added tests and >>> verifications. >>> >>> -- >>> >>> Thanks and Best Regards, Thomas >>> >>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >>> /000128.html >>> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >>> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >>> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >>> >>> >>> >>> From erik.helin at oracle.com Wed Feb 28 15:50:14 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 16:50:14 +0100 Subject: RFR: 8197841: Remove unused function Universe::create_heap_ext In-Reply-To: <9225a33d-0afb-6363-9b72-5d20dac31640@oracle.com> References: <5c45c910-2b4d-e791-46db-715d845172a0@oracle.com> <9225a33d-0afb-6363-9b72-5d20dac31640@oracle.com> Message-ID: On 02/27/2018 03:52 PM, harold seigel wrote: > Hi Erik, > > This looks good! Thanks Harold for reviewing! Erik > Thanks, Harold > > On 2/27/2018 9:42 AM, Erik Helin wrote: >> Hi all, >> >> this small patch removes an unused extension point, >> Universe::create_heap_ext. Since the definition of >> Universe::create_heap_ext is the only code in the file >> src/hotspot/share/memory/universe_ext.cpp, I also removed that file. 
>> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8197841 >> >> Patch: >> http://cr.openjdk.java.net/~ehelin/8197841/00/ >> >> Testing: >> - `make run-test-tier1` on Linux x86-64 >> >> Thanks, >> Erik > From jesper.wilhelmsson at oracle.com Wed Feb 28 16:43:44 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 28 Feb 2018 17:43:44 +0100 Subject: RFR(xs): JDK-8198726 - Quarantine SADebugDTest.java again Message-ID: <5B0B1883-7B2C-43BE-B631-50AEF8CD8D6E@oracle.com> Hi, Please review this trivial change to quarantine SADebugDTest.java. This is currently an integration blocker. I aligned the bugids as well, which is why two more lines were touched. Bug: https://bugs.openjdk.java.net/browse/JDK-8198726 Patch: diff --git a/test/hotspot/jtreg/ProblemList.txt b/test/hotspot/jtreg/ProblemList.txt --- a/test/hotspot/jtreg/ProblemList.txt +++ b/test/hotspot/jtreg/ProblemList.txt @@ -79,8 +79,9 @@ # :hotspot_serviceability -serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all -serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all +serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all +serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all +serviceability/sa/sadebugd/SADebugDTest.java 8163805 generic-all ############################################################################# Thanks, /Jesper From lois.foltan at oracle.com Wed Feb 28 16:55:52 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 28 Feb 2018 11:55:52 -0500 Subject: RFR: 8197842: Remove unused macros VM_STRUCTS_EXT and VM_TYPES_EXT In-Reply-To: References: Message-ID: <2e86524a-6a85-13bf-b7e9-a7d4f52430d6@oracle.com> Looks good. Lois On 2/28/2018 10:49 AM, Erik Helin wrote: > Hi all, > > this patch removes the unused extension macros VM_STRUCTS_EXT and > VM_TYPES_EXT. Since these macros are the only content of > vmStructs_ext.hpp, this patch also removes the file vmStructs_ext.hpp. 
> > Issue: > https://bugs.openjdk.java.net/browse/JDK-8197842 > > Webrev: > http://cr.openjdk.java.net/~ehelin/8197842/00/ > > Testing: > - `make run-test-tier1` on Linux x86-64 > > Thanks, > Erik From daniel.daugherty at oracle.com Wed Feb 28 17:06:56 2018 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 28 Feb 2018 12:06:56 -0500 Subject: RFR(xs): JDK-8198726 - Quarantine SADebugDTest.java again In-Reply-To: <5B0B1883-7B2C-43BE-B631-50AEF8CD8D6E@oracle.com> References: <5B0B1883-7B2C-43BE-B631-50AEF8CD8D6E@oracle.com> Message-ID: Thumbs up! Dan On 2/28/18 11:43 AM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review this trivial change to quarantine SADebugDTest.java. This is currently an integration blocker. > > I aligned the bugids as well, which is why two more lines was touched. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8198726 > Patch: > > diff --git a/test/hotspot/jtreg/ProblemList.txt b/test/hotspot/jtreg/ProblemList.txt > --- a/test/hotspot/jtreg/ProblemList.txt > +++ b/test/hotspot/jtreg/ProblemList.txt > @@ -79,8 +79,9 @@ > > # :hotspot_serviceability > > -serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all > -serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all > +serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all > +serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all > +serviceability/sa/sadebugd/SADebugDTest.java 8163805 generic-all > > ############################################################################# > > > > Thanks, > /Jesper > From jesper.wilhelmsson at oracle.com Wed Feb 28 17:22:27 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 28 Feb 2018 18:22:27 +0100 Subject: RFR(xs): JDK-8198726 - Quarantine SADebugDTest.java again In-Reply-To: References: <5B0B1883-7B2C-43BE-B631-50AEF8CD8D6E@oracle.com> Message-ID: <8B51FD9F-1A26-40DF-990C-3C4712CA1E07@oracle.com> Thanks Dan! 
/Jesper > On 28 Feb 2018, at 18:06, Daniel D. Daugherty wrote: > > Thumbs up! > > Dan > > > On 2/28/18 11:43 AM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> >> Please review this trivial change to quarantine SADebugDTest.java. This is currently an integration blocker. >> >> I aligned the bugids as well, which is why two more lines was touched. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8198726 >> Patch: >> >> diff --git a/test/hotspot/jtreg/ProblemList.txt b/test/hotspot/jtreg/ProblemList.txt >> --- a/test/hotspot/jtreg/ProblemList.txt >> +++ b/test/hotspot/jtreg/ProblemList.txt >> @@ -79,8 +79,9 @@ >> >> # :hotspot_serviceability >> >> -serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all >> -serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all >> +serviceability/jdwp/AllModulesCommandTest.java 8170541 generic-all >> +serviceability/sa/TestRevPtrsForInvokeDynamic.java 8191270 generic-all >> +serviceability/sa/sadebugd/SADebugDTest.java 8163805 generic-all >> >> ############################################################################# >> >> >> >> Thanks, >> /Jesper >> > From tom.rodriguez at oracle.com Wed Feb 28 18:21:51 2018 From: tom.rodriguez at oracle.com (Tom Rodriguez) Date: Wed, 28 Feb 2018 10:21:51 -0800 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> Message-ID: <5A96F33F.8000208@oracle.com> Looks good. Thanks for diagnosing this. tom Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8148871 > http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ > > The problem is that the stack verification code uses the interpreter oop map to get the stack size > of the next instruction. However, for calls, the oop map contains the state *after* the instruction. 
> With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - > top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my > comment in the bug [1]. > > The fix is to add a special case for invoke bytecodes and use the parameter size instead of the oop > map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix with > another patch). > > Thanks, > Tobias > > [1] > https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 From erik.helin at oracle.com Wed Feb 28 20:19:21 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 28 Feb 2018 21:19:21 +0100 Subject: RFR: 8197842: Remove unused macros VM_STRUCTS_EXT and VM_TYPES_EXT In-Reply-To: <2e86524a-6a85-13bf-b7e9-a7d4f52430d6@oracle.com> References: <2e86524a-6a85-13bf-b7e9-a7d4f52430d6@oracle.com> Message-ID: <734b6526-4616-9faf-91bc-3d7f4540fd12@oracle.com> On 02/28/2018 05:55 PM, Lois Foltan wrote: > Looks good. Thanks for reviewing, Lois! Erik > Lois > > On 2/28/2018 10:49 AM, Erik Helin wrote: >> Hi all, >> >> this patch removes the unused extension macros VM_STRUCTS_EXT and >> VM_TYPES_EXT. Since these macros are the only content of >> vmStructs_ext.hpp, this patch also removes the file vmStructs_ext.hpp. 
>> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8197842 >> >> Webrev: >> http://cr.openjdk.java.net/~ehelin/8197842/00/ >> >> Testing: >> - `make run-test-tier1` on Linux x86-64 >> >> Thanks, >> Erik > From david.holmes at oracle.com Wed Feb 28 21:53:44 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Mar 2018 07:53:44 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: Hi Tobias, On 1/03/2018 12:21 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8198826 > http://cr.openjdk.java.net/~thartmann/8198826/webrev.00/ > > If an OutOfMemoryError is thrown during reallocation of scalar replaced objects, stack verification > crashes after calling OopMapCache::compute_one_oop_map because that code does not expect pending > exceptions. Please note that the exception is not thrown in that method but earlier in > Deoptimization::realloc_objects() and then propagated through the deoptimization blob. > > I propose to skip stack verification in this exceptional case. Once an exception is pending, code has to be very careful about how it proceeds - both in terms of "the previous action failed so what do I do now?" and "I've got a pending exception so need to be very careful about what I call". I'm not familiar with this code at all and looking at it, it is very hard for me to understand exactly what the occurrence of the OOME means for the rest of the code. Normally I would expect to see code "bail out" as soon as possible, while this code seems to continue to do lots of (presumably necessary) things. 
My concern with this simple fix is that if the occurrence of the OOME has actually resulted in breakage, then skipping the VerifyStack logic may be skipping the code that would detect that breakage. In which case it may be better to save and clear the exception and restore it afterwards. But this isn't my code area and I may be jumping at shadows, so will defer to more knowledgeable reviewers. Thanks, David > Thanks, > Tobias > From david.holmes at oracle.com Wed Feb 28 22:04:03 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Mar 2018 08:04:03 +1000 Subject: Enabling use of hugepages with java In-Reply-To: References: Message-ID: <9e71448b-2d3d-acb6-79e8-47d1d0987da3@oracle.com> Hi Richard, Moving to hotspot-dev as the appropriate list. David On 1/03/2018 1:20 AM, Richard Achmatowicz wrote: > Hi > > I hope that I am directing this question to the correct mailing list. > > I have a question concerning the OS setup on Linux required for correct > use of the java option -XX:+UseLargePages in JDK 8. > > Official Oracle documentation > (http://www.oracle.com/technetwork/java/javase/tech/largememory-jsp-137182.html) > suggests that in order to make use of large memory pages, in addition to > setting the flag -XX:+UseLargePages, an OS option shmmax needs to be > tuned to be larger than the java heap size. > > From looking at the java documentation, there are various ways of > enabling the use of huge pages: -XX:+UseHugeTLBFS, > -XX:+UseTransparentHugePages, -XX:+UseSHM and, if I understand > correctly, these correspond in part to making use of different OS-level > APIs for accessing huge pages (via shared memory, hugetlbfs, and other > means). > > My question is this: is setting the shmmax OS value only relevant if we > are using -XX:+UseSHM? In other words, if we are using -XX:+UseHugeTLBFS > to enable use of hugepages by the JVM, is it the case that setting the > shmmax OS setting has no effect on the use of hugepages by the JVM? 
> > Thanks in advance > > Richard > From dean.long at oracle.com Wed Feb 28 22:43:34 2018 From: dean.long at oracle.com (dean.long at oracle.com) Date: Wed, 28 Feb 2018 14:43:34 -0800 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> Message-ID: <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> This looks good. dl On 2/28/18 5:25 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8148871 > http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ > > The problem is that the stack verification code uses the interpreter oop map to get the stack size > of the next instruction. However, for calls, the oop map contains the state *after* the instruction. > With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - > top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my > comment in the bug [1]. > > The fix is to add a special case for invoke bytecodes and use the parameter size instead of the oop > map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix with > another patch). 
> > Thanks, > Tobias > > [1] > https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 From kim.barrett at oracle.com Wed Feb 28 23:46:34 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Feb 2018 18:46:34 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: <1519217045.2401.14.camel@oracle.com> References: <1519217045.2401.14.camel@oracle.com> Message-ID: Finally, updated webrevs: full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ To remove the #include of jniHandles.inline.hpp by jvmciCodeInstaller.hpp, I've moved the definitions referring to JNIHandles::resolve from the .hpp file to the .cpp file. For jvmciJavaClasses.hpp, I've left it including jniHandles.inline.hpp. It already includes two other .inline.hpp files. I'm leaving it to whoever fixes the existing two to fix this one as well. From coleen.phillimore at oracle.com Wed Feb 28 23:50:37 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 28 Feb 2018 18:50:37 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: <1519217045.2401.14.camel@oracle.com> Message-ID: This looks good. Coleen On 2/28/18 6:46 PM, Kim Barrett wrote: > Finally, updated webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ > incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ > > To remove the #include of jniHandles.inline.hpp by > jvmciCodeInstaller.hpp, I've moved the definitions referring to > JNIHandles::resolve from the .hpp file to the .cpp file. > > For jvmciJavaClasses.hpp, I've left it including > jniHandles.inline.hpp. It already includes two other .inline.hpp > files. I'm leaving it to whoever fixes the existing two to fix this > one as well. >