From kim.barrett at oracle.com Thu Mar 1 00:43:35 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Feb 2018 19:43:35 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: <1519217045.2401.14.camel@oracle.com> Message-ID: <501D0A30-3F19-4870-97D5-6B0A0DC8EBEC@oracle.com> > On Feb 28, 2018, at 6:50 PM, coleen.phillimore at oracle.com wrote: > > This looks good. > Coleen Thanks. > > On 2/28/18 6:46 PM, Kim Barrett wrote: >> Finally, updated webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ >> incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ >> >> To remove the #include of jniHandles.inline.hpp by >> jvmciCodeInstaller.hpp, I've moved the definitions referring to >> JNIHandles::resolve from the .hpp file to the .cpp file. >> >> For jvmciJavaClasses.hpp, I've left it including >> jniHandles.inline.hpp. It already includes two other .inline.hpp >> files. I'm leaving it to whoever fixes the existing two to fix this >> one as well. From magnus.ihse.bursie at oracle.com Thu Mar 1 00:48:20 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Thu, 1 Mar 2018 01:48:20 +0100 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace Message-ID: We're doing a lot of weird compilation stuff for dtrace. With this patch, most of the weirdness is removed. The remaining calls to $(CC) -E has been changed to $(CPP) to clarify that we do not compile, we just use the precompiler. One of the changes I made was to actually split up the last and final dtrace call into a separate preprocessing step. However, this uses the solaris studio preprocessor instead of the ancient system preprocessor, which has changed behavior. A string like (&``_var) is now expanded to (& ` ` _var), which is not accepted by dtrace. :-( I have worked around this by adding the preprocessed output, without the spaces, in two places. If anyone wants to dig deeper into dtrace script file syntax, or C preprocessor magic, to avoid this, let me know... (I'll just state that the "obvious" solution of sending -Xs to the preprocessor to get old-style behavior does not work: this just makes the solaris studio preprocessor call the ancient preprocessor in turn, and we've gained nothing...) Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 /Magnus From yumin.qi at gmail.com Thu Mar 1 02:11:55 2018 From: yumin.qi at gmail.com (yumin qi) Date: Wed, 28 Feb 2018 18:11:55 -0800 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> Message-ID: I am not reviewing the change, just wonder if you could modify the comment in the function: 605 JRT_LEAF(BasicType, Deoptimization::unpack_frames(JavaThread* thread, int exec_mode)) 606 607 // We are already active int he special DeoptResourceMark any ResourceObj's we 608 // allocate will be freed at the end of the routine. It looks a typo in the comment. 'int he' -> 'in the' Yumin On Wed, Feb 28, 2018 at 2:43 PM, wrote: > This looks good. 
> > dl > > > On 2/28/18 5:25 AM, Tobias Hartmann wrote: > >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8148871 >> http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ >> >> The problem is that the stack verification code uses the interpreter oop >> map to get the stack size >> of the next instruction. However, for calls, the oop map contains the >> state *after* the instruction. >> With next_mask_expression_stack_size = 0, the result of >> 'next_mask_expression_stack_size - >> top_frame_expression_stack_adjustment' is negative and verification >> fails. For details, see my >> comment in the bug [1]. >> >> The fix is to add a special case for invoke bytecodes and use the >> parameter size instead of the oop >> map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit >> 8198826 which I'll fix with >> another patch). >> >> Thanks, >> Tobias >> >> [1] >> https://bugs.openjdk.java.net/browse/JDK-8148871?focusedComm >> entId=14160003&page=com.atlassian.jira.plugin.system. >> issuetabpanels:comment-tabpanel#comment-14160003 >> > > From tobias.hartmann at oracle.com Thu Mar 1 06:35:39 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 07:35:39 +0100 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: <5A96F33F.8000208@oracle.com> References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> <5A96F33F.8000208@oracle.com> Message-ID: <17e1c60b-ae02-a27d-8c53-300e7bfe993f@oracle.com> Thanks Tom. Best regards, Tobias On 28.02.2018 19:21, Tom Rodriguez wrote: > Looks good.? Thanks for diagnosing this. > > tom > > Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8148871 >> http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ >> >> The problem is that the stack verification code uses the interpreter oop map to get the stack size >> of the next instruction. However, for calls, the oop map contains the state *after* the instruction. >> With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - >> top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my >> comment in the bug [1]. >> >> The fix is to add a special case for invoke bytecodes and use the parameter size instead of the oop >> map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix with >> another patch). >> >> Thanks, >> Tobias >> >> [1] >> https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 >> From tobias.hartmann at oracle.com Thu Mar 1 06:36:01 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 07:36:01 +0100 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> Message-ID: <0cdcbc32-3b1e-0c13-a5ce-2f7b3f933d21@oracle.com> Thanks Dean. Best regards, Tobias On 28.02.2018 23:43, dean.long at oracle.com wrote: > This looks good. 
> > dl > > On 2/28/18 5:25 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8148871 >> http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ >> >> The problem is that the stack verification code uses the interpreter oop map to get the stack size >> of the next instruction. However, for calls, the oop map contains the state *after* the instruction. >> With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - >> top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my >> comment in the bug [1]. >> >> The fix is to add a special case for invoke bytecodes and use the parameter size instead of the oop >> map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix with >> another patch). >> >> Thanks, >> Tobias >> >> [1] >> https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 >> > From tobias.hartmann at oracle.com Thu Mar 1 06:36:42 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 07:36:42 +0100 Subject: [11] RFR(S): 8148871: Possible wrong expression stack depth at deopt point In-Reply-To: References: <8b2f400d-fdfb-d5ea-0034-1bd71895a28a@oracle.com> <602f48dd-79ef-6d21-720f-d7a64c9ef5e9@oracle.com> Message-ID: <741923c9-83bd-e0fc-483e-36a17de02984@oracle.com> Hi Yumin, thanks for looking at this. On 01.03.2018 03:11, yumin qi wrote: > I am not reviewing the change, just wonder if you could modify the comment in the function: > > 605 JRT_LEAF(BasicType, Deoptimization::unpack_frames(JavaThread* thread, int exec_mode)) > 606 > 607 // We are already active int he special DeoptResourceMark any ResourceObj's we > 608 // allocate will be freed at the end of the routine. > > It looks a typo in the comment.? 'int he' -> 'in the' Good catch, I'll fix that comment before pushing. Best regards, Tobias > On Wed, Feb 28, 2018 at 2:43 PM, > wrote: > > This looks good. > > dl > > > On 2/28/18 5:25 AM, Tobias Hartmann wrote: > > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8148871 > > http://cr.openjdk.java.net/~thartmann/8148871/webrev.00/ > > > The problem is that the stack verification code uses the interpreter oop map to get the > stack size > of the next instruction. However, for calls, the oop map contains the state *after* the > instruction. > With next_mask_expression_stack_size = 0, the result of 'next_mask_expression_stack_size - > top_frame_expression_stack_adjustment' is negative and verification fails. For details, see my > comment in the bug [1]. > > The fix is to add a special case for invoke bytecodes and use the parameter size instead of > the oop > map in that case. Tested with hs-tier1/2 with -XX:+VerifyStack (I hit 8198826 which I'll fix > with > another patch). 
> > Thanks, > Tobias > > [1] > https://bugs.openjdk.java.net/browse/JDK-8148871?focusedCommentId=14160003&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160003 > > > > From dms at samersoff.net Thu Mar 1 07:52:55 2018 From: dms at samersoff.net (Dmitry Samersoff) Date: Thu, 1 Mar 2018 10:52:55 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> Message-ID: <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> Hi Mikhailo, Please, find exported changeset under the link below: http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export -Dmitry On 20.02.2018 23:51, mikhailo wrote: > Hi Dmitry, > > > On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >> Mikhailo, >> >> Here is the changes rebased to recent sources. >> >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ > Changes look good to me. >> >> Could you sponsor the push? > I can sponsor the change, once the updated change is reviewed. Once it > is ready, please send me the latest hg changeset (with usual fields, > description, reviewers). > > > Thank you, > Misha >> >> -Dmitry >> >> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>> Changes look good from my point of view. >>> >>> Misha >>> >>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>> Everybody, >>>> >>>> Please review small changes, that enables docker testing on >>>> Linux/AArch64 >>>> >>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>> >>>> PS: >>>> >>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>> readable, please check that it doesn't brake your work. >>>> >>>> -Dmitry >>>> >>>> --? >>>> Dmitry Samersoff >>>> http://devnull.samersoff.net >>>> * There will come soft rains ... > -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From stefan.johansson at oracle.com Thu Mar 1 08:37:03 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 1 Mar 2018 09:37:03 +0100 Subject: RFR: 8197842: Remove unused macros VM_STRUCTS_EXT and VM_TYPES_EXT In-Reply-To: References: Message-ID: <63ded1c3-3479-343e-28d4-1fd49e269885@oracle.com> Looks good, Stefan On 2018-02-28 16:49, Erik Helin wrote: > Hi all, > > this patch removes the unused extension marcos VM_STRUCTS_EXT and > VM_TYPES_EXT. Since these macros are the only content of > vmStructs_ext.hpp, this patch also removes the file vmStructs_ext.hpp. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8197842 > > Webrev: > http://cr.openjdk.java.net/~ehelin/8197842/00/ > > Testing: > - `make run-test-tier1` on Linux x86-64 > > Thanks, > Erik From matthias.baesken at sap.com Thu Mar 1 08:46:32 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 1 Mar 2018 08:46:32 +0000 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> Message-ID: <6bb60d8546ee48b29b023d69485c5116@sap.com> Hi, there is a little typo in the Dockerfile-BasicTest-aarch64 ("AArh64") . +++ b/test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest-aarch64 Thu Mar 01 07:45:56 2018 +0000 @@ -0,0 +1,8 @@ +# Use generic ubuntu Linux on AArh64 Otherwise it looks good to me ( not a Reviewer however). 
Best regards, Matthias > -----Original Message----- > From: dms at mircat.net [mailto:dms at mircat.net] On Behalf Of Dmitry > Samersoff > Sent: Donnerstag, 1. M?rz 2018 08:53 > To: mikhailo ; Dmitry Samersoff > > Cc: 'hotspot-dev at openjdk.java.net' ; > Baesken, Matthias > Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for > linux AARCH64 > > Hi Mikhailo, > > Please, find exported changeset under the link below: > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export > > -Dmitry > > On 20.02.2018 23:51, mikhailo wrote: > > Hi Dmitry, > > > > > > On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: > >> Mikhailo, > >> > >> Here is the changes rebased to recent sources. > >> > >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ > > Changes look good to me. > >> > >> Could you sponsor the push? > > I can sponsor the change, once the updated change is reviewed. Once it > > is ready, please send me the latest hg changeset (with usual fields, > > description, reviewers). > > > > > > Thank you, > > Misha > >> > >> -Dmitry > >> > >> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: > >>> Changes look good from my point of view. > >>> > >>> Misha > >>> > >>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: > >>>> Everybody, > >>>> > >>>> Please review small changes, that enables docker testing on > >>>> Linux/AArch64 > >>>> > >>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ > >>>> > >>>> PS: > >>>> > >>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more > >>>> readable, please check that it doesn't brake your work. > >>>> > >>>> -Dmitry > >>>> > >>>> -- > >>>> Dmitry Samersoff > >>>> http://devnull.samersoff.net > >>>> * There will come soft rains ... > > > > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... From martin.doerr at sap.com Thu Mar 1 09:16:15 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 1 Mar 2018 09:16:15 +0000 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> Message-ID: <5ab12da719db45699b8d1b9a83f21520@sap.com> Hi Kim, this change causes a build warning on 32 bit Windows with Visual Studio 2013: os_windows.cpp(1521) : warning C4018: '>=' : signed/unsigned mismatch I think " result >= len" should get fixed. Or is Visual Studio 2013 no longer supported? Do you have a pending change in which you can update this? Best regards, Martin From dms at samersoff.net Thu Mar 1 09:28:24 2018 From: dms at samersoff.net (Dmitry Samersoff) Date: Thu, 1 Mar 2018 12:28:24 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <6bb60d8546ee48b29b023d69485c5116@sap.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> <6bb60d8546ee48b29b023d69485c5116@sap.com> Message-ID: <658e905b-e3a5-eea8-45b5-41db80ff7c1f@samersoff.net> Hi Matthias, Thank you. TypeO fixed in-place. -Dmitry On 01.03.2018 11:46, Baesken, Matthias wrote: > Hi, there is a little typo in the Dockerfile-BasicTest-aarch64 ("AArh64") . 
> > +++ b/test/hotspot/jtreg/runtime/containers/docker/Dockerfile-BasicTest-aarch64 Thu Mar 01 07:45:56 2018 +0000 > @@ -0,0 +1,8 @@ > +# Use generic ubuntu Linux on AArh64 > > > Otherwise it looks good to me ( not a Reviewer however). > > Best regards, Matthias > > > >> -----Original Message----- >> From: dms at mircat.net [mailto:dms at mircat.net] On Behalf Of Dmitry >> Samersoff >> Sent: Donnerstag, 1. M?rz 2018 08:53 >> To: mikhailo ; Dmitry Samersoff >> >> Cc: 'hotspot-dev at openjdk.java.net' ; >> Baesken, Matthias >> Subject: Re: RFR(S): JDK-8196590 Enable docker container related tests for >> linux AARCH64 >> >> Hi Mikhailo, >> >> Please, find exported changeset under the link below: >> >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export >> >> -Dmitry >> >> On 20.02.2018 23:51, mikhailo wrote: >>> Hi Dmitry, >>> >>> >>> On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >>>> Mikhailo, >>>> >>>> Here is the changes rebased to recent sources. >>>> >>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ >>> Changes look good to me. >>>> >>>> Could you sponsor the push? >>> I can sponsor the change, once the updated change is reviewed. Once it >>> is ready, please send me the latest hg changeset (with usual fields, >>> description, reviewers). >>> >>> >>> Thank you, >>> Misha >>>> >>>> -Dmitry >>>> >>>> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>>>> Changes look good from my point of view. >>>>> >>>>> Misha >>>>> >>>>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>>>> Everybody, >>>>>> >>>>>> Please review small changes, that enables docker testing on >>>>>> Linux/AArch64 >>>>>> >>>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>>>> >>>>>> PS: >>>>>> >>>>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>>>> readable, please check that it doesn't brake your work. >>>>>> >>>>>> -Dmitry >>>>>> >>>>>> -- >>>>>> Dmitry Samersoff >>>>>> http://devnull.samersoff.net >>>>>> * There will come soft rains ... >>> >> >> >> -- >> Dmitry Samersoff >> http://devnull.samersoff.net >> * There will come soft rains ... > -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From tobias.hartmann at oracle.com Thu Mar 1 09:32:52 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 10:32:52 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: Hi David, thanks for looking at this! On 28.02.2018 22:53, David Holmes wrote: > Once an exception is pending code has to be very careful about how it proceeds - both in terms of > "the previous action failed so what do I do now?" and "I've got a pending exception so need to be > very careful about what I call". > > I'm not familiar with this code at all and looking at it it is very hard for me to understand > exactly what the occurrence of the OOME means for the rest of the code. Normally I would expect to > see code "bail out" as soon as possible, while this code seems to continue to do lots of (presumably > necessary) things. In this case, C2 did aggressive scalarization based on escape analysis to remove an object allocation in compiled code. When deoptimizing, we need to restore the interpreter state including re-allocating that scalarized object (because the interpreter does not support scalarization). 
If re-allocation fails due to an OOME, we still need to continue restoring the interpreter state (while propagating that exception to be later thrown by the interpreter). So we cannot just simply bail out but need to make sure that the following code works fine with a pending exception. > My concern with this simple fix is that if the occurrence of the OOME has actually resulted in > breakage, then skipping the VerifyStack logic may be skipping the code that would detect that > breakage. In which case it may be better to save and clear the exception and restore it afterwards. Yes, that's also what Vladimir suggested in the bug comments. Here's a new webrev that saves and restores the pending oop while still executing the stack verification code: http://cr.openjdk.java.net/~thartmann/8198826/webrev.01/ Thanks, Tobias From marcus.larsson at oracle.com Thu Mar 1 10:07:10 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 1 Mar 2018 11:07:10 +0100 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx Message-ID: Hi, Please review the following patch to fix the broken assert in logOutput.cpp. Issue: https://bugs.openjdk.java.net/browse/JDK-8198887 Webrev: http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 Thanks, Marcus From david.holmes at oracle.com Thu Mar 1 10:11:47 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Mar 2018 20:11:47 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: Hi Tobias, On 1/03/2018 7:32 PM, Tobias Hartmann wrote: > Hi David, > > thanks for looking at this! > > On 28.02.2018 22:53, David Holmes wrote: >> Once an exception is pending code has to be very careful about how it proceeds - both in terms of >> "the previous action failed so what do I do now?" and "I've got a pending exception so need to be >> very careful about what I call". >> >> I'm not familiar with this code at all and looking at it it is very hard for me to understand >> exactly what the occurrence of the OOME means for the rest of the code. Normally I would expect to >> see code "bail out" as soon as possible, while this code seems to continue to do lots of (presumably >> necessary) things. > > In this case, C2 did aggressive scalarization based on escape analysis to remove an object > allocation in compiled code. When deoptimizing, we need to restore the interpreter state including > re-allocating that scalarized object (because the interpreter does not support scalarization). > > If re-allocation fails due to an OOME, we still need to continue restoring the interpreter state > (while propagating that exception to be later thrown by the interpreter). So we cannot just simply > bail out but need to make sure that the following code works fine with a pending exception. Interesting - does that mean you have to roll back everything that happened since the allocation point ??? >> My concern with this simple fix is that if the occurrence of the OOME has actually resulted in >> breakage, then skipping the VerifyStack logic may be skipping the code that would detect that >> breakage. In which case it may be better to save and clear the exception and restore it afterwards. > > Yes, that's also what Vladimir suggested in the bug comments. 
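For illustration, the general shape of that save/clear/restore approach as a self-contained sketch. The FakeThread and PreservePendingException names below are invented for the example and are not HotSpot types; in HotSpot the PreserveExceptionMark utility (the "PEM" referred to later in this thread) plays this role around code that is guarded by ExceptionMarks:

#include <cstdio>

// Invented stand-in types, not HotSpot code: save the pending exception,
// clear it so code that asserts "no pending exception" can run, and put it
// back on scope exit so the interpreter still sees it afterwards.
struct FakeThread {
  const char* pending_exception = nullptr;   // stands in for the pending oop slot
};

class PreservePendingException {
  FakeThread* const _thread;
  const char* const _saved;
public:
  explicit PreservePendingException(FakeThread* t)
    : _thread(t), _saved(t->pending_exception) {
    _thread->pending_exception = nullptr;    // guarded region sees no exception
  }
  ~PreservePendingException() {
    _thread->pending_exception = _saved;     // restore the exception afterwards
  }
};

int main() {
  FakeThread thread;
  thread.pending_exception = "java.lang.OutOfMemoryError";
  {
    PreservePendingException pem(&thread);
    // ... the -XX:+VerifyStack logic would run here, ExceptionMarks included ...
    std::printf("during verification, pending exception: %s\n",
                thread.pending_exception == nullptr ? "none" : thread.pending_exception);
  }
  std::printf("after verification, pending exception: %s\n", thread.pending_exception);
  return 0;
}

The RAII shape is the point: the exception is invisible exactly for the scope of the verification code and is re-armed no matter how that scope exits.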
Here's a new webrev that saves and > restores the pending oop while still executing the stack verification code: > http://cr.openjdk.java.net/~thartmann/8198826/webrev.01/ Looks okay. My only thought is whether the PEM should be across the full scope of the VerifyStack logic (as you have it) or whether it should only wrap the code you know has the ExceptionMark? I guess as this is a JRT_LEAF function we don't expect anything else to generate exceptions, so the placement shouldn't really matter. Thanks, David > Thanks, > Tobias > From david.holmes at oracle.com Thu Mar 1 10:14:30 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Mar 2018 20:14:30 +1000 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx In-Reply-To: References: Message-ID: Seems reasonable. Please run through mach5 before pushing. ;-) Thanks, David On 1/03/2018 8:07 PM, Marcus Larsson wrote: > Hi, > > Please review the following patch to fix the broken assert in > logOutput.cpp. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8198887 > > Webrev: > http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 > > Thanks, > Marcus From tobias.hartmann at oracle.com Thu Mar 1 10:20:26 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 11:20:26 +0100 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx In-Reply-To: References: Message-ID: Hi Marcus, looks good to me. Best regards, Tobias On 01.03.2018 11:07, Marcus Larsson wrote: > Hi, > > Please review the following patch to fix the broken assert in logOutput.cpp. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8198887 > > Webrev: > http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 > > Thanks, > Marcus From marcus.larsson at oracle.com Thu Mar 1 10:22:35 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 1 Mar 2018 11:22:35 +0100 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx In-Reply-To: References: Message-ID: <7d119887-5243-4531-e7b5-5f4925b20e75@oracle.com> Thanks for reviewing. Can I consider this trivial? On 2018-03-01 11:14, David Holmes wrote: > Seems reasonable. > > Please run through mach5 before pushing. ;-) Yes, apparently I never ran it again after adding that assert. My bad! Thanks, Marcus > > Thanks, > David > > On 1/03/2018 8:07 PM, Marcus Larsson wrote: >> Hi, >> >> Please review the following patch to fix the broken assert in >> logOutput.cpp. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8198887 >> >> Webrev: >> http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 >> >> Thanks, >> Marcus From marcus.larsson at oracle.com Thu Mar 1 10:31:08 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 1 Mar 2018 11:31:08 +0100 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx In-Reply-To: References: Message-ID: <4eccb660-6e94-95f5-845f-b2ee2e4a18cf@oracle.com> Thanks for the review! Marcus On 2018-03-01 11:20, Tobias Hartmann wrote: > Hi Marcus, > > looks good to me. > > Best regards, > Tobias > > On 01.03.2018 11:07, Marcus Larsson wrote: >> Hi, >> >> Please review the following patch to fix the broken assert in logOutput.cpp. 
>> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8198887 >> >> Webrev: >> http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 >> >> Thanks, >> Marcus From thomas.stuefe at gmail.com Thu Mar 1 10:36:37 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 1 Mar 2018 11:36:37 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> Message-ID: Hi Coleen, thanks a lot for the review and the sponsoring offer! New version (full): http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-full/webrev/ incremental: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-incr/webrev/ Please find remarks inline: On Tue, Feb 27, 2018 at 11:22 PM, wrote: > > Thomas, review comments: > > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc > ation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html > > +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types > > > It's really both, isn't it? The type is the index into the free list or > in use lists. The name seems fine. > > You are right. What I meant was that a lot of code needs to know about the different chunk sizes, but naming it "Index" and adding enum values like "NumberOfFreeLists" we expose implementation details no-one outside of SpaceManager and ChunkManager cares about (namely, the fact that these values are internally used as indices into arrays). A more neutral naming would be something like "enum ChunkTypes { spec,small, .... , NumberOfNonHumongousChunkTypes, NumberOfChunkTypes }. However, I can leave this out for a possible future cleanup. The change is big enough as it is. > Can you add comments on the #endifs if the #ifdef is more than a couple > 2-3 lines above (it's a nit that bothers me). > > +#ifdef ASSERT > + // A 32bit sentinel for debugging purposes. > +#define CHUNK_SENTINEL 0x4d4554EF // "MET" > +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF > + uint32_t _sentinel; > +#endif > + const ChunkIndex _chunk_type; > + const bool _is_class; > + // Whether the chunk is free (in freelist) or in use by some class > loader. > bool _is_tagged_free; > +#ifdef ASSERT > + ChunkOrigin _origin; > + int _use_count; > +#endif > + > > I removed the asserts completely, following your suggestion below that "origin" would be valuable in customer scenarios too. By that logic, the other members are valuable too: the sentinel is valuable when examining memory dumps to see the start of chunks, and the in-use counter is useful too. What do you think? So, I leave the members in - which, depending what the C++ compiler does to enums and bools, may cost up to 128bit additional header space. I think that is ok. In one of my earlier versions of this patch I hand-crafted the header using chars and bitfields to be as small as possible, but that seemed over-engineered. However, I left out any automatic verifications accessing these debug members. These are still only done in debug builds. > > It seems that if you could move origin and _use_count into the ASSERT > block above (maybe putting use_count before _origin. 
> > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc > ation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html > > In take_from_committed, can the allocation of padding chunks be its own > function like add_chunks_to_aligment() lines 1574-1615? The function is too > long now. > > I moved the padding chunk allocation into an own function as you suggested. > I don't think coalescation is a word in English, at least my dictionary > cannot find it. Although it makes sense in the context, just distracting. > > I replaced "coalescation" with "chunk merging" throughout the code. Also less of a tongue breaker. > + // Now check if in the coalescation area there are still life chunks. > > > "live" chunks I guess. A sentence you won't read often :). > Now that I read it it almost sounded sinister :) Fixed. > > In free_chunks_get() can you handle the Humongous case first? The else for > humongous chunk size is buried tons of lines below. > > Otherwise it might be helpful to the logic to make your addition to this > function be a function you call like > chunk = split_from_larger_free_chunk(); > I did the latter. I moved the splitting of a larger chunk to an own function. This causes a slight logic change: the new function (ChunkManager::split_chunk()) splits an existing large free chunks into n smaller free chunks and adds them all back to the freelist - that includes the chunk we are about to return. That allows us to use the same exit path - which removes the chunk from the freelist and adjusts all counters - in the caller function "ChunkManager::free_chunks_get" instead of having to return in the middle of the function. To make the test more readable, I also remove the "test-that-free-chunks-are-optimally-merged" verification - which was quite lengthy - from VirtualSpaceNode::verify() to a new function, VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). > You might want to keep the origin in product mode if it doesn't add to the > chunk footprint. Might help with customer debugging. > > See above > Awesome looking test... > > Thanks, I was worried it would be too complicated. I changed it a bit because there were sporadic errors. Not a "real" error, just the test itself was faulty. The "metaspaces_in_use" counter was slightly wrong in one corner case. > I've read through most of this and thank you for adding this to at least > partially solve the fragmentation problem. The irony is that we > templatized the Dictionary from CMS so that we could use it for Metaspace > and that has splitting and coalescing but it seems this code makes more > sense than adapting that code (if it's even possible). > Well, it helps other metadata use cases too, no. > > Thank you for working on this. I'll sponsor this for you. > Coleen > > Thanks again! I also updated my jdk-submit branch to include these latest changes; tests are still runnning. Kind Regards, Thomas > > On 2/26/18 9:20 AM, Thomas St?fe wrote: > >> Hi all, >> >> I know this patch is a bit larger, but may I please have reviews and/or >> other input? >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >> Latest version: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/ >> >> For those who followed the mail thread, this is the incremental diff to >> the >> last changes (included feedback Goetz gave me on- and off-list): >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev-incr/webrev/ >> >> Thank you! 
>> >> Kind Regards, Thomas Stuefe >> >> >> >> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >> wrote: >> >> Hi, >>> >>> We would like to contribute a patch developed at SAP which has been live >>> in our VM for some time. It improves the metaspace chunk allocation: >>> reduces fragmentation and raises the chance of reusing free metaspace >>> chunks. >>> >>> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-05--2/webrev/ >>> >>> In very short, this patch helps with a number of pathological cases where >>> metaspace chunks are free but cannot be reused because they are of the >>> wrong size. For example, the metaspace freelist could be full of small >>> chunks, which would not be reusable if we need larger chunks. So, we >>> could >>> get metaspace OOMs even in situations where the metaspace was far from >>> exhausted. Our patch adds the ability to split and merge metaspace chunks >>> dynamically and thus remove the "size-lock-in" problem. >>> >>> Note that there have been other attempts to get a grip on this problem, >>> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >>> our patch attempts a more complete solution. >>> >>> In 2016 I discussed the idea for this patch with some folks off-list, >>> among them Jon Matsimutso. He then did advice me to create a JEP. So I >>> did: >>> [1]. However, meanwhile changes to the JEP process were discussed [2], >>> and >>> I am not sure anymore this patch needs even needs a JEP. It may be >>> moderately complex and hence carries the risk inherent in any patch, but >>> its effects would not be externally visible (if you discount seeing fewer >>> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >>> >>> -- >>> >>> How this patch works: >>> >>> 1) When a class loader dies, its metaspace chunks are freed and returned >>> to the freelist for reuse by the next class loader. With the patch, upon >>> returning a chunk to the freelist, an attempt is made to merge it with >>> its >>> neighboring chunks - should they happen to be free too - to form a larger >>> chunk. Which then is placed in the free list. >>> >>> As a result, the freelist should be populated by larger chunks at the >>> expense of smaller chunks. In other words, all free chunks should always >>> be >>> as "coalesced as possible". >>> >>> 2) When a class loader needs a new chunk and a chunk of the requested >>> size >>> cannot be found in the free list, before carving out a new chunk from the >>> virtual space, we first check if there is a larger chunk in the free >>> list. >>> If there is, that larger chunk is chopped up into n smaller chunks. One >>> of >>> them is returned to the callers, the others are re-added to the freelist. >>> >>> (1) and (2) together have the effect of removing the size-lock-in for >>> chunks. If fragmentation allows it, small chunks are dynamically combined >>> to form larger chunks, and larger chunks are split on demand. >>> >>> -- >>> >>> What this patch does not: >>> >>> This is not a rewrite of the chunk allocator - most of the mechanisms >>> stay >>> intact. Specifically, chunk sizes remain unchanged, and so do chunk >>> allocation processes (when do which class loaders get handed which chunk >>> size). Almost everthing this patch does affects only internal workings of >>> the ChunkManager. >>> >>> Also note that I refrained from doing any cleanups, since I wanted >>> reviewers to be able to gauge this patch without filtering noise. >>> Unfortunately this patch adds some complexity. 
But there are many future >>> opportunities for code cleanup and simplification, some of which we >>> already >>> discussed in existing RFEs ([3], [4]). All of them are out of the scope >>> for >>> this particular patch. >>> >>> -- >>> >>> Details: >>> >>> Before the patch, the following rules held: >>> - All chunk sizes are multiples of the smallest chunk size ("specialized >>> chunks") >>> - All chunk sizes of larger chunks are also clean multiples of the next >>> smaller chunk size (e.g. for class space, the ratio of >>> specialized/small/medium chunks is 1:2:32) >>> - All chunk start addresses are aligned to the smallest chunk size (more >>> or less accidentally, see metaspace_reserve_alignment). >>> The patch makes the last rule explicit and more strict: >>> - All (non-humongous) chunk start addresses are now aligned to their own >>> chunk size. So, e.g. medium chunks are allocated at addresses which are a >>> multiple of medium chunk size. This rule is not extended to humongous >>> chunks, whose start addresses continue to be aligned to the smallest >>> chunk >>> size. >>> >>> The reason for this new alignment rule is that it makes it cheap both to >>> find chunk predecessors of a chunk and to check which chunks are free. >>> >>> When a class loader dies and its chunk is returned to the freelist, all >>> we >>> have is its address. In order to merge it with its neighbors to form a >>> larger chunk, we need to find those neighbors, including those preceding >>> the returned chunk. Prior to this patch that was not easy - one would >>> have >>> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >>> due to the new alignment rule, we now know where the prospective larger >>> chunk must start - at the next lower larger-chunk-size-aligned boundary. >>> We >>> also know that currently a smaller chunk must start there (*). >>> >>> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >>> now keeps a bitmap which describes its occupancy. One bit in this bitmap >>> corresponds to a range the size of the smallest chunk size and starting >>> at >>> an address aligned to the smallest chunk size. Because of the alignment >>> rules above, such a range belongs to one single chunk. The bit is 1 if >>> the >>> associated chunk is in use by a class loader, 0 if it is free. >>> >>> When we have calculated the address range a prospective larger chunk >>> would >>> span, we now need to check if all chunks in that range are free. Only >>> then >>> we can merge them. We do that by querying the bitmap. Note that the most >>> common use case here is forming medium chunks from smaller chunks. With >>> the >>> new alignment rules, the bitmap portion covering a medium chunk now >>> always >>> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so >>> reading >>> the bitmap in many cases becomes a simple 16- or 32bit load. >>> >>> If the range is free, only then we need to iterate the chunks in that >>> range: pull them from the freelist, combine them to one new larger chunk, >>> re-add that one to the freelist. >>> >>> (*) Humongous chunks make this a bit more complicated. Since the new >>> alignment rule does not extend to them, a humongous chunk could still >>> straddle the lower or upper boundary of the prospective larger chunk. So >>> I >>> gave the occupancy map a second layer, which is used to mark the start of >>> chunks. 
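A rough, self-contained sketch of the two mechanics described above -- aligning a freed chunk's address down to the candidate merge region, and asking a per-node occupancy bitmap whether that whole region is free. Names and layout here are illustrative only, not the patch's actual OccupancyMap:

#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for the occupancy bitmap: one bit per smallest-chunk
// sized slot, set while the slot belongs to a chunk owned by a live loader.
struct OccupancyBitmap {
  uintptr_t base;                    // start address of the virtual space node
  size_t    smallest_chunk_bytes;    // bytes covered by one bit
  std::vector<bool> in_use;

  // Could the region [region_start, region_start + region_bytes) be merged,
  // i.e. is every slot inside it currently free?
  bool range_is_free(uintptr_t region_start, size_t region_bytes) const {
    size_t first = (region_start - base) / smallest_chunk_bytes;
    size_t count = region_bytes / smallest_chunk_bytes;
    for (size_t i = first; i < first + count; i++) {
      if (in_use[i]) {
        return false;                // a live chunk blocks the merge
      }
    }
    return true;
  }
};

// Because chunk start addresses are aligned to their own chunk size, the
// candidate merge region for a freed chunk is its address aligned down to
// the larger chunk size (chunk sizes assumed to be powers of two here).
static inline uintptr_t candidate_region_start(uintptr_t chunk_addr,
                                               size_t larger_chunk_bytes) {
  return chunk_addr & ~(uintptr_t)(larger_chunk_bytes - 1);
}

With the alignment rules described above, the range check over a medium-chunk-sized region can in practice collapse to a single 16- or 32-bit load of the bitmap.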
>>> An alternative approach could have been to make humongous chunks size and >>> start address always a multiple of the largest non-humongous chunk size >>> (medium chunks). That would have caused a bit of waste per humongous >>> chunk >>> (<64K) in exchange for simpler coding and a simpler occupancy map. >>> >>> -- >>> >>> The patch shows its best results in scenarios where a lot of smallish >>> class loaders are alive simultaneously. When dying, they leave continuous >>> expanses of metaspace covered in small chunks, which can be merged >>> nicely. >>> However, if class loader life times vary more, we have more interleaving >>> of >>> dead and alive small chunks, and hence chunk merging does not work as >>> well >>> as it could. >>> >>> For an example of a pathological case like this see example program: [5] >>> >>> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >>> test3.Example2" the test will load 3000 small classes in separate class >>> loaders, then throw them away and start loading large classes. The small >>> classes will have flooded the metaspace with small chunks, which are >>> unusable for the large classes. When executing with the rather limited >>> CompressedClassSpaceSize=10M, we will run into an OOM after loading about >>> 800 large classes, having used only 40% of the class space, the rest is >>> wasted to unused small chunks. However, with our patch the example >>> program >>> will manage to allocate ~2900 large classes before running into an OOM, >>> and >>> class space will show almost no waste. >>> >>> Do demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >>> an OOM, statistics and an ASCII representation of the class space will be >>> shown. The unpatched version will show large expanses of unused small >>> chunks, the patched variant will show almost no waste. >>> >>> Note that the patch could be made more effective with a different size >>> ratio between small and medium chunks: in class space, that ratio is >>> 1:16, >>> so 16 small chunks must happen to be free to form one larger chunk. With >>> a >>> smaller ratio the chance for coalescation would be larger. So there may >>> be >>> room for future improvement here: Since we now can merge and split chunks >>> on demand, we could introduce more chunk sizes. Potentially arriving at a >>> buddy-ish allocator style where we drop hard-wired chunk sizes for a >>> dynamic model where the ratio between chunk sizes is always 1:2 and we >>> could in theory have no limit to the chunk size? But this is just a >>> thought >>> and well out of the scope of this patch. >>> >>> -- >>> >>> What does this patch cost (memory): >>> >>> - the occupancy bitmap adds 1 byte per 4K metaspace. >>> - MetaChunk headers get larger, since we add an enum and two bools to >>> it. >>> Depending on what the c++ compiler does with that, chunk headers grow by >>> one or two MetaWords, reducing the payload size by that amount. >>> - The new alignment rules mean we may need to create padding chunks to >>> precede larger chunks. But since these padding chunks are added to the >>> freelist, they should be used up before the need for new padding chunks >>> arises. So, the maximally possible number of unused padding chunks should >>> be limited by design to about 64K. >>> >>> The expectation is that the memory savings by this patch far outweighs >>> its >>> added memory costs. >>> >>> .. (performance): >>> >>> We did not see measurable drops in standard benchmarks raising over the >>> normal noise. 
I also measured times for a program which stresses >>> metaspace >>> chunk coalescation, with the same result. >>> >>> I am open to suggestions what else I should measure, and/or independent >>> measurements. >>> >>> -- >>> >>> Other details: >>> >>> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >>> complexity somewhat, because it was made mostly obsolete by this patch: >>> since small chunks are combined to larger chunks upon return to the >>> freelist, in theory we should not have that many free small chunks >>> anymore >>> anyway. However, there may be still cases where we could benefit from >>> this >>> workaround, so I am asking your opinion on this one. >>> >>> About tests: There were two native tests - ChunkManagerReturnTest and >>> TestVirtualSpaceNode (the former was added by me last year) - which did >>> not >>> make much sense anymore, since they relied heavily on internal behavior >>> which was made unpredictable with this patch. >>> To make up for these lost tests, I added a new gtest which attempts to >>> stress the many combinations of allocation pattern but does so from a >>> layer >>> above the old tests. It now uses Metaspace::allocate() and friends. By >>> using that point as entry for tests, I am less dependent on >>> implementation >>> internals and still cover a lot of scenarios. >>> >>> -- >>> >>> Review pointers: >>> >>> Good points to start are >>> - ChunkManager::return_single_chunk() - specifically, >>> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >>> upon return to the free list >>> - ChunkManager::free_chunks_get(): Here we now split large chunks into >>> smaller chunks on demand >>> - VirtualSpaceNode::take_from_committed() : chunks are allocated >>> according to align rules now, padding chunks are handles >>> - The OccupancyMap class is the helper class implementing the new >>> occupancy bitmap >>> >>> The rest is mostly chaff: helper functions, added tests and >>> verifications. >>> >>> -- >>> >>> Thanks and Best Regards, Thomas >>> >>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >>> /000128.html >>> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >>> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >>> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >>> >>> >>> >>> > From tobias.hartmann at oracle.com Thu Mar 1 10:38:44 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 11:38:44 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: Hi David, On 01.03.2018 11:11, David Holmes wrote: > Interesting - does that mean you have to roll back everything that happened since the allocation > point ??? No, besides the general JVM state, C2 compiled code also keeps track of the state of scalarized objects to be able to reconstruct them at safepoints, i.e., when deoptimization could happen (see 'SafePointScalarObjectNode'). > Looks okay. My only thought is whether the PEM should be across the full scope of the VerifyStack > logic (as you have it) or whether it should only wrap the code you know has the ExceptionMark? I > guess as this is a JRT_LEAF function we don't expect anything else to generate exceptions, so the > placement shouldn't really matter. 
The ExceptionMark is in OopMapCache::compute_one_oop_map() which is called from two locations in the VerifyStack scope. A more narrow scope would be inside the for-loop and I think we would like to avoid that. And yes, we don't expect anything else to generate an exception (the PEM actually guards against that). Thanks, Tobias From marcus.larsson at oracle.com Thu Mar 1 10:58:26 2018 From: marcus.larsson at oracle.com (Marcus Larsson) Date: Thu, 1 Mar 2018 11:58:26 +0100 Subject: RFR(S): 8198887: JDK-8168722 broke the build on macosx In-Reply-To: <7d119887-5243-4531-e7b5-5f4925b20e75@oracle.com> References: <7d119887-5243-4531-e7b5-5f4925b20e75@oracle.com> Message-ID: Hi again, On 2018-03-01 11:22, Marcus Larsson wrote: > Thanks for reviewing. > > Can I consider this trivial? I will. Pushing now to resolve this! Thanks, Marcus > > > On 2018-03-01 11:14, David Holmes wrote: >> Seems reasonable. >> >> Please run through mach5 before pushing. ;-) > > Yes, apparently I never ran it again after adding that assert. My bad! > > Thanks, > Marcus > >> >> Thanks, >> David >> >> On 1/03/2018 8:07 PM, Marcus Larsson wrote: >>> Hi, >>> >>> Please review the following patch to fix the broken assert in >>> logOutput.cpp. >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8198887 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~mlarsson/8198887/webrev.00 >>> >>> Thanks, >>> Marcus > From david.holmes at oracle.com Thu Mar 1 11:53:37 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 1 Mar 2018 21:53:37 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: On 1/03/2018 8:38 PM, Tobias Hartmann wrote: > Hi David, > > On 01.03.2018 11:11, David Holmes wrote: >> Interesting - does that mean you have to roll back everything that happened since the allocation >> point ??? > > No, besides the general JVM state, C2 compiled code also keeps track of the state of scalarized > objects to be able to reconstruct them at safepoints, i.e., when deoptimization could happen (see > 'SafePointScalarObjectNode'). But if the allocation fails you don't have the ability to reconstruct the object ?? The OOME must appear to happen at the point at which the object should have been allocated, and nothing that happens after that point can be seen to have happened. >> Looks okay. My only thought is whether the PEM should be across the full scope of the VerifyStack >> logic (as you have it) or whether it should only wrap the code you know has the ExceptionMark? I >> guess as this is a JRT_LEAF function we don't expect anything else to generate exceptions, so the >> placement shouldn't really matter. > > The ExceptionMark is in OopMapCache::compute_one_oop_map() which is called from two locations in the > VerifyStack scope. A more narrow scope would be inside the for-loop and I think we would like to > avoid that. And yes, we don't expect anything else to generate an exception (the PEM actually guards > against that). Ok. 
Thanks, David > Thanks, > Tobias > From tobias.hartmann at oracle.com Thu Mar 1 13:26:55 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 14:26:55 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> On 01.03.2018 12:53, David Holmes wrote: > But if the allocation fails you don't have the ability to reconstruct the object ?? The OOME must > appear to happen at the point at which the object should have been allocated, and nothing that > happens after that point can be seen to have happened. Yes, if the re-allocation of the scalar replaced object fails during deoptimization, we cannot re-construct the object. But that's okay because the interpreter will throw an OutOfMemoryError and not use that object anyway (please note that C2 will only perform scalarization if escape analysis determined that it's safe to do so and the object is not leaked). This is actually what the TestDeoptOOM verifies. Best regards, Tobias From vladimir.kozlov at oracle.com Thu Mar 1 17:10:39 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 1 Mar 2018 09:10:39 -0800 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: Looks nice. Thanks, Vladimir On 3/1/18 1:32 AM, Tobias Hartmann wrote: > Hi David, > > thanks for looking at this! > > On 28.02.2018 22:53, David Holmes wrote: >> Once an exception is pending code has to be very careful about how it proceeds - both in terms of >> "the previous action failed so what do I do now?" and "I've got a pending exception so need to be >> very careful about what I call". >> >> I'm not familiar with this code at all and looking at it it is very hard for me to understand >> exactly what the occurrence of the OOME means for the rest of the code. Normally I would expect to >> see code "bail out" as soon as possible, while this code seems to continue to do lots of (presumably >> necessary) things. > > In this case, C2 did aggressive scalarization based on escape analysis to remove an object > allocation in compiled code. When deoptimizing, we need to restore the interpreter state including > re-allocating that scalarized object (because the interpreter does not support scalarization). > > If re-allocation fails due to an OOME, we still need to continue restoring the interpreter state > (while propagating that exception to be later thrown by the interpreter). So we cannot just simply > bail out but need to make sure that the following code works fine with a pending exception. > >> My concern with this simple fix is that if the occurrence of the OOME has actually resulted in >> breakage, then skipping the VerifyStack logic may be skipping the code that would detect that >> breakage. In which case it may be better to save and clear the exception and restore it afterwards. > > Yes, that's also what Vladimir suggested in the bug comments. 
Here's a new webrev that saves and > restores the pending oop while still executing the stack verification code: > http://cr.openjdk.java.net/~thartmann/8198826/webrev.01/ > > Thanks, > Tobias > From tobias.hartmann at oracle.com Thu Mar 1 17:07:03 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 1 Mar 2018 18:07:03 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> Message-ID: <25fdd147-72bf-6535-3799-9f48520cfc76@oracle.com> Thanks Vladimir! Best regards, Tobias On 01.03.2018 18:10, Vladimir Kozlov wrote: > Looks nice. > > Thanks, > Vladimir > > On 3/1/18 1:32 AM, Tobias Hartmann wrote: >> Hi David, >> >> thanks for looking at this! >> >> On 28.02.2018 22:53, David Holmes wrote: >>> Once an exception is pending code has to be very careful about how it proceeds - both in terms of >>> "the previous action failed so what do I do now?" and "I've got a pending exception so need to be >>> very careful about what I call". >>> >>> I'm not familiar with this code at all and looking at it it is very hard for me to understand >>> exactly what the occurrence of the OOME means for the rest of the code. Normally I would expect to >>> see code "bail out" as soon as possible, while this code seems to continue to do lots of (presumably >>> necessary) things. >> >> In this case, C2 did aggressive scalarization based on escape analysis to remove an object >> allocation in compiled code. When deoptimizing, we need to restore the interpreter state including >> re-allocating that scalarized object (because the interpreter does not support scalarization). >> >> If re-allocation fails due to an OOME, we still need to continue restoring the interpreter state >> (while propagating that exception to be later thrown by the interpreter). So we cannot just simply >> bail out but need to make sure that the following code works fine with a pending exception. >> >>> My concern with this simple fix is that if the occurrence of the OOME has actually resulted in >>> breakage, then skipping the VerifyStack logic may be skipping the code that would detect that >>> breakage. In which case it may be better to save and clear the exception and restore it afterwards. >> >> Yes, that's also what Vladimir suggested in the bug comments. Here's a new webrev that saves and >> restores the pending oop while still executing the stack verification code: >> http://cr.openjdk.java.net/~thartmann/8198826/webrev.01/ >> >> Thanks, >> Tobias >> From kim.barrett at oracle.com Thu Mar 1 17:46:53 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 12:46:53 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <5ab12da719db45699b8d1b9a83f21520@sap.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> <5ab12da719db45699b8d1b9a83f21520@sap.com> Message-ID: <10DC9D59-2019-4161-8CD8-06F574173A4E@oracle.com> > On Mar 1, 2018, at 4:16 AM, Doerr, Martin wrote: > > Hi Kim, > > this change causes a build warning on 32 bit Windows with Visual Studio 2013: > os_windows.cpp(1521) : warning C4018: '>=' : signed/unsigned mismatch > > I think " result >= len" should get fixed. Or is Visual Studio 2013 no longer supported? 
> Do you have a pending change in which you can update this? > > Best regards, > Martin Thanks for the report. Oracle isn?t currently testing jdk on win32, but CRs and patches related to problems specific to it are still welcome. I?m not sure why we?re not seeing this with win64; we?re still building with VS2013, with VS2017 support still under development and not yet our default. I?ll file a bug and send out a fix for review shortly. From per.liden at oracle.com Thu Mar 1 18:28:03 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 1 Mar 2018 19:28:03 +0100 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: <1519217045.2401.14.camel@oracle.com> Message-ID: <26764728-4e8b-ef62-d1bc-9b744062e169@oracle.com> Hi, On 2018-03-01 00:46, Kim Barrett wrote: > Finally, updated webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ > incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ Looks good! > > To remove the #include of jniHandles.inline.hpp by > jvmciCodeInstaller.hpp, I've moved the definitions referring to > JNIHandles::resolve from the .hpp file to the .cpp file. > > For jvmciJavaClasses.hpp, I've left it including > jniHandles.inline.hpp. It already includes two other .inline.hpp > files. I'm leaving it to whoever fixes the existing two to fix this > one as well. I'd love to see this fixed, but I realize it's non-trivial and would require quite a bit of work, so I'm ok with leaving this as is for now. /Per From kim.barrett at oracle.com Thu Mar 1 19:48:10 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 14:48:10 -0500 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: <26764728-4e8b-ef62-d1bc-9b744062e169@oracle.com> References: <1519217045.2401.14.camel@oracle.com> <26764728-4e8b-ef62-d1bc-9b744062e169@oracle.com> Message-ID: <9D4E4A33-8079-4875-B36A-74B41C096003@oracle.com> > On Mar 1, 2018, at 1:28 PM, Per Liden wrote: > > Hi, > > On 2018-03-01 00:46, Kim Barrett wrote: >> Finally, updated webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ >> incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ > > Looks good! Thanks. >> To remove the #include of jniHandles.inline.hpp by >> jvmciCodeInstaller.hpp, I've moved the definitions referring to >> JNIHandles::resolve from the .hpp file to the .cpp file. >> For jvmciJavaClasses.hpp, I've left it including >> jniHandles.inline.hpp. It already includes two other .inline.hpp >> files. I'm leaving it to whoever fixes the existing two to fix this >> one as well. > > I'd love to see this fixed, but I realize it's non-trivial and would require quite a bit of work, so I'm ok with leaving this as is for now. I think there are currently about 35 .hpp files that include .inline.hpp files. And then there?s precompiled.hpp. None that I?ve looked at looked ?easy?. From kim.barrett at oracle.com Thu Mar 1 20:29:21 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 15:29:21 -0500 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds Message-ID: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Please review this change to fix a build failure on Win32 when using VS2013 (and likely earlier). In os::vsnprintf, cast the int result to size_t for comparison with the buffer size, after having verified the result is non-negative. I'm not sure why this failure doesn't occur with VS2013 Win64 builds. 
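In sketch form, the comparison being fixed looks like the following (illustrative only; the wrapper body is simplified and is not the actual os_windows.cpp code):

int os::vsnprintf(char* buf, size_t len, const char* fmt, va_list args) {
  int result = ::vsnprintf(buf, len, fmt, args);
  // Compare against the buffer length only after the result is known to be
  // non-negative, and cast it to size_t so the comparison is between like
  // types; this avoids the C4018 signed/unsigned warning on 32-bit Windows.
  if (result >= 0 && (size_t)result >= len && len > 0) {
    buf[len - 1] = '\0';  // output was truncated; make sure it is terminated
  }
  return result;
}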
CR: https://bugs.openjdk.java.net/browse/JDK-8198906 Webrev: http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ Testing: VS2013 Win64 still builds. I don't have access to Win32, but the change is pretty simple. From erik.joelsson at oracle.com Thu Mar 1 22:50:03 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 1 Mar 2018 14:50:03 -0800 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: References: Message-ID: <6c39d7a4-607f-5f9b-d8e6-6e07f64cc670@oracle.com> Hello, I don't think you can remove the extra ( ) around the preprocessor commands. I added those to avoid race conditions in JDK-8158629. My conclusion then was that any command that redirected stdout needed to be wrapped in (). Otherwise this looks ok. /Erik On 2018-02-28 16:48, Magnus Ihse Bursie wrote: > We're doing a lot of weird compilation stuff for dtrace. With this > patch, most of the weirdness is removed. The remaining calls to $(CC) > -E has been changed to $(CPP) to clarify that we do not compile, we > just use the precompiler. > > One of the changes I made was to actually split up the last and final > dtrace call into a separate preprocessing step. However, this uses the > solaris studio preprocessor instead of the ancient system > preprocessor, which has changed behavior. A string like (&``_var) is > now expanded to (& ` ` _var), which is not accepted by dtrace. :-( I > have worked around this by adding the preprocessed output, without the > spaces, in two places. If anyone wants to dig deeper into dtrace > script file syntax, or C preprocessor magic, to avoid this, let me > know... (I'll just state that the "obvious" solution of sending -Xs to > the preprocessor to get old-style behavior does not work: this just > makes the solaris studio preprocessor call the ancient preprocessor in > turn, and we've gained nothing...) > > Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 > WebRev: > http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 > > /Magnus > From mikhailo.seledtsov at oracle.com Thu Mar 1 23:21:11 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Thu, 01 Mar 2018 15:21:11 -0800 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> Message-ID: <5A988AE7.90000@oracle.com> Hi Dmitry, We require at least one (Capital R) Reviewer for any changes in open jdk, including changes in tests. Please ask a Reviewer to look at the change, and update the hg-export. Then I can sponsor and integrate your change. Thank you, Misha On 2/28/18, 11:52 PM, Dmitry Samersoff wrote: > Hi Mikhailo, > > Please, find exported changeset under the link below: > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export > > -Dmitry > > On 20.02.2018 23:51, mikhailo wrote: >> Hi Dmitry, >> >> >> On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >>> Mikhailo, >>> >>> Here is the changes rebased to recent sources. >>> >>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ >> Changes look good to me. >>> Could you sponsor the push? >> I can sponsor the change, once the updated change is reviewed. Once it >> is ready, please send me the latest hg changeset (with usual fields, >> description, reviewers). 
>> >> >> Thank you, >> Misha >>> -Dmitry >>> >>> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>>> Changes look good from my point of view. >>>> >>>> Misha >>>> >>>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>>> Everybody, >>>>> >>>>> Please review small changes, that enables docker testing on >>>>> Linux/AArch64 >>>>> >>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>>> >>>>> PS: >>>>> >>>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>>> readable, please check that it doesn't brake your work. >>>>> >>>>> -Dmitry >>>>> >>>>> -- >>>>> Dmitry Samersoff >>>>> http://devnull.samersoff.net >>>>> * There will come soft rains ... > From david.holmes at oracle.com Fri Mar 2 01:45:09 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Mar 2018 11:45:09 +1000 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds In-Reply-To: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> References: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Message-ID: On 2/03/2018 6:29 AM, Kim Barrett wrote: > Please review this change to fix a build failure on Win32 when using > VS2013 (and likely earlier). In os::vsnprintf, cast the int result to > size_t for comparison with the buffer size, after having verified the > result is non-negative. Seems harmless. > I'm not sure why this failure doesn't occur with VS2013 Win64 builds. I'm a bit confused how a function that takes a parameter indicating the number of characters to write, and then returns the number of characters written, can use two different, and potentially different sized, types! On 64-bit size_t would be 64-bit but int is 32-bit, so how can you possibly always return the correct count? Granted it may not be a practical concern, but from a typing perspective this seems seriously messed up. Further size_t is unsigned so has twice the range of int - so even on 32-bit this doesn't make sense. Am I missing something? Thanks, David > CR: > https://bugs.openjdk.java.net/browse/JDK-8198906 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ > > Testing: > VS2013 Win64 still builds. I don't have access to Win32, but the > change is pretty simple. > > From david.holmes at oracle.com Fri Mar 2 02:01:50 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Mar 2018 12:01:50 +1000 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: References: Message-ID: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> Hi Magnus, On 1/03/2018 10:48 AM, Magnus Ihse Bursie wrote: > We're doing a lot of weird compilation stuff for dtrace. With this > patch, most of the weirdness is removed. The remaining calls to $(CC) -E > has been changed to $(CPP) to clarify that we do not compile, we just > use the precompiler. > > One of the changes I made was to actually split up the last and final > dtrace call into a separate preprocessing step. However, this uses the > solaris studio preprocessor instead of the ancient system preprocessor, > which has changed behavior. A string like (&``_var) is now expanded to > (& ` ` _var), which is not accepted by dtrace. :-( I have worked around > this by adding the preprocessed output, without the spaces, in two > places. If anyone wants to dig deeper into dtrace script file syntax, or > C preprocessor magic, to avoid this, let me know... 
(I'll just state > that the "obvious" solution of sending -Xs to the preprocessor to get > old-style behavior does not work: this just makes the solaris studio > preprocessor call the ancient preprocessor in turn, and we've gained > nothing...) > > Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 > WebRev: > http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 Why did you rename generateJvmOffsetsMain.c to generateJvmOffsetsMain.cpp? It isn't a C++ program, it's just a C program. I agree the logic is quite confusing. I think this build logic was victim of the CPP_FLAGS (meaning C preprocessor) to CXX_FLAGS (meaning C++ flags) renaming. But this is a trivial C program and should require trivial C compiler flags. I don't see it should be being built with all the JVM_CFLAGS. The latter may be harmless but it seems wrong to lump this in together with other things. make/hotspot/lib/CompileDtracePreJvm.gmk ! # Since we cannot generated JvmOffsets.cpp as part of the gensrc step, Comment doesn't read right. Thanks, David > > /Magnus > From david.holmes at oracle.com Fri Mar 2 02:11:10 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Mar 2018 12:11:10 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> Message-ID: <9918c64a-db03-95ba-5bcf-f6442c2399c3@oracle.com> On 1/03/2018 11:26 PM, Tobias Hartmann wrote: > > On 01.03.2018 12:53, David Holmes wrote: >> But if the allocation fails you don't have the ability to reconstruct the object ?? The OOME must >> appear to happen at the point at which the object should have been allocated, and nothing that >> happens after that point can be seen to have happened. > > Yes, if the re-allocation of the scalar replaced object fails during deoptimization, we cannot > re-construct the object. But that's okay because the interpreter will throw an OutOfMemoryError and > not use that object anyway (please note that C2 will only perform scalarization if escape analysis > determined that it's safe to do so and the object is not leaked). It's not the scalarized object I'm concerned about but other actions that may happen after the allocation point and which would never have occurred if the allocation threw OOME. But perhaps such actions are not possible under the definition of "safe" for this transformation. But we should probably discuss this elsewhere. Cheers, David > This is actually what the TestDeoptOOM verifies. > > Best regards, > Tobias > From kim.barrett at oracle.com Fri Mar 2 02:15:21 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 21:15:21 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <5A940DE0.7040108@oracle.com> References: <5A940DE0.7040108@oracle.com> Message-ID: <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> > On Feb 26, 2018, at 8:38 AM, Erik ?sterlund wrote: > > Hi, > > G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set that is incomplete and you can't use, and a concrete G1SATBCardTableLoggingModRefBS barrier set is what is the one actually used all over the place. The inheritance makes this code more difficult to understand than it needs to be. 
> > There should really not be an abstract G1 barrier set that is not used - it serves no purpose. There should be a single G1BarrierSet instead reflecting the actual G1 barriers used. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8195148 > > Thanks, > /Erik Mostly looks good. One minor formatting nit, and one significant not so sure about this. ------------------------------------------------------------------------------ src/hotspot/share/gc/g1/g1BarrierSet.cpp 35 G1BarrierSet::G1BarrierSet( 36 G1CardTable* card_table) : 37 CardTableModRefBS(card_table, BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), 38 _dcqs(JavaThread::dirty_card_queue_set()) 39 { } Move the argument to the same line as the constructor name, so it's easier to tell what are arguments and what are initializers. ------------------------------------------------------------------------------ src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { The change here doesn't seem to have anything to do with the renaming. Rather, it looks like a separate bug fix? The old code deferred the decode until after the null check, with the decoding benefitting from having already done the null check. At first glance, the new code seems like it might not perform as well. I do see why adding volatile is needed here though. ------------------------------------------------------------------------------ From kim.barrett at oracle.com Fri Mar 2 02:28:08 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 21:28:08 -0500 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds In-Reply-To: References: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Message-ID: <4DED9F3E-8601-4710-8CE4-7EB90B5DE405@oracle.com> > On Mar 1, 2018, at 8:45 PM, David Holmes wrote: > > On 2/03/2018 6:29 AM, Kim Barrett wrote: >> Please review this change to fix a build failure on Win32 when using >> VS2013 (and likely earlier). In os::vsnprintf, cast the int result to >> size_t for comparison with the buffer size, after having verified the >> result is non-negative. > > Seems harmless. Thanks. >> I'm not sure why this failure doesn't occur with VS2013 Win64 builds. > > I'm a bit confused how a function that takes a parameter indicating the number of characters to write, and then returns the number of characters written, can use two different, and potentially different sized, types! On 64-bit size_t would be 64-bit but int is 32-bit, so how can you possibly always return the correct count? Granted it may not be a practical concern, but from a typing perspective this seems seriously messed up. Further size_t is unsigned so has twice the range of int - so even on 32-bit this doesn't make sense. > > Am I missing something? As far as I can tell, the relevant standards bodies and such made something of a hash of things here. > Thanks, > David > >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8198906 >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ >> Testing: >> VS2013 Win64 still builds. I don't have access to Win32, but the >> change is pretty simple. 
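For reference, the write_ref_field_pre shape discussed in the 8195148 review reads roughly as follows (a sketch; the template header, and the angle-bracketed arguments such as MO_VOLATILE which the archived text appears to drop, are reconstructed assumptions):

template <DecoratorSet decorators, typename T>
inline void G1BarrierSet::write_ref_field_pre(T* field) {
  // Load the old value exactly once; the MO_VOLATILE decorator keeps the
  // compiler from re-reading *field after the null check.
  T heap_oop = RawAccess<MO_VOLATILE>::oop_load(field);
  if (!oopDesc::is_null(heap_oop)) {
    // Decode only after the null check and enqueue the old value for SATB.
    enqueue(oopDesc::decode_heap_oop_not_null(heap_oop));
  }
}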
From kim.barrett at oracle.com Fri Mar 2 02:43:05 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 21:43:05 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> Message-ID: > On Mar 1, 2018, at 9:15 PM, Kim Barrett wrote: > >> On Feb 26, 2018, at 8:38 AM, Erik ?sterlund wrote: >> >> Hi, >> >> G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set that is incomplete and you can't use, and a concrete G1SATBCardTableLoggingModRefBS barrier set is what is the one actually used all over the place. The inheritance makes this code more difficult to understand than it needs to be. >> >> There should really not be an abstract G1 barrier set that is not used - it serves no purpose. There should be a single G1BarrierSet instead reflecting the actual G1 barriers used. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8195148 >> >> Thanks, >> /Erik > > Mostly looks good. One minor formatting nit, and one significant not so sure about this. > > [?] > ------------------------------------------------------------------------------ > src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp > 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { > > The change here doesn't seem to have anything to do with the renaming. > Rather, it looks like a separate bug fix? > > The old code deferred the decode until after the null check, with the > decoding benefitting from having already done the null check. At > first glance, the new code seems like it might not perform as well. > > I do see why adding volatile is needed here though. Without exploring too deeply or doing any testing, it seems to me the only change that should be made here is to replace the use of oopDesc::load_heap_oop with RawAccess::oop_load, e.g. it should be T heap_oop = RawAccess::oop_load(field); if (!oopDesc::is_null(heap_oop)) { enqueue(oopDesc::decode_heap_oop_not_null(heap_oop)); } That's assuming the LoadProxy can resolve to a narrowOop. From kim.barrett at oracle.com Fri Mar 2 03:23:00 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 1 Mar 2018 22:23:00 -0500 Subject: RFR: 8196876: OopStorage::assert_at_safepoint clashes with assert_at_safepoint macros in g1CollectedHeap.hpp Message-ID: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> Please remove this fix for a macro name collision. g1CollectedHeap.hpp contains a collection of assertion macros, embedded in the middle of the class definition. A couple of them could be quite widely useful: assert_at_safepoint and assert_not_at_safepoint. The specific implementations here aren't really what we'd want elsewhere, and assert_at_safepoint has some extra stuff about whether the current thread is the VM thread, which is similarly not what we'd want elsewhere. And it collides with a helper function in OopStorage. Moved assert_at_safepoint() and assert_not_at_safepoint() macros to runtime/safepoint.hpp, along with a pair of associated macros that let the caller provide the failure message. These have the "obvious" implementations using SafepointSynchronize::is_at_safepoint() and assert. 
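In sketch form (the names of the message-taking variants are an assumption):

#define assert_at_safepoint()                                           \
  assert(SafepointSynchronize::is_at_safepoint(), "should be at a safepoint")

#define assert_at_safepoint_msg(...)                                    \
  assert(SafepointSynchronize::is_at_safepoint(), __VA_ARGS__)

#define assert_not_at_safepoint()                                       \
  assert(!SafepointSynchronize::is_at_safepoint(), "should not be at a safepoint")

#define assert_not_at_safepoint_msg(...)                                \
  assert(!SafepointSynchronize::is_at_safepoint(), __VA_ARGS__)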
The assert_at_safepoint macro that was in g1CollectedHeap.hpp is now called assert_at_safepoint_on_vm_thread, since that's the test that all but one of the (G1) callers wanted. The only outlier was DirtyCardQueueSet::apply_closure_during_gc, and it doesn't really care whether it's called from the VM thread. The colliding OopStorage function has been removed; OopStorage now just uses the new shared macro. There are a large number of places that could use the new safepoint assertion macros; that's a cleanup I'm going to leave for later. CR: https://bugs.openjdk.java.net/browse/JDK-8196876 Webrev: http://cr.openjdk.java.net/~kbarrett/8196876/open.00/ Testing: mach5 {hs,jdk}-tier{1,2,3} From thomas.stuefe at gmail.com Fri Mar 2 06:28:25 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 2 Mar 2018 07:28:25 +0100 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds In-Reply-To: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> References: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Message-ID: Hi Kim, looks good. Thanks for taking care of this. ..Thomas On Thu, Mar 1, 2018 at 9:29 PM, Kim Barrett wrote: > Please review this change to fix a build failure on Win32 when using > VS2013 (and likely earlier). In os::vsnprintf, cast the int result to > size_t for comparison with the buffer size, after having verified the > result is non-negative. > > I'm not sure why this failure doesn't occur with VS2013 Win64 builds. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198906 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ > > Testing: > VS2013 Win64 still builds. I don't have access to Win32, but the > change is pretty simple. > > > From thomas.stuefe at gmail.com Fri Mar 2 06:34:11 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 2 Mar 2018 07:34:11 +0100 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds In-Reply-To: References: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Message-ID: On Fri, Mar 2, 2018 at 2:45 AM, David Holmes wrote: > On 2/03/2018 6:29 AM, Kim Barrett wrote: > >> Please review this change to fix a build failure on Win32 when using >> VS2013 (and likely earlier). In os::vsnprintf, cast the int result to >> size_t for comparison with the buffer size, after having verified the >> result is non-negative. >> > > Seems harmless. > > I'm not sure why this failure doesn't occur with VS2013 Win64 builds. >> > > I'm a bit confused how a function that takes a parameter indicating the > number of characters to write, and then returns the number of characters > written, can use two different, and potentially different sized, types! On > 64-bit size_t would be 64-bit but int is 32-bit, so how can you possibly > always return the correct count? Granted it may not be a practical concern, > but from a typing perspective this seems seriously messed up. Further > size_t is unsigned so has twice the range of int - so even on 32-bit this > doesn't make sense. > > Am I missing something? > One problem is that the result type must be signed to be able to represent error cases as well. So even if it were ssize_t it would not cover the whole range of the input size_t. Thanks, > David > > > CR: >> https://bugs.openjdk.java.net/browse/JDK-8198906 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ >> >> Testing: >> VS2013 Win64 still builds. I don't have access to Win32, but the >> change is pretty simple. 
>> >> >> From erik.osterlund at oracle.com Fri Mar 2 07:31:06 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Fri, 2 Mar 2018 08:31:06 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> Message-ID: <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> Hi Kim, Thanks for the review. On 2 Mar 2018, at 03:15, Kim Barrett wrote: >> On Feb 26, 2018, at 8:38 AM, Erik ?sterlund wrote: >> >> Hi, >> >> G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set that is incomplete and you can't use, and a concrete G1SATBCardTableLoggingModRefBS barrier set is what is the one actually used all over the place. The inheritance makes this code more difficult to understand than it needs to be. >> >> There should really not be an abstract G1 barrier set that is not used - it serves no purpose. There should be a single G1BarrierSet instead reflecting the actual G1 barriers used. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8195148 >> >> Thanks, >> /Erik > > Mostly looks good. One minor formatting nit, and one significant not so sure about this. > > ------------------------------------------------------------------------------ > src/hotspot/share/gc/g1/g1BarrierSet.cpp > 35 G1BarrierSet::G1BarrierSet( > 36 G1CardTable* card_table) : > 37 CardTableModRefBS(card_table, BarrierSet::FakeRtti(BarrierSet::G1BarrierSet)), > 38 _dcqs(JavaThread::dirty_card_queue_set()) > 39 { } > > Move the argument to the same line as the constructor name, so it's > easier to tell what are arguments and what are initializers. Will fix. > ------------------------------------------------------------------------------ > src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp > 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { > > The change here doesn't seem to have anything to do with the renaming. > Rather, it looks like a separate bug fix? > > The old code deferred the decode until after the null check, with the > decoding benefitting from having already done the null check. At > first glance, the new code seems like it might not perform as well. > > I do see why adding volatile is needed here though. I understand this might look unrelated. Here is my explanation: There has been an unfortunate implicit dependency to oop.inline.hpp. Now with some headers included in different order, it no longer compiles without adding that include. But including oop.inline.hpp causes an unfortunate include cycle that causes other problems. By loading the oop with RawAccess instead, those issues are solved. As for MO_VOLATILE, I thought I might as well correct that while I am at it. Some compilers can and actually do reload the oop after the null check, at which point they may be NULL and break the algorithm. I couldn?t not put that decorator in there to solve that. So you see how that started as a necessary include dependency fix (I had to do something or it would not compile) but ended up fixing more things. Hope that is okay. 
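As a generic illustration of the reload hazard described above (made-up names, not HotSpot code): with a plain load the compiler may legally re-read the field after the null check, so the value tested and the value used can differ.

struct Obj { int x; };
void use(Obj* o);

void plain_load(Obj** field) {
  Obj* v = *field;      // plain load: the compiler may re-read *field below
  if (v != NULL) {      // instead of keeping 'v' in a register...
    use(v);             // ...so a concurrent store can make this see NULL
  }
}

void single_load(Obj** field) {
  Obj* v = *(Obj* volatile*)field;  // force exactly one read of the field,
  if (v != NULL) {                  // which is the role MO_VOLATILE plays
    use(v);                         // in the barrier code discussed above
  }
}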
Thanks, /Erik > ------------------------------------------------------------------------------ > From erik.osterlund at oracle.com Fri Mar 2 07:35:03 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Fri, 2 Mar 2018 08:35:03 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> Message-ID: Hi Kim, On 2 Mar 2018, at 03:43, Kim Barrett wrote: >> On Mar 1, 2018, at 9:15 PM, Kim Barrett wrote: >> >>> On Feb 26, 2018, at 8:38 AM, Erik ?sterlund wrote: >>> >>> Hi, >>> >>> G1 has two barrier sets: an abstract G1SATBCardTableModRefBS barrier set that is incomplete and you can't use, and a concrete G1SATBCardTableLoggingModRefBS barrier set is what is the one actually used all over the place. The inheritance makes this code more difficult to understand than it needs to be. >>> >>> There should really not be an abstract G1 barrier set that is not used - it serves no purpose. There should be a single G1BarrierSet instead reflecting the actual G1 barriers used. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8195148 >>> >>> Thanks, >>> /Erik >> >> Mostly looks good. One minor formatting nit, and one significant not so sure about this. >> >> [?] > >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp >> 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { >> >> The change here doesn't seem to have anything to do with the renaming. >> Rather, it looks like a separate bug fix? >> >> The old code deferred the decode until after the null check, with the >> decoding benefitting from having already done the null check. At >> first glance, the new code seems like it might not perform as well. >> >> I do see why adding volatile is needed here though. > > Without exploring too deeply or doing any testing, it seems to me the > only change that should be made here is to replace the use of > oopDesc::load_heap_oop with RawAccess::oop_load, e.g. it > should be > > T heap_oop = RawAccess::oop_load(field); > if (!oopDesc::is_null(heap_oop)) { > enqueue(oopDesc::decode_heap_oop_not_null(heap_oop)); > } > > That's assuming the LoadProxy can resolve to a narrowOop. It can, but then I would still have the include dependency problem that served as primary motivator. Thanks, /Erik From kim.barrett at oracle.com Fri Mar 2 08:09:58 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 2 Mar 2018 03:09:58 -0500 Subject: RFR(XS): 8198906: JDK-8196882 breaks VS2013 Win32 builds In-Reply-To: References: <593F2346-3234-496C-94BA-B0EAAFC0DDC3@oracle.com> Message-ID: > On Mar 2, 2018, at 1:28 AM, Thomas St?fe wrote: > > Hi Kim, looks good. Thanks for taking care of this. > > ..Thomas Thanks. > > On Thu, Mar 1, 2018 at 9:29 PM, Kim Barrett wrote: > Please review this change to fix a build failure on Win32 when using > VS2013 (and likely earlier). In os::vsnprintf, cast the int result to > size_t for comparison with the buffer size, after having verified the > result is non-negative. > > I'm not sure why this failure doesn't occur with VS2013 Win64 builds. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198906 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8198906/open.00/ > > Testing: > VS2013 Win64 still builds. 
I don't have access to Win32, but the > change is pretty simple. From tobias.hartmann at oracle.com Fri Mar 2 09:15:12 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 2 Mar 2018 10:15:12 +0100 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: <9918c64a-db03-95ba-5bcf-f6442c2399c3@oracle.com> References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> <9918c64a-db03-95ba-5bcf-f6442c2399c3@oracle.com> Message-ID: On 02.03.2018 03:11, David Holmes wrote: > It's not the scalarized object I'm concerned about but other actions that may happen after the > allocation point and which would never have occurred if the allocation threw OOME. But perhaps such > actions are not possible under the definition of "safe" for this transformation. It depends on what kind of other actions that would be but it's clearly possible that the user would see an OOME at an "unexpected" location in the code. For example: MyObject o = new MyObject(); o.x = 42; o.y = 43; System.out.println(o.x); myMethod(); // Deoptimize here for some reason System.out.println(o.y); return; C2 could scalarize 'o' because EA determined that it does not escape and we would then need to re-allocate once we deoptimize at the myMethod() call (because we will continue execution in the interpreter and 'o' is still live). If this re-allocation fails, the user would see the printed "42" followed by an OOME which is kind of unexpected at that point. It's the same with VirtualMachineErrors or metaspace OOMEs during class loading (they might be "unexpected" to the user as well). According to [1], OOMEs can basically be thrown at any point in the program because "OutOfMemoryError may be thrown when an excessive amount of time is being spent doing garbage collection and little memory is being freed" [1]. > But we should probably discuss this elsewhere. Yes, the discussion is independent of this patch. Feel free to follow up by email or chat. Can I assume you are okay with webrev.01? Thanks, Tobias [1] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html From david.holmes at oracle.com Fri Mar 2 09:35:14 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Mar 2018 19:35:14 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> <9918c64a-db03-95ba-5bcf-f6442c2399c3@oracle.com> Message-ID: <776f468e-86f8-a5c2-7847-ff46de6d3186@oracle.com> On 2/03/2018 7:15 PM, Tobias Hartmann wrote: > On 02.03.2018 03:11, David Holmes wrote: >> It's not the scalarized object I'm concerned about but other actions that may happen after the >> allocation point and which would never have occurred if the allocation threw OOME. But perhaps such >> actions are not possible under the definition of "safe" for this transformation. > > It depends on what kind of other actions that would be but it's clearly possible that the user would > see an OOME at an "unexpected" location in the code. 
For example: > > MyObject o = new MyObject(); > o.x = 42; > o.y = 43; > System.out.println(o.x); > myMethod(); // Deoptimize here for some reason > System.out.println(o.y); > return; > > C2 could scalarize 'o' because EA determined that it does not escape and we would then need to > re-allocate once we deoptimize at the myMethod() call (because we will continue execution in the > interpreter and 'o' is still live). If this re-allocation fails, the user would see the printed "42" > followed by an OOME which is kind of unexpected at that point. > > It's the same with VirtualMachineErrors or metaspace OOMEs during class loading (they might be > "unexpected" to the user as well). According to [1], OOMEs can basically be thrown at any point in > the program because "OutOfMemoryError may be thrown when an excessive amount of time is being spent > doing garbage collection and little memory is being freed" [1]. Hmmm to me that violates the precise exceptions requirement of the language. >> But we should probably discuss this elsewhere. > > Yes, the discussion is independent of this patch. Feel free to follow up by email or chat. > > Can I assume you are okay with webrev.01? Yes - thanks. David > Thanks, > Tobias > > [1] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html > From david.holmes at oracle.com Fri Mar 2 09:43:15 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 2 Mar 2018 19:43:15 +1000 Subject: [11] RFR(XS): 8198826: -XX:+VerifyStack fails with fatal error: ExceptionMark constructor expects no pending exceptions In-Reply-To: <776f468e-86f8-a5c2-7847-ff46de6d3186@oracle.com> References: <11874ef4-0897-b641-cbe4-b06d772957be@oracle.com> <3529ec7e-02c9-519e-42e6-56072d5b2800@oracle.com> <9918c64a-db03-95ba-5bcf-f6442c2399c3@oracle.com> <776f468e-86f8-a5c2-7847-ff46de6d3186@oracle.com> Message-ID: On 2/03/2018 7:35 PM, David Holmes wrote: > On 2/03/2018 7:15 PM, Tobias Hartmann wrote: >> On 02.03.2018 03:11, David Holmes wrote: >>> It's not the scalarized object I'm concerned about but other actions >>> that may happen after the >>> allocation point and which would never have occurred if the >>> allocation threw OOME. But perhaps such >>> actions are not possible under the definition of "safe" for this >>> transformation. >> >> It depends on what kind of other actions that would be but it's >> clearly possible that the user would >> see an OOME at an "unexpected" location in the code. For example: >> >> MyObject o = new MyObject(); >> o.x = 42; >> o.y = 43; >> System.out.println(o.x); >> myMethod(); // Deoptimize here for some reason >> System.out.println(o.y); >> return; >> >> C2 could scalarize 'o' because EA determined that it does not escape >> and we would then need to >> re-allocate once we deoptimize at the myMethod() call (because we will >> continue execution in the >> interpreter and 'o' is still live). If this re-allocation fails, the >> user would see the printed "42" >> followed by an OOME which is kind of unexpected at that point. >> >> It's the same with VirtualMachineErrors or metaspace OOMEs during >> class loading (they might be >> "unexpected" to the user as well). According to [1], OOMEs can >> basically be thrown at any point in >> the program because "OutOfMemoryError may be thrown when an excessive >> amount of time is being spent >> doing garbage collection and little memory is being freed" [1]. > > Hmmm to me that violates the precise exceptions requirement of the > language. 
Nope Im wrong - OOME is allowed to occur as an asynchronous exception, not just synchronous. [JLS 11.1.3] So this is allowed for. David >>> But we should probably discuss this elsewhere. >> >> Yes, the discussion is independent of this patch. Feel free to follow >> up by email or chat. >> >> Can I assume you are okay with webrev.01? > > Yes - thanks. > > David > >> Thanks, >> Tobias >> >> [1] >> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html >> >> From erik.osterlund at oracle.com Fri Mar 2 09:59:52 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 2 Mar 2018 10:59:52 +0100 Subject: RFR: 8196876: OopStorage::assert_at_safepoint clashes with assert_at_safepoint macros in g1CollectedHeap.hpp In-Reply-To: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> References: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> Message-ID: <5A992098.5060309@oracle.com> Hi Kim, Looks good. Thanks, /Erik On 2018-03-02 04:23, Kim Barrett wrote: > Please remove this fix for a macro name collision. > > g1CollectedHeap.hpp contains a collection of assertion macros, > embedded in the middle of the class definition. A couple of them > could be quite widely useful: assert_at_safepoint and > assert_not_at_safepoint. The specific implementations here aren't > really what we'd want elsewhere, and assert_at_safepoint has some > extra stuff about whether the current thread is the VM thread, which > is similarly not what we'd want elsewhere. And it collides with a > helper function in OopStorage. > > Moved assert_at_safepoint() and assert_not_at_safepoint() macros to > runtime/safepoint.hpp, along with a pair of associated macros that let > the caller provide the failure message. These have the "obvious" > implementations using SafepointSynchronize::is_at_safepoint() and > assert. > > The assert_at_safepoint macro that was in g1CollectedHeap.hpp is now > called assert_at_safepoint_on_vm_thread, since that's the test that > all but one of the (G1) callers wanted. The only outlier was > DirtyCardQueueSet::apply_closure_during_gc, and it doesn't really care > whether it's called from the VM thread. > > The colliding OopStorage function has been removed; OopStorage now > just uses the new shared macro. > > There are a large number of places that could use the new safepoint > assertion macros; that's a cleanup I'm going to leave for later. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8196876 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8196876/open.00/ > > Testing: > mach5 {hs,jdk}-tier{1,2,3} > From magnus.ihse.bursie at oracle.com Fri Mar 2 11:10:08 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 2 Mar 2018 12:10:08 +0100 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> References: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> Message-ID: <8c44d583-d001-3a4c-fc2d-40140f811f84@oracle.com> On 2018-03-02 03:01, David Holmes wrote: > Hi Magnus, > > On 1/03/2018 10:48 AM, Magnus Ihse Bursie wrote: >> We're doing a lot of weird compilation stuff for dtrace. With this >> patch, most of the weirdness is removed. The remaining calls to $(CC) >> -E has been changed to $(CPP) to clarify that we do not compile, we >> just use the precompiler. >> >> One of the changes I made was to actually split up the last and final >> dtrace call into a separate preprocessing step. 
However, this uses >> the solaris studio preprocessor instead of the ancient system >> preprocessor, which has changed behavior. A string like (&``_var) is >> now expanded to (& ` ` _var), which is not accepted by dtrace. :-( I >> have worked around this by adding the preprocessed output, without >> the spaces, in two places. If anyone wants to dig deeper into dtrace >> script file syntax, or C preprocessor magic, to avoid this, let me >> know... (I'll just state that the "obvious" solution of sending -Xs >> to the preprocessor to get old-style behavior does not work: this >> just makes the solaris studio preprocessor call the ancient >> preprocessor in turn, and we've gained nothing...) >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 >> WebRev: >> http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 > > > Why did you rename generateJvmOffsetsMain.c to > generateJvmOffsetsMain.cpp? It isn't a C++ program, it's just a C > program. Yes, but so are generateJvmOffsets.cpp. :-& There was no point in mixing a .cpp and .c file for this trivial build tool helper. In fact, I don't even understand why they are two separate files -- if I get the blessings from someone in hotspot, I'll gladly just concatenate them into a single file. > I agree the logic is quite confusing. I think this build logic was > victim of the CPP_FLAGS (meaning C preprocessor) to CXX_FLAGS (meaning > C++ flags) renaming. But this is a trivial C program and should > require trivial C compiler flags. I don't see it should be being built > with all the JVM_CFLAGS. The latter may be harmless but it seems wrong > to lump this in together with other things. Actually, no. Or, maybe it was a victim of CPP_FLAGS/CXX_FLAGS confusion, too. But the JVM_CFLAGS *are* needed. Otherwise I'd removed them, trust me. And I would have moved the entire piece of code to gensrc, where it belongs. (This is just about compiling a build tool that will generate source code that should be later compiled, typical gensrc stuff). But, this file includes a lot of hotspot include files. And for that to work, we need the -I and -D flags from JVM_CFLAGS. We probably don't need any other parts of the JVM_CFLAGS, so in theory, we could probably split JVM_CFLAGS into a "defines and include paths" part, and a "rest" part. But I would not bet on it, suddenly you'd have some kind of option (-xc99?) that modifies the parsing of the include files... This is the general problem with all dtrace stuff, it needs to poke it's fingers deep down in the libjvm. :( > > make/hotspot/lib/CompileDtracePreJvm.gmk > > !???? # Since we cannot generated JvmOffsets.cpp as part of the gensrc > step, > > Comment doesn't read right. Typo, should be "generate". I'll fix. /Magnus > > Thanks, > David > >> >> /Magnus >> From magnus.ihse.bursie at oracle.com Fri Mar 2 12:45:16 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 2 Mar 2018 13:45:16 +0100 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: <8c44d583-d001-3a4c-fc2d-40140f811f84@oracle.com> References: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> <8c44d583-d001-3a4c-fc2d-40140f811f84@oracle.com> Message-ID: <3d15bc14-e39f-288f-0ccc-77d3b95fc66c@oracle.com> On 2018-03-02 12:10, Magnus Ihse Bursie wrote: > On 2018-03-02 03:01, David Holmes wrote: >> Hi Magnus, >> >> On 1/03/2018 10:48 AM, Magnus Ihse Bursie wrote: >>> We're doing a lot of weird compilation stuff for dtrace. 
With this >>> patch, most of the weirdness is removed. The remaining calls to >>> $(CC) -E has been changed to $(CPP) to clarify that we do not >>> compile, we just use the precompiler. >>> >>> One of the changes I made was to actually split up the last and >>> final dtrace call into a separate preprocessing step. However, this >>> uses the solaris studio preprocessor instead of the ancient system >>> preprocessor, which has changed behavior. A string like (&``_var) is >>> now expanded to (& ` ` _var), which is not accepted by dtrace. :-( I >>> have worked around this by adding the preprocessed output, without >>> the spaces, in two places. If anyone wants to dig deeper into dtrace >>> script file syntax, or C preprocessor magic, to avoid this, let me >>> know... (I'll just state that the "obvious" solution of sending -Xs >>> to the preprocessor to get old-style behavior does not work: this >>> just makes the solaris studio preprocessor call the ancient >>> preprocessor in turn, and we've gained nothing...) >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 >>> WebRev: >>> http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 >> >> >> >> Why did you rename generateJvmOffsetsMain.c to >> generateJvmOffsetsMain.cpp? It isn't a C++ program, it's just a C >> program. > Yes, but so are generateJvmOffsets.cpp. :-& There was no point in > mixing a .cpp and .c file for this trivial build tool helper. In fact, > I don't even understand why they are two separate files -- if I get > the blessings from someone in hotspot, I'll gladly just concatenate > them into a single file. Come to think about it, I don't care about the hotspot group's blessing. ;-) I just moved the main function into the generateJvmOffsets.cpp file. It was just silly having it as a separate file. > >> !???? # Since we cannot generated JvmOffsets.cpp as part of the >> gensrc step, >> >> Comment doesn't read right. > Typo, should be "generate". I'll fix. Updated. I also restored the extra ( ) in ExecuteWithLog with redirection, and added an additional ( ) for one case that was previously missing one. Finally I also added the changes to dtrace that Erik requested for JDK-8198859, but which was already pushed by that time. New webrev: http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.02 /Magnus From coleen.phillimore at oracle.com Fri Mar 2 12:50:09 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 2 Mar 2018 07:50:09 -0500 Subject: RFR: 8196876: OopStorage::assert_at_safepoint clashes with assert_at_safepoint macros in g1CollectedHeap.hpp In-Reply-To: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> References: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> Message-ID: <902c444c-38e0-197c-48d6-81157fafde2b@oracle.com> On 3/1/18 10:23 PM, Kim Barrett wrote: > Please remove this fix for a macro name collision. > > g1CollectedHeap.hpp contains a collection of assertion macros, > embedded in the middle of the class definition. A couple of them > could be quite widely useful: assert_at_safepoint and > assert_not_at_safepoint. The specific implementations here aren't > really what we'd want elsewhere, and assert_at_safepoint has some > extra stuff about whether the current thread is the VM thread, which > is similarly not what we'd want elsewhere. And it collides with a > helper function in OopStorage. 
> > Moved assert_at_safepoint() and assert_not_at_safepoint() macros to > runtime/safepoint.hpp, along with a pair of associated macros that let > the caller provide the failure message. These have the "obvious" > implementations using SafepointSynchronize::is_at_safepoint() and > assert. > > The assert_at_safepoint macro that was in g1CollectedHeap.hpp is now > called assert_at_safepoint_on_vm_thread, since that's the test that > all but one of the (G1) callers wanted. The only outlier was > DirtyCardQueueSet::apply_closure_during_gc, and it doesn't really care > whether it's called from the VM thread. > > The colliding OopStorage function has been removed; OopStorage now > just uses the new shared macro. > > There are a large number of places that could use the new safepoint > assertion macros; that's a cleanup I'm going to leave for later. Yes, please leave for later.? The rest of the code can migrate to the new macro eventually now that it's there. Change looks good. Thanks, Coleen > > CR: > https://bugs.openjdk.java.net/browse/JDK-8196876 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8196876/open.00/ > > Testing: > mach5 {hs,jdk}-tier{1,2,3} > From erik.joelsson at oracle.com Fri Mar 2 15:29:47 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 2 Mar 2018 07:29:47 -0800 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: <3d15bc14-e39f-288f-0ccc-77d3b95fc66c@oracle.com> References: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> <8c44d583-d001-3a4c-fc2d-40140f811f84@oracle.com> <3d15bc14-e39f-288f-0ccc-77d3b95fc66c@oracle.com> Message-ID: This looks good to me. /Erik On 2018-03-02 04:45, Magnus Ihse Bursie wrote: > On 2018-03-02 12:10, Magnus Ihse Bursie wrote: >> On 2018-03-02 03:01, David Holmes wrote: >>> Hi Magnus, >>> >>> On 1/03/2018 10:48 AM, Magnus Ihse Bursie wrote: >>>> We're doing a lot of weird compilation stuff for dtrace. With this >>>> patch, most of the weirdness is removed. The remaining calls to >>>> $(CC) -E has been changed to $(CPP) to clarify that we do not >>>> compile, we just use the precompiler. >>>> >>>> One of the changes I made was to actually split up the last and >>>> final dtrace call into a separate preprocessing step. However, this >>>> uses the solaris studio preprocessor instead of the ancient system >>>> preprocessor, which has changed behavior. A string like (&``_var) >>>> is now expanded to (& ` ` _var), which is not accepted by dtrace. >>>> :-( I have worked around this by adding the preprocessed output, >>>> without the spaces, in two places. If anyone wants to dig deeper >>>> into dtrace script file syntax, or C preprocessor magic, to avoid >>>> this, let me know... (I'll just state that the "obvious" solution >>>> of sending -Xs to the preprocessor to get old-style behavior does >>>> not work: this just makes the solaris studio preprocessor call the >>>> ancient preprocessor in turn, and we've gained nothing...) >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 >>>> WebRev: >>>> http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 >>> >>> >>> >>> >>> Why did you rename generateJvmOffsetsMain.c to >>> generateJvmOffsetsMain.cpp? It isn't a C++ program, it's just a C >>> program. >> Yes, but so are generateJvmOffsets.cpp. :-& There was no point in >> mixing a .cpp and .c file for this trivial build tool helper. 
In >> fact, I don't even understand why they are two separate files -- if I >> get the blessings from someone in hotspot, I'll gladly just >> concatenate them into a single file. > Come to think about it, I don't care about the hotspot group's > blessing. ;-) I just moved the main function into the > generateJvmOffsets.cpp file. It was just silly having it as a separate > file. > >> >>> !???? # Since we cannot generated JvmOffsets.cpp as part of the >>> gensrc step, >>> >>> Comment doesn't read right. >> Typo, should be "generate". I'll fix. > > Updated. > > I also restored the extra ( ) in ExecuteWithLog with redirection, and > added an additional ( ) for one case that was previously missing one. > > Finally I also added the changes to dtrace that Erik requested for > JDK-8198859, but which was already pushed by that time. > > New webrev: > http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.02 > > > /Magnus > From tim.bell at oracle.com Fri Mar 2 15:38:05 2018 From: tim.bell at oracle.com (Tim Bell) Date: Fri, 02 Mar 2018 07:38:05 -0800 Subject: RFR: JDK-8198862 Stop doing funky compilation stuff for dtrace In-Reply-To: References: <6a865220-1f02-1df3-84ff-e8d786756a22@oracle.com> <8c44d583-d001-3a4c-fc2d-40140f811f84@oracle.com> <3d15bc14-e39f-288f-0ccc-77d3b95fc66c@oracle.com> Message-ID: <5A996FDD.9080800@oracle.com> Looks good to me as well. Tim On 03/02/18 07:29, Erik Joelsson wrote: > This looks good to me. > > /Erik > > > On 2018-03-02 04:45, Magnus Ihse Bursie wrote: >> On 2018-03-02 12:10, Magnus Ihse Bursie wrote: >>> On 2018-03-02 03:01, David Holmes wrote: >>>> Hi Magnus, >>>> >>>> On 1/03/2018 10:48 AM, Magnus Ihse Bursie wrote: >>>>> We're doing a lot of weird compilation stuff for dtrace. With this >>>>> patch, most of the weirdness is removed. The remaining calls to >>>>> $(CC) -E has been changed to $(CPP) to clarify that we do not >>>>> compile, we just use the precompiler. >>>>> >>>>> One of the changes I made was to actually split up the last and >>>>> final dtrace call into a separate preprocessing step. However, this >>>>> uses the solaris studio preprocessor instead of the ancient system >>>>> preprocessor, which has changed behavior. A string like (&``_var) >>>>> is now expanded to (& ` ` _var), which is not accepted by dtrace. >>>>> :-( I have worked around this by adding the preprocessed output, >>>>> without the spaces, in two places. If anyone wants to dig deeper >>>>> into dtrace script file syntax, or C preprocessor magic, to avoid >>>>> this, let me know... (I'll just state that the "obvious" solution >>>>> of sending -Xs to the preprocessor to get old-style behavior does >>>>> not work: this just makes the solaris studio preprocessor call the >>>>> ancient preprocessor in turn, and we've gained nothing...) >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8198862 >>>>> WebRev: >>>>> http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.01 >>>> >>>> >>>> >>>> >>>> >>>> Why did you rename generateJvmOffsetsMain.c to >>>> generateJvmOffsetsMain.cpp? It isn't a C++ program, it's just a C >>>> program. >>> Yes, but so are generateJvmOffsets.cpp. :-& There was no point in >>> mixing a .cpp and .c file for this trivial build tool helper. In >>> fact, I don't even understand why they are two separate files -- if I >>> get the blessings from someone in hotspot, I'll gladly just >>> concatenate them into a single file. 
>> Come to think about it, I don't care about the hotspot group's >> blessing. ;-) I just moved the main function into the >> generateJvmOffsets.cpp file. It was just silly having it as a separate >> file. >> >>> >>>> ! # Since we cannot generated JvmOffsets.cpp as part of the >>>> gensrc step, >>>> >>>> Comment doesn't read right. >>> Typo, should be "generate". I'll fix. >> >> Updated. >> >> I also restored the extra ( ) in ExecuteWithLog with redirection, and >> added an additional ( ) for one case that was previously missing one. >> >> Finally I also added the changes to dtrace that Erik requested for >> JDK-8198859, but which was already pushed by that time. >> >> New webrev: >> http://cr.openjdk.java.net/~ihse/JDK-8198862-stop-doing-funky-dtrace-compilation-stuff/webrev.02 >> >> >> /Magnus >> > From kim.barrett at oracle.com Fri Mar 2 17:25:25 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 2 Mar 2018 12:25:25 -0500 Subject: RFR: 8196882: VS2017 Hotspot Defined vsnprintf Function Causes C2084 Already Defined Compilation Error In-Reply-To: <10DC9D59-2019-4161-8CD8-06F574173A4E@oracle.com> References: <9E005462-4EF9-4DE5-A547-E0DF7824BD11@oracle.com> <10015fab-b6ee-80fc-369e-d4a615e87d54@oracle.com> <86A0EA21-1141-4C73-AD80-EF46A44220B2@oracle.com> <761A1FEF-95F4-4015-AA43-2962D615B50C@oracle.com> <5ab12da719db45699b8d1b9a83f21520@sap.com> <10DC9D59-2019-4161-8CD8-06F574173A4E@oracle.com> Message-ID: <8B5D308B-005E-4B15-9CE8-DF3B7DE977BA@oracle.com> > On Mar 1, 2018, at 12:46 PM, Kim Barrett wrote: > >> On Mar 1, 2018, at 4:16 AM, Doerr, Martin wrote: >> >> Hi Kim, >> >> this change causes a build warning on 32 bit Windows with Visual Studio 2013: >> os_windows.cpp(1521) : warning C4018: '>=' : signed/unsigned mismatch >> >> I think " result >= len" should get fixed. Or is Visual Studio 2013 no longer supported? >> Do you have a pending change in which you can update this? >> >> Best regards, >> Martin > > Thanks for the report. > > Oracle isn?t currently testing jdk on win32, but CRs and patches related to problems specific to it > are still welcome. I?m not sure why we?re not seeing this with win64; we?re still building with VS2013, > with VS2017 support still under development and not yet our default. > > I?ll file a bug and send out a fix for review shortly. https://bugs.openjdk.java.net/browse/JDK-8198906 I pushed a fix for it last night. From kim.barrett at oracle.com Fri Mar 2 22:51:02 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 2 Mar 2018 17:51:02 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> Message-ID: > On Mar 2, 2018, at 2:31 AM, Erik Osterlund wrote: >> src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp >> 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { >> >> The change here doesn't seem to have anything to do with the renaming. >> Rather, it looks like a separate bug fix? >> >> The old code deferred the decode until after the null check, with the >> decoding benefitting from having already done the null check. At >> first glance, the new code seems like it might not perform as well. >> >> I do see why adding volatile is needed here though. > > I understand this might look unrelated. 
Here is my explanation: > > There has been an unfortunate implicit dependency to oop.inline.hpp. Now with some headers included in different order, it no longer compiles without adding that include. But including oop.inline.hpp causes an unfortunate include cycle that causes other problems. By loading the oop with RawAccess instead, those issues are solved. > > As for MO_VOLATILE, I thought I might as well correct that while I am at it. Some compilers can and actually do reload the oop after the null check, at which point they may be NULL and break the algorithm. I couldn?t not put that decorator in there to solve that. > > So you see how that started as a necessary include dependency fix (I had to do something or it would not compile) but ended up fixing more things. Hope that is okay. I tried this out, to try to get a better understanding of the issues, and I don't know what problems you are referring to. I used the variant I suggested, e.g. RawAccess and conversion to T rather than collapsing to oop, with the original oopDesc-based null check and decoding. That did indeed fail to compile (not too surpisingly). But adding an #include of oop.inline.hpp seemed to just work. While thinking about this before trying any experiments, it occurred to me that we might have a usage mistake around .inline.hpp files, but that didn't seem to arise here. So no, I'm not (yet) okay with this part of the change. From serguei.spitsyn at oracle.com Sat Mar 3 10:02:46 2018 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Sat, 3 Mar 2018 02:02:46 -0800 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> Message-ID: Hi Dmitry, The fix looks good to me. Thanks, Serguei On 2/28/18 23:52, Dmitry Samersoff wrote: > Hi Mikhailo, > > Please, find exported changeset under the link below: > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export > > -Dmitry > > On 20.02.2018 23:51, mikhailo wrote: >> Hi Dmitry, >> >> >> On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >>> Mikhailo, >>> >>> Here is the changes rebased to recent sources. >>> >>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ >> Changes look good to me. >>> Could you sponsor the push? >> I can sponsor the change, once the updated change is reviewed. Once it >> is ready, please send me the latest hg changeset (with usual fields, >> description, reviewers). >> >> >> Thank you, >> Misha >>> -Dmitry >>> >>> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>>> Changes look good from my point of view. >>>> >>>> Misha >>>> >>>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>>> Everybody, >>>>> >>>>> Please review small changes, that enables docker testing on >>>>> Linux/AArch64 >>>>> >>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>>> >>>>> PS: >>>>> >>>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>>> readable, please check that it doesn't brake your work. >>>>> >>>>> -Dmitry >>>>> >>>>> -- >>>>> Dmitry Samersoff >>>>> http://devnull.samersoff.net >>>>> * There will come soft rains ... 
> From dmitry.samersoff at bell-sw.com Sat Mar 3 10:21:19 2018 From: dmitry.samersoff at bell-sw.com (Dmitry Samersoff) Date: Sat, 3 Mar 2018 13:21:19 +0300 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <5A988AE7.90000@oracle.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> <5A988AE7.90000@oracle.com> Message-ID: <2a467bda-ee2b-c29a-8885-04ee0258111e@bell-sw.com> Hi Mikhailo, Please find updated changeset under: http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02_1.export -Dmitry On 03/02/2018 02:21 AM, Mikhailo Seledtsov wrote: > Hi Dmitry, > > ?? We require at least one (Capital R) Reviewer for any changes in open > jdk, including changes in tests. > Please ask a Reviewer to look at the change, and update the hg-export. > Then I can sponsor and integrate your change. > > Thank you, > Misha > > > On 2/28/18, 11:52 PM, Dmitry Samersoff wrote: >> Hi Mikhailo, >> >> Please, find exported changeset under the link below: >> >> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export >> >> -Dmitry >> >> On 20.02.2018 23:51, mikhailo wrote: >>> Hi Dmitry, >>> >>> >>> On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >>>> Mikhailo, >>>> >>>> Here is the changes rebased to recent sources. >>>> >>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ >>> Changes look good to me. >>>> Could you sponsor the push? >>> I can sponsor the change, once the updated change is reviewed. Once it >>> is ready, please send me the latest hg changeset (with usual fields, >>> description, reviewers). >>> >>> >>> Thank you, >>> Misha >>>> -Dmitry >>>> >>>> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>>>> Changes look good from my point of view. >>>>> >>>>> Misha >>>>> >>>>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>>>> Everybody, >>>>>> >>>>>> Please review small changes, that enables docker testing on >>>>>> Linux/AArch64 >>>>>> >>>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>>>> >>>>>> PS: >>>>>> >>>>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>>>> readable, please check that it doesn't brake your work. >>>>>> >>>>>> -Dmitry >>>>>> >>>>>> --? >>>>>> Dmitry Samersoff >>>>>> http://devnull.samersoff.net >>>>>> * There will come soft rains ... >> From kim.barrett at oracle.com Sun Mar 4 01:46:57 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 3 Mar 2018 20:46:57 -0500 Subject: RFR: 8196876: OopStorage::assert_at_safepoint clashes with assert_at_safepoint macros in g1CollectedHeap.hpp In-Reply-To: <902c444c-38e0-197c-48d6-81157fafde2b@oracle.com> References: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> <902c444c-38e0-197c-48d6-81157fafde2b@oracle.com> Message-ID: > On Mar 2, 2018, at 7:50 AM, coleen.phillimore at oracle.com wrote: > > > > On 3/1/18 10:23 PM, Kim Barrett wrote: >> Please remove this fix for a macro name collision. >> >> g1CollectedHeap.hpp contains a collection of assertion macros, >> embedded in the middle of the class definition. A couple of them >> could be quite widely useful: assert_at_safepoint and >> assert_not_at_safepoint. The specific implementations here aren't >> really what we'd want elsewhere, and assert_at_safepoint has some >> extra stuff about whether the current thread is the VM thread, which >> is similarly not what we'd want elsewhere. And it collides with a >> helper function in OopStorage. 
>> >> Moved assert_at_safepoint() and assert_not_at_safepoint() macros to >> runtime/safepoint.hpp, along with a pair of associated macros that let >> the caller provide the failure message. These have the "obvious" >> implementations using SafepointSynchronize::is_at_safepoint() and >> assert. >> >> The assert_at_safepoint macro that was in g1CollectedHeap.hpp is now >> called assert_at_safepoint_on_vm_thread, since that's the test that >> all but one of the (G1) callers wanted. The only outlier was >> DirtyCardQueueSet::apply_closure_during_gc, and it doesn't really care >> whether it's called from the VM thread. >> >> The colliding OopStorage function has been removed; OopStorage now >> just uses the new shared macro. >> >> There are a large number of places that could use the new safepoint >> assertion macros; that's a cleanup I'm going to leave for later. > > Yes, please leave for later. The rest of the code can migrate to the new macro eventually now that it's there. > > Change looks good. > > Thanks, > Coleen Thanks. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8196876 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8196876/open.00/ >> >> Testing: >> mach5 {hs,jdk}-tier{1,2,3} From kim.barrett at oracle.com Sun Mar 4 01:47:08 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 3 Mar 2018 20:47:08 -0500 Subject: RFR: 8196876: OopStorage::assert_at_safepoint clashes with assert_at_safepoint macros in g1CollectedHeap.hpp In-Reply-To: <5A992098.5060309@oracle.com> References: <805343C9-7419-4C8B-B575-D0BA01186524@oracle.com> <5A992098.5060309@oracle.com> Message-ID: <97938E50-BA5F-4611-87C5-942B43B9B3FF@oracle.com> > On Mar 2, 2018, at 4:59 AM, Erik ?sterlund wrote: > > Hi Kim, > > Looks good. > > Thanks, > /Erik Thanks. > > On 2018-03-02 04:23, Kim Barrett wrote: >> Please remove this fix for a macro name collision. >> >> g1CollectedHeap.hpp contains a collection of assertion macros, >> embedded in the middle of the class definition. A couple of them >> could be quite widely useful: assert_at_safepoint and >> assert_not_at_safepoint. The specific implementations here aren't >> really what we'd want elsewhere, and assert_at_safepoint has some >> extra stuff about whether the current thread is the VM thread, which >> is similarly not what we'd want elsewhere. And it collides with a >> helper function in OopStorage. >> >> Moved assert_at_safepoint() and assert_not_at_safepoint() macros to >> runtime/safepoint.hpp, along with a pair of associated macros that let >> the caller provide the failure message. These have the "obvious" >> implementations using SafepointSynchronize::is_at_safepoint() and >> assert. >> >> The assert_at_safepoint macro that was in g1CollectedHeap.hpp is now >> called assert_at_safepoint_on_vm_thread, since that's the test that >> all but one of the (G1) callers wanted. The only outlier was >> DirtyCardQueueSet::apply_closure_during_gc, and it doesn't really care >> whether it's called from the VM thread. >> >> The colliding OopStorage function has been removed; OopStorage now >> just uses the new shared macro. >> >> There are a large number of places that could use the new safepoint >> assertion macros; that's a cleanup I'm going to leave for later. 
>> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8196876 >> >> Webrev: >> http://cr.openjdk.java.net/~kbarrett/8196876/open.00/ >> >> Testing: >> mach5 {hs,jdk}-tier{1,2,3} From thomas.stuefe at gmail.com Mon Mar 5 05:20:52 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 5 Mar 2018 06:20:52 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> Message-ID: On Thu, Mar 1, 2018 at 11:36 AM, Thomas St?fe wrote: > Hi Coleen, > > thanks a lot for the review and the sponsoring offer! > > New version (full): http://cr.openjdk.java.net/~stuefe/webrevs/ > metaspace-coalescation/2018-03-01/webrev-full/webrev/ > incremental: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace- > coalescation/2018-03-01/webrev-incr/webrev/ > > Please find remarks inline: > > > On Tue, Feb 27, 2018 at 11:22 PM, wrote: > >> >> Thomas, review comments: >> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html >> >> +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types >> >> >> It's really both, isn't it? The type is the index into the free list or >> in use lists. The name seems fine. >> >> > You are right. What I meant was that a lot of code needs to know about the > different chunk sizes, but naming it "Index" and adding enum values like > "NumberOfFreeLists" we expose implementation details no-one outside of > SpaceManager and ChunkManager cares about (namely, the fact that these > values are internally used as indices into arrays). A more neutral naming > would be something like "enum ChunkTypes { spec,small, .... , > NumberOfNonHumongousChunkTypes, NumberOfChunkTypes }. > > However, I can leave this out for a possible future cleanup. The change is > big enough as it is. > > >> Can you add comments on the #endifs if the #ifdef is more than a couple >> 2-3 lines above (it's a nit that bothers me). >> >> +#ifdef ASSERT >> + // A 32bit sentinel for debugging purposes. >> +#define CHUNK_SENTINEL 0x4d4554EF // "MET" >> +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF >> + uint32_t _sentinel; >> +#endif >> + const ChunkIndex _chunk_type; >> + const bool _is_class; >> + // Whether the chunk is free (in freelist) or in use by some class >> loader. >> bool _is_tagged_free; >> +#ifdef ASSERT >> + ChunkOrigin _origin; >> + int _use_count; >> +#endif >> + >> >> > I removed the asserts completely, following your suggestion below that > "origin" would be valuable in customer scenarios too. By that logic, the > other members are valuable too: the sentinel is valuable when examining > memory dumps to see the start of chunks, and the in-use counter is useful > too. What do you think? > > So, I leave the members in - which, depending what the C++ compiler does > to enums and bools, may cost up to 128bit additional header space. I think > that is ok. In one of my earlier versions of this patch I hand-crafted the > header using chars and bitfields to be as small as possible, but that > seemed over-engineered. > > However, I left out any automatic verifications accessing these debug > members. These are still only done in debug builds. > > >> >> It seems that if you could move origin and _use_count into the ASSERT >> block above (maybe putting use_count before _origin. 
>> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html >> >> In take_from_committed, can the allocation of padding chunks be its own >> function like add_chunks_to_aligment() lines 1574-1615? The function is too >> long now. >> >> > I moved the padding chunk allocation into an own function as you suggested. > > >> I don't think coalescation is a word in English, at least my dictionary >> cannot find it. Although it makes sense in the context, just distracting. >> >> > I replaced "coalescation" with "chunk merging" throughout the code. Also > less of a tongue breaker. > > >> + // Now check if in the coalescation area there are still life chunks. >> >> >> "live" chunks I guess. A sentence you won't read often :). >> > > Now that I read it it almost sounded sinister :) Fixed. > > >> >> In free_chunks_get() can you handle the Humongous case first? The else >> for humongous chunk size is buried tons of lines below. >> >> Otherwise it might be helpful to the logic to make your addition to this >> function be a function you call like >> chunk = split_from_larger_free_chunk(); >> > > I did the latter. I moved the splitting of a larger chunk to an own > function. This causes a slight logic change: the new function > (ChunkManager::split_chunk()) splits an existing large free chunks into n > smaller free chunks and adds them all back to the freelist - that includes > the chunk we are about to return. That allows us to use the same exit path > - which removes the chunk from the freelist and adjusts all counters - in > the caller function "ChunkManager::free_chunks_get" instead of having to > return in the middle of the function. > > To make the test more readable, I also remove the > "test-that-free-chunks-are-optimally-merged" verification - which was > quite lengthy - from VirtualSpaceNode::verify() to a new function, > VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). > > >> You might want to keep the origin in product mode if it doesn't add to >> the chunk footprint. Might help with customer debugging. >> >> > See above > > >> Awesome looking test... >> >> > Thanks, I was worried it would be too complicated. > I changed it a bit because there were sporadic errors. Not a "real" error, > just the test itself was faulty. The "metaspaces_in_use" counter was > slightly wrong in one corner case. > > >> I've read through most of this and thank you for adding this to at least >> partially solve the fragmentation problem. The irony is that we >> templatized the Dictionary from CMS so that we could use it for Metaspace >> and that has splitting and coalescing but it seems this code makes more >> sense than adapting that code (if it's even possible). >> > > Well, it helps other metadata use cases too, no. > > >> >> Thank you for working on this. I'll sponsor this for you. >> > Coleen >> >> > > Thanks again! > > I also updated my jdk-submit branch to include these latest changes; tests > are still runnning. > > Tests in jdk-submit ran without errors. Our nightly tests do not show errors on any of our platforms. Best Regards, Thomas > Kind Regards, Thomas > > > >> >> On 2/26/18 9:20 AM, Thomas St?fe wrote: >> >>> Hi all, >>> >>> I know this patch is a bit larger, but may I please have reviews and/or >>> other input? 
>>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >>> Latest version: >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev/ >>> >>> For those who followed the mail thread, this is the incremental diff to >>> the >>> last changes (included feedback Goetz gave me on- and off-list): >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev-incr/webrev/ >>> >>> Thank you! >>> >>> Kind Regards, Thomas Stuefe >>> >>> >>> >>> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >>> wrote: >>> >>> Hi, >>>> >>>> We would like to contribute a patch developed at SAP which has been live >>>> in our VM for some time. It improves the metaspace chunk allocation: >>>> reduces fragmentation and raises the chance of reusing free metaspace >>>> chunks. >>>> >>>> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>>> ation/2018-02-05--2/webrev/ >>>> >>>> In very short, this patch helps with a number of pathological cases >>>> where >>>> metaspace chunks are free but cannot be reused because they are of the >>>> wrong size. For example, the metaspace freelist could be full of small >>>> chunks, which would not be reusable if we need larger chunks. So, we >>>> could >>>> get metaspace OOMs even in situations where the metaspace was far from >>>> exhausted. Our patch adds the ability to split and merge metaspace >>>> chunks >>>> dynamically and thus remove the "size-lock-in" problem. >>>> >>>> Note that there have been other attempts to get a grip on this problem, >>>> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >>>> our patch attempts a more complete solution. >>>> >>>> In 2016 I discussed the idea for this patch with some folks off-list, >>>> among them Jon Matsimutso. He then did advice me to create a JEP. So I >>>> did: >>>> [1]. However, meanwhile changes to the JEP process were discussed [2], >>>> and >>>> I am not sure anymore this patch needs even needs a JEP. It may be >>>> moderately complex and hence carries the risk inherent in any patch, but >>>> its effects would not be externally visible (if you discount seeing >>>> fewer >>>> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >>>> >>>> -- >>>> >>>> How this patch works: >>>> >>>> 1) When a class loader dies, its metaspace chunks are freed and returned >>>> to the freelist for reuse by the next class loader. With the patch, upon >>>> returning a chunk to the freelist, an attempt is made to merge it with >>>> its >>>> neighboring chunks - should they happen to be free too - to form a >>>> larger >>>> chunk. Which then is placed in the free list. >>>> >>>> As a result, the freelist should be populated by larger chunks at the >>>> expense of smaller chunks. In other words, all free chunks should >>>> always be >>>> as "coalesced as possible". >>>> >>>> 2) When a class loader needs a new chunk and a chunk of the requested >>>> size >>>> cannot be found in the free list, before carving out a new chunk from >>>> the >>>> virtual space, we first check if there is a larger chunk in the free >>>> list. >>>> If there is, that larger chunk is chopped up into n smaller chunks. One >>>> of >>>> them is returned to the callers, the others are re-added to the >>>> freelist. >>>> >>>> (1) and (2) together have the effect of removing the size-lock-in for >>>> chunks. If fragmentation allows it, small chunks are dynamically >>>> combined >>>> to form larger chunks, and larger chunks are split on demand. 
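To make the split-on-demand part of (2) concrete, here is a toy sketch - standalone and illustrative only, nothing like the real ChunkManager code - of per-size free lists where a request for a missing size is satisfied by chopping up the next larger free chunk and returning the leftover pieces to the free list:

// Toy model: chunk sizes are multiples of a smallest unit, free chunks
// sit in per-size free lists, and get_chunk() splits a larger free
// chunk when the requested size is not available.
#include <cstddef>
#include <cstdio>
#include <map>
#include <vector>

struct Chunk { size_t start; size_t size; };              // sizes in smallest-chunk units

typedef std::map<size_t, std::vector<Chunk> > FreeLists;  // chunk size -> free chunks

static bool get_chunk(FreeLists& free_lists, size_t size, Chunk* out) {
  std::vector<Chunk>& exact = free_lists[size];
  if (!exact.empty()) {                                   // exact fit available
    *out = exact.back();
    exact.pop_back();
    return true;
  }
  // No exact fit: split the smallest larger free chunk into 'size'-unit pieces.
  for (FreeLists::iterator it = free_lists.begin(); it != free_lists.end(); ++it) {
    if (it->first > size && !it->second.empty()) {
      Chunk big = it->second.back();
      it->second.pop_back();
      for (size_t off = size; off < big.size; off += size) {
        Chunk piece = { big.start + off, size };
        free_lists[size].push_back(piece);                // leftovers go back to the free list
      }
      out->start = big.start;
      out->size  = size;
      return true;
    }
  }
  return false;                                           // caller would carve new space instead
}

int main() {
  FreeLists free_lists;
  Chunk medium = { 0, 32 };                               // one large free chunk
  free_lists[32].push_back(medium);
  Chunk c;
  if (get_chunk(free_lists, 4, &c)) {
    std::printf("got chunk @%zu of size %zu, %zu small chunks left on the free list\n",
                c.start, c.size, free_lists[4].size());
  }
  return 0;
}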
>>>> >>>> -- >>>> >>>> What this patch does not: >>>> >>>> This is not a rewrite of the chunk allocator - most of the mechanisms >>>> stay >>>> intact. Specifically, chunk sizes remain unchanged, and so do chunk >>>> allocation processes (when do which class loaders get handed which chunk >>>> size). Almost everthing this patch does affects only internal workings >>>> of >>>> the ChunkManager. >>>> >>>> Also note that I refrained from doing any cleanups, since I wanted >>>> reviewers to be able to gauge this patch without filtering noise. >>>> Unfortunately this patch adds some complexity. But there are many future >>>> opportunities for code cleanup and simplification, some of which we >>>> already >>>> discussed in existing RFEs ([3], [4]). All of them are out of the scope >>>> for >>>> this particular patch. >>>> >>>> -- >>>> >>>> Details: >>>> >>>> Before the patch, the following rules held: >>>> - All chunk sizes are multiples of the smallest chunk size ("specialized >>>> chunks") >>>> - All chunk sizes of larger chunks are also clean multiples of the next >>>> smaller chunk size (e.g. for class space, the ratio of >>>> specialized/small/medium chunks is 1:2:32) >>>> - All chunk start addresses are aligned to the smallest chunk size (more >>>> or less accidentally, see metaspace_reserve_alignment). >>>> The patch makes the last rule explicit and more strict: >>>> - All (non-humongous) chunk start addresses are now aligned to their own >>>> chunk size. So, e.g. medium chunks are allocated at addresses which are >>>> a >>>> multiple of medium chunk size. This rule is not extended to humongous >>>> chunks, whose start addresses continue to be aligned to the smallest >>>> chunk >>>> size. >>>> >>>> The reason for this new alignment rule is that it makes it cheap both to >>>> find chunk predecessors of a chunk and to check which chunks are free. >>>> >>>> When a class loader dies and its chunk is returned to the freelist, all >>>> we >>>> have is its address. In order to merge it with its neighbors to form a >>>> larger chunk, we need to find those neighbors, including those preceding >>>> the returned chunk. Prior to this patch that was not easy - one would >>>> have >>>> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >>>> due to the new alignment rule, we now know where the prospective larger >>>> chunk must start - at the next lower larger-chunk-size-aligned >>>> boundary. We >>>> also know that currently a smaller chunk must start there (*). >>>> >>>> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >>>> now keeps a bitmap which describes its occupancy. One bit in this bitmap >>>> corresponds to a range the size of the smallest chunk size and starting >>>> at >>>> an address aligned to the smallest chunk size. Because of the alignment >>>> rules above, such a range belongs to one single chunk. The bit is 1 if >>>> the >>>> associated chunk is in use by a class loader, 0 if it is free. >>>> >>>> When we have calculated the address range a prospective larger chunk >>>> would >>>> span, we now need to check if all chunks in that range are free. Only >>>> then >>>> we can merge them. We do that by querying the bitmap. Note that the most >>>> common use case here is forming medium chunks from smaller chunks. 
With >>>> the >>>> new alignment rules, the bitmap portion covering a medium chunk now >>>> always >>>> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so >>>> reading >>>> the bitmap in many cases becomes a simple 16- or 32bit load. >>>> >>>> If the range is free, only then we need to iterate the chunks in that >>>> range: pull them from the freelist, combine them to one new larger >>>> chunk, >>>> re-add that one to the freelist. >>>> >>>> (*) Humongous chunks make this a bit more complicated. Since the new >>>> alignment rule does not extend to them, a humongous chunk could still >>>> straddle the lower or upper boundary of the prospective larger chunk. >>>> So I >>>> gave the occupancy map a second layer, which is used to mark the start >>>> of >>>> chunks. >>>> An alternative approach could have been to make humongous chunks size >>>> and >>>> start address always a multiple of the largest non-humongous chunk size >>>> (medium chunks). That would have caused a bit of waste per humongous >>>> chunk >>>> (<64K) in exchange for simpler coding and a simpler occupancy map. >>>> >>>> -- >>>> >>>> The patch shows its best results in scenarios where a lot of smallish >>>> class loaders are alive simultaneously. When dying, they leave >>>> continuous >>>> expanses of metaspace covered in small chunks, which can be merged >>>> nicely. >>>> However, if class loader life times vary more, we have more >>>> interleaving of >>>> dead and alive small chunks, and hence chunk merging does not work as >>>> well >>>> as it could. >>>> >>>> For an example of a pathological case like this see example program: [5] >>>> >>>> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >>>> test3.Example2" the test will load 3000 small classes in separate class >>>> loaders, then throw them away and start loading large classes. The small >>>> classes will have flooded the metaspace with small chunks, which are >>>> unusable for the large classes. When executing with the rather limited >>>> CompressedClassSpaceSize=10M, we will run into an OOM after loading >>>> about >>>> 800 large classes, having used only 40% of the class space, the rest is >>>> wasted to unused small chunks. However, with our patch the example >>>> program >>>> will manage to allocate ~2900 large classes before running into an OOM, >>>> and >>>> class space will show almost no waste. >>>> >>>> Do demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >>>> an OOM, statistics and an ASCII representation of the class space will >>>> be >>>> shown. The unpatched version will show large expanses of unused small >>>> chunks, the patched variant will show almost no waste. >>>> >>>> Note that the patch could be made more effective with a different size >>>> ratio between small and medium chunks: in class space, that ratio is >>>> 1:16, >>>> so 16 small chunks must happen to be free to form one larger chunk. >>>> With a >>>> smaller ratio the chance for coalescation would be larger. So there may >>>> be >>>> room for future improvement here: Since we now can merge and split >>>> chunks >>>> on demand, we could introduce more chunk sizes. Potentially arriving at >>>> a >>>> buddy-ish allocator style where we drop hard-wired chunk sizes for a >>>> dynamic model where the ratio between chunk sizes is always 1:2 and we >>>> could in theory have no limit to the chunk size? But this is just a >>>> thought >>>> and well out of the scope of this patch. 
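To make the occupancy map idea concrete as well, a second toy sketch - again standalone and illustrative only, not the real OccupancyMap, and ignoring the second layer that marks humongous chunk starts: one flag per smallest-chunk slot, and a merge check that aligns the freed chunk's position down to the larger chunk size and asks whether every slot in that range is free.

// Toy occupancy map: in_use[i] covers one smallest-chunk-sized slot.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Occupancy {
  std::vector<bool> in_use;
  explicit Occupancy(size_t slots) : in_use(slots, false) {}

  void set_range(size_t start, size_t len, bool used) {
    for (size_t i = start; i < start + len; i++) in_use[i] = used;
  }
  bool range_is_free(size_t start, size_t len) const {
    for (size_t i = start; i < start + len; i++) {
      if (in_use[i]) return false;
    }
    return true;
  }
};

// Could the larger chunk of 'larger' slots containing slot 'freed_at' be
// formed? Alignment makes the candidate start a simple round-down.
static bool can_merge(const Occupancy& map, size_t freed_at, size_t larger) {
  size_t candidate_start = freed_at - (freed_at % larger);
  return map.range_is_free(candidate_start, larger);
}

int main() {
  Occupancy map(64);                       // 64 smallest-chunk slots, all free
  map.set_range(16, 4, true);              // one small chunk still in use at slot 16
  std::printf("merge around slot 4:  %s\n", can_merge(map, 4, 16)  ? "yes" : "no");
  std::printf("merge around slot 20: %s\n", can_merge(map, 20, 16) ? "yes" : "no");
  return 0;
}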
>>>> >>>> -- >>>> >>>> What does this patch cost (memory): >>>> >>>> - the occupancy bitmap adds 1 byte per 4K metaspace. >>>> - MetaChunk headers get larger, since we add an enum and two bools to >>>> it. >>>> Depending on what the c++ compiler does with that, chunk headers grow by >>>> one or two MetaWords, reducing the payload size by that amount. >>>> - The new alignment rules mean we may need to create padding chunks to >>>> precede larger chunks. But since these padding chunks are added to the >>>> freelist, they should be used up before the need for new padding chunks >>>> arises. So, the maximally possible number of unused padding chunks >>>> should >>>> be limited by design to about 64K. >>>> >>>> The expectation is that the memory savings by this patch far outweighs >>>> its >>>> added memory costs. >>>> >>>> .. (performance): >>>> >>>> We did not see measurable drops in standard benchmarks raising over the >>>> normal noise. I also measured times for a program which stresses >>>> metaspace >>>> chunk coalescation, with the same result. >>>> >>>> I am open to suggestions what else I should measure, and/or independent >>>> measurements. >>>> >>>> -- >>>> >>>> Other details: >>>> >>>> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >>>> complexity somewhat, because it was made mostly obsolete by this patch: >>>> since small chunks are combined to larger chunks upon return to the >>>> freelist, in theory we should not have that many free small chunks >>>> anymore >>>> anyway. However, there may be still cases where we could benefit from >>>> this >>>> workaround, so I am asking your opinion on this one. >>>> >>>> About tests: There were two native tests - ChunkManagerReturnTest and >>>> TestVirtualSpaceNode (the former was added by me last year) - which did >>>> not >>>> make much sense anymore, since they relied heavily on internal behavior >>>> which was made unpredictable with this patch. >>>> To make up for these lost tests, I added a new gtest which attempts to >>>> stress the many combinations of allocation pattern but does so from a >>>> layer >>>> above the old tests. It now uses Metaspace::allocate() and friends. By >>>> using that point as entry for tests, I am less dependent on >>>> implementation >>>> internals and still cover a lot of scenarios. >>>> >>>> -- >>>> >>>> Review pointers: >>>> >>>> Good points to start are >>>> - ChunkManager::return_single_chunk() - specifically, >>>> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >>>> upon return to the free list >>>> - ChunkManager::free_chunks_get(): Here we now split large chunks into >>>> smaller chunks on demand >>>> - VirtualSpaceNode::take_from_committed() : chunks are allocated >>>> according to align rules now, padding chunks are handles >>>> - The OccupancyMap class is the helper class implementing the new >>>> occupancy bitmap >>>> >>>> The rest is mostly chaff: helper functions, added tests and >>>> verifications. 
>>>> >>>> -- >>>> >>>> Thanks and Best Regards, Thomas >>>> >>>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >>>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >>>> /000128.html >>>> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >>>> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >>>> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >>>> >>>> >>>> >>>> >> > From irogers at google.com Sun Mar 4 19:24:54 2018 From: irogers at google.com (Ian Rogers) Date: Sun, 04 Mar 2018 19:24:54 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream Message-ID: Hi, we've been encountering poor performance with -Xcheck:jni, for the following example the performance is 140x to 510x slower with the flag enabled: >>>> import java.io.ByteArrayOutputStream; import java.io.IOException; import java.util.Random; import java.util.zip.DeflaterOutputStream; public final class CheckJniTest { static void deflateBytesPerformance() throws IOException { byte[] inflated = new byte[1 << 23]; new Random(71917367).nextBytes(inflated); ByteArrayOutputStream deflated = new ByteArrayOutputStream(); try (DeflaterOutputStream dout = new DeflaterOutputStream(deflated)) { dout.write(inflated, 0, inflated.length); } if (8391174 != deflated.size()) { throw new AssertionError(); } } public static void main(String args[]) throws IOException { int n = 5; if (args.length > 0) { n = Integer.parseInt(args[0]); } for (int i = 0; i < n; i++) { long startTime = System.currentTimeMillis(); deflateBytesPerformance(); long endTime = System.currentTimeMillis(); System.out.println("Round " + i + " took " + (endTime - startTime) + "ms"); } } } <<<< The issue is in the libzip Deflater.c JNI code: http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/native/libzip/Deflater.c#l131 The test has an 8MB array to deflate/compress. The DeflaterOutputStream has an buffer of size 512 bytes: http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/classes/java/util/zip/DeflaterOutputStream.java#l128 To compress the array, 16,384 native calls are made that use the 8MB input array and the 512 byte output array. These arrays are accessed using GetPrimitiveArrayCritical that with -Xcheck:jni copies the array: http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/hotspot/share/prims/jniCheck.cpp#l1862 The copying of the arrays leads to 128GB of copying which dominates execution time. One approach to fix the problem is to rewrite libzip in Java. GNU Classpath has such an implementation: http://cvs.savannah.gnu.org/viewvc/classpath/classpath/java/util/zip/Deflater.java?view=markup#l417 A different approach is to use Get/SetByteArrayRegion (using Get/SetByteArrayElements would be no different than the current approach accept potentially causing more copies). I've included a patch and performance data for this approach below where regions of the arrays are copied onto a 512 byte buffer on the stack. The non -Xcheck:jni performance is roughly equivalent before and after the patch, the -Xcheck:jni performance is now similar to the non -Xcheck:jni performance. The choice to go from a using GetPrimitiveArrayCritical to GetByteArrayRegion is a controversial one, as engineers have many different expectations about what critical means and does. GetPrimitiveArrayCritical may have similar overhead to GetByteArrayElements if primitive arrays (possibly of a certain size) are known to be non-moving. 
There may be a cost to pin critical arrays or the regions they exist within. There may be a global or region lock in play that can cause interactions with the garbage collector - such interactions may cause tail latency issues in production environments. GetByteArrayRegion loses compared to GetPrimitiveArrayCritical as it must always copy a region of the array for access. Given these issues it is hard to develop a benchmark of GetPrimitiveArrayCritical vs GetByteArrayRegion that can take into account the GC interactions. Most benchmarks will see that avoiding a copy can be good for performance.

For more background, Aleksey has a good post on GetPrimitiveArrayCritical here:
https://shipilev.net/jvm-anatomy-park/9-jni-critical-gclocker/

A different solution to the performance problem presented here is to change the -Xcheck:jni implementation to do less copying of arrays. This would be motivated if GetPrimitiveArrayCritical were expected to be used more widely than GetByteArrayRegion in performance sensitive, commonly used code. Given the range of problems possible with GetPrimitiveArrayCritical I'd personally prefer GetByteArrayRegion to be more commonly used, as I'm yet to see a performance problem that made GetPrimitiveArrayCritical so compelling. For example, ObjectOutputStream has burnt me previously:
http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/native/libjava/ObjectOutputStream.c#l69
and these trivial copy operations should really be a call to fix the JIT/AOT compilers.

Next steps: it'd be great to get this turned into a bug, although it's not clear to me whether this is a JDK issue (as it uses GetPrimitiveArrayCritical) or a HotSpot performance issue related to -Xcheck:jni (hence the cross post). We're happy to contribute the attached patch, but there should be greater consistency in libzip, were it applied, as there are uses of GetPrimitiveArrayCritical in the inflation/uncompressing code and elsewhere.
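For concreteness, the copy-a-region-at-a-time pattern described above looks roughly like the sketch below. This is illustrative only - the method and names are made up, error handling is simplified, and it is not the actual webrev patch (the real Deflater.c also has to manage the deflater state across calls); it just shows Get/SetByteArrayRegion moving data through a small stack buffer instead of pinning whole arrays with GetPrimitiveArrayCritical.

// Illustrative JNI sketch, not the actual Deflater.c change: process a
// Java byte[] in fixed-size pieces via Get/SetByteArrayRegion and a
// small C stack buffer. The "transform" step is a placeholder for
// whatever the native code really does with each piece (e.g. feeding
// it to zlib).
#include <jni.h>

#define CHUNK 512

// Hypothetical native method: copies 'len' bytes of 'in' starting at
// 'off' through a stack buffer and writes them to the start of 'out'
// (assumed to be at least 'len' bytes long).
extern "C" JNIEXPORT void JNICALL
Java_Example_copyThroughStackBuffer(JNIEnv* env, jclass,
                                    jbyteArray in, jint off, jint len,
                                    jbyteArray out) {
  jbyte buf[CHUNK];
  jint done = 0;
  while (done < len) {
    jint n = (len - done < CHUNK) ? (len - done) : CHUNK;
    // Copy the next region of the Java input array into the C buffer.
    env->GetByteArrayRegion(in, off + done, n, buf);
    if (env->ExceptionCheck()) return;   // e.g. ArrayIndexOutOfBounds
    // ... a real implementation would transform buf here ...
    // Copy the (possibly transformed) bytes back out to the Java array.
    env->SetByteArrayRegion(out, done, n, buf);
    if (env->ExceptionCheck()) return;
    done += n;
  }
}

The design point of the pattern is that the extra copy is bounded to CHUNK bytes per call regardless of the total array size, so the checked JNI cost no longer scales with the 8MB input.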
Thanks, Ian Rogers - Google Current non -Xcheck:jni performance: Round 0 took 382ms Round 1 took 340ms Round 2 took 303ms Round 3 took 256ms Round 4 took 258ms Round 5 took 255ms Round 6 took 260ms Round 7 took 257ms Round 8 took 253ms Round 9 took 246ms Round 10 took 246ms Round 11 took 247ms Round 12 took 245ms Round 13 took 245ms Round 14 took 246ms Round 15 took 244ms Round 16 took 268ms Round 17 took 248ms Round 18 took 247ms Round 19 took 247ms Current -Xcheck:jni performance: Round 0 took 99223ms Round 1 took 56829ms Round 2 took 55620ms Round 3 took 55558ms Round 4 took 55893ms Round 5 took 55779ms Round 6 took 56039ms Round 7 took 54443ms Round 8 took 126490ms Round 9 took 124516ms Round 10 took 125497ms Round 11 took 125022ms Round 12 took 126630ms Round 13 took 122399ms Round 14 took 125623ms Round 15 took 125574ms Round 16 took 123621ms Round 17 took 125374ms Round 18 took 124973ms Round 19 took 125248ms Patched non -Xcheck:jni performance: Round 0 took 363ms Round 1 took 382ms Round 2 took 354ms Round 3 took 274ms Round 4 took 265ms Round 5 took 264ms Round 6 took 266ms Round 7 took 255ms Round 8 took 259ms Round 9 took 245ms Round 10 took 245ms Round 11 took 245ms Round 12 took 245ms Round 13 took 244ms Round 14 took 244ms Round 15 took 245ms Round 16 took 358ms Round 17 took 318ms Round 18 took 311ms Round 19 took 251ms Patched -Xcheck:jni performance: Round 0 took 385ms Round 1 took 373ms Round 2 took 366ms Round 3 took 297ms Round 4 took 301ms Round 5 took 303ms Round 6 took 304ms Round 7 took 308ms Round 8 took 298ms Round 9 took 287ms Round 10 took 290ms Round 11 took 396ms Round 12 took 368ms Round 13 took 328ms Round 14 took 283ms Round 15 took 282ms Round 16 took 296ms Round 17 took 283ms Round 18 took 283ms Round 19 took 281ms From david.holmes at oracle.com Mon Mar 5 06:33:20 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 5 Mar 2018 16:33:20 +1000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: Message-ID: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> Hi Ian, Do you run with -Xcheck:jni in production mode because you load unknown native code libraries? It's mainly intended as a diagnostic option to turn on if you encounter a possible JNI problem. I'll leave the debate on your actual patch proposal to others more familiar with the library code. 
Thanks, David On 5/03/2018 5:24 AM, Ian Rogers wrote: > Hi, > > we've been encountering poor performance with -Xcheck:jni, for the > following example the performance is 140x to 510x slower with the flag > enabled: > >>>>> > import java.io.ByteArrayOutputStream; > import java.io.IOException; > import java.util.Random; > import java.util.zip.DeflaterOutputStream; > > public final class CheckJniTest { > static void deflateBytesPerformance() throws IOException { > byte[] inflated = new byte[1 << 23]; > new Random(71917367).nextBytes(inflated); > ByteArrayOutputStream deflated = new ByteArrayOutputStream(); > try (DeflaterOutputStream dout = new DeflaterOutputStream(deflated)) { > dout.write(inflated, 0, inflated.length); > } > if (8391174 != deflated.size()) { > throw new AssertionError(); > } > } > > public static void main(String args[]) throws IOException { > int n = 5; > if (args.length > 0) { > n = Integer.parseInt(args[0]); > } > for (int i = 0; i < n; i++) { > long startTime = System.currentTimeMillis(); > deflateBytesPerformance(); > long endTime = System.currentTimeMillis(); > System.out.println("Round " + i + " took " + (endTime - startTime) + > "ms"); > } > } > } > <<<< > > The issue is in the libzip Deflater.c JNI code: > http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/native/libzip/Deflater.c#l131 > > The test has an 8MB array to deflate/compress. The DeflaterOutputStream has > an buffer of size 512 bytes: > http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/classes/java/util/zip/DeflaterOutputStream.java#l128 > > To compress the array, 16,384 native calls are made that use the 8MB input > array and the 512 byte output array. These arrays are accessed using > GetPrimitiveArrayCritical that with -Xcheck:jni copies the array: > http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/hotspot/share/prims/jniCheck.cpp#l1862 > The copying of the arrays leads to 128GB of copying which dominates > execution time. > > One approach to fix the problem is to rewrite libzip in Java. GNU Classpath > has such an implementation: > http://cvs.savannah.gnu.org/viewvc/classpath/classpath/java/util/zip/Deflater.java?view=markup#l417 > > A different approach is to use Get/SetByteArrayRegion (using > Get/SetByteArrayElements would be no different than the current approach > accept potentially causing more copies). I've included a patch and > performance data for this approach below where regions of the arrays are > copied onto a 512 byte buffer on the stack. The non -Xcheck:jni performance > is roughly equivalent before and after the patch, the -Xcheck:jni > performance is now similar to the non -Xcheck:jni performance. > > The choice to go from a using GetPrimitiveArrayCritical to > GetByteArrayRegion is a controversial one, as engineers have many different > expectations about what critical means and does. GetPrimitiveArrayCritical > may have similar overhead to GetByteArrayElements if primitive arrays > (possibly of a certain size) are known to be non-moving. There may be a > cost to pin critical arrays or regions they exist within. There may be a > global or region lock that is in play that can cause interactions with the > garbage collector - such interactions may cause tail latency issues in > production environments. GetByteArrayRegion loses compared to > GetPrimitiveArrayCritical as it must always copy a region of the array for > access. 
Given these issues it is hard to develop a benchmark of > GetPrimitiveArrayCritical > vs GetByteArrayRegion that can take into account the GC interactions. Most > benchmarks will see that avoiding a copy can be good for performance. > > For more background, Aleksey has a good post on GetPrimitiveArrayCritical > here: > https://shipilev.net/jvm-anatomy-park/9-jni-critical-gclocker/ > > A different solution to the performance problem presented here is to change > the check JNI implementation to do less copying of arrays. This would be > motivated if GetPrimitiveArrayCritical were expected to be used more widely > than GetByteArrayRegion in performance sensitive, commonly used code. Given > the range of problems possible with GetPrimitiveArrayCritical I'd > personally prefer GetByteArrayRegion to be more commonly used, as I'm yet > to see a performance problem that made GetPrimitiveArrayCritical so > compelling. For example, ObjectOutputStream has burnt me previously: > http://hg.openjdk.java.net/jdk/jdk/file/c5eb27eed365/src/java.base/share/native/libjava/ObjectOutputStream.c#l69 > and these trivial copy operations, should really be a call to fix the > JIT/AOT compilers. > > Next steps: it'd be great to get this turned in to a bug although its not > clear to me whether this is a JDK issue (as it uses > GetPrimitiveArrayCritical) or a HotSpot performance issue related to > -Xcheck:jni (hence the cross post). We're happy to contribute the attached > patch but there should be greater consistency in libzip, were it applied, > as there are uses of GetPrimitiveArrayCritical in the > inflation/uncompressing code and elsewhere. > > Thanks, > Ian Rogers - Google > > Current non -Xcheck:jni performance: > Round 0 took 382ms > Round 1 took 340ms > Round 2 took 303ms > Round 3 took 256ms > Round 4 took 258ms > Round 5 took 255ms > Round 6 took 260ms > Round 7 took 257ms > Round 8 took 253ms > Round 9 took 246ms > Round 10 took 246ms > Round 11 took 247ms > Round 12 took 245ms > Round 13 took 245ms > Round 14 took 246ms > Round 15 took 244ms > Round 16 took 268ms > Round 17 took 248ms > Round 18 took 247ms > Round 19 took 247ms > > Current -Xcheck:jni performance: > Round 0 took 99223ms > Round 1 took 56829ms > Round 2 took 55620ms > Round 3 took 55558ms > Round 4 took 55893ms > Round 5 took 55779ms > Round 6 took 56039ms > Round 7 took 54443ms > Round 8 took 126490ms > Round 9 took 124516ms > Round 10 took 125497ms > Round 11 took 125022ms > Round 12 took 126630ms > Round 13 took 122399ms > Round 14 took 125623ms > Round 15 took 125574ms > Round 16 took 123621ms > Round 17 took 125374ms > Round 18 took 124973ms > Round 19 took 125248ms > > Patched non -Xcheck:jni performance: > Round 0 took 363ms > Round 1 took 382ms > Round 2 took 354ms > Round 3 took 274ms > Round 4 took 265ms > Round 5 took 264ms > Round 6 took 266ms > Round 7 took 255ms > Round 8 took 259ms > Round 9 took 245ms > Round 10 took 245ms > Round 11 took 245ms > Round 12 took 245ms > Round 13 took 244ms > Round 14 took 244ms > Round 15 took 245ms > Round 16 took 358ms > Round 17 took 318ms > Round 18 took 311ms > Round 19 took 251ms > > Patched -Xcheck:jni performance: > Round 0 took 385ms > Round 1 took 373ms > Round 2 took 366ms > Round 3 took 297ms > Round 4 took 301ms > Round 5 took 303ms > Round 6 took 304ms > Round 7 took 308ms > Round 8 took 298ms > Round 9 took 287ms > Round 10 took 290ms > Round 11 took 396ms > Round 12 took 368ms > Round 13 took 328ms > Round 14 took 283ms > Round 15 took 282ms > Round 16 took 296ms > Round 17 
took 283ms > Round 18 took 283ms > Round 19 took 281ms > From thomas.schatzl at oracle.com Mon Mar 5 08:08:04 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 05 Mar 2018 09:08:04 +0100 Subject: RFR: 8198474: Move JNIHandles::resolve into jniHandles.inline.hpp In-Reply-To: References: <1519217045.2401.14.camel@oracle.com> Message-ID: <1520237284.2532.1.camel@oracle.com> Hi, On Wed, 2018-02-28 at 18:46 -0500, Kim Barrett wrote: > Finally, updated webrevs: > full: http://cr.openjdk.java.net/~kbarrett/8198474/open.01/ > incr: http://cr.openjdk.java.net/~kbarrett/8198474/open.01.inc/ > looks good. Thomas From Alan.Bateman at oracle.com Mon Mar 5 08:42:12 2018 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Mon, 5 Mar 2018 08:42:12 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> Message-ID: <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> On 05/03/2018 06:33, David Holmes wrote: > > > Hi Ian, > > Do you run with -Xcheck:jni in production mode because you load > unknown native code libraries? It's mainly intended as a diagnostic > option to turn on if you encounter a possible JNI problem. It does unusual to be running with -Xcheck:jni in a performance critical environment. I would expect to see-Xcheck:jni when developing or maintaining a library that uses JNI and drop the option the code has been fulling tested. Lots of good work was done in JDK 9 to replace the ZipFile implementation with a Java implementation and it would be good to get some results with a re-write of Inflater and Deflater. It would need lots of testing, including startup. We would need have a dependency on libz of course as it is needed by the VM to deflate entries in the jimage container (when they are compressed) or access JAR files on the boot class path. -Alan From erik.osterlund at oracle.com Mon Mar 5 10:08:20 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 5 Mar 2018 11:08:20 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> Message-ID: <5A9D1714.7040607@oracle.com> Hi Kim, New full webrev: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01/ Incremental: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00_01/ On 2018-03-02 23:51, Kim Barrett wrote: >> On Mar 2, 2018, at 2:31 AM, Erik Osterlund wrote: >>> src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp >>> 32 inline void G1BarrierSet::write_ref_field_pre(T* field) { >>> >>> The change here doesn't seem to have anything to do with the renaming. >>> Rather, it looks like a separate bug fix? >>> >>> The old code deferred the decode until after the null check, with the >>> decoding benefitting from having already done the null check. At >>> first glance, the new code seems like it might not perform as well. >>> >>> I do see why adding volatile is needed here though. >> I understand this might look unrelated. Here is my explanation: >> >> There has been an unfortunate implicit dependency to oop.inline.hpp. Now with some headers included in different order, it no longer compiles without adding that include. 
But including oop.inline.hpp causes an unfortunate include cycle that causes other problems. By loading the oop with RawAccess instead, those issues are solved. >> >> As for MO_VOLATILE, I thought I might as well correct that while I am at it. Some compilers can and actually do reload the oop after the null check, at which point they may be NULL and break the algorithm. I couldn?t not put that decorator in there to solve that. >> >> So you see how that started as a necessary include dependency fix (I had to do something or it would not compile) but ended up fixing more things. Hope that is okay. > I tried this out, to try to get a better understanding of the issues, > and I don't know what problems you are referring to. > > I used the variant I suggested, e.g. RawAccess and conversion to T > rather than collapsing to oop, with the original oopDesc-based null > check and decoding. That did indeed fail to compile (not too > surpisingly). But adding an #include of oop.inline.hpp seemed to just > work. > > While thinking about this before trying any experiments, it occurred > to me that we might have a usage mistake around .inline.hpp files, but > that didn't seem to arise here. > > So no, I'm not (yet) okay with this part of the change. The problem does not manifest in our repository when including oop.inline.hpp in g1BarrierSet.inline.hpp. You are right about that. There is however still a problem. If you were to add a new barrier set with a name that comes after g1 in lexicographical order, and plug it in to the Access API through the barrierSetConfig files, the problem does manifest when oop.inline.hpp is included in g1BarrierSet.inline.hpp then. The problem seems to be that access.inline.hpp includes barrierSet.inline.hpp, which includes barrierSetConfig.inline.hpp, which includes all the relevant barrier sets. If files other than Access include barrierSet.inline.hpp directly, then you will end up with a compilation error that the metafunction for getting the barrier set type from the enum number is not defined. The good news is that I found a way of untangling this and going with your proposed solution of using RawAccess with oop.inline.hpp. The solution is to let access.inline.hpp include barrierSetConfig.inline.hpp directly (instead of barrierSet.inline.hpp), and remove the include of barrierSetConfig.inline.hpp in barrierSet.inline.hpp. I hope you like the solution. Thanks, /Erik From irogers at google.com Mon Mar 5 16:34:46 2018 From: irogers at google.com (Ian Rogers) Date: Mon, 05 Mar 2018 16:34:46 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> Message-ID: Firstly, we're not running -Xcheck:jni in production code :-) During development and testing it doesn't seem an unreasonable flag to enable, but a 140x regression is too much to get developers to swallow. There are 2 performance considerations: 1) the performance of -Xcheck:jni, which probably shouldn't be orders of magnitude worse than without the flag. 2) the problems associated with JNI criticals, for which GetByteArrayRegion is a panacea but by introducing a copying overhead. Moving this code to pure Java would be awesome! Could we start a bug/process? 
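As a back-of-the-envelope check on point 1: assuming, as described earlier in the thread, that the checked GetPrimitiveArrayCritical copies the whole array on every call, the copying in the original example works out like this (standalone snippet, just the arithmetic):

// Why -Xcheck:jni is so much slower for the CheckJniTest example:
// every 512-byte deflate call copies the whole 8 MiB input array (plus
// the 512-byte output buffer) under the checked JNI implementation.
#include <cstdio>

int main() {
  const long long input = 1LL << 23;                 // 8 MiB test input
  const long long buf   = 512;                       // DeflaterOutputStream default buffer
  const long long calls = input / buf;               // native deflate calls
  const long long copied = calls * (input + buf);    // bytes copied by checked JNI
  std::printf("calls = %lld, copied = %.1f GiB\n",
              calls, copied / (1024.0 * 1024.0 * 1024.0));
  return 0;
}

That comes out to 16,384 calls and roughly 128 GiB copied, which matches the 128GB figure from the first mail and explains the orders-of-magnitude gap.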
Thanks, Ian On Mon, Mar 5, 2018 at 12:42 AM Alan Bateman wrote: > > > On 05/03/2018 06:33, David Holmes wrote: > > > > > > Hi Ian, > > > > Do you run with -Xcheck:jni in production mode because you load > > unknown native code libraries? It's mainly intended as a diagnostic > > option to turn on if you encounter a possible JNI problem. > It does unusual to be running with -Xcheck:jni in a performance critical > environment. I would expect to see-Xcheck:jni when developing or > maintaining a library that uses JNI and drop the option the code has > been fulling tested. > > Lots of good work was done in JDK 9 to replace the ZipFile > implementation with a Java implementation and it would be good to get > some results with a re-write of Inflater and Deflater. It would need > lots of testing, including startup. We would need have a dependency on > libz of course as it is needed by the VM to deflate entries in the > jimage container (when they are compressed) or access JAR files on the > boot class path. > > -Alan > From xueming.shen at oracle.com Mon Mar 5 18:25:34 2018 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 05 Mar 2018 10:25:34 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> Message-ID: <5A9D8B9E.8060706@oracle.com> On 03/05/2018 08:34 AM, Ian Rogers wrote: > Firstly, we're not running -Xcheck:jni in production code :-) During > development and testing it doesn't seem an unreasonable flag to enable, but > a 140x regression is too much to get developers to swallow. > > There are 2 performance considerations: > 1) the performance of -Xcheck:jni, which probably shouldn't be orders of > magnitude worse than without the flag. > 2) the problems associated with JNI criticals, for which GetByteArrayRegion > is a panacea but by introducing a copying overhead. The reason the GetByteArrayCritical was/is being used here is exactly to avoid the copy overhead, which was an issue escalated in the past. Though the "copy overhead" appears to be much bigger for the GBAC when -Xcheck:jni is used here. Another issue with the DeflaterOutputStream is the default buf size is relative too small, for historical reason. So with a DeflaterOutStream(deflated, new Deflater(), 8192 *64), is which a bigger buf/8192*64, the performance is close to the run with the -Xcheck:jni for the byte[1 << 23] input. Understood the test case is to show the issue for the GetPrimitiveArrayCritical+check:jni use scenario. -Sherman > Moving this code to pure Java would be awesome! Could we start a > bug/process? > > Thanks, > Ian > > > On Mon, Mar 5, 2018 at 12:42 AM Alan Bateman > wrote: > >> >> On 05/03/2018 06:33, David Holmes wrote: >>> >>> >>> Hi Ian, >>> >>> Do you run with -Xcheck:jni in production mode because you load >>> unknown native code libraries? It's mainly intended as a diagnostic >>> option to turn on if you encounter a possible JNI problem. >> It does unusual to be running with -Xcheck:jni in a performance critical >> environment. I would expect to see-Xcheck:jni when developing or >> maintaining a library that uses JNI and drop the option the code has >> been fulling tested. >> >> Lots of good work was done in JDK 9 to replace the ZipFile >> implementation with a Java implementation and it would be good to get >> some results with a re-write of Inflater and Deflater. 
It would need >> lots of testing, including startup. We would need have a dependency on >> libz of course as it is needed by the VM to deflate entries in the >> jimage container (when they are compressed) or access JAR files on the >> boot class path. >> >> -Alan >> From xueming.shen at oracle.com Mon Mar 5 18:28:48 2018 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 05 Mar 2018 10:28:48 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> Message-ID: <5A9D8C60.3050505@oracle.com> On 03/05/2018 08:34 AM, Ian Rogers wrote: > Firstly, we're not running -Xcheck:jni in production code :-) During > development and testing it doesn't seem an unreasonable flag to enable, but > a 140x regression is too much to get developers to swallow. > > There are 2 performance considerations: > 1) the performance of -Xcheck:jni, which probably shouldn't be orders of > magnitude worse than without the flag. > 2) the problems associated with JNI criticals, for which GetByteArrayRegion > is a panacea but by introducing a copying overhead. > > The reason the GetByteArrayCritical was/is being used here is exactly to avoid the copy overhead, which was an issue escalated in the past. Though the "copy overhead" appears to be much bigger for the GBAC when -Xcheck:jni is used here. Another issue with the DeflaterOutputStream is the default buf size is relative too small, for historical reason. So with a DeflaterOutStream(deflated, new Deflater(), 8192 *64), is which a bigger buf/8192*64, the performance is close to the run with the -Xcheck:jni for the byte[1 << 23] input. Understood the test case is to show the issue for the GetPrimitiveArrayCritical+check:jni use scenario. -Sherman From xueming.shen at oracle.com Mon Mar 5 18:40:51 2018 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 05 Mar 2018 10:40:51 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <5A9D8C60.3050505@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> Message-ID: <5A9D8F33.5060500@oracle.com> On 03/05/2018 10:28 AM, Xueming Shen wrote: > On 03/05/2018 08:34 AM, Ian Rogers wrote: >> Firstly, we're not running -Xcheck:jni in production code :-) During >> development and testing it doesn't seem an unreasonable flag to enable, but >> a 140x regression is too much to get developers to swallow. >> >> There are 2 performance considerations: >> 1) the performance of -Xcheck:jni, which probably shouldn't be orders of >> magnitude worse than without the flag. >> 2) the problems associated with JNI criticals, for which GetByteArrayRegion >> is a panacea but by introducing a copying overhead. >> >> > > The reason the GetByteArrayCritical was/is being used here is exactly to avoid the copy > overhead, which was an issue escalated in the past. Though the "copy overhead" appears > to be much bigger for the GBAC when -Xcheck:jni is used here. > > Another issue with the DeflaterOutputStream is the default buf size is relative too small, > for historical reason. 
So with a DeflaterOutStream(deflated, new Deflater(), 8192 *64), > is which a bigger buf/8192*64, the performance is close to the run with the -Xcheck:jni > type: in which a bigger buf/8192*64 is used, .... run without the -Xcheck:jni is specified. -Sherman From irogers at google.com Mon Mar 5 19:15:09 2018 From: irogers at google.com (Ian Rogers) Date: Mon, 05 Mar 2018 19:15:09 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <5A9D8F33.5060500@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> Message-ID: Thanks! Changing the DeflaterOutputStream buffer size to be something other than the default reduces the number of JNI native calls and is a possible work around here, as this is an implementation detail could it be made in the JDK? Unfortunately larger input sizes will also regress the issue as the number of calls is "input size / buffer size". The JNI critical may give direct access to the array but depending on the GC, may require a lock and so lock contention may be a significant issue with the code and contribute to tail latencies. In my original post I mention this is difficult to measure and I think good practice is to avoid JNI critical regions. Thanks, Ian On Mon, Mar 5, 2018 at 10:41 AM Xueming Shen wrote: > On 03/05/2018 10:28 AM, Xueming Shen wrote: > > On 03/05/2018 08:34 AM, Ian Rogers wrote: > >> Firstly, we're not running -Xcheck:jni in production code :-) During > >> development and testing it doesn't seem an unreasonable flag to enable, > but > >> a 140x regression is too much to get developers to swallow. > >> > >> There are 2 performance considerations: > >> 1) the performance of -Xcheck:jni, which probably shouldn't be orders of > >> magnitude worse than without the flag. > >> 2) the problems associated with JNI criticals, for which > GetByteArrayRegion > >> is a panacea but by introducing a copying overhead. > >> > >> > > > > The reason the GetByteArrayCritical was/is being used here is exactly to > avoid the copy > > overhead, which was an issue escalated in the past. Though the "copy > overhead" appears > > to be much bigger for the GBAC when -Xcheck:jni is used here. > > > > Another issue with the DeflaterOutputStream is the default buf size is > relative too small, > > for historical reason. So with a DeflaterOutStream(deflated, new > Deflater(), 8192 *64), > > is which a bigger buf/8192*64, the performance is close to the run with > the -Xcheck:jni > > > > type: > > in which a bigger buf/8192*64 is used, .... run without the -Xcheck:jni is > specified. 
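To make the GetPrimitiveArrayCritical vs GetByteArrayRegion trade-off discussed in this thread concrete, here is a minimal C++ JNI sketch of the two access patterns; the helper names are invented for illustration and this is not the actual java.util.zip native code.

#include <jni.h>
#include <string.h>

// Variant 1: critical access. Best case is zero copy, but the VM may have to
// pin the array or hold off GC while the critical section is open, and
// -Xcheck:jni adds per-call verification overhead on top of that.
static void read_bytes_critical(JNIEnv* env, jbyteArray buf, jint off, jint len, jbyte* dst) {
  jbyte* in = (jbyte*) env->GetPrimitiveArrayCritical(buf, NULL);
  if (in == NULL) {
    return;  // OutOfMemoryError is already pending
  }
  memcpy(dst, in + off, (size_t) len);
  env->ReleasePrimitiveArrayCritical(buf, in, JNI_ABORT);  // read-only use, no write-back
}

// Variant 2: region copy. Always pays for a copy of the requested range, but
// never enters a critical region, so there is no GC or locking interaction.
static void read_bytes_region(JNIEnv* env, jbyteArray buf, jint off, jint len, jbyte* dst) {
  env->GetByteArrayRegion(buf, off, len, dst);
}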
> > -Sherman > > From kim.barrett at oracle.com Mon Mar 5 19:33:42 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Mar 2018 14:33:42 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <5A9D1714.7040607@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> Message-ID: <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> > On Mar 5, 2018, at 5:08 AM, Erik ?sterlund wrote: > > Hi Kim, > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01/ > > Incremental: > http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00_01/ src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp 40 if (oopDesc::is_null(heap_oop)) { Test is backward. I?m not sure yet what the new include changes in other files are about. From xueming.shen at oracle.com Mon Mar 5 20:45:56 2018 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 05 Mar 2018 12:45:56 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> Message-ID: <5A9DAC84.906@oracle.com> On 3/5/18, 11:15 AM, Ian Rogers wrote: > Thanks! Changing the DeflaterOutputStream buffer size to be something > other than the default reduces the number of JNI native calls and is a > possible work around here, as this is an implementation detail could > it be made in the JDK? Unfortunately larger input sizes will also > regress the issue as the number of calls is "input size / buffer > size". The JNI critical may give direct access to the array but > depending on the GC, may require a lock and so lock contention may be > a significant issue with the code and contribute to tail latencies. In > my original post I mention this is difficult to measure and I think > good practice is to avoid JNI critical regions. We do have a history on the usage of GetPrimitiveArrayCritical/Elements() here regarding the potential "lock contention", copy overhead... and went back and forth on which jni call is the appropriate one to go. Martin might still have the memory of that :-) Some related bugids: https://bugs.openjdk.java.net/browse/JDK-6206933 https://bugs.openjdk.java.net/browse/JDK-6348045 https://bugs.openjdk.java.net/browse/JDK-5043044 https://bugs.openjdk.java.net/browse/JDK-6356456 There was once a "lock contention" bug that affects the performance of GetPrimitiveArrayCritical. But that bug was fixed since (in hotspot). With various use scenario, a "copy overhead" when using GetPrimitiveArrayElement() was concluded not acceptable, at least back then. It appears to be an easy to do/must to do (nice to have?) to increase the "default buf size" used in DeflaterOutputStream. But every time we tried to touch those default setting/configuration values that have been there for decades, some "regression" complains would come back and hurt us :-) For this particular case a potential "regression" is that the footprint increase because of the the default buffer size is increased from 512 -> xxxx might not be desirable for some use scenario. 
For example if hundreds of thousands of DeflaterOutputStream are being open/closed on hundreds of thousands of compressed jar or zip files the increase might be huge. And the API does provide the constructor that you can customize the buffer size. It might be desired to keep the default size asis. That said, I agreed that 512 appears to be a "too-small" default size if the majority of the use scenario for the DeflaterOutputStream is to open a jar/zip file entry. Sherman From xueming.shen at oracle.com Mon Mar 5 20:54:31 2018 From: xueming.shen at oracle.com (Xueming Shen) Date: Mon, 05 Mar 2018 12:54:31 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <5A9DAC84.906@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> Message-ID: <5A9DAE87.8020801@oracle.com> On 3/5/18, 12:45 PM, Xueming Shen wrote: > On 3/5/18, 11:15 AM, Ian Rogers wrote: >> Thanks! Changing the DeflaterOutputStream buffer size to be something >> other than the default reduces the number of JNI native calls and is >> a possible work around here, as this is an implementation detail >> could it be made in the JDK? Unfortunately larger input sizes will >> also regress the issue as the number of calls is "input size / buffer >> size". The JNI critical may give direct access to the array but >> depending on the GC, may require a lock and so lock contention may be >> a significant issue with the code and contribute to tail latencies. >> In my original post I mention this is difficult to measure and I >> think good practice is to avoid JNI critical regions. > > We do have a history on the usage of > GetPrimitiveArrayCritical/Elements() here regarding the > potential "lock contention", copy overhead... and went back and forth > on which jni call is the > appropriate one to go. Martin might still have the memory of that :-) > And for the record. The direct root trigger of this issue probably is the fix for JDK-6311046 that went into jdk9. https://bugs.openjdk.java.net/browse/JDK-6311046 -Sherman > > > From erik.osterlund at oracle.com Mon Mar 5 21:10:20 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 5 Mar 2018 22:10:20 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> Message-ID: <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> Hi Kim, On 2018-03-05 20:33, Kim Barrett wrote: >> On Mar 5, 2018, at 5:08 AM, Erik ?sterlund wrote: >> >> Hi Kim, >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01/ >> >> Incremental: >> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00_01/ > src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp > 40 if (oopDesc::is_null(heap_oop)) { > > Test is backward. Webrev: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.02/ Incremental: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01_02/ Fixed. > I?m not sure yet what the new include changes in other files are about. 
It breaks an include cycle that is necessary for ZGC (and probably Shenandoah too) to build with #include "oops/oop.inline.hpp" being in g1BarrierSet.inline.hpp. It reproduces when files include both barrierSet.inline.hpp and access.inline.hpp. Because barrierSet.inline.hpp includes barrierSetConfig.inline.hpp which includes all concrete barrier sets, which now also includes oop.inline.hpp which includes access.inline.hpp which includes barrierSet.inline.hpp again. This cycle causes resulution to barrierset accessors to happen before their metafunctions have been declared that translate barrierset types to/from enum values, which breaks the build. This bad cycle is broken with these changes by having access.inline.hpp include barrierSetConfig.inline.hpp instead of barrierSet.inline.hpp so that the barrierset type translation metafunctions have always been declared when resulution is defined. Thanks, /Erik From kim.barrett at oracle.com Mon Mar 5 22:02:59 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Mar 2018 17:02:59 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> Message-ID: > On Mar 5, 2018, at 4:10 PM, Erik ?sterlund wrote: > > Hi Kim, > > On 2018-03-05 20:33, Kim Barrett wrote: >>> On Mar 5, 2018, at 5:08 AM, Erik ?sterlund wrote: >>> >>> Hi Kim, >>> >>> New full webrev: >>> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01/ >>> >>> Incremental: >>> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00_01/ >> src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp >> 40 if (oopDesc::is_null(heap_oop)) { >> >> Test is backward. > > Webrev: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.02/ > Incremental: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01_02/ > > Fixed. Good. >> I?m not sure yet what the new include changes in other files are about. > > It breaks an include cycle that is necessary for ZGC (and probably Shenandoah too) to build with #include "oops/oop.inline.hpp" being in g1BarrierSet.inline.hpp. It reproduces when files include both barrierSet.inline.hpp and access.inline.hpp. Because barrierSet.inline.hpp includes barrierSetConfig.inline.hpp which includes all concrete barrier sets, which now also includes oop.inline.hpp which includes access.inline.hpp which includes barrierSet.inline.hpp again. This cycle causes resulution to barrierset accessors to happen before their metafunctions have been declared that translate barrierset types to/from enum values, which breaks the build. This bad cycle is broken with these changes by having access.inline.hpp include barrierSetConfig.inline.hpp instead of barrierSet.inline.hpp so that the barrierset type translation metafunctions have always been declared when resulution is defined. I suspected it was something like that. Can you provide more detail, or much better, tell me how to reproduce the problem? In the absence of such, the proposed changes look kind of ad hoc and fragile to me. I've been thinking about some questions about how we write and use .inline.hpp files, and have some ideas that may or may not be relevant. 
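Spelled out as an include chain, the cycle described above looks like this; it is only a reading aid reconstructed from the preceding paragraph, not a quote of the actual headers.

// barrierSet.inline.hpp
//   -> barrierSetConfig.inline.hpp     (pulls in every concrete barrier set)
//     -> g1BarrierSet.inline.hpp       (now also includes oop.inline.hpp)
//       -> oops/oop.inline.hpp
//         -> oops/access.inline.hpp
//           -> barrierSet.inline.hpp   (back to the start)
//
// The proposed fix has access.inline.hpp include barrierSetConfig.inline.hpp
// directly, so the enum <-> barrier-set-type translation metafunctions are
// declared before any accessor resolution gets instantiated.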
If I could reproduce the problem at hand, I might be able to suggest some alternative ideas. Depending on what I find, I might say go ahead with the proposed change, but I'd like to have a look first, in case there's a (relatively) simple alternative that seems more solid. From erik.osterlund at oracle.com Mon Mar 5 22:11:09 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 5 Mar 2018 23:11:09 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> Message-ID: <419e115e-5534-77c2-d744-d470ee804184@oracle.com> Hi Kim, On 2018-03-05 23:02, Kim Barrett wrote: >> On Mar 5, 2018, at 4:10 PM, Erik ?sterlund wrote: >> >> Hi Kim, >> >> On 2018-03-05 20:33, Kim Barrett wrote: >>>> On Mar 5, 2018, at 5:08 AM, Erik ?sterlund wrote: >>>> >>>> Hi Kim, >>>> >>>> New full webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01/ >>>> >>>> Incremental: >>>> http://cr.openjdk.java.net/~eosterlund/8195148/webrev.00_01/ >>> src/hotspot/share/gc/g1/g1BarrierSet.inline.hpp >>> 40 if (oopDesc::is_null(heap_oop)) { >>> >>> Test is backward. >> Webrev: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.02/ >> Incremental: http://cr.openjdk.java.net/~eosterlund/8195148/webrev.01_02/ >> >> Fixed. > Good. > >>> I?m not sure yet what the new include changes in other files are about. >> It breaks an include cycle that is necessary for ZGC (and probably Shenandoah too) to build with #include "oops/oop.inline.hpp" being in g1BarrierSet.inline.hpp. It reproduces when files include both barrierSet.inline.hpp and access.inline.hpp. Because barrierSet.inline.hpp includes barrierSetConfig.inline.hpp which includes all concrete barrier sets, which now also includes oop.inline.hpp which includes access.inline.hpp which includes barrierSet.inline.hpp again. This cycle causes resulution to barrierset accessors to happen before their metafunctions have been declared that translate barrierset types to/from enum values, which breaks the build. This bad cycle is broken with these changes by having access.inline.hpp include barrierSetConfig.inline.hpp instead of barrierSet.inline.hpp so that the barrierset type translation metafunctions have always been declared when resulution is defined. > I suspected it was something like that. Can you provide more detail, > or much better, tell me how to reproduce the problem? In the absence > of such, the proposed changes look kind of ad hoc and fragile to me. > > I've been thinking about some questions about how we write and use > .inline.hpp files, and have some ideas that may or may not be > relevant. If I could reproduce the problem at hand, I might be able > to suggest some alternative ideas. Depending on what I find, I might > say go ahead with the proposed change, but I'd like to have a look > first, in case there's a (relatively) simple alternative that seems > more solid. The easiest way to reproduce is to either: 1) Add #include "oops/oop.inline.hpp" in g1SATBCardTableModRefBS.inline.hpp in the Z repo, or: ...or... 
2) Add any minimal no-op barrier set to jdk-hs with a name that comes after "g1" in lexicographical order (and hence include order), and plug it in to the barrierSetConfig files, and then add #include "oops/oop.inline.hpp" to the g1SATBCardTableModRefBS.inline.hpp file. Thanks, /Erik From coleen.phillimore at oracle.com Mon Mar 5 23:59:22 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 5 Mar 2018 18:59:22 -0500 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> Message-ID: <16fadc98-ffa2-cb4d-f611-78c3f63ab893@oracle.com> Hi Thomas, I've read through the new code.? I don't have any substantive comments.? Thank you for adding the functions. Has this been tested on any 32 bit platforms??? I will sponsor this when you have another reviewer. Thanks for taking on the metaspace! Coleen On 3/1/18 5:36 AM, Thomas St?fe wrote: > Hi Coleen, > > thanks a lot for the review and the sponsoring offer! > > New version (full): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-full/webrev/ > > incremental: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-incr/webrev/ > > > Please find remarks inline: > > > On Tue, Feb 27, 2018 at 11:22 PM, > wrote: > > > Thomas, review comments: > > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html > > > +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types > > > It's really both, isn't it?? The type is the index into the free > list or in use lists.? The name seems fine. > > > You are right. What I meant was that a lot of code needs to know about > the different chunk sizes, but naming it "Index" and adding enum > values like "NumberOfFreeLists" we expose implementation details > no-one outside of SpaceManager and ChunkManager cares about (namely, > the fact that these values are internally used as indices into > arrays). A more neutral naming would be something like "enum > ChunkTypes { spec,small, .... , NumberOfNonHumongousChunkTypes, > NumberOfChunkTypes }. > > However, I can leave this out for a possible future cleanup. The > change is big enough as it is. > > Can you add comments on the #endifs if the #ifdef is more than a > couple 2-3 lines above (it's a nit that bothers me). > > +#ifdef ASSERT > + // A 32bit sentinel for debugging purposes. > +#define CHUNK_SENTINEL 0x4d4554EF // "MET" > +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF > + uint32_t _sentinel; > +#endif > + const ChunkIndex _chunk_type; > + const bool _is_class; > + // Whether the chunk is free (in freelist) or in use by some > class loader. > ? ?bool _is_tagged_free; > ?+#ifdef ASSERT > + ChunkOrigin _origin; > + int _use_count; > +#endif > + > > > I removed the asserts completely, following your suggestion below that > "origin" would be valuable in customer scenarios too. By that logic, > the other members are valuable too: the sentinel is valuable when > examining memory dumps to see the start of chunks, and the in-use > counter is useful too. What do you think? > > So, I leave the members in - which, depending what the C++ compiler > does to enums and bools, may cost up to 128bit additional header > space. I think that is ok. 
In one of my earlier versions of this patch > I hand-crafted the header using chars and bitfields to be as small as > possible, but that seemed over-engineered. > > However, I left out any automatic verifications accessing these debug > members. These are still only done in debug builds. > > > It seems that if you could move origin and _use_count into the > ASSERT block above (maybe putting use_count before _origin. > > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html > > > In take_from_committed, can the allocation of padding chunks be > its own function like add_chunks_to_aligment() lines 1574-1615? > The function is too long now. > > > I moved the padding chunk allocation into an own function as you > suggested. > > I don't think coalescation is a word in English, at least my > dictionary cannot find it.? Although it makes sense in the > context, just distracting. > > > I replaced "coalescation" with "chunk merging" throughout the code. > Also less of a tongue breaker. > > + // Now check if in the coalescation area there are still life > chunks. > > > "live" chunks I guess.?? A sentence you won't read often :). > > > Now that I read it it almost sounded sinister :) Fixed. > > > In free_chunks_get() can you handle the Humongous case first? The > else for humongous chunk size is buried tons of lines below. > > Otherwise it might be helpful to the logic to make your addition > to this function be a function you call like > ? chunk = split_from_larger_free_chunk(); > > > I did the latter. I moved the splitting of a larger chunk to an own > function. This causes a slight logic change: the new function > (ChunkManager::split_chunk()) splits an existing large free chunks > into n smaller free chunks and adds them all back to the freelist - > that includes the chunk we are about to return. That allows us to use > the same exit path - which removes the chunk from the freelist and > adjusts all counters - in the caller function > "ChunkManager::free_chunks_get" instead of having to return in the > middle of the function. > > To make the test more readable, I also remove the > "test-that-free-chunks-are-optimally-merged" verification - which was > quite lengthy - from VirtualSpaceNode::verify() to a new function, > VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). > > > You might want to keep the origin in product mode if it doesn't > add to the chunk footprint.?? Might help with customer debugging. > > > See above > > Awesome looking test... > > > Thanks, I was worried it would be too complicated. > I changed it a bit because there were sporadic errors. Not a "real" > error, just the test itself was faulty. The "metaspaces_in_use" > counter was slightly wrong in one corner case. > > I've read through most of this and thank you for adding this to at > least partially solve the fragmentation problem.? The irony is > that we templatized the Dictionary from CMS so that we could use > it for Metaspace and that has splitting and coalescing but it > seems this code makes more sense than adapting that code (if it's > even possible). > > > Well, it helps other metadata use cases too, no. > > > Thank you for working on this.? I'll sponsor this for you. > > Coleen > > > > Thanks again! > > I also updated my jdk-submit branch to include these latest changes; > tests are still runnning. 
> > Kind Regards, Thomas > > > On 2/26/18 9:20 AM, Thomas St?fe wrote: > > Hi all, > > I know this patch is a bit larger, but may I please have > reviews and/or > other input? > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 > > Latest version: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ > > > For those who followed the mail thread, this is the > incremental diff to the > last changes (included feedback Goetz gave me on- and off-list): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ > > > Thank you! > > Kind Regards, Thomas Stuefe > > > > On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe > > > wrote: > > Hi, > > We would like to contribute a patch developed at SAP which > has been live > in our VM for some time. It improves the metaspace chunk > allocation: > reduces fragmentation and raises the chance of reusing > free metaspace > chunks. > > The patch: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc > > ation/2018-02-05--2/webrev/ > > In very short, this patch helps with a number of > pathological cases where > metaspace chunks are free but cannot be reused because > they are of the > wrong size. For example, the metaspace freelist could be > full of small > chunks, which would not be reusable if we need larger > chunks. So, we could > get metaspace OOMs even in situations where the metaspace > was far from > exhausted. Our patch adds the ability to split and merge > metaspace chunks > dynamically and thus remove the "size-lock-in" problem. > > Note that there have been other attempts to get a grip on > this problem, > see e.g. "SpaceManager::get_small_chunks_and_allocate()". > But arguably > our patch attempts a more complete solution. > > In 2016 I discussed the idea for this patch with some > folks off-list, > among them Jon Matsimutso. He then did advice me to create > a JEP. So I did: > [1]. However, meanwhile changes to the JEP process were > discussed [2], and > I am not sure anymore this patch needs even needs a JEP. > It may be > moderately complex and hence carries the risk inherent in > any patch, but > its effects would not be externally visible (if you > discount seeing fewer > metaspace OOMs). So, I'd prefer to handle this as a simple > RFE. > > -- > > How this patch works: > > 1) When a class loader dies, its metaspace chunks are > freed and returned > to the freelist for reuse by the next class loader. With > the patch, upon > returning a chunk to the freelist, an attempt is made to > merge it with its > neighboring chunks - should they happen to be free too - > to form a larger > chunk. Which then is placed in the free list. > > As a result, the freelist should be populated by larger > chunks at the > expense of smaller chunks. In other words, all free chunks > should always be > as "coalesced as possible". > > 2) When a class loader needs a new chunk and a chunk of > the requested size > cannot be found in the free list, before carving out a new > chunk from the > virtual space, we first check if there is a larger chunk > in the free list. > If there is, that larger chunk is chopped up into n > smaller chunks. One of > them is returned to the callers, the others are re-added > to the freelist. > > (1) and (2) together have the effect of removing the > size-lock-in for > chunks. If fragmentation allows it, small chunks are > dynamically combined > to form larger chunks, and larger chunks are split on demand. 
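A simplified sketch of points (1) and (2) above, with invented types and helper names rather than the real ChunkManager code:

#include <cstddef>

struct FreeChunk { size_t word_size; FreeChunk* next; };

// Hypothetical helpers standing in for the free list and occupancy map.
bool       neighbor_is_free(FreeChunk* c);
FreeChunk* merge_with_neighbor(FreeChunk* c);
void       add_to_free_list(FreeChunk* c);
FreeChunk* remove_from_free_list(size_t word_size);
FreeChunk* remove_larger_free_chunk(size_t word_size);
FreeChunk* split_chunk(FreeChunk* larger, size_t word_size);

// (1) Merge on return: combine a freed chunk with free neighbors first.
void return_chunk(FreeChunk* c) {
  while (neighbor_is_free(c)) {
    c = merge_with_neighbor(c);
  }
  add_to_free_list(c);
}

// (2) Split on demand: if only a larger chunk is free, chop it up and hand
//     out one piece; the remaining pieces go back to the free list.
FreeChunk* get_chunk(size_t word_size) {
  FreeChunk* c = remove_from_free_list(word_size);
  if (c == nullptr) {
    FreeChunk* larger = remove_larger_free_chunk(word_size);
    if (larger != nullptr) {
      c = split_chunk(larger, word_size);
    }
  }
  return c;
}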
> > -- > > What this patch does not: > > This is not a rewrite of the chunk allocator - most of the > mechanisms stay > intact. Specifically, chunk sizes remain unchanged, and so > do chunk > allocation processes (when do which class loaders get > handed which chunk > size). Almost everthing this patch does affects only > internal workings of > the ChunkManager. > > Also note that I refrained from doing any cleanups, since > I wanted > reviewers to be able to gauge this patch without filtering > noise. > Unfortunately this patch adds some complexity. But there > are many future > opportunities for code cleanup and simplification, some of > which we already > discussed in existing RFEs ([3], [4]). All of them are out > of the scope for > this particular patch. > > -- > > Details: > > Before the patch, the following rules held: > - All chunk sizes are multiples of the smallest chunk size > ("specialized > chunks") > - All chunk sizes of larger chunks are also clean > multiples of the next > smaller chunk size (e.g. for class space, the ratio of > specialized/small/medium chunks is 1:2:32) > - All chunk start addresses are aligned to the smallest > chunk size (more > or less accidentally, see metaspace_reserve_alignment). > The patch makes the last rule explicit and more strict: > - All (non-humongous) chunk start addresses are now > aligned to their own > chunk size. So, e.g. medium chunks are allocated at > addresses which are a > multiple of medium chunk size. This rule is not extended > to humongous > chunks, whose start addresses continue to be aligned to > the smallest chunk > size. > > The reason for this new alignment rule is that it makes it > cheap both to > find chunk predecessors of a chunk and to check which > chunks are free. > > When a class loader dies and its chunk is returned to the > freelist, all we > have is its address. In order to merge it with its > neighbors to form a > larger chunk, we need to find those neighbors, including > those preceding > the returned chunk. Prior to this patch that was not easy > - one would have > to iterate chunks starting at the beginning of the > VirtualSpaceNode. But > due to the new alignment rule, we now know where the > prospective larger > chunk must start - at the next lower > larger-chunk-size-aligned boundary. We > also know that currently a smaller chunk must start there (*). > > In order to check the free-ness of chunks quickly, each > VirtualSpaceNode > now keeps a bitmap which describes its occupancy. One bit > in this bitmap > corresponds to a range the size of the smallest chunk size > and starting at > an address aligned to the smallest chunk size. Because of > the alignment > rules above, such a range belongs to one single chunk. The > bit is 1 if the > associated chunk is in use by a class loader, 0 if it is free. > > When we have calculated the address range a prospective > larger chunk would > span, we now need to check if all chunks in that range are > free. Only then > we can merge them. We do that by querying the bitmap. Note > that the most > common use case here is forming medium chunks from smaller > chunks. With the > new alignment rules, the bitmap portion covering a medium > chunk now always > happens to be 16- or 32bit in size and is 16- or 32bit > aligned, so reading > the bitmap in many cases becomes a simple 16- or 32bit load. 
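The occupancy-map query used for merging can be pictured with a small sketch; the class below is an illustration under assumed names, not the patch's actual OccupancyMap.

#include <cstdint>
#include <cstddef>

// One bit per smallest-chunk-sized range: 1 = in use, 0 = free.
struct ToyOccupancyMap {
  const uint8_t* bits;   // assumed to be allocated and maintained elsewhere

  // True if every smallest-chunk slot in [start_idx, start_idx + n) is free,
  // i.e. the prospective larger chunk contains no live chunk.
  bool range_is_free(size_t start_idx, size_t n) const {
    for (size_t i = start_idx; i < start_idx + n; i++) {
      if (bits[i / 8] & (uint8_t)(1u << (i % 8))) {
        return false;
      }
    }
    return true;
  }
};

// Because chunk start addresses are aligned to their own chunk size, the bits
// covering e.g. one medium chunk form a single aligned 16- or 32-bit group,
// so in the real code this check can collapse into one 16- or 32-bit load.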
> > If the range is free, only then we need to iterate the > chunks in that > range: pull them from the freelist, combine them to one > new larger chunk, > re-add that one to the freelist. > > (*) Humongous chunks make this a bit more complicated. > Since the new > alignment rule does not extend to them, a humongous chunk > could still > straddle the lower or upper boundary of the prospective > larger chunk. So I > gave the occupancy map a second layer, which is used to > mark the start of > chunks. > An alternative approach could have been to make humongous > chunks size and > start address always a multiple of the largest > non-humongous chunk size > (medium chunks). That would have caused a bit of waste per > humongous chunk > (<64K) in exchange for simpler coding and a simpler > occupancy map. > > -- > > The patch shows its best results in scenarios where a lot > of smallish > class loaders are alive simultaneously. When dying, they > leave continuous > expanses of metaspace covered in small chunks, which can > be merged nicely. > However, if class loader life times vary more, we have > more interleaving of > dead and alive small chunks, and hence chunk merging does > not work as well > as it could. > > For an example of a pathological case like this see > example program: [5] > > Executed like this: "java -XX:CompressedClassSpaceSize=10M > -cp test3 > test3.Example2" the test will load 3000 small classes in > separate class > loaders, then throw them away and start loading large > classes. The small > classes will have flooded the metaspace with small chunks, > which are > unusable for the large classes. When executing with the > rather limited > CompressedClassSpaceSize=10M, we will run into an OOM > after loading about > 800 large classes, having used only 40% of the class > space, the rest is > wasted to unused small chunks. However, with our patch the > example program > will manage to allocate ~2900 large classes before running > into an OOM, and > class space will show almost no waste. > > Do demonstrate this, add -Xlog:gc+metaspace+freelist. > After running into > an OOM, statistics and an ASCII representation of the > class space will be > shown. The unpatched version will show large expanses of > unused small > chunks, the patched variant will show almost no waste. > > Note that the patch could be made more effective with a > different size > ratio between small and medium chunks: in class space, > that ratio is 1:16, > so 16 small chunks must happen to be free to form one > larger chunk. With a > smaller ratio the chance for coalescation would be larger. > So there may be > room for future improvement here: Since we now can merge > and split chunks > on demand, we could introduce more chunk sizes. > Potentially arriving at a > buddy-ish allocator style where we drop hard-wired chunk > sizes for a > dynamic model where the ratio between chunk sizes is > always 1:2 and we > could in theory have no limit to the chunk size? But this > is just a thought > and well out of the scope of this patch. > > -- > > What does this patch cost (memory): > > ? - the occupancy bitmap adds 1 byte per 4K metaspace. > ? - MetaChunk headers get larger, since we add an enum and > two bools to it. > Depending on what the c++ compiler does with that, chunk > headers grow by > one or two MetaWords, reducing the payload size by that > amount. > - The new alignment rules mean we may need to create > padding chunks to > precede larger chunks. 
But since these padding chunks are > added to the > freelist, they should be used up before the need for new > padding chunks > arises. So, the maximally possible number of unused > padding chunks should > be limited by design to about 64K. > > The expectation is that the memory savings by this patch > far outweighs its > added memory costs. > > .. (performance): > > We did not see measurable drops in standard benchmarks > raising over the > normal noise. I also measured times for a program which > stresses metaspace > chunk coalescation, with the same result. > > I am open to suggestions what else I should measure, > and/or independent > measurements. > > -- > > Other details: > > I removed SpaceManager::get_small_chunk_and_allocate() to > reduce > complexity somewhat, because it was made mostly obsolete > by this patch: > since small chunks are combined to larger chunks upon > return to the > freelist, in theory we should not have that many free > small chunks anymore > anyway. However, there may be still cases where we could > benefit from this > workaround, so I am asking your opinion on this one. > > About tests: There were two native tests - > ChunkManagerReturnTest and > TestVirtualSpaceNode (the former was added by me last > year) - which did not > make much sense anymore, since they relied heavily on > internal behavior > which was made unpredictable with this patch. > To make up for these lost tests,? I added a new gtest > which attempts to > stress the many combinations of allocation pattern but > does so from a layer > above the old tests. It now uses Metaspace::allocate() and > friends. By > using that point as entry for tests, I am less dependent > on implementation > internals and still cover a lot of scenarios. > > -- > > Review pointers: > > Good points to start are > - ChunkManager::return_single_chunk() - specifically, > ChunkManager::attempt_to_coalesce_around_chunk() - here we > merge chunks > upon return to the free list > - ChunkManager::free_chunks_get(): Here we now split large > chunks into > smaller chunks on demand > - VirtualSpaceNode::take_from_committed() : chunks are > allocated > according to align rules now, padding chunks are handles > - The OccupancyMap class is the helper class implementing > the new > occupancy bitmap > > The rest is mostly chaff: helper functions, added tests > and verifications. 
> > -- > > Thanks and Best Regards, Thomas > > [1] https://bugs.openjdk.java.net/browse/JDK-8166690 > > [2] > http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November > > /000128.html > [3] https://bugs.openjdk.java.net/browse/JDK-8185034 > > [4] https://bugs.openjdk.java.net/browse/JDK-8176808 > > [5] > https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip > > > > > > From mikhailo.seledtsov at oracle.com Tue Mar 6 01:52:51 2018 From: mikhailo.seledtsov at oracle.com (Mikhailo Seledtsov) Date: Mon, 05 Mar 2018 17:52:51 -0800 Subject: RFR(S): JDK-8196590 Enable docker container related tests for linux AARCH64 In-Reply-To: <2a467bda-ee2b-c29a-8885-04ee0258111e@bell-sw.com> References: <966904a3-2f1f-a48e-a7c9-a541742d1bc8@bell-sw.com> <5A82408B.7070001@oracle.com> <5160cc6c-d3f7-6804-a46b-98712163aefc@samersoff.net> <5A988AE7.90000@oracle.com> <2a467bda-ee2b-c29a-8885-04ee0258111e@bell-sw.com> Message-ID: <5A9DF473.7000105@oracle.com> I have pushed the change, Misha On 3/3/18, 2:21 AM, Dmitry Samersoff wrote: > Hi Mikhailo, > > Please find updated changeset under: > > http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02_1.export > > -Dmitry > > On 03/02/2018 02:21 AM, Mikhailo Seledtsov wrote: >> Hi Dmitry, >> >> We require at least one (Capital R) Reviewer for any changes in open >> jdk, including changes in tests. >> Please ask a Reviewer to look at the change, and update the hg-export. >> Then I can sponsor and integrate your change. >> >> Thank you, >> Misha >> >> >> On 2/28/18, 11:52 PM, Dmitry Samersoff wrote: >>> Hi Mikhailo, >>> >>> Please, find exported changeset under the link below: >>> >>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/8196590-02.export >>> >>> -Dmitry >>> >>> On 20.02.2018 23:51, mikhailo wrote: >>>> Hi Dmitry, >>>> >>>> >>>> On 02/18/2018 10:31 AM, Dmitry Samersoff wrote: >>>>> Mikhailo, >>>>> >>>>> Here is the changes rebased to recent sources. >>>>> >>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.02/ >>>> Changes look good to me. >>>>> Could you sponsor the push? >>>> I can sponsor the change, once the updated change is reviewed. Once it >>>> is ready, please send me the latest hg changeset (with usual fields, >>>> description, reviewers). >>>> >>>> >>>> Thank you, >>>> Misha >>>>> -Dmitry >>>>> >>>>> On 02/13/2018 04:34 AM, Mikhailo Seledtsov wrote: >>>>>> Changes look good from my point of view. >>>>>> >>>>>> Misha >>>>>> >>>>>> On 2/10/18, 4:10 AM, Dmitry Samersoff wrote: >>>>>>> Everybody, >>>>>>> >>>>>>> Please review small changes, that enables docker testing on >>>>>>> Linux/AArch64 >>>>>>> >>>>>>> http://cr.openjdk.java.net/~dsamersoff/JDK-8196590/webrev.01/ >>>>>>> >>>>>>> PS: >>>>>>> >>>>>>> Matthias - I refactored VMProps.dockerSupport() a bit to make it more >>>>>>> readable, please check that it doesn't brake your work. >>>>>>> >>>>>>> -Dmitry >>>>>>> >>>>>>> -- >>>>>>> Dmitry Samersoff >>>>>>> http://devnull.samersoff.net >>>>>>> * There will come soft rains ... 
From kim.barrett at oracle.com Tue Mar 6 03:56:28 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 5 Mar 2018 22:56:28 -0500 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <419e115e-5534-77c2-d744-d470ee804184@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> <419e115e-5534-77c2-d744-d470ee804184@oracle.com> Message-ID: <83017731-18B1-4B91-A915-026505F7C61E@oracle.com> > On Mar 5, 2018, at 5:11 PM, Erik ?sterlund wrote: > > Hi Kim, > > On 2018-03-05 23:02, Kim Barrett wrote: >>> On Mar 5, 2018, at 4:10 PM, Erik ?sterlund wrote: >>> >>> It breaks an include cycle that is necessary for ZGC (and probably Shenandoah too) to build with #include "oops/oop.inline.hpp" being in g1BarrierSet.inline.hpp. It reproduces when files include both barrierSet.inline.hpp and access.inline.hpp. Because barrierSet.inline.hpp includes barrierSetConfig.inline.hpp which includes all concrete barrier sets, which now also includes oop.inline.hpp which includes access.inline.hpp which includes barrierSet.inline.hpp again. This cycle causes resulution to barrierset accessors to happen before their metafunctions have been declared that translate barrierset types to/from enum values, which breaks the build. This bad cycle is broken with these changes by having access.inline.hpp include barrierSetConfig.inline.hpp instead of barrierSet.inline.hpp so that the barrierset type translation metafunctions have always been declared when resulution is defined. >> I suspected it was something like that. Can you provide more detail, >> or much better, tell me how to reproduce the problem? In the absence >> of such, the proposed changes look kind of ad hoc and fragile to me. >> >> I've been thinking about some questions about how we write and use >> .inline.hpp files, and have some ideas that may or may not be >> relevant. If I could reproduce the problem at hand, I might be able >> to suggest some alternative ideas. Depending on what I find, I might >> say go ahead with the proposed change, but I'd like to have a look >> first, in case there's a (relatively) simple alternative that seems >> more solid. > > The easiest way to reproduce is to either: > 1) Add #include "oops/oop.inline.hpp" in g1SATBCardTableModRefBS.inline.hpp in the Z repo, or: > ...or... > 2) Add any minimal no-op barrier set to jdk-hs with a name that comes after "g1" in lexicographical order (and hence include order), and plug it in to the barrierSetConfig files, and then add #include "oops/oop.inline.hpp" to the g1SATBCardTableModRefBS.inline.hpp file. > > Thanks, > /Erik Thanks for the instructions for reproducing. I was able to create a minimal no-op barrier set and did indeed get compile errors. Now that I understand the problem, your proposed include changes look like a good minimal fix. I think there are deeper issues regarding .inline.hpp files, but that can be discussed separately and without delaying this change any further. Looks good. 
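For readers trying to follow the reproduction, the "minimal no-op barrier set" recipe amounts to roughly the following; the class name and registration details are assumptions, since the real hook-up goes through the barrierSetConfig files.

// Sketch of the reproduction idea only -- not actual JDK code.
//
// 1. Add a do-nothing barrier set whose name sorts after "g1", e.g.
//
//      class NoOpBarrierSet : public BarrierSet {
//        // empty overrides of the required barrier hooks
//      };
//
// 2. Plug it into barrierSetConfig.hpp / barrierSetConfig.inline.hpp so it is
//    part of the set of concrete barrier sets included by the config header.
//
// 3. Add #include "oops/oop.inline.hpp" to g1SATBCardTableModRefBS.inline.hpp.
//
// The build then breaks because oop.inline.hpp -> access.inline.hpp ->
// barrierSet.inline.hpp re-enters the include cycle before the new barrier
// set's type translation metafunctions have been declared.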
From thomas.stuefe at gmail.com Tue Mar 6 06:10:37 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 6 Mar 2018 07:10:37 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: <16fadc98-ffa2-cb4d-f611-78c3f63ab893@oracle.com> References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> <16fadc98-ffa2-cb4d-f611-78c3f63ab893@oracle.com> Message-ID: Hi Coleen, We test nightly in windows 32bit. I'll go and run some tests on 32bit linux too. Thanks for the sponsoring offer! Goetz already reviewed this patch, would that be sufficient or should I look for another reviewer from Oracle? Kind Regards, Thomas On Tue, Mar 6, 2018 at 12:59 AM, wrote: > > Hi Thomas, > > I've read through the new code. I don't have any substantive comments. > Thank you for adding the functions. > > Has this been tested on any 32 bit platforms? I will sponsor this when > you have another reviewer. > > Thanks for taking on the metaspace! > > Coleen > > > On 3/1/18 5:36 AM, Thomas St?fe wrote: > > Hi Coleen, > > thanks a lot for the review and the sponsoring offer! > > New version (full): http://cr.openjdk.java.net/~stuefe/webrevs/ > metaspace-coalescation/2018-03-01/webrev-full/webrev/ > incremental: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace- > coalescation/2018-03-01/webrev-incr/webrev/ > > Please find remarks inline: > > > On Tue, Feb 27, 2018 at 11:22 PM, wrote: > >> >> Thomas, review comments: >> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html >> >> +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types >> >> >> It's really both, isn't it? The type is the index into the free list or >> in use lists. The name seems fine. >> >> > You are right. What I meant was that a lot of code needs to know about the > different chunk sizes, but naming it "Index" and adding enum values like > "NumberOfFreeLists" we expose implementation details no-one outside of > SpaceManager and ChunkManager cares about (namely, the fact that these > values are internally used as indices into arrays). A more neutral naming > would be something like "enum ChunkTypes { spec,small, .... , > NumberOfNonHumongousChunkTypes, NumberOfChunkTypes }. > > However, I can leave this out for a possible future cleanup. The change is > big enough as it is. > > >> Can you add comments on the #endifs if the #ifdef is more than a couple >> 2-3 lines above (it's a nit that bothers me). >> >> +#ifdef ASSERT >> + // A 32bit sentinel for debugging purposes. >> +#define CHUNK_SENTINEL 0x4d4554EF // "MET" >> +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF >> + uint32_t _sentinel; >> +#endif >> + const ChunkIndex _chunk_type; >> + const bool _is_class; >> + // Whether the chunk is free (in freelist) or in use by some class >> loader. >> bool _is_tagged_free; >> +#ifdef ASSERT >> + ChunkOrigin _origin; >> + int _use_count; >> +#endif >> + >> >> > I removed the asserts completely, following your suggestion below that > "origin" would be valuable in customer scenarios too. By that logic, the > other members are valuable too: the sentinel is valuable when examining > memory dumps to see the start of chunks, and the in-use counter is useful > too. What do you think? > > So, I leave the members in - which, depending what the C++ compiler does > to enums and bools, may cost up to 128bit additional header space. I think > that is ok. 
In one of my earlier versions of this patch I hand-crafted the > header using chars and bitfields to be as small as possible, but that > seemed over-engineered. > > However, I left out any automatic verifications accessing these debug > members. These are still only done in debug builds. > > >> >> It seems that if you could move origin and _use_count into the ASSERT >> block above (maybe putting use_count before _origin. >> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html >> >> In take_from_committed, can the allocation of padding chunks be its own >> function like add_chunks_to_aligment() lines 1574-1615? The function is too >> long now. >> >> > I moved the padding chunk allocation into an own function as you suggested. > > >> I don't think coalescation is a word in English, at least my dictionary >> cannot find it. Although it makes sense in the context, just distracting. >> >> > I replaced "coalescation" with "chunk merging" throughout the code. Also > less of a tongue breaker. > > >> + // Now check if in the coalescation area there are still life chunks. >> >> >> "live" chunks I guess. A sentence you won't read often :). >> > > Now that I read it it almost sounded sinister :) Fixed. > > >> >> In free_chunks_get() can you handle the Humongous case first? The else >> for humongous chunk size is buried tons of lines below. >> >> Otherwise it might be helpful to the logic to make your addition to this >> function be a function you call like >> chunk = split_from_larger_free_chunk(); >> > > I did the latter. I moved the splitting of a larger chunk to an own > function. This causes a slight logic change: the new function > (ChunkManager::split_chunk()) splits an existing large free chunks into n > smaller free chunks and adds them all back to the freelist - that includes > the chunk we are about to return. That allows us to use the same exit path > - which removes the chunk from the freelist and adjusts all counters - in > the caller function "ChunkManager::free_chunks_get" instead of having to > return in the middle of the function. > > To make the test more readable, I also remove the > "test-that-free-chunks-are-optimally-merged" verification - which was > quite lengthy - from VirtualSpaceNode::verify() to a new function, > VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). > > >> You might want to keep the origin in product mode if it doesn't add to >> the chunk footprint. Might help with customer debugging. >> >> > See above > > >> Awesome looking test... >> >> > Thanks, I was worried it would be too complicated. > I changed it a bit because there were sporadic errors. Not a "real" error, > just the test itself was faulty. The "metaspaces_in_use" counter was > slightly wrong in one corner case. > > >> I've read through most of this and thank you for adding this to at least >> partially solve the fragmentation problem. The irony is that we >> templatized the Dictionary from CMS so that we could use it for Metaspace >> and that has splitting and coalescing but it seems this code makes more >> sense than adapting that code (if it's even possible). >> > > Well, it helps other metadata use cases too, no. > > >> >> Thank you for working on this. I'll sponsor this for you. >> > Coleen >> >> > > Thanks again! > > I also updated my jdk-submit branch to include these latest changes; tests > are still runnning. 
> > Kind Regards, Thomas > > > >> >> On 2/26/18 9:20 AM, Thomas St?fe wrote: >> >>> Hi all, >>> >>> I know this patch is a bit larger, but may I please have reviews and/or >>> other input? >>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >>> Latest version: >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev/ >>> >>> For those who followed the mail thread, this is the incremental diff to >>> the >>> last changes (included feedback Goetz gave me on- and off-list): >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev-incr/webrev/ >>> >>> Thank you! >>> >>> Kind Regards, Thomas Stuefe >>> >>> >>> >>> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >>> wrote: >>> >>> Hi, >>>> >>>> We would like to contribute a patch developed at SAP which has been live >>>> in our VM for some time. It improves the metaspace chunk allocation: >>>> reduces fragmentation and raises the chance of reusing free metaspace >>>> chunks. >>>> >>>> The patch: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>>> ation/2018-02-05--2/webrev/ >>>> >>>> In very short, this patch helps with a number of pathological cases >>>> where >>>> metaspace chunks are free but cannot be reused because they are of the >>>> wrong size. For example, the metaspace freelist could be full of small >>>> chunks, which would not be reusable if we need larger chunks. So, we >>>> could >>>> get metaspace OOMs even in situations where the metaspace was far from >>>> exhausted. Our patch adds the ability to split and merge metaspace >>>> chunks >>>> dynamically and thus remove the "size-lock-in" problem. >>>> >>>> Note that there have been other attempts to get a grip on this problem, >>>> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >>>> our patch attempts a more complete solution. >>>> >>>> In 2016 I discussed the idea for this patch with some folks off-list, >>>> among them Jon Matsimutso. He then did advice me to create a JEP. So I >>>> did: >>>> [1]. However, meanwhile changes to the JEP process were discussed [2], >>>> and >>>> I am not sure anymore this patch needs even needs a JEP. It may be >>>> moderately complex and hence carries the risk inherent in any patch, but >>>> its effects would not be externally visible (if you discount seeing >>>> fewer >>>> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >>>> >>>> -- >>>> >>>> How this patch works: >>>> >>>> 1) When a class loader dies, its metaspace chunks are freed and returned >>>> to the freelist for reuse by the next class loader. With the patch, upon >>>> returning a chunk to the freelist, an attempt is made to merge it with >>>> its >>>> neighboring chunks - should they happen to be free too - to form a >>>> larger >>>> chunk. Which then is placed in the free list. >>>> >>>> As a result, the freelist should be populated by larger chunks at the >>>> expense of smaller chunks. In other words, all free chunks should >>>> always be >>>> as "coalesced as possible". >>>> >>>> 2) When a class loader needs a new chunk and a chunk of the requested >>>> size >>>> cannot be found in the free list, before carving out a new chunk from >>>> the >>>> virtual space, we first check if there is a larger chunk in the free >>>> list. >>>> If there is, that larger chunk is chopped up into n smaller chunks. One >>>> of >>>> them is returned to the callers, the others are re-added to the >>>> freelist. 
>>>> >>>> (1) and (2) together have the effect of removing the size-lock-in for >>>> chunks. If fragmentation allows it, small chunks are dynamically >>>> combined >>>> to form larger chunks, and larger chunks are split on demand. >>>> >>>> -- >>>> >>>> What this patch does not: >>>> >>>> This is not a rewrite of the chunk allocator - most of the mechanisms >>>> stay >>>> intact. Specifically, chunk sizes remain unchanged, and so do chunk >>>> allocation processes (when do which class loaders get handed which chunk >>>> size). Almost everthing this patch does affects only internal workings >>>> of >>>> the ChunkManager. >>>> >>>> Also note that I refrained from doing any cleanups, since I wanted >>>> reviewers to be able to gauge this patch without filtering noise. >>>> Unfortunately this patch adds some complexity. But there are many future >>>> opportunities for code cleanup and simplification, some of which we >>>> already >>>> discussed in existing RFEs ([3], [4]). All of them are out of the scope >>>> for >>>> this particular patch. >>>> >>>> -- >>>> >>>> Details: >>>> >>>> Before the patch, the following rules held: >>>> - All chunk sizes are multiples of the smallest chunk size ("specialized >>>> chunks") >>>> - All chunk sizes of larger chunks are also clean multiples of the next >>>> smaller chunk size (e.g. for class space, the ratio of >>>> specialized/small/medium chunks is 1:2:32) >>>> - All chunk start addresses are aligned to the smallest chunk size (more >>>> or less accidentally, see metaspace_reserve_alignment). >>>> The patch makes the last rule explicit and more strict: >>>> - All (non-humongous) chunk start addresses are now aligned to their own >>>> chunk size. So, e.g. medium chunks are allocated at addresses which are >>>> a >>>> multiple of medium chunk size. This rule is not extended to humongous >>>> chunks, whose start addresses continue to be aligned to the smallest >>>> chunk >>>> size. >>>> >>>> The reason for this new alignment rule is that it makes it cheap both to >>>> find chunk predecessors of a chunk and to check which chunks are free. >>>> >>>> When a class loader dies and its chunk is returned to the freelist, all >>>> we >>>> have is its address. In order to merge it with its neighbors to form a >>>> larger chunk, we need to find those neighbors, including those preceding >>>> the returned chunk. Prior to this patch that was not easy - one would >>>> have >>>> to iterate chunks starting at the beginning of the VirtualSpaceNode. But >>>> due to the new alignment rule, we now know where the prospective larger >>>> chunk must start - at the next lower larger-chunk-size-aligned >>>> boundary. We >>>> also know that currently a smaller chunk must start there (*). >>>> >>>> In order to check the free-ness of chunks quickly, each VirtualSpaceNode >>>> now keeps a bitmap which describes its occupancy. One bit in this bitmap >>>> corresponds to a range the size of the smallest chunk size and starting >>>> at >>>> an address aligned to the smallest chunk size. Because of the alignment >>>> rules above, such a range belongs to one single chunk. The bit is 1 if >>>> the >>>> associated chunk is in use by a class loader, 0 if it is free. >>>> >>>> When we have calculated the address range a prospective larger chunk >>>> would >>>> span, we now need to check if all chunks in that range are free. Only >>>> then >>>> we can merge them. We do that by querying the bitmap. Note that the most >>>> common use case here is forming medium chunks from smaller chunks. 
With >>>> the >>>> new alignment rules, the bitmap portion covering a medium chunk now >>>> always >>>> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so >>>> reading >>>> the bitmap in many cases becomes a simple 16- or 32bit load. >>>> >>>> If the range is free, only then we need to iterate the chunks in that >>>> range: pull them from the freelist, combine them to one new larger >>>> chunk, >>>> re-add that one to the freelist. >>>> >>>> (*) Humongous chunks make this a bit more complicated. Since the new >>>> alignment rule does not extend to them, a humongous chunk could still >>>> straddle the lower or upper boundary of the prospective larger chunk. >>>> So I >>>> gave the occupancy map a second layer, which is used to mark the start >>>> of >>>> chunks. >>>> An alternative approach could have been to make humongous chunks size >>>> and >>>> start address always a multiple of the largest non-humongous chunk size >>>> (medium chunks). That would have caused a bit of waste per humongous >>>> chunk >>>> (<64K) in exchange for simpler coding and a simpler occupancy map. >>>> >>>> -- >>>> >>>> The patch shows its best results in scenarios where a lot of smallish >>>> class loaders are alive simultaneously. When dying, they leave >>>> continuous >>>> expanses of metaspace covered in small chunks, which can be merged >>>> nicely. >>>> However, if class loader life times vary more, we have more >>>> interleaving of >>>> dead and alive small chunks, and hence chunk merging does not work as >>>> well >>>> as it could. >>>> >>>> For an example of a pathological case like this see example program: [5] >>>> >>>> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >>>> test3.Example2" the test will load 3000 small classes in separate class >>>> loaders, then throw them away and start loading large classes. The small >>>> classes will have flooded the metaspace with small chunks, which are >>>> unusable for the large classes. When executing with the rather limited >>>> CompressedClassSpaceSize=10M, we will run into an OOM after loading >>>> about >>>> 800 large classes, having used only 40% of the class space, the rest is >>>> wasted to unused small chunks. However, with our patch the example >>>> program >>>> will manage to allocate ~2900 large classes before running into an OOM, >>>> and >>>> class space will show almost no waste. >>>> >>>> Do demonstrate this, add -Xlog:gc+metaspace+freelist. After running into >>>> an OOM, statistics and an ASCII representation of the class space will >>>> be >>>> shown. The unpatched version will show large expanses of unused small >>>> chunks, the patched variant will show almost no waste. >>>> >>>> Note that the patch could be made more effective with a different size >>>> ratio between small and medium chunks: in class space, that ratio is >>>> 1:16, >>>> so 16 small chunks must happen to be free to form one larger chunk. >>>> With a >>>> smaller ratio the chance for coalescation would be larger. So there may >>>> be >>>> room for future improvement here: Since we now can merge and split >>>> chunks >>>> on demand, we could introduce more chunk sizes. Potentially arriving at >>>> a >>>> buddy-ish allocator style where we drop hard-wired chunk sizes for a >>>> dynamic model where the ratio between chunk sizes is always 1:2 and we >>>> could in theory have no limit to the chunk size? But this is just a >>>> thought >>>> and well out of the scope of this patch. 
>>>> >>>> -- >>>> >>>> What does this patch cost (memory): >>>> >>>> - the occupancy bitmap adds 1 byte per 4K metaspace. >>>> - MetaChunk headers get larger, since we add an enum and two bools to >>>> it. >>>> Depending on what the c++ compiler does with that, chunk headers grow by >>>> one or two MetaWords, reducing the payload size by that amount. >>>> - The new alignment rules mean we may need to create padding chunks to >>>> precede larger chunks. But since these padding chunks are added to the >>>> freelist, they should be used up before the need for new padding chunks >>>> arises. So, the maximally possible number of unused padding chunks >>>> should >>>> be limited by design to about 64K. >>>> >>>> The expectation is that the memory savings by this patch far outweighs >>>> its >>>> added memory costs. >>>> >>>> .. (performance): >>>> >>>> We did not see measurable drops in standard benchmarks rising above the >>>> normal noise. I also measured times for a program which stresses >>>> metaspace >>>> chunk coalescing, with the same result. >>>> >>>> I am open to suggestions what else I should measure, and/or independent >>>> measurements. >>>> >>>> -- >>>> >>>> Other details: >>>> >>>> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >>>> complexity somewhat, because it was made mostly obsolete by this patch: >>>> since small chunks are combined to larger chunks upon return to the >>>> freelist, in theory we should not have that many free small chunks >>>> anymore >>>> anyway. However, there may still be cases where we could benefit from >>>> this >>>> workaround, so I am asking your opinion on this one. >>>> >>>> About tests: There were two native tests - ChunkManagerReturnTest and >>>> TestVirtualSpaceNode (the former was added by me last year) - which did >>>> not >>>> make much sense anymore, since they relied heavily on internal behavior >>>> which was made unpredictable with this patch. >>>> To make up for these lost tests, I added a new gtest which attempts to >>>> stress the many combinations of allocation patterns but does so from a >>>> layer >>>> above the old tests. It now uses Metaspace::allocate() and friends. By >>>> using that point as entry for tests, I am less dependent on >>>> implementation >>>> internals and still cover a lot of scenarios. >>>> >>>> -- >>>> >>>> Review pointers: >>>> >>>> Good points to start are >>>> - ChunkManager::return_single_chunk() - specifically, >>>> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge chunks >>>> upon return to the free list >>>> - ChunkManager::free_chunks_get(): Here we now split large chunks into >>>> smaller chunks on demand >>>> - VirtualSpaceNode::take_from_committed() : chunks are allocated >>>> according to align rules now, padding chunks are handled >>>> - The OccupancyMap class is the helper class implementing the new >>>> occupancy bitmap >>>> >>>> The rest is mostly chaff: helper functions, added tests and >>>> verifications.
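For the splitting direction mentioned under ChunkManager::free_chunks_get() above, a toy sketch of the idea follows; it is purely illustrative and uses made-up types rather than the real Metachunk/freelist code:

#include <cstddef>
#include <vector>

// Toy illustration of splitting a larger free chunk on demand; ToyChunk and the
// function below are made up for this sketch and are not the patch's real API.
struct ToyChunk {
  size_t start_word;
  size_t size_words;
};

// Chop 'large' into chunks of 'requested_words', hand the first one to the caller
// and collect the rest so they can be re-added to the freelist. Chunk sizes are
// clean multiples of each other, so the division is exact.
static ToyChunk split_from_larger_free_chunk(const ToyChunk& large,
                                             size_t requested_words,
                                             std::vector<ToyChunk>* back_to_freelist) {
  size_t n = large.size_words / requested_words;
  ToyChunk result = { large.start_word, requested_words };
  for (size_t i = 1; i < n; i++) {
    ToyChunk rest = { large.start_word + i * requested_words, requested_words };
    back_to_freelist->push_back(rest);
  }
  return result;
}

In the patch itself the split pieces are all re-added to the freelist first, including the one about to be handed out, as described elsewhere in the thread; the sketch just shows the net effect.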
>>>> >>>> -- >>>> >>>> Thanks and Best Regards, Thomas >>>> >>>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >>>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >>>> /000128.html >>>> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >>>> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >>>> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >>>> >>>> >>>> >>>> >> > > From erik.osterlund at oracle.com Tue Mar 6 07:07:48 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Tue, 6 Mar 2018 08:07:48 +0100 Subject: 8195148: Collapse G1SATBCardTableModRefBS and G1SATBCardTableLoggingModRefBS into a single G1BarrierSet In-Reply-To: <83017731-18B1-4B91-A915-026505F7C61E@oracle.com> References: <5A940DE0.7040108@oracle.com> <6C98D06F-BEB8-4B35-9B90-7579DFD7BB43@oracle.com> <44E8DB56-A4D6-4D1B-BF09-0780963227E4@oracle.com> <5A9D1714.7040607@oracle.com> <1C8E1159-8A12-4D46-9372-EDEDBD7627E1@oracle.com> <17a033da-d8cb-9b72-6923-70518efb3c14@oracle.com> <419e115e-5534-77c2-d744-d470ee804184@oracle.com> <83017731-18B1-4B91-A915-026505F7C61E@oracle.com> Message-ID: Hi Kim, On 6 Mar 2018, at 04:56, Kim Barrett wrote: >> On Mar 5, 2018, at 5:11 PM, Erik ?sterlund wrote: >> >> Hi Kim, >> >> On 2018-03-05 23:02, Kim Barrett wrote: >>>> On Mar 5, 2018, at 4:10 PM, Erik ?sterlund wrote: >>>> >>>> It breaks an include cycle that is necessary for ZGC (and probably Shenandoah too) to build with #include "oops/oop.inline.hpp" being in g1BarrierSet.inline.hpp. It reproduces when files include both barrierSet.inline.hpp and access.inline.hpp. Because barrierSet.inline.hpp includes barrierSetConfig.inline.hpp which includes all concrete barrier sets, which now also includes oop.inline.hpp which includes access.inline.hpp which includes barrierSet.inline.hpp again. This cycle causes resulution to barrierset accessors to happen before their metafunctions have been declared that translate barrierset types to/from enum values, which breaks the build. This bad cycle is broken with these changes by having access.inline.hpp include barrierSetConfig.inline.hpp instead of barrierSet.inline.hpp so that the barrierset type translation metafunctions have always been declared when resulution is defined. >>> I suspected it was something like that. Can you provide more detail, >>> or much better, tell me how to reproduce the problem? In the absence >>> of such, the proposed changes look kind of ad hoc and fragile to me. >>> >>> I've been thinking about some questions about how we write and use >>> .inline.hpp files, and have some ideas that may or may not be >>> relevant. If I could reproduce the problem at hand, I might be able >>> to suggest some alternative ideas. Depending on what I find, I might >>> say go ahead with the proposed change, but I'd like to have a look >>> first, in case there's a (relatively) simple alternative that seems >>> more solid. >> >> The easiest way to reproduce is to either: >> 1) Add #include "oops/oop.inline.hpp" in g1SATBCardTableModRefBS.inline.hpp in the Z repo, or: >> ...or... >> 2) Add any minimal no-op barrier set to jdk-hs with a name that comes after "g1" in lexicographical order (and hence include order), and plug it in to the barrierSetConfig files, and then add #include "oops/oop.inline.hpp" to the g1SATBCardTableModRefBS.inline.hpp file. >> >> Thanks, >> /Erik > > Thanks for the instructions for reproducing. I was able to create a minimal no-op barrier set and > did indeed get compile errors. 
> > Now that I understand the problem, your proposed include changes look like a good minimal fix. > I think there are deeper issues regarding .inline.hpp files, but that can be discussed separately and > without delaying this change any further. I am looking forward to what comes out from that discussion. Would be great with a more solid solution for managing dependencies between .inline.hpp files. > Looks good. Thank you for the review. /Erik From stefan.karlsson at oracle.com Tue Mar 6 08:30:33 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 6 Mar 2018 09:30:33 +0100 Subject: Enabling use of hugepages with java In-Reply-To: <9e71448b-2d3d-acb6-79e8-47d1d0987da3@oracle.com> References: <9e71448b-2d3d-acb6-79e8-47d1d0987da3@oracle.com> Message-ID: <5b5c7e1d-a6aa-a468-590e-9cbea17314be@oracle.com> Hi Richard, On 2018-02-28 23:04, David Holmes wrote: > Hi Richard, > > Moving to hotspot-dev as the appropriate list. > > David > > On 1/03/2018 1:20 AM, Richard Achmatowicz wrote: >> Hi >> >> I hope that I am directing this question to the correct mailing list. >> >> I have a question concerning the OS setup on Linux required for >> correct use of the java option -XX:+UseLargePages in JDK 8. >> >> Official Oracle documentation >> (http://www.oracle.com/technetwork/java/javase/tech/largememory-jsp-137182.html) >> suggests that in order to make use of large memory pages, in addition >> to setting the flag -XX:+UseLargePages, an OS option shmmax needs to >> be tuned to be larger than the java heap size. >> >> ?From looking at the java documentation, there are various ways of >> enabling the use of huge pages: -XX:+UseHugeTLBFS, >> -XX:+UseTransparentHugePages, -XX:+UseSHM and, if I understand >> correctly, these correspond in part to making use of different >> OS-level APIs for accessing huge pages (via shared memory, hugetlbfs, >> and other means). >> >> My question is this: is setting the shmmax OS value only relevant if >> we are using -XX:+UseSHM? In other words, if we are using >> -XX:+UseHugeTLBFS to enable use of hugepages by the JVM, is it the >> case that setting the shmmax OS setting has no effect on the use of >> hugepages by the JVM? Yes, your understanding is correct. The document you link to seems to be from a time before -XX:+UseHugeTLBFS was added. This document clarifies this a bit more: ?https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html#large_pages ?"If you are using the option -XX:+UseSHM (instead of -XX:+UseHugeTLBFS), then increase the SHMMAX value." Cheers, StefanK >> >> Thanks in advance >> >> Richard >> From shafi.s.ahmad at oracle.com Tue Mar 6 08:55:56 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Tue, 6 Mar 2018 00:55:56 -0800 (PST) Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print if we have seen any OutOfMemoryErrors or StackOverflowErrors In-Reply-To: <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> References: <84e4010f-e1ed-4940-ad24-5e7fc1667899@default> <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> Message-ID: Hi All, Could someone please review it. Regards, Shafi > -----Original Message----- > From: Shafi Ahmad > Sent: Tuesday, February 06, 2018 11:27 AM > To: hotspot-dev at openjdk.java.net > Cc: Stephen Fitch > Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print > if we have seen any OutOfMemoryErrors or StackOverflowErrors > > Hi, > > Could someone please review it. 
> > Regards, > Shafi > > > -----Original Message----- > > From: Shafi Ahmad > > Sent: Monday, January 29, 2018 10:16 AM > > To: hotspot-dev at openjdk.java.net > > Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: > > Print if we have seen any OutOfMemoryErrors or StackOverflowErrors > > > > 2nd try... > > > > Regards, > > Shafi > > > > > -----Original Message----- > > > From: Shafi Ahmad > > > Sent: Wednesday, January 24, 2018 3:16 PM > > > To: hotspot-dev at openjdk.java.net > > > Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: > > > Print if we have seen any OutOfMemoryErrors or StackOverflowErrors > > > > > > Hi, > > > > > > Please review the backport of bug: " JDK-8026331: hs_err improvement: > > > Print if we have seen any OutOfMemoryErrors or StackOverflowErrors" > > > to > > > jdk8u- dev. > > > > > > Please note that this is not a clean backport as I got below > > > conflicts > > > - > > > > > > hotspot$ find ./ -name "*.rej" -exec cat {} \; > > > --- metaspace.cpp > > > +++ metaspace.cpp > > > @@ -3132,10 +3132,21 @@ > > > initialize_class_space(metaspace_rs); > > > > > > if (PrintCompressedOopsMode || (PrintMiscellaneous && Verbose)) { > > > - gclog_or_tty->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow > > > klass shift: %d", > > > - p2i(Universe::narrow_klass_base()), > > > Universe::narrow_klass_shift()); > > > - gclog_or_tty->print_cr("Compressed class space size: " SIZE_FORMAT > " > > > Address: " PTR_FORMAT " Req Addr: " PTR_FORMAT, > > > - compressed_class_space_size(), > p2i(metaspace_rs.base()), > > > p2i(requested_addr)); > > > + print_compressed_class_space(gclog_or_tty, requested_addr); > > > + } > > > +} > > > + > > > +void Metaspace::print_compressed_class_space(outputStream* st, > > > +const > > > char* requested_addr) { > > > + st->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow klass shift: > > %d", > > > + p2i(Universe::narrow_klass_base()), > > > Universe::narrow_klass_shift()); > > > + if (_class_space_list != NULL) { > > > + address base = > > > + (address)_class_space_list->current_virtual_space()- > > > >bottom(); > > > + st->print("Compressed class space size: " SIZE_FORMAT " Address: " > > > PTR_FORMAT, > > > + compressed_class_space_size(), p2i(base)); > > > + if (requested_addr != 0) { > > > + st->print(" Req Addr: " PTR_FORMAT, p2i(requested_addr)); > > > + } > > > + st->cr(); > > > } > > > } > > > > > > --- universe.cpp > > > +++ universe.cpp > > > @@ -781,27 +781,24 @@ > > > return JNI_OK; > > > } > > > > > > -void Universe::print_compressed_oops_mode() { > > > - tty->cr(); > > > - tty->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " > > > MB", > > > +void Universe::print_compressed_oops_mode(outputStream* st) { > > > + st->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " > > > +MB", > > > p2i(Universe::heap()->base()), Universe::heap()- > > > >reserved_region().byte_size()/M); > > > > > > - tty->print(", Compressed Oops mode: %s", > > > narrow_oop_mode_to_string(narrow_oop_mode())); > > > + st->print(", Compressed Oops mode: %s", > > > narrow_oop_mode_to_string(narrow_oop_mode())); > > > > > > if (Universe::narrow_oop_base() != 0) { > > > - tty->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > > > + st->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > > > } > > > > > > if (Universe::narrow_oop_shift() != 0) { > > > - tty->print(", Oop shift amount: %d", Universe::narrow_oop_shift()); > > > + st->print(", Oop shift amount: %d", > > > + 
Universe::narrow_oop_shift()); > > > } > > > > > > if (!Universe::narrow_oop_use_implicit_null_checks()) { > > > - tty->print(", no protected page in front of the heap"); > > > + st->print(", no protected page in front of the heap"); > > > } > > > - > > > - tty->cr(); > > > - tty->cr(); > > > + st->cr(); > > > } > > > > > > Webrev: http://cr.openjdk.java.net/~shshahma/8026331/webrev.00/ > > > Jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8026331 > > > Original patch pushed to jdk9: > > > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cf5a0377f578 > > > > > > Test: Run jprt -testset hotspot and jtreg - hotspot/test > > > > > > Regards, > > > Shafi From adinn at redhat.com Tue Mar 6 12:23:07 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 6 Mar 2018 12:23:07 +0000 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails Message-ID: Could someone please review the following patch to /shared code/ which fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. I would like it to be considered for inclusion in jdk10 if at all possible because it patches a critical error in handling of volatile reads that may result in incorrect memory synchronization. The problem: ------------ JDK-8181211 modified generations of load nodes in library_call.c to allow them to float when i) the load is known to be from a heap address (i.e. an object) ii) the object reference is know (from type info) to be non-null iii) the offset for the load is known to lie within object's extent In this case the load is created with a null control, allowing it to float. In the case where this is a volatile load passing a null control is not really appropriate because the load should not be able to float. Before 8181211 the load inherited its control and memory link from the leading CPUOrder memory barrier that precedes the load node. After 8181211, the null control passed in to the load create call may eventually be replaced by control flow from from an earlier node (e.g. a guarding ifnull node). This change does not actually allow the load to float because it is still dominated directly in the memory flow by the memory feed from the CPUOrder membar. The problem this causes for AArch64 is that the back end generator attempts to replace ldr; membar_acquire sequences with ldar. In order to do so it has to spot that a load is actually volatile and associated with a trailing Acquire membar. It is currently relying on the now invalid assumption that the load will be dominated in both control and memory flow by the preceding CPUOrder membar. The generator falls back to generating ldr; membar_acquire. Unfortunately, the other half of the generation scheme, which replaces membar_release; str; membar_volatile with an stlr instruction is still performed. The transformation is only valid if stlr instructions are paired with ldar instruction which is why the Dekker test fails. The fix: -------- This patch tweaks the changes originally made to /shared code/ in 8181211 to ensure that a null control is only passed when a non-volatile load is being created (i.e. when needs_membar is false). 
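In code terms the shape of the change is roughly the following; this is only a toy model with illustrative names, not the actual webrev diff:

#include <cassert>

// Toy model of the decision restored by the fix; the names here are illustrative
// and do not match the real code in the webrev.
struct Node {};  // stands in for a C2 ideal-graph node

// A volatile access (needs_membar == true) keeps the current control edge, so the
// load stays pinned under the leading CPUOrder membar and the AArch64 back end can
// still match ldr; membar_acquire into ldar. Only a non-volatile access may pass a
// null control and thereby be allowed to float.
static Node* control_for_unsafe_load(bool needs_membar, Node* current_control) {
  return needs_membar ? current_control : nullptr;
}

int main() {
  Node ctrl;
  assert(control_for_unsafe_load(true,  &ctrl) == &ctrl);    // volatile: stays pinned
  assert(control_for_unsafe_load(false, &ctrl) == nullptr);  // plain load: may float
  return 0;
}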
I chose to fix it this way because: i) it models the control flow correctly ii) this re-establishes the status quo ante for volatile loads which should, therefore be very low risk while not prejudicing the performance gain for non-volatile loads iii) the alternative tack of modifying the graph pattern matching done by the AArch64 back-end generator is a relatively complex change. alternative jdk10 fix: ---------------------- If this is not able to go in as a fix for jdk10 right now then it will still be necessary to implement an alternative fix before jdk10 is released otherwise AArch64 will be horribly broken. The least complicated alternative is to switch the default setting for AArch64 product flag UseBarriersForVolatile to true, so that the transformations are disabled. It would also be possible to rewrite the back end generator so that it only checks for a dominating memory feed from the CPUOrder barrier when trying to identify a volatile load. Testing: ------- I reran jcstress tests on AArch64 and the change fixes the failing Dekker tests without introducing any new failures (3 opaque tests were failing before and after the patch). I ran all jdk tier1 jtreg tests on AArch64 and they passed. As a change to shared code this also needs testing against the submit tree to ensure it does not break x86 code (which I am in the process of doing). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From david.holmes at oracle.com Tue Mar 6 12:39:26 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 6 Mar 2018 22:39:26 +1000 Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print if we have seen any OutOfMemoryErrors or StackOverflowErrors In-Reply-To: References: <84e4010f-e1ed-4940-ad24-5e7fc1667899@default> <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> Message-ID: <922c9f2b-1550-e7de-2039-0cfd0f439eae@oracle.com> Hi Shafi, This seems like an accurate backport of the error reporting enhancements. Copyright years need updating to 2018. Thanks, David On 6/03/2018 6:55 PM, Shafi Ahmad wrote: > Hi All, > > Could someone please review it. > > Regards, > Shafi > > >> -----Original Message----- >> From: Shafi Ahmad >> Sent: Tuesday, February 06, 2018 11:27 AM >> To: hotspot-dev at openjdk.java.net >> Cc: Stephen Fitch >> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print >> if we have seen any OutOfMemoryErrors or StackOverflowErrors >> >> Hi, >> >> Could someone please review it. >> >> Regards, >> Shafi >> >>> -----Original Message----- >>> From: Shafi Ahmad >>> Sent: Monday, January 29, 2018 10:16 AM >>> To: hotspot-dev at openjdk.java.net >>> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: >>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors >>> >>> 2nd try... >>> >>> Regards, >>> Shafi >>> >>>> -----Original Message----- >>>> From: Shafi Ahmad >>>> Sent: Wednesday, January 24, 2018 3:16 PM >>>> To: hotspot-dev at openjdk.java.net >>>> Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: >>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors >>>> >>>> Hi, >>>> >>>> Please review the backport of bug: " JDK-8026331: hs_err improvement: >>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors" >>>> to >>>> jdk8u- dev. 
>>>> >>>> Please note that this is not a clean backport as I got below >>>> conflicts >>>> - >>>> >>>> hotspot$ find ./ -name "*.rej" -exec cat {} \; >>>> --- metaspace.cpp >>>> +++ metaspace.cpp >>>> @@ -3132,10 +3132,21 @@ >>>> initialize_class_space(metaspace_rs); >>>> >>>> if (PrintCompressedOopsMode || (PrintMiscellaneous && Verbose)) { >>>> - gclog_or_tty->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow >>>> klass shift: %d", >>>> - p2i(Universe::narrow_klass_base()), >>>> Universe::narrow_klass_shift()); >>>> - gclog_or_tty->print_cr("Compressed class space size: " SIZE_FORMAT >> " >>>> Address: " PTR_FORMAT " Req Addr: " PTR_FORMAT, >>>> - compressed_class_space_size(), >> p2i(metaspace_rs.base()), >>>> p2i(requested_addr)); >>>> + print_compressed_class_space(gclog_or_tty, requested_addr); >>>> + } >>>> +} >>>> + >>>> +void Metaspace::print_compressed_class_space(outputStream* st, >>>> +const >>>> char* requested_addr) { >>>> + st->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow klass shift: >>> %d", >>>> + p2i(Universe::narrow_klass_base()), >>>> Universe::narrow_klass_shift()); >>>> + if (_class_space_list != NULL) { >>>> + address base = >>>> + (address)_class_space_list->current_virtual_space()- >>>>> bottom(); >>>> + st->print("Compressed class space size: " SIZE_FORMAT " Address: " >>>> PTR_FORMAT, >>>> + compressed_class_space_size(), p2i(base)); >>>> + if (requested_addr != 0) { >>>> + st->print(" Req Addr: " PTR_FORMAT, p2i(requested_addr)); >>>> + } >>>> + st->cr(); >>>> } >>>> } >>>> >>>> --- universe.cpp >>>> +++ universe.cpp >>>> @@ -781,27 +781,24 @@ >>>> return JNI_OK; >>>> } >>>> >>>> -void Universe::print_compressed_oops_mode() { >>>> - tty->cr(); >>>> - tty->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " >>>> MB", >>>> +void Universe::print_compressed_oops_mode(outputStream* st) { >>>> + st->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " >>>> +MB", >>>> p2i(Universe::heap()->base()), Universe::heap()- >>>>> reserved_region().byte_size()/M); >>>> >>>> - tty->print(", Compressed Oops mode: %s", >>>> narrow_oop_mode_to_string(narrow_oop_mode())); >>>> + st->print(", Compressed Oops mode: %s", >>>> narrow_oop_mode_to_string(narrow_oop_mode())); >>>> >>>> if (Universe::narrow_oop_base() != 0) { >>>> - tty->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); >>>> + st->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); >>>> } >>>> >>>> if (Universe::narrow_oop_shift() != 0) { >>>> - tty->print(", Oop shift amount: %d", Universe::narrow_oop_shift()); >>>> + st->print(", Oop shift amount: %d", >>>> + Universe::narrow_oop_shift()); >>>> } >>>> >>>> if (!Universe::narrow_oop_use_implicit_null_checks()) { >>>> - tty->print(", no protected page in front of the heap"); >>>> + st->print(", no protected page in front of the heap"); >>>> } >>>> - >>>> - tty->cr(); >>>> - tty->cr(); >>>> + st->cr(); >>>> } >>>> >>>> Webrev: http://cr.openjdk.java.net/~shshahma/8026331/webrev.00/ >>>> Jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8026331 >>>> Original patch pushed to jdk9: >>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cf5a0377f578 >>>> >>>> Test: Run jprt -testset hotspot and jtreg - hotspot/test >>>> >>>> Regards, >>>> Shafi From tobias.hartmann at oracle.com Tue Mar 6 13:12:38 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 6 Mar 2018 14:12:38 +0100 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: 
<1717884e-9d25-7275-2105-1230fdaea232@oracle.com> Hi Andrew, the fix looks reasonable to me. Do I understand correctly, that the impact on code generation on other platforms should be minimal because even without a control edge, volatile loads can not float above the CPUOrder memory barrier? Probably, final code isn't even affected at all on x86. I'm not sure about inclusion into JDK 10 but I think this fix is low risk. Thanks, Tobias On 06.03.2018 13:23, Andrew Dinn wrote: > Could someone please review the following patch to /shared code/ which > fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: > > webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 > JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 > > The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. > > I would like it to be considered for inclusion in jdk10 if at all > possible because it patches a critical error in handling of volatile > reads that may result in incorrect memory synchronization. > > The problem: > ------------ > > JDK-8181211 modified generations of load nodes in library_call.c to > allow them to float when > > i) the load is known to be from a heap address (i.e. an object) > ii) the object reference is know (from type info) to be non-null > iii) the offset for the load is known to lie within object's extent > > In this case the load is created with a null control, allowing it to float. > > In the case where this is a volatile load passing a null control is not > really appropriate because the load should not be able to float. > > Before 8181211 the load inherited its control and memory link from the > leading CPUOrder memory barrier that precedes the load node. After > 8181211, the null control passed in to the load create call may > eventually be replaced by control flow from from an earlier node (e.g. a > guarding ifnull node). This change does not actually allow the load to > float because it is still dominated directly in the memory flow by the > memory feed from the CPUOrder membar. > > The problem this causes for AArch64 is that the back end generator > attempts to replace ldr; membar_acquire sequences with ldar. In order to > do so it has to spot that a load is actually volatile and associated > with a trailing Acquire membar. It is currently relying on the now > invalid assumption that the load will be dominated in both control and > memory flow by the preceding CPUOrder membar. The generator falls back > to generating ldr; membar_acquire. > > Unfortunately, the other half of the generation scheme, which replaces > membar_release; str; membar_volatile with an stlr instruction is still > performed. The transformation is only valid if stlr instructions are > paired with ldar instruction which is why the Dekker test fails. > > The fix: > -------- > > This patch tweaks the changes originally made to /shared code/ in > 8181211 to ensure that a null control is only passed when a non-volatile > load is being created (i.e. when needs_membar is false). I chose to fix > it this way because: > > i) it models the control flow correctly > ii) this re-establishes the status quo ante for volatile loads which > should, therefore be very low risk while not prejudicing the performance > gain for non-volatile loads > iii) the alternative tack of modifying the graph pattern matching done > by the AArch64 back-end generator is a relatively complex change. 
> > alternative jdk10 fix: > ---------------------- > > If this is not able to go in as a fix for jdk10 right now then it will > still be necessary to implement an alternative fix before jdk10 is > released otherwise AArch64 will be horribly broken. > > The least complicated alternative is to switch the default setting for > AArch64 product flag UseBarriersForVolatile to true, so that the > transformations are disabled. > > It would also be possible to rewrite the back end generator so that it > only checks for a dominating memory feed from the CPUOrder barrier when > trying to identify a volatile load. > > Testing: > ------- > > I reran jcstress tests on AArch64 and the change fixes the failing > Dekker tests without introducing any new failures (3 opaque tests were > failing before and after the patch). I ran all jdk tier1 jtreg tests on > AArch64 and they passed. > > As a change to shared code this also needs testing against the submit > tree to ensure it does not break x86 code (which I am in the process of > doing). > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander > From adinn at redhat.com Tue Mar 6 13:25:18 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 6 Mar 2018 13:25:18 +0000 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <1717884e-9d25-7275-2105-1230fdaea232@oracle.com> References: <1717884e-9d25-7275-2105-1230fdaea232@oracle.com> Message-ID: <90f1304f-2c9a-d5b5-2e4c-517e6c7181a6@redhat.com> Hi Tobias, On 06/03/18 13:12, Tobias Hartmann wrote: > the fix looks reasonable to me. Do I understand correctly, that the impact on code generation on > other platforms should be minimal because even without a control edge, volatile loads can not float > above the CPUOrder memory barrier? Probably, final code isn't even affected at all on x86. Yes, that is correct. Volatile loads can not float the CPUOrder memory barrier. SO, final code should be exactly the same. > I'm not sure about inclusion into JDK 10 but I think this fix is low risk. Ok, thanks for looking at the patch. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From rwestrel at redhat.com Tue Mar 6 13:36:43 2018 From: rwestrel at redhat.com (Roland Westrelin) Date: Tue, 06 Mar 2018 14:36:43 +0100 Subject: [aarch64-port-dev ] RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: > webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 That looks good to me. Roland. From adinn at redhat.com Tue Mar 6 14:04:43 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 6 Mar 2018 14:04:43 +0000 Subject: [aarch64-port-dev ] RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: <8496210b-dc35-6773-1ace-47759f5b1f1d@redhat.com> On 06/03/18 13:36, Roland Westrelin wrote: >> webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 > > That looks good to me. Thanks, Roland! regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From mark.reinhold at oracle.com Tue Mar 6 16:24:20 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Tue, 06 Mar 2018 08:24:20 -0800 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: <20180306082420.853500758@eggemoggin.niobe.net> 2018/3/6 4:23:07 -0800, adinn at redhat.com: > Could someone please review the following patch to /shared code/ which > fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: > > webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 > JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 > > The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. > > I would like it to be considered for inclusion in jdk10 if at all > possible because it patches a critical error in handling of volatile > reads that may result in incorrect memory synchronization. Andrew -- thanks for the thorough analysis. So far Tobias and Roland have reviewed your change. I'm not qualified to review it myself, so since it's very late in the game for JDK 10 I'd like to see reviews from at least a couple more C2 committers before we make a call on this. - Mark From martin.doerr at sap.com Tue Mar 6 16:30:02 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Tue, 6 Mar 2018 16:30:02 +0000 Subject: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: <43aeafd9b2ca4d07a0fe2f8f4a3593e6@sap.com> Hi Andrew, thanks for fixing. Looks good. I agree with that it should get fixed in jdk10, too. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Andrew Dinn Sent: Dienstag, 6. M?rz 2018 13:23 To: hotspot-dev Source Developers ; aarch64-port-dev at openjdk.java.net Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails Could someone please review the following patch to /shared code/ which fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. I would like it to be considered for inclusion in jdk10 if at all possible because it patches a critical error in handling of volatile reads that may result in incorrect memory synchronization. The problem: ------------ JDK-8181211 modified generations of load nodes in library_call.c to allow them to float when i) the load is known to be from a heap address (i.e. an object) ii) the object reference is know (from type info) to be non-null iii) the offset for the load is known to lie within object's extent In this case the load is created with a null control, allowing it to float. In the case where this is a volatile load passing a null control is not really appropriate because the load should not be able to float. Before 8181211 the load inherited its control and memory link from the leading CPUOrder memory barrier that precedes the load node. After 8181211, the null control passed in to the load create call may eventually be replaced by control flow from from an earlier node (e.g. a guarding ifnull node). This change does not actually allow the load to float because it is still dominated directly in the memory flow by the memory feed from the CPUOrder membar. 
The problem this causes for AArch64 is that the back end generator attempts to replace ldr; membar_acquire sequences with ldar. In order to do so it has to spot that a load is actually volatile and associated with a trailing Acquire membar. It is currently relying on the now invalid assumption that the load will be dominated in both control and memory flow by the preceding CPUOrder membar. The generator falls back to generating ldr; membar_acquire. Unfortunately, the other half of the generation scheme, which replaces membar_release; str; membar_volatile with an stlr instruction is still performed. The transformation is only valid if stlr instructions are paired with ldar instruction which is why the Dekker test fails. The fix: -------- This patch tweaks the changes originally made to /shared code/ in 8181211 to ensure that a null control is only passed when a non-volatile load is being created (i.e. when needs_membar is false). I chose to fix it this way because: i) it models the control flow correctly ii) this re-establishes the status quo ante for volatile loads which should, therefore be very low risk while not prejudicing the performance gain for non-volatile loads iii) the alternative tack of modifying the graph pattern matching done by the AArch64 back-end generator is a relatively complex change. alternative jdk10 fix: ---------------------- If this is not able to go in as a fix for jdk10 right now then it will still be necessary to implement an alternative fix before jdk10 is released otherwise AArch64 will be horribly broken. The least complicated alternative is to switch the default setting for AArch64 product flag UseBarriersForVolatile to true, so that the transformations are disabled. It would also be possible to rewrite the back end generator so that it only checks for a dominating memory feed from the CPUOrder barrier when trying to identify a volatile load. Testing: ------- I reran jcstress tests on AArch64 and the change fixes the failing Dekker tests without introducing any new failures (3 opaque tests were failing before and after the patch). I ran all jdk tier1 jtreg tests on AArch64 and they passed. As a change to shared code this also needs testing against the submit tree to ensure it does not break x86 code (which I am in the process of doing). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From adinn at redhat.com Tue Mar 6 17:01:54 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 6 Mar 2018 17:01:54 +0000 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <20180306082420.853500758@eggemoggin.niobe.net> References: <20180306082420.853500758@eggemoggin.niobe.net> Message-ID: On 06/03/18 16:24, mark.reinhold at oracle.com wrote: > 2018/3/6 4:23:07 -0800, adinn at redhat.com: >> Could someone please review the following patch to /shared code/ which >> fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: >> >> webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 >> JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 >> >> The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. >> >> I would like it to be considered for inclusion in jdk10 if at all >> possible because it patches a critical error in handling of volatile >> reads that may result in incorrect memory synchronization. 
> > Andrew -- thanks for the thorough analysis. > > So far Tobias and Roland have reviewed your change. I'm not qualified > to review it myself, so since it's very late in the game for JDK 10 I'd > like to see reviews from at least a couple more C2 committers before we > make a call on this. Sure, that's an understandably cautious reaction. Perhaps a Vladimir (or two :-) might be able to take a look? Thanks very much for even considering this for inclusion in jdk10 at such a late stage. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From vladimir.x.ivanov at oracle.com Tue Mar 6 17:24:29 2018 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Tue, 6 Mar 2018 20:24:29 +0300 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: Message-ID: The fix looks good. It reverts the behavior back to original (pre-8181211) when membar is needed, so it looks like a low risk fix. Best regards, Vladimir Ivanov On 3/6/18 3:23 PM, Andrew Dinn wrote: > Could someone please review the following patch to /shared code/ which > fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: > > webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 > JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 > > The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. > > I would like it to be considered for inclusion in jdk10 if at all > possible because it patches a critical error in handling of volatile > reads that may result in incorrect memory synchronization. > > The problem: > ------------ > > JDK-8181211 modified generations of load nodes in library_call.c to > allow them to float when > > i) the load is known to be from a heap address (i.e. an object) > ii) the object reference is know (from type info) to be non-null > iii) the offset for the load is known to lie within object's extent > > In this case the load is created with a null control, allowing it to float. > > In the case where this is a volatile load passing a null control is not > really appropriate because the load should not be able to float. > > Before 8181211 the load inherited its control and memory link from the > leading CPUOrder memory barrier that precedes the load node. After > 8181211, the null control passed in to the load create call may > eventually be replaced by control flow from from an earlier node (e.g. a > guarding ifnull node). This change does not actually allow the load to > float because it is still dominated directly in the memory flow by the > memory feed from the CPUOrder membar. > > The problem this causes for AArch64 is that the back end generator > attempts to replace ldr; membar_acquire sequences with ldar. In order to > do so it has to spot that a load is actually volatile and associated > with a trailing Acquire membar. It is currently relying on the now > invalid assumption that the load will be dominated in both control and > memory flow by the preceding CPUOrder membar. The generator falls back > to generating ldr; membar_acquire. > > Unfortunately, the other half of the generation scheme, which replaces > membar_release; str; membar_volatile with an stlr instruction is still > performed. The transformation is only valid if stlr instructions are > paired with ldar instruction which is why the Dekker test fails. 
> > The fix: > -------- > > This patch tweaks the changes originally made to /shared code/ in > 8181211 to ensure that a null control is only passed when a non-volatile > load is being created (i.e. when needs_membar is false). I chose to fix > it this way because: > > i) it models the control flow correctly > ii) this re-establishes the status quo ante for volatile loads which > should, therefore be very low risk while not prejudicing the performance > gain for non-volatile loads > iii) the alternative tack of modifying the graph pattern matching done > by the AArch64 back-end generator is a relatively complex change. > > alternative jdk10 fix: > ---------------------- > > If this is not able to go in as a fix for jdk10 right now then it will > still be necessary to implement an alternative fix before jdk10 is > released otherwise AArch64 will be horribly broken. > > The least complicated alternative is to switch the default setting for > AArch64 product flag UseBarriersForVolatile to true, so that the > transformations are disabled. > > It would also be possible to rewrite the back end generator so that it > only checks for a dominating memory feed from the CPUOrder barrier when > trying to identify a volatile load. > > Testing: > ------- > > I reran jcstress tests on AArch64 and the change fixes the failing > Dekker tests without introducing any new failures (3 opaque tests were > failing before and after the patch). I ran all jdk tier1 jtreg tests on > AArch64 and they passed. > > As a change to shared code this also needs testing against the submit > tree to ensure it does not break x86 code (which I am in the process of > doing). > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander > From coleen.phillimore at oracle.com Tue Mar 6 18:00:05 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 6 Mar 2018 13:00:05 -0500 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> <16fadc98-ffa2-cb4d-f611-78c3f63ab893@oracle.com> Message-ID: <049798ca-0876-7d53-6171-2a3b507a62f3@oracle.com> On 3/6/18 1:10 AM, Thomas St?fe wrote: > Hi Coleen, > > We test nightly in windows 32bit. I'll go and run some tests on 32bit > linux too. > > Thanks for the sponsoring offer! Goetz already reviewed this patch, > would that be sufficient or should I look for another reviewer from > Oracle? That's sufficient.? I'll sponsor the patch for you.? Can you update the patch in the webrev? thanks, Coleen > > Kind Regards, Thomas > > > On Tue, Mar 6, 2018 at 12:59 AM, > wrote: > > > Hi Thomas, > > I've read through the new code.? I don't have any substantive > comments.? Thank you for adding the functions. > > Has this been tested on any 32 bit platforms??? I will sponsor > this when you have another reviewer. > > Thanks for taking on the metaspace! > > Coleen > > > On 3/1/18 5:36 AM, Thomas St?fe wrote: >> Hi Coleen, >> >> thanks a lot for the review and the sponsoring offer! 
>> >> New version (full): >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-full/webrev/ >> >> incremental: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-01/webrev-incr/webrev/ >> >> >> Please find remarks inline: >> >> >> On Tue, Feb 27, 2018 at 11:22 PM, > > wrote: >> >> >> Thomas, review comments: >> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.hpp.udiff.html >> >> >> +// ChunkIndex (todo: rename?) defines the type of chunk. >> Chunk types >> >> >> It's really both, isn't it?? The type is the index into the >> free list or in use lists. The name seems fine. >> >> >> You are right. What I meant was that a lot of code needs to know >> about the different chunk sizes, but naming it "Index" and adding >> enum values like "NumberOfFreeLists" we expose implementation >> details no-one outside of SpaceManager and ChunkManager cares >> about (namely, the fact that these values are internally used as >> indices into arrays). A more neutral naming would be something >> like "enum ChunkTypes { spec,small, .... , >> NumberOfNonHumongousChunkTypes, NumberOfChunkTypes }. >> >> However, I can leave this out for a possible future cleanup. The >> change is big enough as it is. >> >> Can you add comments on the #endifs if the #ifdef is more >> than a couple 2-3 lines above (it's a nit that bothers me). >> >> +#ifdef ASSERT >> + // A 32bit sentinel for debugging purposes. >> +#define CHUNK_SENTINEL 0x4d4554EF // "MET" >> +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF >> + uint32_t _sentinel; >> +#endif >> + const ChunkIndex _chunk_type; >> + const bool _is_class; >> + // Whether the chunk is free (in freelist) or in use by >> some class loader. >> ? ?bool _is_tagged_free; >> ?+#ifdef ASSERT >> + ChunkOrigin _origin; >> + int _use_count; >> +#endif >> + >> >> >> I removed the asserts completely, following your suggestion below >> that "origin" would be valuable in customer scenarios too. By >> that logic, the other members are valuable too: the sentinel is >> valuable when examining memory dumps to see the start of chunks, >> and the in-use counter is useful too. What do you think? >> >> So, I leave the members in - which, depending what the C++ >> compiler does to enums and bools, may cost up to 128bit >> additional header space. I think that is ok. In one of my earlier >> versions of this patch I hand-crafted the header using chars and >> bitfields to be as small as possible, but that seemed >> over-engineered. >> >> However, I left out any automatic verifications accessing these >> debug members. These are still only done in debug builds. >> >> >> It seems that if you could move origin and _use_count into >> the ASSERT block above (maybe putting use_count before _origin. >> >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.cpp.udiff.html >> >> >> In take_from_committed, can the allocation of padding chunks >> be its own function like add_chunks_to_aligment() lines >> 1574-1615? The function is too long now. >> >> >> I moved the padding chunk allocation into an own function as you >> suggested. >> >> I don't think coalescation is a word in English, at least my >> dictionary cannot find it. Although it makes sense in the >> context, just distracting. >> >> >> I replaced "coalescation" with "chunk merging" throughout the >> code. Also less of a tongue breaker. 
>> >> + // Now check if in the coalescation area there are still >> life chunks. >> >> >> "live" chunks I guess.?? A sentence you won't read often :). >> >> >> Now that I read it it almost sounded sinister :) Fixed. >> >> >> In free_chunks_get() can you handle the Humongous case first? >> The else for humongous chunk size is buried tons of lines below. >> >> Otherwise it might be helpful to the logic to make your >> addition to this function be a function you call like >> ? chunk = split_from_larger_free_chunk(); >> >> >> I did the latter. I moved the splitting of a larger chunk to an >> own function. This causes a slight logic change: the new function >> (ChunkManager::split_chunk()) splits an existing large free >> chunks into n smaller free chunks and adds them all back to the >> freelist - that includes the chunk we are about to return. That >> allows us to use the same exit path - which removes the chunk >> from the freelist and adjusts all counters - in the caller >> function "ChunkManager::free_chunks_get" instead of having to >> return in the middle of the function. >> >> To make the test more readable, I also remove the >> "test-that-free-chunks-are-optimally-merged" verification - which >> was quite lengthy - from VirtualSpaceNode::verify() to a new >> function, VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). >> >> >> You might want to keep the origin in product mode if it >> doesn't add to the chunk footprint.?? Might help with >> customer debugging. >> >> >> See above >> >> Awesome looking test... >> >> >> Thanks, I was worried it would be too complicated. >> I changed it a bit because there were sporadic errors. Not a >> "real" error, just the test itself was faulty. The >> "metaspaces_in_use" counter was slightly wrong in one corner case. >> >> I've read through most of this and thank you for adding this >> to at least partially solve the fragmentation problem.? The >> irony is that we templatized the Dictionary from CMS so that >> we could use it for Metaspace and that has splitting and >> coalescing but it seems this code makes more sense than >> adapting that code (if it's even possible). >> >> >> Well, it helps other metadata use cases too, no. >> >> >> Thank you for working on this.? I'll sponsor this for you. >> >> Coleen >> >> >> >> Thanks again! >> >> I also updated my jdk-submit branch to include these latest >> changes; tests are still runnning. >> >> Kind Regards, Thomas >> >> >> On 2/26/18 9:20 AM, Thomas St?fe wrote: >> >> Hi all, >> >> I know this patch is a bit larger, but may I please have >> reviews and/or >> other input? >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >> >> Latest version: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ >> >> >> For those who followed the mail thread, this is the >> incremental diff to the >> last changes (included feedback Goetz gave me on- and >> off-list): >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ >> >> >> Thank you! >> >> Kind Regards, Thomas Stuefe >> >> >> >> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >> > >> wrote: >> >> Hi, >> >> We would like to contribute a patch developed at SAP >> which has been live >> in our VM for some time. It improves the metaspace >> chunk allocation: >> reduces fragmentation and raises the chance of >> reusing free metaspace >> chunks. 
>> >> The patch: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> >> ation/2018-02-05--2/webrev/ >> >> In very short, this patch helps with a number of >> pathological cases where >> metaspace chunks are free but cannot be reused >> because they are of the >> wrong size. For example, the metaspace freelist could >> be full of small >> chunks, which would not be reusable if we need larger >> chunks. So, we could >> get metaspace OOMs even in situations where the >> metaspace was far from >> exhausted. Our patch adds the ability to split and >> merge metaspace chunks >> dynamically and thus remove the "size-lock-in" problem. >> >> Note that there have been other attempts to get a >> grip on this problem, >> see e.g. >> "SpaceManager::get_small_chunks_and_allocate()". But >> arguably >> our patch attempts a more complete solution. >> >> In 2016 I discussed the idea for this patch with some >> folks off-list, >> among them Jon Matsimutso. He then did advice me to >> create a JEP. So I did: >> [1]. However, meanwhile changes to the JEP process >> were discussed [2], and >> I am not sure anymore this patch needs even needs a >> JEP. It may be >> moderately complex and hence carries the risk >> inherent in any patch, but >> its effects would not be externally visible (if you >> discount seeing fewer >> metaspace OOMs). So, I'd prefer to handle this as a >> simple RFE. >> >> -- >> >> How this patch works: >> >> 1) When a class loader dies, its metaspace chunks are >> freed and returned >> to the freelist for reuse by the next class loader. >> With the patch, upon >> returning a chunk to the freelist, an attempt is made >> to merge it with its >> neighboring chunks - should they happen to be free >> too - to form a larger >> chunk. Which then is placed in the free list. >> >> As a result, the freelist should be populated by >> larger chunks at the >> expense of smaller chunks. In other words, all free >> chunks should always be >> as "coalesced as possible". >> >> 2) When a class loader needs a new chunk and a chunk >> of the requested size >> cannot be found in the free list, before carving out >> a new chunk from the >> virtual space, we first check if there is a larger >> chunk in the free list. >> If there is, that larger chunk is chopped up into n >> smaller chunks. One of >> them is returned to the callers, the others are >> re-added to the freelist. >> >> (1) and (2) together have the effect of removing the >> size-lock-in for >> chunks. If fragmentation allows it, small chunks are >> dynamically combined >> to form larger chunks, and larger chunks are split on >> demand. >> >> -- >> >> What this patch does not: >> >> This is not a rewrite of the chunk allocator - most >> of the mechanisms stay >> intact. Specifically, chunk sizes remain unchanged, >> and so do chunk >> allocation processes (when do which class loaders get >> handed which chunk >> size). Almost everthing this patch does affects only >> internal workings of >> the ChunkManager. >> >> Also note that I refrained from doing any cleanups, >> since I wanted >> reviewers to be able to gauge this patch without >> filtering noise. >> Unfortunately this patch adds some complexity. But >> there are many future >> opportunities for code cleanup and simplification, >> some of which we already >> discussed in existing RFEs ([3], [4]). All of them >> are out of the scope for >> this particular patch. 
>> >> -- >> >> Details: >> >> Before the patch, the following rules held: >> - All chunk sizes are multiples of the smallest chunk >> size ("specialized >> chunks") >> - All chunk sizes of larger chunks are also clean >> multiples of the next >> smaller chunk size (e.g. for class space, the ratio of >> specialized/small/medium chunks is 1:2:32) >> - All chunk start addresses are aligned to the >> smallest chunk size (more >> or less accidentally, see metaspace_reserve_alignment). >> The patch makes the last rule explicit and more strict: >> - All (non-humongous) chunk start addresses are now >> aligned to their own >> chunk size. So, e.g. medium chunks are allocated at >> addresses which are a >> multiple of medium chunk size. This rule is not >> extended to humongous >> chunks, whose start addresses continue to be aligned >> to the smallest chunk >> size. >> >> The reason for this new alignment rule is that it >> makes it cheap both to >> find chunk predecessors of a chunk and to check which >> chunks are free. >> >> When a class loader dies and its chunk is returned to >> the freelist, all we >> have is its address. In order to merge it with its >> neighbors to form a >> larger chunk, we need to find those neighbors, >> including those preceding >> the returned chunk. Prior to this patch that was not >> easy - one would have >> to iterate chunks starting at the beginning of the >> VirtualSpaceNode. But >> due to the new alignment rule, we now know where the >> prospective larger >> chunk must start - at the next lower >> larger-chunk-size-aligned boundary. We >> also know that currently a smaller chunk must start >> there (*). >> >> In order to check the free-ness of chunks quickly, >> each VirtualSpaceNode >> now keeps a bitmap which describes its occupancy. One >> bit in this bitmap >> corresponds to a range the size of the smallest chunk >> size and starting at >> an address aligned to the smallest chunk size. >> Because of the alignment >> rules above, such a range belongs to one single >> chunk. The bit is 1 if the >> associated chunk is in use by a class loader, 0 if it >> is free. >> >> When we have calculated the address range a >> prospective larger chunk would >> span, we now need to check if all chunks in that >> range are free. Only then >> we can merge them. We do that by querying the bitmap. >> Note that the most >> common use case here is forming medium chunks from >> smaller chunks. With the >> new alignment rules, the bitmap portion covering a >> medium chunk now always >> happens to be 16- or 32bit in size and is 16- or >> 32bit aligned, so reading >> the bitmap in many cases becomes a simple 16- or >> 32bit load. >> >> If the range is free, only then we need to iterate >> the chunks in that >> range: pull them from the freelist, combine them to >> one new larger chunk, >> re-add that one to the freelist. >> >> (*) Humongous chunks make this a bit more >> complicated. Since the new >> alignment rule does not extend to them, a humongous >> chunk could still >> straddle the lower or upper boundary of the >> prospective larger chunk. So I >> gave the occupancy map a second layer, which is used >> to mark the start of >> chunks. >> An alternative approach could have been to make >> humongous chunks size and >> start address always a multiple of the largest >> non-humongous chunk size >> (medium chunks). That would have caused a bit of >> waste per humongous chunk >> (<64K) in exchange for simpler coding and a simpler >> occupancy map. 
>> >> -- >> >> The patch shows its best results in scenarios where a >> lot of smallish >> class loaders are alive simultaneously. When dying, >> they leave continuous >> expanses of metaspace covered in small chunks, which >> can be merged nicely. >> However, if class loader life times vary more, we >> have more interleaving of >> dead and alive small chunks, and hence chunk merging >> does not work as well >> as it could. >> >> For an example of a pathological case like this see >> example program: [5] >> >> Executed like this: "java >> -XX:CompressedClassSpaceSize=10M -cp test3 >> test3.Example2" the test will load 3000 small classes >> in separate class >> loaders, then throw them away and start loading large >> classes. The small >> classes will have flooded the metaspace with small >> chunks, which are >> unusable for the large classes. When executing with >> the rather limited >> CompressedClassSpaceSize=10M, we will run into an OOM >> after loading about >> 800 large classes, having used only 40% of the class >> space, the rest is >> wasted to unused small chunks. However, with our >> patch the example program >> will manage to allocate ~2900 large classes before >> running into an OOM, and >> class space will show almost no waste. >> >> Do demonstrate this, add -Xlog:gc+metaspace+freelist. >> After running into >> an OOM, statistics and an ASCII representation of the >> class space will be >> shown. The unpatched version will show large expanses >> of unused small >> chunks, the patched variant will show almost no waste. >> >> Note that the patch could be made more effective with >> a different size >> ratio between small and medium chunks: in class >> space, that ratio is 1:16, >> so 16 small chunks must happen to be free to form one >> larger chunk. With a >> smaller ratio the chance for coalescation would be >> larger. So there may be >> room for future improvement here: Since we now can >> merge and split chunks >> on demand, we could introduce more chunk sizes. >> Potentially arriving at a >> buddy-ish allocator style where we drop hard-wired >> chunk sizes for a >> dynamic model where the ratio between chunk sizes is >> always 1:2 and we >> could in theory have no limit to the chunk size? But >> this is just a thought >> and well out of the scope of this patch. >> >> -- >> >> What does this patch cost (memory): >> >> ? - the occupancy bitmap adds 1 byte per 4K metaspace. >> ? - MetaChunk headers get larger, since we add an >> enum and two bools to it. >> Depending on what the c++ compiler does with that, >> chunk headers grow by >> one or two MetaWords, reducing the payload size by >> that amount. >> - The new alignment rules mean we may need to create >> padding chunks to >> precede larger chunks. But since these padding chunks >> are added to the >> freelist, they should be used up before the need for >> new padding chunks >> arises. So, the maximally possible number of unused >> padding chunks should >> be limited by design to about 64K. >> >> The expectation is that the memory savings by this >> patch far outweighs its >> added memory costs. >> >> .. (performance): >> >> We did not see measurable drops in standard >> benchmarks raising over the >> normal noise. I also measured times for a program >> which stresses metaspace >> chunk coalescation, with the same result. >> >> I am open to suggestions what else I should measure, >> and/or independent >> measurements. 
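As a quick cross-check of the "1 byte per 4K" figure - assuming a 64-bit word size, a smallest (specialized) chunk of 128 words = 1 KB, and two map layers, i.e. 2 bits per smallest chunk; these numbers are my assumptions, not taken from the patch:

#include <cstddef>

// 2 bits of occupancy map per 1 KB of metaspace  =>  1 byte per 4 KB.
constexpr size_t smallest_chunk_bytes = 128 * 8;  // assumed: 128 words on 64-bit
constexpr size_t bits_per_chunk       = 2;        // assumed: in-use layer + chunk-start layer
constexpr size_t metaspace_bytes_per_map_byte = smallest_chunk_bytes * 8 / bits_per_chunk;
static_assert(metaspace_bytes_per_map_byte == 4096, "matches '1 byte per 4K metaspace'");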
>> >> -- >> >> Other details: >> >> I removed >> SpaceManager::get_small_chunk_and_allocate() to reduce >> complexity somewhat, because it was made mostly >> obsolete by this patch: >> since small chunks are combined to larger chunks upon >> return to the >> freelist, in theory we should not have that many free >> small chunks anymore >> anyway. However, there may be still cases where we >> could benefit from this >> workaround, so I am asking your opinion on this one. >> >> About tests: There were two native tests - >> ChunkManagerReturnTest and >> TestVirtualSpaceNode (the former was added by me last >> year) - which did not >> make much sense anymore, since they relied heavily on >> internal behavior >> which was made unpredictable with this patch. >> To make up for these lost tests,? I added a new gtest >> which attempts to >> stress the many combinations of allocation pattern >> but does so from a layer >> above the old tests. It now uses >> Metaspace::allocate() and friends. By >> using that point as entry for tests, I am less >> dependent on implementation >> internals and still cover a lot of scenarios. >> >> -- >> >> Review pointers: >> >> Good points to start are >> - ChunkManager::return_single_chunk() - specifically, >> ChunkManager::attempt_to_coalesce_around_chunk() - >> here we merge chunks >> upon return to the free list >> - ChunkManager::free_chunks_get(): Here we now split >> large chunks into >> smaller chunks on demand >> - VirtualSpaceNode::take_from_committed() : chunks >> are allocated >> according to align rules now, padding chunks are handles >> - The OccupancyMap class is the helper class >> implementing the new >> occupancy bitmap >> >> The rest is mostly chaff: helper functions, added >> tests and verifications. >> >> -- >> >> Thanks and Best Regards, Thomas >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >> >> [2] >> http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >> >> /000128.html >> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >> >> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >> >> [5] >> https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >> >> >> >> >> >> > > From thomas.stuefe at gmail.com Tue Mar 6 18:33:57 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 6 Mar 2018 19:33:57 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: <049798ca-0876-7d53-6171-2a3b507a62f3@oracle.com> References: <2005ef0d-9d95-9805-f7aa-94193f683fb3@oracle.com> <16fadc98-ffa2-cb4d-f611-78c3f63ab893@oracle.com> <049798ca-0876-7d53-6171-2a3b507a62f3@oracle.com> Message-ID: On Tue, Mar 6, 2018 at 7:00 PM, wrote: > > > On 3/6/18 1:10 AM, Thomas St?fe wrote: > > Hi Coleen, > > We test nightly in windows 32bit. I'll go and run some tests on 32bit > linux too. > > Thanks for the sponsoring offer! Goetz already reviewed this patch, would > that be sufficient or should I look for another reviewer from Oracle? > > > That's sufficient. I'll sponsor the patch for you. Can you update the > patch in the webrev? > thanks, > Coleen > > Great! Here you go: http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-03-06/webrev/ I rebased to head and fixed the change description. This is still a mq change though, do you need me to do a qfinish (I usually like to avoid that because the local change would then clash with the change you are about to push) ? 
Kind Regards, Thomas > > Kind Regards, Thomas > > > On Tue, Mar 6, 2018 at 12:59 AM, wrote: > >> >> Hi Thomas, >> >> I've read through the new code. I don't have any substantive comments. >> Thank you for adding the functions. >> >> Has this been tested on any 32 bit platforms? I will sponsor this when >> you have another reviewer. >> >> Thanks for taking on the metaspace! >> >> Coleen >> >> >> On 3/1/18 5:36 AM, Thomas St?fe wrote: >> >> Hi Coleen, >> >> thanks a lot for the review and the sponsoring offer! >> >> New version (full): http://cr.openjdk.java.net/~stuefe/webrevs/metaspace >> -coalescation/2018-03-01/webrev-full/webrev/ >> incremental: http://cr.openjdk.java.net/~stuefe/webrevs/ >> metaspace-coalescation/2018-03-01/webrev-incr/webrev/ >> >> Please find remarks inline: >> >> >> On Tue, Feb 27, 2018 at 11:22 PM, wrote: >> >>> >>> Thomas, review comments: >>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev/src/hotspot/share/memory/metachunk.h >>> pp.udiff.html >>> >>> +// ChunkIndex (todo: rename?) defines the type of chunk. Chunk types >>> >>> >>> It's really both, isn't it? The type is the index into the free list or >>> in use lists. The name seems fine. >>> >>> >> You are right. What I meant was that a lot of code needs to know about >> the different chunk sizes, but naming it "Index" and adding enum values >> like "NumberOfFreeLists" we expose implementation details no-one outside of >> SpaceManager and ChunkManager cares about (namely, the fact that these >> values are internally used as indices into arrays). A more neutral naming >> would be something like "enum ChunkTypes { spec,small, .... , >> NumberOfNonHumongousChunkTypes, NumberOfChunkTypes }. >> >> However, I can leave this out for a possible future cleanup. The change >> is big enough as it is. >> >> >>> Can you add comments on the #endifs if the #ifdef is more than a couple >>> 2-3 lines above (it's a nit that bothers me). >>> >>> +#ifdef ASSERT >>> + // A 32bit sentinel for debugging purposes. >>> +#define CHUNK_SENTINEL 0x4d4554EF // "MET" >>> +#define CHUNK_SENTINEL_INVALID 0xFEEEEEEF >>> + uint32_t _sentinel; >>> +#endif >>> + const ChunkIndex _chunk_type; >>> + const bool _is_class; >>> + // Whether the chunk is free (in freelist) or in use by some class >>> loader. >>> bool _is_tagged_free; >>> +#ifdef ASSERT >>> + ChunkOrigin _origin; >>> + int _use_count; >>> +#endif >>> + >>> >>> >> I removed the asserts completely, following your suggestion below that >> "origin" would be valuable in customer scenarios too. By that logic, the >> other members are valuable too: the sentinel is valuable when examining >> memory dumps to see the start of chunks, and the in-use counter is useful >> too. What do you think? >> >> So, I leave the members in - which, depending what the C++ compiler does >> to enums and bools, may cost up to 128bit additional header space. I think >> that is ok. In one of my earlier versions of this patch I hand-crafted the >> header using chars and bitfields to be as small as possible, but that >> seemed over-engineered. >> >> However, I left out any automatic verifications accessing these debug >> members. These are still only done in debug builds. >> >> >>> >>> It seems that if you could move origin and _use_count into the ASSERT >>> block above (maybe putting use_count before _origin. 
>>> >>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>> ation/2018-02-26/webrev/src/hotspot/share/memory/metaspace.c >>> pp.udiff.html >>> >>> In take_from_committed, can the allocation of padding chunks be its own >>> function like add_chunks_to_aligment() lines 1574-1615? The function is too >>> long now. >>> >>> >> I moved the padding chunk allocation into an own function as you >> suggested. >> >> >>> I don't think coalescation is a word in English, at least my dictionary >>> cannot find it. Although it makes sense in the context, just distracting. >>> >>> >> I replaced "coalescation" with "chunk merging" throughout the code. Also >> less of a tongue breaker. >> >> >>> + // Now check if in the coalescation area there are still life chunks. >>> >>> >>> "live" chunks I guess. A sentence you won't read often :). >>> >> >> Now that I read it it almost sounded sinister :) Fixed. >> >> >>> >>> In free_chunks_get() can you handle the Humongous case first? The else >>> for humongous chunk size is buried tons of lines below. >>> >>> Otherwise it might be helpful to the logic to make your addition to this >>> function be a function you call like >>> chunk = split_from_larger_free_chunk(); >>> >> >> I did the latter. I moved the splitting of a larger chunk to an own >> function. This causes a slight logic change: the new function >> (ChunkManager::split_chunk()) splits an existing large free chunks into n >> smaller free chunks and adds them all back to the freelist - that includes >> the chunk we are about to return. That allows us to use the same exit path >> - which removes the chunk from the freelist and adjusts all counters - in >> the caller function "ChunkManager::free_chunks_get" instead of having to >> return in the middle of the function. >> >> To make the test more readable, I also remove the >> "test-that-free-chunks-are-optimally-merged" verification - which was >> quite lengthy - from VirtualSpaceNode::verify() to a new function, >> VirtualSpaceNode::verify_free_chunks_are_ideally_merged(). >> >> >>> You might want to keep the origin in product mode if it doesn't add to >>> the chunk footprint. Might help with customer debugging. >>> >>> >> See above >> >> >>> Awesome looking test... >>> >>> >> Thanks, I was worried it would be too complicated. >> I changed it a bit because there were sporadic errors. Not a "real" >> error, just the test itself was faulty. The "metaspaces_in_use" counter was >> slightly wrong in one corner case. >> >> >>> I've read through most of this and thank you for adding this to at least >>> partially solve the fragmentation problem. The irony is that we >>> templatized the Dictionary from CMS so that we could use it for Metaspace >>> and that has splitting and coalescing but it seems this code makes more >>> sense than adapting that code (if it's even possible). >>> >> >> Well, it helps other metadata use cases too, no. >> >> >>> >>> Thank you for working on this. I'll sponsor this for you. >>> >> Coleen >>> >>> >> >> Thanks again! >> >> I also updated my jdk-submit branch to include these latest changes; >> tests are still runnning. >> >> Kind Regards, Thomas >> >> >> >>> >>> On 2/26/18 9:20 AM, Thomas St?fe wrote: >>> >>>> Hi all, >>>> >>>> I know this patch is a bit larger, but may I please have reviews and/or >>>> other input? 
>>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >>>> Latest version: >>>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>>> ation/2018-02-26/webrev/ >>>> >>>> For those who followed the mail thread, this is the incremental diff to >>>> the >>>> last changes (included feedback Goetz gave me on- and off-list): >>>> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >>>> ation/2018-02-26/webrev-incr/webrev/ >>>> >>>> Thank you! >>>> >>>> Kind Regards, Thomas Stuefe >>>> >>>> >>>> >>>> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >>>> wrote: >>>> >>>> Hi, >>>>> >>>>> We would like to contribute a patch developed at SAP which has been >>>>> live >>>>> in our VM for some time. It improves the metaspace chunk allocation: >>>>> reduces fragmentation and raises the chance of reusing free metaspace >>>>> chunks. >>>>> >>>>> The patch: http://cr.openjdk.java.net/~st >>>>> uefe/webrevs/metaspace-coalesc >>>>> ation/2018-02-05--2/webrev/ >>>>> >>>>> In very short, this patch helps with a number of pathological cases >>>>> where >>>>> metaspace chunks are free but cannot be reused because they are of the >>>>> wrong size. For example, the metaspace freelist could be full of small >>>>> chunks, which would not be reusable if we need larger chunks. So, we >>>>> could >>>>> get metaspace OOMs even in situations where the metaspace was far from >>>>> exhausted. Our patch adds the ability to split and merge metaspace >>>>> chunks >>>>> dynamically and thus remove the "size-lock-in" problem. >>>>> >>>>> Note that there have been other attempts to get a grip on this problem, >>>>> see e.g. "SpaceManager::get_small_chunks_and_allocate()". But arguably >>>>> our patch attempts a more complete solution. >>>>> >>>>> In 2016 I discussed the idea for this patch with some folks off-list, >>>>> among them Jon Matsimutso. He then did advice me to create a JEP. So I >>>>> did: >>>>> [1]. However, meanwhile changes to the JEP process were discussed [2], >>>>> and >>>>> I am not sure anymore this patch needs even needs a JEP. It may be >>>>> moderately complex and hence carries the risk inherent in any patch, >>>>> but >>>>> its effects would not be externally visible (if you discount seeing >>>>> fewer >>>>> metaspace OOMs). So, I'd prefer to handle this as a simple RFE. >>>>> >>>>> -- >>>>> >>>>> How this patch works: >>>>> >>>>> 1) When a class loader dies, its metaspace chunks are freed and >>>>> returned >>>>> to the freelist for reuse by the next class loader. With the patch, >>>>> upon >>>>> returning a chunk to the freelist, an attempt is made to merge it with >>>>> its >>>>> neighboring chunks - should they happen to be free too - to form a >>>>> larger >>>>> chunk. Which then is placed in the free list. >>>>> >>>>> As a result, the freelist should be populated by larger chunks at the >>>>> expense of smaller chunks. In other words, all free chunks should >>>>> always be >>>>> as "coalesced as possible". >>>>> >>>>> 2) When a class loader needs a new chunk and a chunk of the requested >>>>> size >>>>> cannot be found in the free list, before carving out a new chunk from >>>>> the >>>>> virtual space, we first check if there is a larger chunk in the free >>>>> list. >>>>> If there is, that larger chunk is chopped up into n smaller chunks. >>>>> One of >>>>> them is returned to the callers, the others are re-added to the >>>>> freelist. >>>>> >>>>> (1) and (2) together have the effect of removing the size-lock-in for >>>>> chunks. 
If fragmentation allows it, small chunks are dynamically >>>>> combined >>>>> to form larger chunks, and larger chunks are split on demand. >>>>> >>>>> -- >>>>> >>>>> What this patch does not: >>>>> >>>>> This is not a rewrite of the chunk allocator - most of the mechanisms >>>>> stay >>>>> intact. Specifically, chunk sizes remain unchanged, and so do chunk >>>>> allocation processes (when do which class loaders get handed which >>>>> chunk >>>>> size). Almost everthing this patch does affects only internal workings >>>>> of >>>>> the ChunkManager. >>>>> >>>>> Also note that I refrained from doing any cleanups, since I wanted >>>>> reviewers to be able to gauge this patch without filtering noise. >>>>> Unfortunately this patch adds some complexity. But there are many >>>>> future >>>>> opportunities for code cleanup and simplification, some of which we >>>>> already >>>>> discussed in existing RFEs ([3], [4]). All of them are out of the >>>>> scope for >>>>> this particular patch. >>>>> >>>>> -- >>>>> >>>>> Details: >>>>> >>>>> Before the patch, the following rules held: >>>>> - All chunk sizes are multiples of the smallest chunk size >>>>> ("specialized >>>>> chunks") >>>>> - All chunk sizes of larger chunks are also clean multiples of the next >>>>> smaller chunk size (e.g. for class space, the ratio of >>>>> specialized/small/medium chunks is 1:2:32) >>>>> - All chunk start addresses are aligned to the smallest chunk size >>>>> (more >>>>> or less accidentally, see metaspace_reserve_alignment). >>>>> The patch makes the last rule explicit and more strict: >>>>> - All (non-humongous) chunk start addresses are now aligned to their >>>>> own >>>>> chunk size. So, e.g. medium chunks are allocated at addresses which >>>>> are a >>>>> multiple of medium chunk size. This rule is not extended to humongous >>>>> chunks, whose start addresses continue to be aligned to the smallest >>>>> chunk >>>>> size. >>>>> >>>>> The reason for this new alignment rule is that it makes it cheap both >>>>> to >>>>> find chunk predecessors of a chunk and to check which chunks are free. >>>>> >>>>> When a class loader dies and its chunk is returned to the freelist, >>>>> all we >>>>> have is its address. In order to merge it with its neighbors to form a >>>>> larger chunk, we need to find those neighbors, including those >>>>> preceding >>>>> the returned chunk. Prior to this patch that was not easy - one would >>>>> have >>>>> to iterate chunks starting at the beginning of the VirtualSpaceNode. >>>>> But >>>>> due to the new alignment rule, we now know where the prospective larger >>>>> chunk must start - at the next lower larger-chunk-size-aligned >>>>> boundary. We >>>>> also know that currently a smaller chunk must start there (*). >>>>> >>>>> In order to check the free-ness of chunks quickly, each >>>>> VirtualSpaceNode >>>>> now keeps a bitmap which describes its occupancy. One bit in this >>>>> bitmap >>>>> corresponds to a range the size of the smallest chunk size and >>>>> starting at >>>>> an address aligned to the smallest chunk size. Because of the alignment >>>>> rules above, such a range belongs to one single chunk. The bit is 1 if >>>>> the >>>>> associated chunk is in use by a class loader, 0 if it is free. >>>>> >>>>> When we have calculated the address range a prospective larger chunk >>>>> would >>>>> span, we now need to check if all chunks in that range are free. Only >>>>> then >>>>> we can merge them. We do that by querying the bitmap. 
Note that the >>>>> most >>>>> common use case here is forming medium chunks from smaller chunks. >>>>> With the >>>>> new alignment rules, the bitmap portion covering a medium chunk now >>>>> always >>>>> happens to be 16- or 32bit in size and is 16- or 32bit aligned, so >>>>> reading >>>>> the bitmap in many cases becomes a simple 16- or 32bit load. >>>>> >>>>> If the range is free, only then we need to iterate the chunks in that >>>>> range: pull them from the freelist, combine them to one new larger >>>>> chunk, >>>>> re-add that one to the freelist. >>>>> >>>>> (*) Humongous chunks make this a bit more complicated. Since the new >>>>> alignment rule does not extend to them, a humongous chunk could still >>>>> straddle the lower or upper boundary of the prospective larger chunk. >>>>> So I >>>>> gave the occupancy map a second layer, which is used to mark the start >>>>> of >>>>> chunks. >>>>> An alternative approach could have been to make humongous chunks size >>>>> and >>>>> start address always a multiple of the largest non-humongous chunk size >>>>> (medium chunks). That would have caused a bit of waste per humongous >>>>> chunk >>>>> (<64K) in exchange for simpler coding and a simpler occupancy map. >>>>> >>>>> -- >>>>> >>>>> The patch shows its best results in scenarios where a lot of smallish >>>>> class loaders are alive simultaneously. When dying, they leave >>>>> continuous >>>>> expanses of metaspace covered in small chunks, which can be merged >>>>> nicely. >>>>> However, if class loader life times vary more, we have more >>>>> interleaving of >>>>> dead and alive small chunks, and hence chunk merging does not work as >>>>> well >>>>> as it could. >>>>> >>>>> For an example of a pathological case like this see example program: >>>>> [5] >>>>> >>>>> Executed like this: "java -XX:CompressedClassSpaceSize=10M -cp test3 >>>>> test3.Example2" the test will load 3000 small classes in separate class >>>>> loaders, then throw them away and start loading large classes. The >>>>> small >>>>> classes will have flooded the metaspace with small chunks, which are >>>>> unusable for the large classes. When executing with the rather limited >>>>> CompressedClassSpaceSize=10M, we will run into an OOM after loading >>>>> about >>>>> 800 large classes, having used only 40% of the class space, the rest is >>>>> wasted to unused small chunks. However, with our patch the example >>>>> program >>>>> will manage to allocate ~2900 large classes before running into an >>>>> OOM, and >>>>> class space will show almost no waste. >>>>> >>>>> Do demonstrate this, add -Xlog:gc+metaspace+freelist. After running >>>>> into >>>>> an OOM, statistics and an ASCII representation of the class space will >>>>> be >>>>> shown. The unpatched version will show large expanses of unused small >>>>> chunks, the patched variant will show almost no waste. >>>>> >>>>> Note that the patch could be made more effective with a different size >>>>> ratio between small and medium chunks: in class space, that ratio is >>>>> 1:16, >>>>> so 16 small chunks must happen to be free to form one larger chunk. >>>>> With a >>>>> smaller ratio the chance for coalescation would be larger. So there >>>>> may be >>>>> room for future improvement here: Since we now can merge and split >>>>> chunks >>>>> on demand, we could introduce more chunk sizes. 
Potentially arriving >>>>> at a >>>>> buddy-ish allocator style where we drop hard-wired chunk sizes for a >>>>> dynamic model where the ratio between chunk sizes is always 1:2 and we >>>>> could in theory have no limit to the chunk size? But this is just a >>>>> thought >>>>> and well out of the scope of this patch. >>>>> >>>>> -- >>>>> >>>>> What does this patch cost (memory): >>>>> >>>>> - the occupancy bitmap adds 1 byte per 4K metaspace. >>>>> - MetaChunk headers get larger, since we add an enum and two bools >>>>> to it. >>>>> Depending on what the c++ compiler does with that, chunk headers grow >>>>> by >>>>> one or two MetaWords, reducing the payload size by that amount. >>>>> - The new alignment rules mean we may need to create padding chunks to >>>>> precede larger chunks. But since these padding chunks are added to the >>>>> freelist, they should be used up before the need for new padding chunks >>>>> arises. So, the maximally possible number of unused padding chunks >>>>> should >>>>> be limited by design to about 64K. >>>>> >>>>> The expectation is that the memory savings by this patch far outweighs >>>>> its >>>>> added memory costs. >>>>> >>>>> .. (performance): >>>>> >>>>> We did not see measurable drops in standard benchmarks raising over the >>>>> normal noise. I also measured times for a program which stresses >>>>> metaspace >>>>> chunk coalescation, with the same result. >>>>> >>>>> I am open to suggestions what else I should measure, and/or independent >>>>> measurements. >>>>> >>>>> -- >>>>> >>>>> Other details: >>>>> >>>>> I removed SpaceManager::get_small_chunk_and_allocate() to reduce >>>>> complexity somewhat, because it was made mostly obsolete by this patch: >>>>> since small chunks are combined to larger chunks upon return to the >>>>> freelist, in theory we should not have that many free small chunks >>>>> anymore >>>>> anyway. However, there may be still cases where we could benefit from >>>>> this >>>>> workaround, so I am asking your opinion on this one. >>>>> >>>>> About tests: There were two native tests - ChunkManagerReturnTest and >>>>> TestVirtualSpaceNode (the former was added by me last year) - which >>>>> did not >>>>> make much sense anymore, since they relied heavily on internal behavior >>>>> which was made unpredictable with this patch. >>>>> To make up for these lost tests, I added a new gtest which attempts to >>>>> stress the many combinations of allocation pattern but does so from a >>>>> layer >>>>> above the old tests. It now uses Metaspace::allocate() and friends. By >>>>> using that point as entry for tests, I am less dependent on >>>>> implementation >>>>> internals and still cover a lot of scenarios. >>>>> >>>>> -- >>>>> >>>>> Review pointers: >>>>> >>>>> Good points to start are >>>>> - ChunkManager::return_single_chunk() - specifically, >>>>> ChunkManager::attempt_to_coalesce_around_chunk() - here we merge >>>>> chunks >>>>> upon return to the free list >>>>> - ChunkManager::free_chunks_get(): Here we now split large chunks into >>>>> smaller chunks on demand >>>>> - VirtualSpaceNode::take_from_committed() : chunks are allocated >>>>> according to align rules now, padding chunks are handles >>>>> - The OccupancyMap class is the helper class implementing the new >>>>> occupancy bitmap >>>>> >>>>> The rest is mostly chaff: helper functions, added tests and >>>>> verifications. 
>>>>> >>>>> -- >>>>> >>>>> Thanks and Best Regards, Thomas >>>>> >>>>> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >>>>> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >>>>> /000128.html >>>>> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >>>>> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >>>>> [5] https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip >>>>> >>>>> >>>>> >>>>> >>> >> >> > > From vladimir.kozlov at oracle.com Tue Mar 6 17:30:53 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 6 Mar 2018 09:30:53 -0800 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <20180306082420.853500758@eggemoggin.niobe.net> References: <20180306082420.853500758@eggemoggin.niobe.net> Message-ID: <32eec7a9-e464-e89a-ac20-1f2b0f7386c0@oracle.com> Changes are good. I think we should push it into JDK 10. It reversed optimization done by JDK-8181211 in JDK 10 - it is regression in JDK 10. The fix prevents, for example, volatile loads float above membars - this is not aarch64 specific problem. I will update bug report later after testing finished. Thanks, Vladimir On 3/6/18 8:24 AM, mark.reinhold at oracle.com wrote: > 2018/3/6 4:23:07 -0800, adinn at redhat.com: >> Could someone please review the following patch to /shared code/ which >> fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: >> >> webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 >> JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 >> >> The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. >> >> I would like it to be considered for inclusion in jdk10 if at all >> possible because it patches a critical error in handling of volatile >> reads that may result in incorrect memory synchronization. > > Andrew -- thanks for the thorough analysis. > > So far Tobias and Roland have reviewed your change. I'm not qualified > to review it myself, so since it's very late in the game for JDK 10 I'd > like to see reviews from at least a couple more C2 committers before we > make a call on this. > > - Mark > From ekaterina.pavlova at oracle.com Tue Mar 6 21:45:35 2018 From: ekaterina.pavlova at oracle.com (Ekaterina Pavlova) Date: Tue, 6 Mar 2018 13:45:35 -0800 Subject: RFR(XS) [closed] : 8198924: java/lang/StackWalker/LocalsAndOperands.java timeouts with Graal Message-ID: Hi all, java/lang/StackWalker/LocalsAndOperands.java runs LocalsAndOperands 3 times with following set of jvm flags: 1) -Xint -DtestUnused=true 2) -Xcomp 3) -Xcomp -XX:-TieredCompilation When running with Graal as JIT (-XX:+TieredCompilation -XX:+UseJVMCICompiler -Djvmci.Compiler=graal) 3rd scenario could take more than 10 minutes and the test could fail by timeout. Actually running LocalsAndOperands with Graal and with "-Xcomp -XX:-TieredCompilation" doesn't provide big benefit and it would be reasonable to disable this scenario in case Graal is enabled. Please review the change. JBS: https://bugs.openjdk.java.net/browse/JDK-8198924 webrev: http://cr.openjdk.java.net/~epavlova//8198924/webrev.00/index.html testing: Tested by running the test in Graal as JIT mode. thanks, -katya p.s. Igor Ignatyev volunteered to sponsor this change. 
From mandy.chung at oracle.com Tue Mar 6 21:59:14 2018 From: mandy.chung at oracle.com (mandy chung) Date: Tue, 6 Mar 2018 13:59:14 -0800 Subject: RFR(XS) [closed] : 8198924: java/lang/StackWalker/LocalsAndOperands.java timeouts with Graal In-Reply-To: References: Message-ID: <571e551e-f40c-dbf6-0094-4a841fda42a3@oracle.com> Running #1 and #2 when Graal is enabled is fine. For the second @test, does it need @bug and @summary to run?? If not, I suggest to take it out as it's already mentioned in the first @test. Mandy On 3/6/18 1:45 PM, Ekaterina Pavlova wrote: > Hi all, > > java/lang/StackWalker/LocalsAndOperands.java runs LocalsAndOperands 3 > times with following set of jvm flags: > ?1) -Xint -DtestUnused=true > ?2) -Xcomp > ?3) -Xcomp -XX:-TieredCompilation > > When running with Graal as JIT (-XX:+TieredCompilation > -XX:+UseJVMCICompiler -Djvmci.Compiler=graal) > 3rd scenario could take more than 10 minutes and the test could fail > by timeout. > Actually running LocalsAndOperands with Graal and with "-Xcomp > -XX:-TieredCompilation" doesn't provide > big benefit and it would be reasonable to disable this scenario in > case Graal is enabled. > > Please review the change. > > ????? JBS: https://bugs.openjdk.java.net/browse/JDK-8198924 > ?? webrev: > http://cr.openjdk.java.net/~epavlova//8198924/webrev.00/index.html > > ? testing: Tested by running the test in Graal as JIT mode. > > thanks, > -katya > > p.s. > ?Igor Ignatyev volunteered to sponsor this change. From brent.christian at oracle.com Tue Mar 6 22:30:01 2018 From: brent.christian at oracle.com (Brent Christian) Date: Tue, 6 Mar 2018 14:30:01 -0800 Subject: RFR(XS) [closed] : 8198924: java/lang/StackWalker/LocalsAndOperands.java timeouts with Graal In-Reply-To: <571e551e-f40c-dbf6-0094-4a841fda42a3@oracle.com> References: <571e551e-f40c-dbf6-0094-4a841fda42a3@oracle.com> Message-ID: <4fcc3671-214a-accd-105b-9aeb83e6e62b@oracle.com> Looks good, Katya - thanks. I agree with omitting @bug and @summary from the second @test tag. Thanks, -Brent On 3/6/18 1:59 PM, mandy chung wrote: > Running #1 and #2 when Graal is enabled is fine. > > For the second @test, does it need @bug and @summary to run?? If not, I > suggest to take it out as it's already mentioned in the first @test. > > Mandy > > > On 3/6/18 1:45 PM, Ekaterina Pavlova wrote: >> Hi all, >> >> java/lang/StackWalker/LocalsAndOperands.java runs LocalsAndOperands 3 >> times with following set of jvm flags: >> ?1) -Xint -DtestUnused=true >> ?2) -Xcomp >> ?3) -Xcomp -XX:-TieredCompilation >> >> When running with Graal as JIT (-XX:+TieredCompilation >> -XX:+UseJVMCICompiler -Djvmci.Compiler=graal) >> 3rd scenario could take more than 10 minutes and the test could fail >> by timeout. >> Actually running LocalsAndOperands with Graal and with "-Xcomp >> -XX:-TieredCompilation" doesn't provide >> big benefit and it would be reasonable to disable this scenario in >> case Graal is enabled. >> >> Please review the change. >> >> ????? JBS: https://bugs.openjdk.java.net/browse/JDK-8198924 >> ?? webrev: >> http://cr.openjdk.java.net/~epavlova//8198924/webrev.00/index.html >> >> ? testing: Tested by running the test in Graal as JIT mode. >> >> thanks, >> -katya >> >> p.s. >> ?Igor Ignatyev volunteered to sponsor this change. 
> From john.r.rose at oracle.com Wed Mar 7 02:25:46 2018 From: john.r.rose at oracle.com (John Rose) Date: Tue, 6 Mar 2018 18:25:46 -0800 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <32eec7a9-e464-e89a-ac20-1f2b0f7386c0@oracle.com> References: <20180306082420.853500758@eggemoggin.niobe.net> <32eec7a9-e464-e89a-ac20-1f2b0f7386c0@oracle.com> Message-ID: <7D2F86CC-C3B9-4502-892C-407849989860@oracle.com> I agree with this reasoning. After looking at the old change and this new tweak, I also support pushing it now. Andrew can mark me a reviewer if he wants an even longer reviewed-by list. :-) ? John On Mar 6, 2018, at 9:30 AM, Vladimir Kozlov wrote: > > Changes are good. I think we should push it into JDK 10. > It reversed optimization done by JDK-8181211 in JDK 10 - it is regression in JDK 10. > > The fix prevents, for example, volatile loads float above membars - this is not aarch64 specific problem. > > I will update bug report later after testing finished. > > Thanks, > Vladimir > > On 3/6/18 8:24 AM, mark.reinhold at oracle.com wrote: >> 2018/3/6 4:23:07 -0800, adinn at redhat.com: >>> Could someone please review the following patch to /shared code/ which >>> fixes an AArch64 breakage that was inadvertently introduced by JDK-8181211: >>> >>> webrev: http://cr.openjdk.java.net/~adinn/8198950/webrev.00 >>> JIRA: https://bugs.openjdk.java.net/browse/JDK-8198950 >>> >>> The patch applies to jdk/hs. It also applies cleanly to jdk/jdk10. >>> >>> I would like it to be considered for inclusion in jdk10 if at all >>> possible because it patches a critical error in handling of volatile >>> reads that may result in incorrect memory synchronization. >> Andrew -- thanks for the thorough analysis. >> So far Tobias and Roland have reviewed your change. I'm not qualified >> to review it myself, so since it's very late in the game for JDK 10 I'd >> like to see reviews from at least a couple more C2 committers before we >> make a call on this. >> - Mark From martinrb at google.com Wed Mar 7 02:44:47 2018 From: martinrb at google.com (Martin Buchholz) Date: Tue, 6 Mar 2018 18:44:47 -0800 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <5A9DAE87.8020801@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> Message-ID: Thanks Ian and Sherman for the excellent presentation and memories of ancient efforts. Yes, Sherman, I still have vague memory that attempts to touch any implementation detail in this area was asking for trouble and someone would complain. I was happy to let you deal with those problems! There's a continual struggle in the industry to enable more checking at test time, and -Xcheck:jni does look like it should be possible to routinely turn on for running all tests. (Google tests run with a time limit, and so any low-level performance regression immediately causes test failures, for better or worse) Our problem reduces to accessing a primitive array slice from native code. The only way to get O(1) access is via GetPrimitiveArrayCritical, BUT when it fails you have to pay for a copy of the entire array. An obvious solution is to introduce a slice variant GetPrimitiveArrayRegionCritical that would only degrade to a copy of the slice. 
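To make that concrete, here is roughly what the contract could look like, emulated in user code with JNI calls that do exist today (GetPrimitiveArrayRegionCritical itself is hypothetical and not part of the spec; error and exception handling omitted):

#include <jni.h>
#include <stdlib.h>

// Hypothetical slice access: O(1) when critical access works, otherwise a
// copy of just the requested slice rather than of the whole array.
struct ByteSlice {
  jbyte*   ptr;        // points into the array (critical case) or into a heap copy
  jboolean critical;
};

static ByteSlice acquire_byte_slice(JNIEnv* env, jbyteArray arr, jsize off, jsize len) {
  ByteSlice s = { NULL, JNI_FALSE };
  void* base = env->GetPrimitiveArrayCritical(arr, NULL);
  if (base != NULL) {
    s.ptr = (jbyte*)base + off;              // no copy at all
    s.critical = JNI_TRUE;
  } else {
    s.ptr = (jbyte*)malloc((size_t)len);     // degrade to a copy of the slice only
    if (s.ptr != NULL) {
      env->GetByteArrayRegion(arr, off, len, s.ptr);
    }
  }
  return s;
}

static void release_byte_slice(JNIEnv* env, jbyteArray arr, jsize off, jsize len,
                               ByteSlice s, bool write_back) {
  if (s.critical) {
    env->ReleasePrimitiveArrayCritical(arr, s.ptr - off, write_back ? 0 : JNI_ABORT);
  } else if (s.ptr != NULL) {
    if (write_back) {
      env->SetByteArrayRegion(arr, off, len, s.ptr);   // copy only the slice back
    }
    free(s.ptr);
  }
}

A JVM-side GetPrimitiveArrayRegionCritical could presumably do much the same internally, which would keep the cost of the copying fallback (and of -Xcheck:jni) proportional to the slice instead of to the whole array.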
Offhand that seems relatively easy to implement though we would hold our noses at adding yet more *Critical* functions to the JNI spec. In spirit though it's a straightforward generalization. Implementing Deflater in pure Java seems very reasonable and we've had good success with "nearby" code, but we likely cannot reuse the GNU Classpath code. Thanks for pointing out JDK-6311046: -Xcheck:jni should support checking of GetPrimitiveArrayCritical which went into jdk8 in u40. We can probably be smarter about choosing a better buffer size, e.g. in ZipOutputStream. Here's an idea: In code like this try (DeflaterOutputStream dout = new DeflaterOutputStream(deflated)) { dout.write(inflated, 0, inflated.length); } when the DeflaterOutputStream is given an input that is clearly too large for the current buffer size, reorganize internals dynamically to use a much bigger buffer size. It's possible (but hard work!) to adjust algorithms based on whether critical array access is available. It would be nice if we could get the JVM to tell us (but it might depend, e.g. on the size of the array). From mark.reinhold at oracle.com Wed Mar 7 04:16:54 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Tue, 06 Mar 2018 20:16:54 -0800 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: References: <20180306082420.853500758@eggemoggin.niobe.net> Message-ID: <20180306201654.420182025@eggemoggin.niobe.net> 2018/3/6 9:01:54 -0800, Andrew Dinn : > On 06/03/18 16:24, mark.reinhold at oracle.com wrote: >> ... >> >> So far Tobias and Roland have reviewed your change. I'm not qualified >> to review it myself, so since it's very late in the game for JDK 10 I'd >> like to see reviews from at least a couple more C2 committers before we >> make a call on this. > > Sure, that's an understandably cautious reaction. Perhaps a Vladimir (or > two :-) might be able to take a look? With additional reviews from the two Vladimirs, and now also John, I think this is looking pretty good. We're running some additional (Oracle internal) tests overnight, just to be paranoid, and should have results in the morning PST, but I don't expect any surprises. Stay tuned ... - Mark From adinn at redhat.com Wed Mar 7 09:25:34 2018 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 7 Mar 2018 09:25:34 +0000 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <7D2F86CC-C3B9-4502-892C-407849989860@oracle.com> References: <20180306082420.853500758@eggemoggin.niobe.net> <32eec7a9-e464-e89a-ac20-1f2b0f7386c0@oracle.com> <7D2F86CC-C3B9-4502-892C-407849989860@oracle.com> Message-ID: <6242415a-83af-1dce-8779-d40954a4d056@redhat.com> On 07/03/18 02:25, John Rose wrote: > I agree with this reasoning. After looking at the old change and this > new tweak, I also support pushing it now. Andrew can mark me a > reviewer if he wants an even longer reviewed-by list. :-) An honour it would be churlish to reject :-) Thanks for the review. Also to both Vladimirs. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From shade at redhat.com Wed Mar 7 09:47:33 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 7 Mar 2018 10:47:33 +0100 Subject: RFR: 8199219: Build failures after JDK-8195148 Message-ID: <62c9699c-6c74-3ad7-49db-50a62a4acfee@redhat.com> x86_32 build fails with: src/hotspot/cpu/x86/stubGenerator_x86_32.cpp:708:2: error: #endif without #if #endif // INCLUDE_ALL_GCS ^~~~~ Bug: https://bugs.openjdk.java.net/browse/JDK-8199219 JDK-8195148 (G1 barrier set collapsing) change removed #if INCLUDE_ALL_GCS from stubGenerator_x86_32, but not in other arches: http://hg.openjdk.java.net/jdk/hs/rev/edb65305d3ac#l36.1 Reinstating it makes the x86_32 build pass: diff -r 0b48f0aa79ec src/hotspot/cpu/x86/stubGenerator_x86_32.cpp --- a/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp Tue Mar 06 22:08:30 2018 -0800 +++ b/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp Wed Mar 07 10:45:45 2018 +0100 @@ -678,6 +678,7 @@ assert_different_registers(start, count); BarrierSet* bs = Universe::heap()->barrier_set(); switch (bs->kind()) { +#if INCLUDE_ALL_GCS case BarrierSet::G1BarrierSet: // With G1, don't generate the call if we statically know that the target in uninitialized if (!uninitialized_target) { @@ -727,6 +728,7 @@ BarrierSet* bs = Universe::heap()->barrier_set(); assert_different_registers(start, count); switch (bs->kind()) { +#if INCLUDE_ALL_GCS case BarrierSet::G1BarrierSet: { __ pusha(); // push registers Testing: x86_32 build Thanks, -Aleksey From david.holmes at oracle.com Wed Mar 7 09:55:19 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 7 Mar 2018 19:55:19 +1000 Subject: RFR: 8199219: Build failures after JDK-8195148 In-Reply-To: <62c9699c-6c74-3ad7-49db-50a62a4acfee@redhat.com> References: <62c9699c-6c74-3ad7-49db-50a62a4acfee@redhat.com> Message-ID: <178eb100-9e29-0064-333f-4e98a34ce66a@oracle.com> Reviewed! I think this counts as trivial and can be pushed immediately. 
Thanks, David On 7/03/2018 7:47 PM, Aleksey Shipilev wrote: > x86_32 build fails with: > > src/hotspot/cpu/x86/stubGenerator_x86_32.cpp:708:2: error: #endif without #if > #endif // INCLUDE_ALL_GCS > ^~~~~ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199219 > > JDK-8195148 (G1 barrier set collapsing) change removed #if INCLUDE_ALL_GCS from > stubGenerator_x86_32, but not in other arches: > http://hg.openjdk.java.net/jdk/hs/rev/edb65305d3ac#l36.1 > > Reinstating it makes the x86_32 build pass: > > diff -r 0b48f0aa79ec src/hotspot/cpu/x86/stubGenerator_x86_32.cpp > --- a/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp Tue Mar 06 22:08:30 2018 -0800 > +++ b/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp Wed Mar 07 10:45:45 2018 +0100 > @@ -678,6 +678,7 @@ > assert_different_registers(start, count); > BarrierSet* bs = Universe::heap()->barrier_set(); > switch (bs->kind()) { > +#if INCLUDE_ALL_GCS > case BarrierSet::G1BarrierSet: > // With G1, don't generate the call if we statically know that the target in uninitialized > if (!uninitialized_target) { > @@ -727,6 +728,7 @@ > BarrierSet* bs = Universe::heap()->barrier_set(); > assert_different_registers(start, count); > switch (bs->kind()) { > +#if INCLUDE_ALL_GCS > case BarrierSet::G1BarrierSet: > { > __ pusha(); // push registers > > Testing: x86_32 build > > Thanks, > -Aleksey > From shade at redhat.com Wed Mar 7 10:03:00 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 7 Mar 2018 11:03:00 +0100 Subject: RFR: 8199219: Build failures after JDK-8195148 In-Reply-To: <178eb100-9e29-0064-333f-4e98a34ce66a@oracle.com> References: <62c9699c-6c74-3ad7-49db-50a62a4acfee@redhat.com> <178eb100-9e29-0064-333f-4e98a34ce66a@oracle.com> Message-ID: Thanks, pushed to jdk/hs: http://hg.openjdk.java.net/jdk/hs/rev/5f487b498e78 -Aleksey On 03/07/2018 10:55 AM, David Holmes wrote: > Reviewed! > > I think this counts as trivial and can be pushed immediately. > > Thanks, > David > > On 7/03/2018 7:47 PM, Aleksey Shipilev wrote: >> x86_32 build fails with: >> >> src/hotspot/cpu/x86/stubGenerator_x86_32.cpp:708:2: error: #endif without #if >> ? #endif // INCLUDE_ALL_GCS >> ?? ^~~~~ >> >> Bug: >> ? https://bugs.openjdk.java.net/browse/JDK-8199219 >> >> JDK-8195148 (G1 barrier set collapsing) change removed #if INCLUDE_ALL_GCS from >> stubGenerator_x86_32, but not in other arches: >> ? http://hg.openjdk.java.net/jdk/hs/rev/edb65305d3ac#l36.1 >> >> Reinstating it makes the x86_32 build pass: >> >> diff -r 0b48f0aa79ec src/hotspot/cpu/x86/stubGenerator_x86_32.cpp >> --- a/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp??? Tue Mar 06 22:08:30 2018 -0800 >> +++ b/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp??? Wed Mar 07 10:45:45 2018 +0100 >> @@ -678,6 +678,7 @@ >> ????? assert_different_registers(start, count); >> ????? BarrierSet* bs = Universe::heap()->barrier_set(); >> ????? switch (bs->kind()) { >> +#if INCLUDE_ALL_GCS??? >> ??????? case BarrierSet::G1BarrierSet: >> ????????? // With G1, don't generate the call if we statically know that the target in uninitialized >> ????????? if (!uninitialized_target) { >> @@ -727,6 +728,7 @@ >> ????? BarrierSet* bs = Universe::heap()->barrier_set(); >> ????? assert_different_registers(start, count); >> ????? switch (bs->kind()) { >> +#if INCLUDE_ALL_GCS??? >> ??????? case BarrierSet::G1BarrierSet: >> ????????? { >> ??????????? __ pusha();????????????????????? 
// push registers >> >> Testing: x86_32 build >> >> Thanks, >> -Aleksey >> From shafi.s.ahmad at oracle.com Wed Mar 7 10:08:33 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Wed, 7 Mar 2018 02:08:33 -0800 (PST) Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print if we have seen any OutOfMemoryErrors or StackOverflowErrors In-Reply-To: <922c9f2b-1550-e7de-2039-0cfd0f439eae@oracle.com> References: <84e4010f-e1ed-4940-ad24-5e7fc1667899@default> <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> <922c9f2b-1550-e7de-2039-0cfd0f439eae@oracle.com> Message-ID: <25862753-417c-4865-9736-ebd79f9679ff@default> Hi David, Thank you for the review. Please find below updated webrev. http://cr.openjdk.java.net/~shshahma/8026331/webrev.01 Regards, Shafi > -----Original Message----- > From: David Holmes > Sent: Tuesday, March 06, 2018 6:09 PM > To: Shafi Ahmad ; hotspot- > dev at openjdk.java.net > Cc: Christian Tornqvist ; Coleen Phillimore > > Subject: Re: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print > if we have seen any OutOfMemoryErrors or StackOverflowErrors > > Hi Shafi, > > This seems like an accurate backport of the error reporting enhancements. > > Copyright years need updating to 2018. > > Thanks, > David > > On 6/03/2018 6:55 PM, Shafi Ahmad wrote: > > Hi All, > > > > Could someone please review it. > > > > Regards, > > Shafi > > > > > >> -----Original Message----- > >> From: Shafi Ahmad > >> Sent: Tuesday, February 06, 2018 11:27 AM > >> To: hotspot-dev at openjdk.java.net > >> Cc: Stephen Fitch > >> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err > >> improvement: Print if we have seen any OutOfMemoryErrors or > >> StackOverflowErrors > >> > >> Hi, > >> > >> Could someone please review it. > >> > >> Regards, > >> Shafi > >> > >>> -----Original Message----- > >>> From: Shafi Ahmad > >>> Sent: Monday, January 29, 2018 10:16 AM > >>> To: hotspot-dev at openjdk.java.net > >>> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: > >>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors > >>> > >>> 2nd try... > >>> > >>> Regards, > >>> Shafi > >>> > >>>> -----Original Message----- > >>>> From: Shafi Ahmad > >>>> Sent: Wednesday, January 24, 2018 3:16 PM > >>>> To: hotspot-dev at openjdk.java.net > >>>> Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: > >>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors > >>>> > >>>> Hi, > >>>> > >>>> Please review the backport of bug: " JDK-8026331: hs_err > improvement: > >>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors" > >>>> to > >>>> jdk8u- dev. 
> >>>> > >>>> Please note that this is not a clean backport as I got below > >>>> conflicts > >>>> - > >>>> > >>>> hotspot$ find ./ -name "*.rej" -exec cat {} \; > >>>> --- metaspace.cpp > >>>> +++ metaspace.cpp > >>>> @@ -3132,10 +3132,21 @@ > >>>> initialize_class_space(metaspace_rs); > >>>> > >>>> if (PrintCompressedOopsMode || (PrintMiscellaneous && Verbose)) > { > >>>> - gclog_or_tty->print_cr("Narrow klass base: " PTR_FORMAT ", > Narrow > >>>> klass shift: %d", > >>>> - p2i(Universe::narrow_klass_base()), > >>>> Universe::narrow_klass_shift()); > >>>> - gclog_or_tty->print_cr("Compressed class space size: " > SIZE_FORMAT > >> " > >>>> Address: " PTR_FORMAT " Req Addr: " PTR_FORMAT, > >>>> - compressed_class_space_size(), > >> p2i(metaspace_rs.base()), > >>>> p2i(requested_addr)); > >>>> + print_compressed_class_space(gclog_or_tty, requested_addr); > >>>> + } > >>>> +} > >>>> + > >>>> +void Metaspace::print_compressed_class_space(outputStream* st, > >>>> +const > >>>> char* requested_addr) { > >>>> + st->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow klass > shift: > >>> %d", > >>>> + p2i(Universe::narrow_klass_base()), > >>>> Universe::narrow_klass_shift()); > >>>> + if (_class_space_list != NULL) { > >>>> + address base = > >>>> + (address)_class_space_list->current_virtual_space()- > >>>>> bottom(); > >>>> + st->print("Compressed class space size: " SIZE_FORMAT " Address: " > >>>> PTR_FORMAT, > >>>> + compressed_class_space_size(), p2i(base)); > >>>> + if (requested_addr != 0) { > >>>> + st->print(" Req Addr: " PTR_FORMAT, p2i(requested_addr)); > >>>> + } > >>>> + st->cr(); > >>>> } > >>>> } > >>>> > >>>> --- universe.cpp > >>>> +++ universe.cpp > >>>> @@ -781,27 +781,24 @@ > >>>> return JNI_OK; > >>>> } > >>>> > >>>> -void Universe::print_compressed_oops_mode() { > >>>> - tty->cr(); > >>>> - tty->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " > >>>> MB", > >>>> +void Universe::print_compressed_oops_mode(outputStream* st) { > >>>> + st->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " > >>>> +MB", > >>>> p2i(Universe::heap()->base()), Universe::heap()- > >>>>> reserved_region().byte_size()/M); > >>>> > >>>> - tty->print(", Compressed Oops mode: %s", > >>>> narrow_oop_mode_to_string(narrow_oop_mode())); > >>>> + st->print(", Compressed Oops mode: %s", > >>>> narrow_oop_mode_to_string(narrow_oop_mode())); > >>>> > >>>> if (Universe::narrow_oop_base() != 0) { > >>>> - tty->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > >>>> + st->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); > >>>> } > >>>> > >>>> if (Universe::narrow_oop_shift() != 0) { > >>>> - tty->print(", Oop shift amount: %d", Universe::narrow_oop_shift()); > >>>> + st->print(", Oop shift amount: %d", > >>>> + Universe::narrow_oop_shift()); > >>>> } > >>>> > >>>> if (!Universe::narrow_oop_use_implicit_null_checks()) { > >>>> - tty->print(", no protected page in front of the heap"); > >>>> + st->print(", no protected page in front of the heap"); > >>>> } > >>>> - > >>>> - tty->cr(); > >>>> - tty->cr(); > >>>> + st->cr(); > >>>> } > >>>> > >>>> Webrev: http://cr.openjdk.java.net/~shshahma/8026331/webrev.00/ > >>>> Jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8026331 > >>>> Original patch pushed to jdk9: > >>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cf5a0377f578 > >>>> > >>>> Test: Run jprt -testset hotspot and jtreg - hotspot/test > >>>> > >>>> Regards, > >>>> Shafi From david.holmes at oracle.com Wed Mar 7 10:14:04 2018 From: david.holmes at oracle.com (David 
Holmes) Date: Wed, 7 Mar 2018 20:14:04 +1000 Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print if we have seen any OutOfMemoryErrors or StackOverflowErrors In-Reply-To: <25862753-417c-4865-9736-ebd79f9679ff@default> References: <84e4010f-e1ed-4940-ad24-5e7fc1667899@default> <1e6b48f9-9d19-40f4-aae0-61ffa4d51800@default> <1bf9373c-ab90-46a1-a499-9bc0a7f01a86@default> <922c9f2b-1550-e7de-2039-0cfd0f439eae@oracle.com> <25862753-417c-4865-9736-ebd79f9679ff@default> Message-ID: <6216e0e5-7593-a4bf-6dff-c62f4a7a5bea@oracle.com> On 7/03/2018 8:08 PM, Shafi Ahmad wrote: > Hi David, > > Thank you for the review. > > Please find below updated webrev. > http://cr.openjdk.java.net/~shshahma/8026331/webrev.01 Thanks. I didn't need an updated webrev for the copyright updates. :) David > > Regards, > Shafi > > >> -----Original Message----- >> From: David Holmes >> Sent: Tuesday, March 06, 2018 6:09 PM >> To: Shafi Ahmad ; hotspot- >> dev at openjdk.java.net >> Cc: Christian Tornqvist ; Coleen Phillimore >> >> Subject: Re: [8u] RFR for backport of JDK-8026331: hs_err improvement: Print >> if we have seen any OutOfMemoryErrors or StackOverflowErrors >> >> Hi Shafi, >> >> This seems like an accurate backport of the error reporting enhancements. >> >> Copyright years need updating to 2018. >> >> Thanks, >> David >> >> On 6/03/2018 6:55 PM, Shafi Ahmad wrote: >>> Hi All, >>> >>> Could someone please review it. >>> >>> Regards, >>> Shafi >>> >>> >>>> -----Original Message----- >>>> From: Shafi Ahmad >>>> Sent: Tuesday, February 06, 2018 11:27 AM >>>> To: hotspot-dev at openjdk.java.net >>>> Cc: Stephen Fitch >>>> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err >>>> improvement: Print if we have seen any OutOfMemoryErrors or >>>> StackOverflowErrors >>>> >>>> Hi, >>>> >>>> Could someone please review it. >>>> >>>> Regards, >>>> Shafi >>>> >>>>> -----Original Message----- >>>>> From: Shafi Ahmad >>>>> Sent: Monday, January 29, 2018 10:16 AM >>>>> To: hotspot-dev at openjdk.java.net >>>>> Subject: RE: [8u] RFR for backport of JDK-8026331: hs_err improvement: >>>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors >>>>> >>>>> 2nd try... >>>>> >>>>> Regards, >>>>> Shafi >>>>> >>>>>> -----Original Message----- >>>>>> From: Shafi Ahmad >>>>>> Sent: Wednesday, January 24, 2018 3:16 PM >>>>>> To: hotspot-dev at openjdk.java.net >>>>>> Subject: [8u] RFR for backport of JDK-8026331: hs_err improvement: >>>>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors >>>>>> >>>>>> Hi, >>>>>> >>>>>> Please review the backport of bug: " JDK-8026331: hs_err >> improvement: >>>>>> Print if we have seen any OutOfMemoryErrors or StackOverflowErrors" >>>>>> to >>>>>> jdk8u- dev. 
>>>>>> >>>>>> Please note that this is not a clean backport as I got below >>>>>> conflicts >>>>>> - >>>>>> >>>>>> hotspot$ find ./ -name "*.rej" -exec cat {} \; >>>>>> --- metaspace.cpp >>>>>> +++ metaspace.cpp >>>>>> @@ -3132,10 +3132,21 @@ >>>>>> initialize_class_space(metaspace_rs); >>>>>> >>>>>> if (PrintCompressedOopsMode || (PrintMiscellaneous && Verbose)) >> { >>>>>> - gclog_or_tty->print_cr("Narrow klass base: " PTR_FORMAT ", >> Narrow >>>>>> klass shift: %d", >>>>>> - p2i(Universe::narrow_klass_base()), >>>>>> Universe::narrow_klass_shift()); >>>>>> - gclog_or_tty->print_cr("Compressed class space size: " >> SIZE_FORMAT >>>> " >>>>>> Address: " PTR_FORMAT " Req Addr: " PTR_FORMAT, >>>>>> - compressed_class_space_size(), >>>> p2i(metaspace_rs.base()), >>>>>> p2i(requested_addr)); >>>>>> + print_compressed_class_space(gclog_or_tty, requested_addr); >>>>>> + } >>>>>> +} >>>>>> + >>>>>> +void Metaspace::print_compressed_class_space(outputStream* st, >>>>>> +const >>>>>> char* requested_addr) { >>>>>> + st->print_cr("Narrow klass base: " PTR_FORMAT ", Narrow klass >> shift: >>>>> %d", >>>>>> + p2i(Universe::narrow_klass_base()), >>>>>> Universe::narrow_klass_shift()); >>>>>> + if (_class_space_list != NULL) { >>>>>> + address base = >>>>>> + (address)_class_space_list->current_virtual_space()- >>>>>>> bottom(); >>>>>> + st->print("Compressed class space size: " SIZE_FORMAT " Address: " >>>>>> PTR_FORMAT, >>>>>> + compressed_class_space_size(), p2i(base)); >>>>>> + if (requested_addr != 0) { >>>>>> + st->print(" Req Addr: " PTR_FORMAT, p2i(requested_addr)); >>>>>> + } >>>>>> + st->cr(); >>>>>> } >>>>>> } >>>>>> >>>>>> --- universe.cpp >>>>>> +++ universe.cpp >>>>>> @@ -781,27 +781,24 @@ >>>>>> return JNI_OK; >>>>>> } >>>>>> >>>>>> -void Universe::print_compressed_oops_mode() { >>>>>> - tty->cr(); >>>>>> - tty->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " >>>>>> MB", >>>>>> +void Universe::print_compressed_oops_mode(outputStream* st) { >>>>>> + st->print("heap address: " PTR_FORMAT ", size: " SIZE_FORMAT " >>>>>> +MB", >>>>>> p2i(Universe::heap()->base()), Universe::heap()- >>>>>>> reserved_region().byte_size()/M); >>>>>> >>>>>> - tty->print(", Compressed Oops mode: %s", >>>>>> narrow_oop_mode_to_string(narrow_oop_mode())); >>>>>> + st->print(", Compressed Oops mode: %s", >>>>>> narrow_oop_mode_to_string(narrow_oop_mode())); >>>>>> >>>>>> if (Universe::narrow_oop_base() != 0) { >>>>>> - tty->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); >>>>>> + st->print(": " PTR_FORMAT, p2i(Universe::narrow_oop_base())); >>>>>> } >>>>>> >>>>>> if (Universe::narrow_oop_shift() != 0) { >>>>>> - tty->print(", Oop shift amount: %d", Universe::narrow_oop_shift()); >>>>>> + st->print(", Oop shift amount: %d", >>>>>> + Universe::narrow_oop_shift()); >>>>>> } >>>>>> >>>>>> if (!Universe::narrow_oop_use_implicit_null_checks()) { >>>>>> - tty->print(", no protected page in front of the heap"); >>>>>> + st->print(", no protected page in front of the heap"); >>>>>> } >>>>>> - >>>>>> - tty->cr(); >>>>>> - tty->cr(); >>>>>> + st->cr(); >>>>>> } >>>>>> >>>>>> Webrev: http://cr.openjdk.java.net/~shshahma/8026331/webrev.00/ >>>>>> Jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8026331 >>>>>> Original patch pushed to jdk9: >>>>>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cf5a0377f578 >>>>>> >>>>>> Test: Run jprt -testset hotspot and jtreg - hotspot/test >>>>>> >>>>>> Regards, >>>>>> Shafi From edward.nevill at gmail.com Wed Mar 7 11:40:53 2018 From: edward.nevill at gmail.com 
(Edward Nevill) Date: Wed, 07 Mar 2018 11:40:53 +0000 Subject: RFR: 8199220: Zero build broken Message-ID: <1520422853.24302.6.camel@gmail.com> Hi, Please review the following webrev which fixes broken zero build. Bugid: https://bugs.openjdk.java.net/browse/JDK-8199220 Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.00 As this involves changes to shared hotspot code I will need a sponsor. Thanks for your help, Eed. ----------------------------------------------------------------- The following symbol is reported as undefined in the Zero build ReduceInitialCardMarks This is a C2/JVMCI only symbol but is referenced unconditionally in src/hotspot/share/gc/shared/cardTableModRefBS.cpp void CardTableModRefBS::on_slowpath_allocation_exit(JavaThread* thread, oop new_obj) { if (!ReduceInitialCardMarks) { return; } A second build error is /home/ed/openjdk/jdk/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:1748: undefined reference to `typeArrayOopDesc::byte_at_put(int, signed char)' This is caused by bytecodeInterpreter.cpp not including typeArrayOop.inline.hpp From glaubitz at physik.fu-berlin.de Wed Mar 7 12:06:17 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Mar 2018 13:06:17 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: <1520422853.24302.6.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> Message-ID: <72dfbd87-1409-0239-ba3e-ccd742f47a1b@physik.fu-berlin.de> Hi Edward! On 03/07/2018 12:40 PM, Edward Nevill wrote: > Please review the following webrev which fixes broken zero build. Thanks a lot for taking care of Zero. I don't have much time at the moment to work on OpenJDK as I am too busy with other stuff. I will however eventually return. If no one else steps up, I will happily sponsor your change. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Wed Mar 7 12:50:38 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 7 Mar 2018 13:50:38 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: <1520422853.24302.6.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> Message-ID: Looks reasonable. Thank you for fixing this. Was zero really broken for two months? Thanks, Thomas (Reviewer but not from Oracle, so I cannot sponsor) On Wed, Mar 7, 2018 at 12:40 PM, Edward Nevill wrote: > Hi, > > Please review the following webrev which fixes broken zero build. > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199220 > Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.00 > > As this involves changes to shared hotspot code I will need a sponsor. > > Thanks for your help, > Eed. 
> > ----------------------------------------------------------------- > The following symbol is reported as undefined in the Zero build > > ReduceInitialCardMarks > > This is a C2/JVMCI only symbol but is referenced unconditionally in > > src/hotspot/share/gc/shared/cardTableModRefBS.cpp > > void CardTableModRefBS::on_slowpath_allocation_exit(JavaThread* thread, > oop new_obj) { > if (!ReduceInitialCardMarks) { > return; > } > > A second build error is > > /home/ed/openjdk/jdk/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:1748: > undefined reference to `typeArrayOopDesc::byte_at_put(int, signed char)' > > This is caused by bytecodeInterpreter.cpp not including > typeArrayOop.inline.hpp > > From glaubitz at physik.fu-berlin.de Wed Mar 7 12:55:37 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Mar 2018 13:55:37 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: On 03/07/2018 01:50 PM, Thomas St?fe wrote: > Looks reasonable. Thank you for fixing this. > > Was zero really broken for two months? Btw, it might be a good idea to include in the summary which change broke the build. Should be easy to find using "hg blame". Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Wed Mar 7 12:56:12 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 7 Mar 2018 13:56:12 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: On Wed, Mar 7, 2018 at 1:55 PM, John Paul Adrian Glaubitz < glaubitz at physik.fu-berlin.de> wrote: > On 03/07/2018 01:50 PM, Thomas St?fe wrote: > >> Looks reasonable. Thank you for fixing this. >> >> Was zero really broken for two months? >> > Btw, it might be a good idea to include in the summary which change broke > the build. Should be easy to find using "hg blame". > > > Sorry! changeset: 48885:120b61d50f85 user: eosterlund date: Wed Jan 10 22:48:27 2018 +0100 summary: 8195103: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy Reviewed-by: kbarrett, tschatzl > Adrian > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 > From glaubitz at physik.fu-berlin.de Wed Mar 7 12:57:56 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 7 Mar 2018 13:57:56 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: On 03/07/2018 01:56 PM, Thomas St?fe wrote: > Sorry! > > changeset:? ?48885:120b61d50f85 > user:? ? ? ? eosterlund > date:? ? ? ? Wed Jan 10 22:48:27 2018 +0100 > summary:? ? ?8195103: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy > Reviewed-by: kbarrett, tschatzl Thanks, Thomas for digging that up so quickly! @Edward: Would you mind changing your summary to "8199220: Zero build broken after 8195103" both in the bug tracker and the changeset. Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From david.holmes at oracle.com Wed Mar 7 13:17:13 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 7 Mar 2018 23:17:13 +1000 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: <5accc3da-07a8-7846-3deb-32d79c94eeae@oracle.com> On 7/03/2018 10:56 PM, Thomas St?fe wrote: > On Wed, Mar 7, 2018 at 1:55 PM, John Paul Adrian Glaubitz < > glaubitz at physik.fu-berlin.de> wrote: > >> On 03/07/2018 01:50 PM, Thomas St?fe wrote: >> >>> Looks reasonable. Thank you for fixing this. >>> >>> Was zero really broken for two months? >>> >> Btw, it might be a good idea to include in the summary which change broke >> the build. Should be easy to find using "hg blame". >> >> >> > Sorry! > > changeset: 48885:120b61d50f85 > user: eosterlund > date: Wed Jan 10 22:48:27 2018 +0100 That date is misleading. It is when the changeset was created not pushed. The push was much later: URL: http://hg.openjdk.java.net/jdk/hs/rev/120b61d50f85 User: eosterlund Date: 2018-02-13 13:14:12 +0000 David ----- > summary: 8195103: Refactor out card table from CardTableModRefBS to > flatten the BarrierSet hierarchy > Reviewed-by: kbarrett, tschatzl > > > > >> Adrian >> >> -- >> .''`. John Paul Adrian Glaubitz >> : :' : Debian Developer - glaubitz at debian.org >> `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de >> `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 >> From thomas.stuefe at gmail.com Wed Mar 7 13:20:05 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 7 Mar 2018 14:20:05 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: <5accc3da-07a8-7846-3deb-32d79c94eeae@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5accc3da-07a8-7846-3deb-32d79c94eeae@oracle.com> Message-ID: On Wed, Mar 7, 2018 at 2:17 PM, David Holmes wrote: > On 7/03/2018 10:56 PM, Thomas St?fe wrote: > >> On Wed, Mar 7, 2018 at 1:55 PM, John Paul Adrian Glaubitz < >> glaubitz at physik.fu-berlin.de> wrote: >> >> On 03/07/2018 01:50 PM, Thomas St?fe wrote: >>> >>> Looks reasonable. Thank you for fixing this. >>>> >>>> Was zero really broken for two months? >>>> >>>> Btw, it might be a good idea to include in the summary which change >>> broke >>> the build. Should be easy to find using "hg blame". >>> >>> >>> >>> Sorry! >> >> changeset: 48885:120b61d50f85 >> user: eosterlund >> date: Wed Jan 10 22:48:27 2018 +0100 >> > > That date is misleading. It is when the changeset was created not pushed. > The push was much later: > > URL: http://hg.openjdk.java.net/jdk/hs/rev/120b61d50f85 > User: eosterlund > Date: 2018-02-13 13:14:12 +0000 > > David > ----- > > Ah. That is not as bad. ..thomas > > summary: 8195103: Refactor out card table from CardTableModRefBS to >> flatten the BarrierSet hierarchy >> Reviewed-by: kbarrett, tschatzl >> >> >> >> >> Adrian >>> >>> -- >>> .''`. John Paul Adrian Glaubitz >>> : :' : Debian Developer - glaubitz at debian.org >>> `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de >>> `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 >>> >>> From david.holmes at oracle.com Wed Mar 7 13:25:52 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 7 Mar 2018 23:25:52 +1000 Subject: RFR: 8199220: Zero build broken In-Reply-To: <1520422853.24302.6.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> Message-ID: Hi Ed, On 7/03/2018 9:40 PM, Edward Nevill wrote: > Hi, > > Please review the following webrev which fixes broken zero build. > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199220 > Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.00 > > As this involves changes to shared hotspot code I will need a sponsor. I'm concerned by the original code as highlighted by part of your fix: 593 #else 594 guarantee(false, "How did we get here?"); 595 #endif I think Erik and/or other GC folk need to weigh in here and clearly state this is intended to be a no-op unless using C2 or JVMCI. But even so unless a C1 only build is completely dead, we don't want that guarantee I think. Thanks, David > Thanks for your help, > Eed. > > ----------------------------------------------------------------- > The following symbol is reported as undefined in the Zero build > > ReduceInitialCardMarks > > This is a C2/JVMCI only symbol but is referenced unconditionally in > > src/hotspot/share/gc/shared/cardTableModRefBS.cpp > > void CardTableModRefBS::on_slowpath_allocation_exit(JavaThread* thread, oop new_obj) { > if (!ReduceInitialCardMarks) { > return; > } > > A second build error is > > /home/ed/openjdk/jdk/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:1748: undefined reference to `typeArrayOopDesc::byte_at_put(int, signed char)' > > This is caused by bytecodeInterpreter.cpp not including typeArrayOop.inline.hpp > From erik.osterlund at oracle.com Wed Mar 7 13:40:39 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 7 Mar 2018 14:40:39 +0100 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: <5A9FEBD7.7040606@oracle.com> Hi, On 2018-03-07 14:25, David Holmes wrote: > Hi Ed, > > On 7/03/2018 9:40 PM, Edward Nevill wrote: >> Hi, >> >> Please review the following webrev which fixes broken zero build. >> >> Bugid: https://bugs.openjdk.java.net/browse/JDK-8199220 >> Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.00 >> >> As this involves changes to shared hotspot code I will need a sponsor. > > I'm concerned by the original code as highlighted by part of your fix: > > 593 #else > 594 guarantee(false, "How did we get here?"); > 595 #endif > > I think Erik and/or other GC folk need to weigh in here and clearly > state this is intended to be a no-op unless using C2 or JVMCI. But > even so unless a C1 only build is completely dead, we don't want that > guarantee I think. I can confirm that unless C2 or JVMCI is used, this is a no-op and ReduceInitialCardMarks should be false. If it is not false, there is a bug. Thanks, /Erik > > Thanks, > David > >> Thanks for your help, >> Eed. 
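(For concreteness, a minimal sketch of the shape Erik describes: the flag is only consulted under C2/JVMCI and everything else compiles to a no-op. This assumes COMPILER2_OR_JVMCI is the usual guard from utilities/macros.hpp; the actual webrev may end up looking different.)

void CardTableModRefBS::on_slowpath_allocation_exit(JavaThread* thread, oop new_obj) {
#if COMPILER2_OR_JVMCI
  // ReduceInitialCardMarks only exists for C2/JVMCI, and only compiled
  // slow-path allocations ever defer their card mark.
  if (!ReduceInitialCardMarks) {
    return;
  }
  // ... handle the deferred card mark for new_obj ...
#endif // COMPILER2_OR_JVMCI
  // C1-only and Zero builds: nothing to do here, and no
  // guarantee(false, ...) either.
}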
>> >> ----------------------------------------------------------------- >> The following symbol is reported as undefined in the Zero build >> >> ReduceInitialCardMarks >> >> This is a C2/JVMCI only symbol but is referenced unconditionally in >> >> src/hotspot/share/gc/shared/cardTableModRefBS.cpp >> >> void CardTableModRefBS::on_slowpath_allocation_exit(JavaThread* >> thread, oop new_obj) { >> if (!ReduceInitialCardMarks) { >> return; >> } >> >> A second build error is >> >> /home/ed/openjdk/jdk/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:1748: >> undefined reference to `typeArrayOopDesc::byte_at_put(int, signed char)' >> >> This is caused by bytecodeInterpreter.cpp not including >> typeArrayOop.inline.hpp >> From edward.nevill at gmail.com Wed Mar 7 14:01:54 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Wed, 07 Mar 2018 14:01:54 +0000 Subject: RFR: 8199220: Zero build broken In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> Message-ID: <1520431314.24302.27.camel@gmail.com> On Wed, 2018-03-07 at 13:57 +0100, John Paul Adrian Glaubitz wrote: > On 03/07/2018 01:56 PM, Thomas St?fe wrote: > > Sorry! > > > > changeset: 48885:120b61d50f85 > > user: eosterlund > > date: Wed Jan 10 22:48:27 2018 +0100 > > summary: 8195103: Refactor out card table from CardTableModRefBS to flatten the BarrierSet hierarchy > > Reviewed-by: kbarrett, tschatzl > > Thanks, Thomas for digging that up so quickly! > > @Edward: Would you mind changing your summary to > > "8199220: Zero build broken after 8195103" > > both in the bug tracker and the changeset. > Hi, I have changed this in the bug tracker. I will wait to see if there are any more changes required before changing it in the changeset. If this is the only change, perhaps this could be changed by the sponsor as it is only a change to the commit message and this has to be changed in any case to replace the current reviewer 'duke', with the actual reviewers. All the best, Ed. From edward.nevill at gmail.com Wed Mar 7 15:26:55 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Wed, 07 Mar 2018 15:26:55 +0000 Subject: RFR: 8199220: Zero build broken In-Reply-To: <5A9FEBD7.7040606@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> Message-ID: <1520436415.30546.7.camel@gmail.com> On Wed, 2018-03-07 at 14:40 +0100, Erik ?sterlund wrote: > Hi, > > On 2018-03-07 14:25, David Holmes wrote: > > Hi Ed, > > > > On 7/03/2018 9:40 PM, Edward Nevill wrote: > > > Hi, > > > > > > Please review the following webrev which fixes broken zero build. > > > > > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199220 > > > Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.00 > > > > > > As this involves changes to shared hotspot code I will need a > > > sponsor. > > > > I'm concerned by the original code as highlighted by part of your > > fix: > > > > 593 #else > > 594 guarantee(false, "How did we get here?"); > > 595 #endif > > > > I think Erik and/or other GC folk need to weigh in here and > > clearly > > state this is intended to be a no-op unless using C2 or JVMCI. But > > even so unless a C1 only build is completely dead, we don't want > > that > > guarantee I think. > > I can confirm that unless C2 or JVMCI is used, this is a no-op and > ReduceInitialCardMarks should be false. If it is not false, there is > a bug. Ok. Just to confirm what you are saying. 
For C1 (or Zero) on_slowpath_allocation_exit should be a no-op therefore I should delete the #else and the guarantee above and the above should just become 593 #endif Note: For C1 (or Zero) it is not possible to assert that ReduceInitialCardMarks is false because ReduceInitialCardMarks is not even defined. It is defined in c2_globals.hpp and jvmci_globals.hpp If my understanding is correct I will generate a new webrev with this change and the change proposed by Adrian (change summary to "Zero build broken after 8195103". Thanks for your help reviewing this, Ed. From manlang at jku.at Wed Mar 7 16:01:29 2018 From: manlang at jku.at (ManLang Conference) Date: Wed, 07 Mar 2018 17:01:29 +0100 Subject: Call for Papers: ManLang 2018 (Sept. 10-14, Linz, Austria) Message-ID: <5AA00CD90200007E00014771@s05gw02.im.jku.at> ------------------------------------------------------------------------------------------------ CALL FOR PAPERS 15th International Conference on Managed Languages & Runtimes (ManLang'18) September 10-14, 2018, Linz, Austria http://ssw.jku.at/manlang18/ ------------------------------------------------------------------------------------------------ ManLang (formerly PPPJ) is a premier forum for presenting and discussing novel results in all aspects of managed programming languages and runtime systems, which serve as building blocks for some of the most important computing systems, ranging from small-scale (embedded and real-time systems) to large-scale (cloud-computing and big-data platforms) and anything in between (mobile, IoT, and wearable applications). ====== Topics ====== Topics of interest include but are not limited to: Languages and Compilers ----------------------- - Managed languages (e.g., Java, Scala, JavaScript, Python, Ruby, C#, F#, Clojure, Groovy, Kotlin, R, Smalltalk, Racket, Rust, Go, etc.) - Domain-specific languages - Language design - Compilers and interpreters - Type systems and program logics - Language interoperability - Parallelism, distribution, and concurrency Virtual Machines ---------------- - Managed runtime systems (e.g., JVM, Dalvik VM, Android Runtime (ART), LLVM, .NET CLR, RPython, etc.) - VM design and optimization - VMs for mobile and embedded devices - VMs for real-time applications - Memory management - Hardware/software co-design Techniques, Tools, and Applications ----------------------------------- - Static and dynamic program analysis - Testing and debugging - Refactoring - Program understanding - Program synthesis - Security and privacy - Performance analysis and monitoring - Compiler and program verification =============== Important Dates =============== Submission: May 4, 2018 (Abstracts: April 27) Notification: July 6, 2018 Camera-ready version: August 3, 2018 Poster submission: August 6, 2018 Poster notification: August 13, 2018 Conference: September 10-14, 2018 ========================== Submission and Proceedings ========================== Submissions to the conference will be evaluated on the basis of originality, relevance, technical soundness and presentation quality. Papers should be written in English and not exceed 12 pages in ACM format for full papers (6 pages for WiP, industry, and tool papers). You can also submit posters, which can be accompanied by a one-page abstract, and are due on August 6, 2018. The conference proceedings will be published as part of the ACM International Conference Proceedings Series and will be disseminated through the ACM Digital Library. 
See the conference homepage for details on paper formats and submission. ============ Organization ============ General Chair: Hanspeter M?ssenb?ck, Johannes Kepler University Linz, Austria Program Chair: Eli Tilevich, Virginia Tech, USA Steering Committee: Walter Binder, University of Lugano (USI), Switzerland Bruce Childers, University of Pittsburgh, USA Martin Pluemicke, DHBW Stuttgart, Germany Christian Probst, Technical University of Denmark, Denmark Petr Tuma, Charles University, Czech Republic Thomas W?rthinger, Oracle Labs, Switzerland Program Committee: Godmar Back, Virginia Tech, USA Clement Bera, INRIA, France Christoph Bockisch, Philipps Universit?t Marburg, Germany Man Cao, Google, USA Shigeru Chiba, University of Tokyo, Japan Yvonne Coady, University of Victoria, Canada Julian Dolby, IBM Research, USA Patrick Eugster, University of Lugano, Switzerland Irene Finocchi, Sapienza University of Rome, Italy G?rel Hedin, Lund University, Sweden Robert Hirschfeld, Hasso Plattner Institute, Germany Tony Hosking, Purdue University, USA Doug Lea, SUNY Oswego, USA Eliot Moss, University of Massachusetts, USA Nate Nystrom, University of Lugano, Switzerland Tiark Rompf, Purdue University, USA J ennifer B. Sartor, Vrije Universiteit Brussel, Belgium JeremyJan Vitek, Northeastern University, USA Christian Wimmer, Oracle Labs, USA Jianjun Zhao, Kyushu University, Japan ======== Location ======== Linz, the capital of Upper Austria, is both a city of culture and of industry. Located at the Danube it features a historic downtown and a modern university campus just north of the Danube, where the conference will take place. For information on JKU and Linz, also see: http://www.jku.at, https://en.wikipedia.org/wiki/Linz and https://www.linz.at/english/ ================= Other Information ================= The 5th Virtual Machine Meetup (VMM) is a collocated event with ManLang '18. It is a venue for discussing the latest research and developments in the area of managed language execution. ManLang'18 is organized in cooperation with ACM, ACM SIGPLAN and ACM ICPS, and is sponsored by the JKU Department of Computer Science, Oracle Labs, and Linz AG. http://ssw.jku.at/manlang18/ https://www.facebook.com/ManLangConf/ https://twitter.com/manlangconf From mark.reinhold at oracle.com Wed Mar 7 17:34:07 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Wed, 07 Mar 2018 09:34:07 -0800 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <20180306201654.420182025@eggemoggin.niobe.net> References: <20180306082420.853500758@eggemoggin.niobe.net> <20180306201654.420182025@eggemoggin.niobe.net> Message-ID: <20180307093407.897577454@eggemoggin.niobe.net> Andrew -- The overnight test runs turned up nothing new, so I've added the jdk10-fix-yes label to 8198950. Please push your fix at will, directly to jdk/jdk10. Thanks for the fix, and to everyone who reviewed and tested it! 
- Mark From adinn at redhat.com Wed Mar 7 18:01:50 2018 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 7 Mar 2018 18:01:50 +0000 Subject: RFR: AArch64: org.openjdk.jcstress.tests.varhandles.DekkerTest fails In-Reply-To: <20180307093407.897577454@eggemoggin.niobe.net> References: <20180306082420.853500758@eggemoggin.niobe.net> <20180306201654.420182025@eggemoggin.niobe.net> <20180307093407.897577454@eggemoggin.niobe.net> Message-ID: <15c8bee5-99f2-9b8a-6567-93394fed19c2@redhat.com> On 07/03/18 17:34, mark.reinhold at oracle.com wrote: > Andrew -- The overnight test runs turned up nothing new, so I've > added the jdk10-fix-yes label to 8198950. Please push your fix > at will, directly to jdk/jdk10. > > Thanks for the fix, and to everyone who reviewed and tested it! Indeed, thanks very much to everyone. It has been pushed to jdk10 and I will also push to hs asap. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From ekaterina.pavlova at oracle.com Wed Mar 7 18:05:01 2018 From: ekaterina.pavlova at oracle.com (Ekaterina Pavlova) Date: Wed, 7 Mar 2018 10:05:01 -0800 Subject: RFR(XS) [closed] : 8198924: java/lang/StackWalker/LocalsAndOperands.java timeouts with Graal In-Reply-To: <4fcc3671-214a-accd-105b-9aeb83e6e62b@oracle.com> References: <571e551e-f40c-dbf6-0094-4a841fda42a3@oracle.com> <4fcc3671-214a-accd-105b-9aeb83e6e62b@oracle.com> Message-ID: Mandy, Brent, thanks for reviews. Removing @bug will not allow to find this testcase by bugid (jtreg -bug:). So, I think it is safe to omit only @summary. regards, -katya On 3/6/18 2:30 PM, Brent Christian wrote: > Looks good, Katya - thanks. > I agree with omitting @bug and @summary from the second @test tag. > > Thanks, > -Brent > > On 3/6/18 1:59 PM, mandy chung wrote: >> Running #1 and #2 when Graal is enabled is fine. >> >> For the second @test, does it need @bug and @summary to run?? If not, I suggest to take it out as it's already mentioned in the first @test. >> >> Mandy >> >> >> On 3/6/18 1:45 PM, Ekaterina Pavlova wrote: >>> Hi all, >>> >>> java/lang/StackWalker/LocalsAndOperands.java runs LocalsAndOperands 3 times with following set of jvm flags: >>> ?1) -Xint -DtestUnused=true >>> ?2) -Xcomp >>> ?3) -Xcomp -XX:-TieredCompilation >>> >>> When running with Graal as JIT (-XX:+TieredCompilation -XX:+UseJVMCICompiler -Djvmci.Compiler=graal) >>> 3rd scenario could take more than 10 minutes and the test could fail by timeout. >>> Actually running LocalsAndOperands with Graal and with "-Xcomp -XX:-TieredCompilation" doesn't provide >>> big benefit and it would be reasonable to disable this scenario in case Graal is enabled. >>> >>> Please review the change. >>> >>> ????? JBS: https://bugs.openjdk.java.net/browse/JDK-8198924 >>> ?? webrev: http://cr.openjdk.java.net/~epavlova//8198924/webrev.00/index.html >>> >>> ? testing: Tested by running the test in Graal as JIT mode. >>> >>> thanks, >>> -katya >>> >>> p.s. >>> ?Igor Ignatyev volunteered to sponsor this change. 
>> From irogers at google.com Wed Mar 7 18:16:25 2018 From: irogers at google.com (Ian Rogers) Date: Wed, 07 Mar 2018 18:16:25 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> Message-ID: Thanks Martin! Profiling shows most of the time spent in this code is in the call to libz's deflate. I worry that increasing the buffer size increases that work and holds the critical lock for longer. Profiling likely won't show this issue, as there needs to be contention on the GC locker. In HotSpot: http://hg.openjdk.java.net/jdk/jdk/file/2854589fd853/src/hotspot/share/gc/shared/gcLocker.hpp#l34 "Avoid calling these if at all possible" could be taken to suggest that JNI critical regions should also be avoided if at all possible. I think HotSpot and the JDK are out of step if this is the case, and there could be work done to remove JNI critical regions from the JDK and replace them either with Java code (JITs are better now) or with Get/Set...ArrayRegion. This does appear to be an O(1) to O(n) transition so perhaps the HotSpot folks could speak to it. Thanks, Ian On Tue, Mar 6, 2018 at 6:44 PM Martin Buchholz wrote: > Thanks Ian and Sherman for the excellent presentation and memories of > ancient efforts. > > Yes, Sherman, I still have vague memory that attempts to touch any > implementation detail in this area was asking for trouble and someone would > complain. I was happy to let you deal with those problems! > > There's a continual struggle in the industry to enable more checking at > test time, and -Xcheck:jni does look like it should be possible to > routinely turn on for running all tests. (Google tests run with a time > limit, and so any low-level performance regression immediately causes test > failures, for better or worse) > > Our problem reduces to accessing a primitive array slice from native > code. The only way to get O(1) access is via GetPrimitiveArrayCritical, > BUT when it fails you have to pay for a copy of the entire array. An > obvious solution is to introduce a slice variant GetPrimitiveArrayRegionCritical > that would only degrade to a copy of the slice. Offhand that seems > relatively easy to implement though we would hold our noses at adding yet > more *Critical* functions to the JNI spec. In spirit though it's a > straightforward generalization. > > Implementing Deflater in pure Java seems very reasonable and we've had > good success with "nearby" code, but we likely cannot reuse the GNU > Classpath code. > > Thanks for pointing out > JDK-6311046: -Xcheck:jni should support checking of > GetPrimitiveArrayCritical > which went into jdk8 in u40. > > We can probably be smarter about choosing a better buffer size, e.g. in > ZipOutputStream. > > Here's an idea: In code like this > try (DeflaterOutputStream dout = new DeflaterOutputStream(deflated)) { > dout.write(inflated, 0, inflated.length); > } > when the DeflaterOutputStream is given an input that is clearly too large > for the current buffer size, reorganize internals dynamically to use a much > bigger buffer size. > > It's possible (but hard work!) to adjust algorithms based on whether > critical array access is available. It would be nice if we could get the > JVM to tell us (but it might depend, e.g.
on the size of the array). > From stefan.karlsson at oracle.com Wed Mar 7 21:59:52 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Mar 2018 22:59:52 +0100 Subject: RFR: 8199264: Remove universe.inline.hpp to simplify include dependencies Message-ID: Hi all, Please review this patch to remove universe.inline.hpp. http://cr.openjdk.java.net/~stefank/8199264/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8199264 The main motivation for this patch is to lower the number of .inline.hpp files included from .hpp files. My initial though was to just move the two trivial functions from the universe.inline.hpp into universe.hpp, and get rid of universe.inline.hpp. However, field_type_should_be_aligned evaluates to true at the single location it is used, and element_type_should_be_aligned is only used in arrayOop.hpp, so it was moved there instead. There are a number of places that used to include universe.inline.hpp that now include universe.hpp. I left them all by purpose, to minimize the potential for build breakages. Even removing the universe.hpp include in arrayOop.hpp breaks the build, so I'd like to defer that cleaning to another patch. Thanks, StefanK From harold.seigel at oracle.com Wed Mar 7 22:04:13 2018 From: harold.seigel at oracle.com (harold seigel) Date: Wed, 7 Mar 2018 17:04:13 -0500 Subject: RFR: 8199264: Remove universe.inline.hpp to simplify include dependencies In-Reply-To: References: Message-ID: <95ea86db-2734-7063-0fa3-35af6ae9cb3b@oracle.com> Hi Stefan, This looks good.? Thanks for doing it. Harold On 3/7/2018 4:59 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to remove universe.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199264/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199264 > > The main motivation for this patch is to lower the number of > .inline.hpp files included from .hpp files. My initial though was to > just move the two trivial functions from the universe.inline.hpp into > universe.hpp, and get rid of universe.inline.hpp. However, > field_type_should_be_aligned evaluates to true at the single location > it is used, and element_type_should_be_aligned is only used in > arrayOop.hpp, so it was moved there instead. > > There are a number of places that used to include universe.inline.hpp > that now include universe.hpp. I left them all by purpose, to minimize > the potential for build breakages. Even removing the universe.hpp > include in arrayOop.hpp breaks the build, so I'd like to defer that > cleaning to another patch. > > Thanks, > StefanK From coleen.phillimore at oracle.com Wed Mar 7 22:21:12 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 7 Mar 2018 17:21:12 -0500 Subject: RFR: 8199264: Remove universe.inline.hpp to simplify include dependencies In-Reply-To: References: Message-ID: <2144fd3f-f100-0c68-640b-02c68a32b862@oracle.com> This looks good!? Thank you for doing this. Coleen On 3/7/18 4:59 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to remove universe.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199264/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199264 > > The main motivation for this patch is to lower the number of > .inline.hpp files included from .hpp files. My initial though was to > just move the two trivial functions from the universe.inline.hpp into > universe.hpp, and get rid of universe.inline.hpp. 
However, > field_type_should_be_aligned evaluates to true at the single location > it is used, and element_type_should_be_aligned is only used in > arrayOop.hpp, so it was moved there instead. > > There are a number of places that used to include universe.inline.hpp > that now include universe.hpp. I left them all by purpose, to minimize > the potential for build breakages. Even removing the universe.hpp > include in arrayOop.hpp breaks the build, so I'd like to defer that > cleaning to another patch. > > Thanks, > StefanK From stefan.karlsson at oracle.com Wed Mar 7 22:21:13 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Mar 2018 23:21:13 +0100 Subject: RFR: 8199264: Remove universe.inline.hpp to simplify include dependencies In-Reply-To: <95ea86db-2734-7063-0fa3-35af6ae9cb3b@oracle.com> References: <95ea86db-2734-7063-0fa3-35af6ae9cb3b@oracle.com> Message-ID: Thanks, Harold! StefanK On 2018-03-07 23:04, harold seigel wrote: > Hi Stefan, > > This looks good.? Thanks for doing it. > > Harold > > On 3/7/2018 4:59 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to remove universe.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199264/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199264 >> >> The main motivation for this patch is to lower the number of >> .inline.hpp files included from .hpp files. My initial though was to >> just move the two trivial functions from the universe.inline.hpp into >> universe.hpp, and get rid of universe.inline.hpp. However, >> field_type_should_be_aligned evaluates to true at the single location >> it is used, and element_type_should_be_aligned is only used in >> arrayOop.hpp, so it was moved there instead. >> >> There are a number of places that used to include universe.inline.hpp >> that now include universe.hpp. I left them all by purpose, to >> minimize the potential for build breakages. Even removing the >> universe.hpp include in arrayOop.hpp breaks the build, so I'd like to >> defer that cleaning to another patch. >> >> Thanks, >> StefanK > From stefan.karlsson at oracle.com Wed Mar 7 22:26:45 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Mar 2018 23:26:45 +0100 Subject: RFR: 8199264: Remove universe.inline.hpp to simplify include dependencies In-Reply-To: <2144fd3f-f100-0c68-640b-02c68a32b862@oracle.com> References: <2144fd3f-f100-0c68-640b-02c68a32b862@oracle.com> Message-ID: <6dc11267-4894-6ff2-e99f-03e0d6015e5a@oracle.com> Thanks, Coleen! StefanK On 2018-03-07 23:21, coleen.phillimore at oracle.com wrote: > > This looks good!? Thank you for doing this. > Coleen > > On 3/7/18 4:59 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to remove universe.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199264/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199264 >> >> The main motivation for this patch is to lower the number of >> .inline.hpp files included from .hpp files. My initial though was to >> just move the two trivial functions from the universe.inline.hpp into >> universe.hpp, and get rid of universe.inline.hpp. However, >> field_type_should_be_aligned evaluates to true at the single location >> it is used, and element_type_should_be_aligned is only used in >> arrayOop.hpp, so it was moved there instead. >> >> There are a number of places that used to include universe.inline.hpp >> that now include universe.hpp. I left them all by purpose, to >> minimize the potential for build breakages. 
Even removing the >> universe.hpp include in arrayOop.hpp breaks the build, so I'd like to >> defer that cleaning to another patch. >> >> Thanks, >> StefanK > From stefan.karlsson at oracle.com Wed Mar 7 22:33:15 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Mar 2018 23:33:15 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp Message-ID: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Hi all, Please review this small patch to fix some includes of allocation.inline.hpp. http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8199275 The changes are quite simple: 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to .cpp files, since they used functions from allocation.inline.hpp. 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp files that used functions from allocation.inline.hpp The patch contains a few number added includes need after this restructuring. Thanks, StefanK From stefan.karlsson at oracle.com Wed Mar 7 22:34:24 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 7 Mar 2018 23:34:24 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: I forgot to mention that this patch should be applied on top of the patch in: ?http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030565.html StefanK On 2018-03-07 23:33, Stefan Karlsson wrote: > Hi all, > > Please review this small patch to fix some includes of > allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199275 > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved > to .cpp files, since they used functions from allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and > .inline.hpp files that used functions from allocation.inline.hpp > > The patch contains a few number added includes need after this > restructuring. > > Thanks, > StefanK From coleen.phillimore at oracle.com Wed Mar 7 23:50:58 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 7 Mar 2018 18:50:58 -0500 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: Small?? Well it's simple anyway.? Looks good.? Please update the copyrights when you commit this. Thanks, Coleen On 3/7/18 5:33 PM, Stefan Karlsson wrote: > Hi all, > > Please review this small patch to fix some includes of > allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199275 > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved > to .cpp files, since they used functions from allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and > .inline.hpp files that used functions from allocation.inline.hpp > > The patch contains a few number added includes need after this > restructuring. 
> > Thanks, > StefanK From kim.barrett at oracle.com Thu Mar 8 03:54:33 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 7 Mar 2018 22:54:33 -0500 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: > On Mar 7, 2018, at 5:33 PM, Stefan Karlsson wrote: > > Hi all, > > Please review this small patch to fix some includes of allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199275 > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to .cpp files, since they used functions from allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp files that used functions from allocation.inline.hpp > > The patch contains a few number added includes need after this restructuring. > > Thanks, > StefanK Looks good. From david.holmes at oracle.com Thu Mar 8 05:28:38 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 8 Mar 2018 15:28:38 +1000 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: Hi Stefan, On 8/03/2018 8:33 AM, Stefan Karlsson wrote: > Hi all, > > Please review this small patch to fix some includes of > allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199275 > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved > to .cpp files, since they used functions from allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp > files that used functions from allocation.inline.hpp I'm a little confused. Where cpp files were using functions from allocation.inline.hpp but not including it, where were they getting the definitions from? My initial guess is precompiled.hpp, but that wouldn't address builds with precompiled headers disabled ?? > The patch contains a few number added includes need after this > restructuring. Overall seems okay. Proof as always is in the building, with and without PCH. Thanks, David > > Thanks, > StefanK From kim.barrett at oracle.com Thu Mar 8 06:18:38 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 8 Mar 2018 01:18:38 -0500 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: > On Mar 8, 2018, at 12:28 AM, David Holmes wrote: > > Hi Stefan, > > On 8/03/2018 8:33 AM, Stefan Karlsson wrote: >> Hi all, >> Please review this small patch to fix some includes of allocation.inline.hpp. >> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199275 >> The changes are quite simple: >> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to .cpp files, since they used functions from allocation.inline.hpp. >> 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp files that used functions from allocation.inline.hpp > > I'm a little confused. Where cpp files were using functions from allocation.inline.hpp but not including it, where were they getting the definitions from? 
My initial guess is precompiled.hpp, but that wouldn't address builds with precompiled headers disabled ?? My guess: constantPool.hpp is being changed to include allocation.hpp rather than allocation.inline.hpp. The fanout from constantPool.hpp is (to me) surprisingly broad: thread.hpp includes frame.hpp, which includes method.hpp, which includes constantPool.hpp. Nearly 70 files directly include thread.hpp, and who knows how many additional files indirectly include it. >> The patch contains a few number added includes need after this restructuring. > > Overall seems okay. Proof as always is in the building, with and without PCH. > > Thanks, > David >> Thanks, >> StefanK From stefan.karlsson at oracle.com Thu Mar 8 06:32:43 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 8 Mar 2018 07:32:43 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: On 2018-03-08 00:50, coleen.phillimore at oracle.com wrote: > > Small? I guess I was thinking about related changes and how big they could become. This patch is a meager 74 lines change :) > Well it's simple anyway.? Looks good. Thanks. > Please update the copyrights when you commit this. I guess this will make this patch a medium sized patch ;) StefanK > > Thanks, > Coleen > > On 3/7/18 5:33 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this small patch to fix some includes of >> allocation.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199275 >> >> The changes are quite simple: >> >> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were >> moved to .cpp files, since they used functions from >> allocation.inline.hpp. >> >> 2) includes of allocation.inline.hpp were added to .cpp and >> .inline.hpp files that used functions from allocation.inline.hpp >> >> The patch contains a few number added includes need after this >> restructuring. >> >> Thanks, >> StefanK > From stefan.karlsson at oracle.com Thu Mar 8 06:33:01 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 8 Mar 2018 07:33:01 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: <2f19ecff-b594-711b-823c-388e16dc8e5a@oracle.com> Thanks Kim. StefanK On 2018-03-08 04:54, Kim Barrett wrote: >> On Mar 7, 2018, at 5:33 PM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this small patch to fix some includes of allocation.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199275 >> >> The changes are quite simple: >> >> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to .cpp files, since they used functions from allocation.inline.hpp. >> >> 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp files that used functions from allocation.inline.hpp >> >> The patch contains a few number added includes need after this restructuring. >> >> Thanks, >> StefanK > Looks good. 
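(To illustrate the failure mode Kim describes: a file that never includes allocation.inline.hpp itself can still compile today because the definitions arrive through the thread.hpp -> frame.hpp -> method.hpp -> constantPool.hpp chain. Hypothetical example; foo.cpp, FooTable and create_table are made up, and the constantPool.hpp change follows Kim's guess, not the webrev itself:)

// foo.cpp (hypothetical file, made-up class)
#include "runtime/thread.hpp"   // drags in frame.hpp -> method.hpp -> constantPool.hpp

class FooTable : public CHeapObj<mtInternal> {};

FooTable* create_table() {
  // CHeapObj's operator new is defined in allocation.inline.hpp. This
  // compiled only because constantPool.hpp used to pull that header in
  // transitively; once constantPool.hpp includes plain allocation.hpp,
  // foo.cpp needs an explicit
  //   #include "memory/allocation.inline.hpp"
  // of its own, which is what the added includes in the patch are for.
  return new FooTable();
}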
> From stefan.karlsson at oracle.com Thu Mar 8 06:38:48 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 8 Mar 2018 07:38:48 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: <60c025e2-93da-8352-26cf-310b6af02b5f@oracle.com> On 2018-03-08 06:28, David Holmes wrote: > Hi Stefan, > > On 8/03/2018 8:33 AM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this small patch to fix some includes of >> allocation.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199275 >> >> The changes are quite simple: >> >> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were >> moved to .cpp files, since they used functions from >> allocation.inline.hpp. >> >> 2) includes of allocation.inline.hpp were added to .cpp and >> .inline.hpp files that used functions from allocation.inline.hpp > > I'm a little confused. Where cpp files were using functions from > allocation.inline.hpp but not including it, where were they getting > the definitions from? My initial guess is precompiled.hpp, but that > wouldn't address builds with precompiled headers disabled ?? I see that Kim explained this in his mail. You almost always get into this situation when you start to remove includes from the headers. Including .inline.hpp files and .hpp files with a large transitive closure of its includes is making the situation worse, and doing some cleanup in this area will help reduce the problem. >> The patch contains a few number added includes need after this >> restructuring. > > Overall seems okay. Proof as always is in the building, with and > without PCH. Thanks. StefanK > > Thanks, > David >> >> Thanks, >> StefanK From thomas.stuefe at gmail.com Thu Mar 8 09:22:40 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 10:22:40 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: Hi Stefan, thanks, this is a good cleanup. Sometimes I wish there were a method to automatically strip code from unnecessary includes. Thanks, Thomas On Wed, Mar 7, 2018 at 11:33 PM, Stefan Karlsson wrote: > Hi all, > > Please review this small patch to fix some includes of > allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199275 > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to > .cpp files, since they used functions from allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp > files that used functions from allocation.inline.hpp > > The patch contains a few number added includes need after this > restructuring. > > Thanks, > StefanK > From stefan.karlsson at oracle.com Thu Mar 8 09:25:00 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 8 Mar 2018 10:25:00 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: Thanks Thomas. StefanK On 2018-03-08 10:22, Thomas St?fe wrote: > Hi Stefan, > > thanks, this is a good cleanup. > > Sometimes I wish there were a method to automatically strip code from > unnecessary includes. 
> > Thanks, Thomas > > > > On Wed, Mar 7, 2018 at 11:33 PM, Stefan Karlsson > > wrote: > > Hi all, > > Please review this small patch to fix some includes of > allocation.inline.hpp. > > http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ > > https://bugs.openjdk.java.net/browse/JDK-8199275 > > > The changes are quite simple: > > 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were > moved to .cpp files, since they used functions from > allocation.inline.hpp. > > 2) includes of allocation.inline.hpp were added to .cpp and > .inline.hpp files that used functions from allocation.inline.hpp > > The patch contains a few number added includes need after this > restructuring. > > Thanks, > StefanK > > From per.liden at oracle.com Thu Mar 8 14:01:23 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 8 Mar 2018 15:01:23 +0100 Subject: RFR: 8199328: Fix unsafe field accesses in heap dumper Message-ID: The heap dumper, more specifically the DumperSupport::dump_field_value() function, is doing unsafe raw loads of fields in heap objects. Those loads should go thru the access API. Bug: https://bugs.openjdk.java.net/browse/JDK-8199328 Webrev: http://cr.openjdk.java.net/~pliden/8199328/webrev.0/ Testing: manual dumping using jcmd GC.heap_dump, awaiting hs-tier1-3 results /Per From thomas.schatzl at oracle.com Thu Mar 8 14:19:01 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 08 Mar 2018 15:19:01 +0100 Subject: RFR: 8199328: Fix unsafe field accesses in heap dumper In-Reply-To: References: Message-ID: <1520518741.3121.27.camel@oracle.com> Hi, On Thu, 2018-03-08 at 15:01 +0100, Per Liden wrote: > The heap dumper, more specifically the > DumperSupport::dump_field_value() > function, is doing unsafe raw loads of fields in heap objects. Those > loads should go thru the access API. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8199328 > Webrev: http://cr.openjdk.java.net/~pliden/8199328/webrev.0/ > > Testing: manual dumping using jcmd GC.heap_dump, awaiting hs-tier1-3 > results I think this is good, but somebody else should also look into this as making sure that the right invariants hold for all collectors is tricky. Thanks, Thomas From vladimir.kozlov at oracle.com Thu Mar 8 15:19:27 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 8 Mar 2018 07:19:27 -0800 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: On 3/8/18 1:22 AM, Thomas St?fe wrote: > Hi Stefan, > > thanks, this is a good cleanup. > > Sometimes I wish there were a method to automatically strip code from > unnecessary includes. I also wish for that. Can IDE do it for you? Thanks, Vladimir > > Thanks, Thomas > > > > On Wed, Mar 7, 2018 at 11:33 PM, Stefan Karlsson > wrote: > >> Hi all, >> >> Please review this small patch to fix some includes of >> allocation.inline.hpp. >> >> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199275 >> >> The changes are quite simple: >> >> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to >> .cpp files, since they used functions from allocation.inline.hpp. >> >> 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp >> files that used functions from allocation.inline.hpp >> >> The patch contains a few number added includes need after this >> restructuring. 
>> >> Thanks, >> StefanK >> From shade at redhat.com Thu Mar 8 15:31:51 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 8 Mar 2018 16:31:51 +0100 Subject: RFR: 8199328: Fix unsafe field accesses in heap dumper In-Reply-To: <1520518741.3121.27.camel@oracle.com> References: <1520518741.3121.27.camel@oracle.com> Message-ID: <6b242945-0222-a5b0-adbc-19b07f608ecf@redhat.com> On 03/08/2018 03:19 PM, Thomas Schatzl wrote: > On Thu, 2018-03-08 at 15:01 +0100, Per Liden wrote: >> The heap dumper, more specifically the >> DumperSupport::dump_field_value() >> function, is doing unsafe raw loads of fields in heap objects. Those >> loads should go thru the access API. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8199328 >> Webrev: http://cr.openjdk.java.net/~pliden/8199328/webrev.0/ >> >> Testing: manual dumping using jcmd GC.heap_dump, awaiting hs-tier1-3 >> results > > I think this is good, but somebody else should also look into this as > making sure that the right invariants hold for all collectors is > tricky. This looks fine to me, given my cursory understanding of the Access API, and following through where the new calls end up in the Shenandoah barrier set. Our gc/shenandoah/jvmti/TestHeapDump.java test that used to find troubles in this code is also happy with this change applied. Thanks, -Aleksey From erik.osterlund at oracle.com Thu Mar 8 15:41:17 2018 From: erik.osterlund at oracle.com (Erik Österlund) Date: Thu, 8 Mar 2018 16:41:17 +0100 Subject: RFR: 8199328: Fix unsafe field accesses in heap dumper In-Reply-To: References: Message-ID: <5AA1599D.2080006@oracle.com> Hi Per, Looks good. Thanks, /Erik On 2018-03-08 15:01, Per Liden wrote: > The heap dumper, more specifically the > DumperSupport::dump_field_value() function, is doing unsafe raw loads > of fields in heap objects. Those loads should go thru the access API. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8199328 > Webrev: http://cr.openjdk.java.net/~pliden/8199328/webrev.0/ > > Testing: manual dumping using jcmd GC.heap_dump, awaiting hs-tier1-3 > results > > /Per From erik.helin at oracle.com Thu Mar 8 16:14:36 2018 From: erik.helin at oracle.com (Erik Helin) Date: Thu, 8 Mar 2018 17:14:36 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: <633ac65d-83af-68e0-ea84-d0e7da181871@oracle.com> Message-ID: Alright, I know I am a bit late to the party, but I have now refreshed my knowledge of Metaspace enough to begin reviewing this. I will probably ask a few questions to get a better understanding of your changes :) First of all, great work! The code is easy to read and the concepts are clear, so thank you for that. An initial comment: 1592 // Chunks are born as in-use (see MetaChunk ctor). So, before returning 1593 // the padding chunk to its chunk manager, mark it as in use (ChunkManager 1594 // will assert that). 1595 do_update_in_use_info_for_chunk(padding_chunk, true); This comment is slightly hard to read. I _think_ what you mean is something like: // So, before returning the padding chunk to its chunk manager, // mark it as in use in the occupancy map. Is that correct? Besides updating the occupancy map, do_update_in_use_info_for_chunk will also call MetaChunk::set_is_tagged_free, but that is not strictly needed here, right? Thanks, Erik On 02/28/2018 05:17 PM, Thomas Stüfe wrote: > Hi Eric, no problem!
> > Thanks, Thomas > > On Wed, Feb 28, 2018 at 4:28 PM, Erik Helin > wrote: > > Hi Thomas, > > I will take a look at this, I just have been a bit busy lately > (sorry for not responding earlier). > > Thanks, > Erik > > > On 02/26/2018 03:20 PM, Thomas St?fe wrote: > > Hi all, > > I know this patch is a bit larger, but may I please have reviews > and/or > other input? > > Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 > > Latest version: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev/ > > > For those who followed the mail thread, this is the incremental > diff to the > last changes (included feedback Goetz gave me on- and off-list): > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalescation/2018-02-26/webrev-incr/webrev/ > > > Thank you! > > Kind Regards, Thomas Stuefe > > > > On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe > > > wrote: > > Hi, > > We would like to contribute a patch developed at SAP which > has been live > in our VM for some time. It improves the metaspace chunk > allocation: > reduces fragmentation and raises the chance of reusing free > metaspace > chunks. > > The patch: > http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc > > ation/2018-02-05--2/webrev/ > > In very short, this patch helps with a number of > pathological cases where > metaspace chunks are free but cannot be reused because they > are of the > wrong size. For example, the metaspace freelist could be > full of small > chunks, which would not be reusable if we need larger > chunks. So, we could > get metaspace OOMs even in situations where the metaspace > was far from > exhausted. Our patch adds the ability to split and merge > metaspace chunks > dynamically and thus remove the "size-lock-in" problem. > > Note that there have been other attempts to get a grip on > this problem, > see e.g. "SpaceManager::get_small_chunks_and_allocate()". > But arguably > our patch attempts a more complete solution. > > In 2016 I discussed the idea for this patch with some folks > off-list, > among them Jon Matsimutso. He then did advice me to create a > JEP. So I did: > [1]. However, meanwhile changes to the JEP process were > discussed [2], and > I am not sure anymore this patch needs even needs a JEP. It > may be > moderately complex and hence carries the risk inherent in > any patch, but > its effects would not be externally visible (if you discount > seeing fewer > metaspace OOMs). So, I'd prefer to handle this as a simple RFE. > > -- > > How this patch works: > > 1) When a class loader dies, its metaspace chunks are freed > and returned > to the freelist for reuse by the next class loader. With the > patch, upon > returning a chunk to the freelist, an attempt is made to > merge it with its > neighboring chunks - should they happen to be free too - to > form a larger > chunk. Which then is placed in the free list. > > As a result, the freelist should be populated by larger > chunks at the > expense of smaller chunks. In other words, all free chunks > should always be > as "coalesced as possible". > > 2) When a class loader needs a new chunk and a chunk of the > requested size > cannot be found in the free list, before carving out a new > chunk from the > virtual space, we first check if there is a larger chunk in > the free list. > If there is, that larger chunk is chopped up into n smaller > chunks. One of > them is returned to the callers, the others are re-added to > the freelist. 
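(A hedged pseudocode sketch of (1) and (2) above; the identifiers below are illustrative only, not the names used in the webrev.)

// (1) merge-on-free: when a chunk comes back from a dead class loader,
// try to fold it together with its free neighbours before putting it
// on the free list.
Metachunk* return_chunk_to_manager(Metachunk* c) {
  Metachunk* merged = attempt_to_coalesce_with_neighbours(c); // may just return c
  add_to_free_list(merged);
  return merged;
}

// (2) split-on-demand: prefer chopping up a larger free chunk over
// carving new memory out of the virtual space.
Metachunk* get_chunk_of_size(size_t requested_word_size) {
  Metachunk* c = remove_exact_fit_from_free_list(requested_word_size);
  if (c != NULL) {
    return c;                                   // exact fit available
  }
  Metachunk* larger = remove_larger_chunk_from_free_list(requested_word_size);
  if (larger != NULL) {
    // chop into n smaller chunks; one is handed out, the rest go back
    // to the free list
    return split_chunk(larger, requested_word_size);
  }
  return carve_out_of_virtual_space(requested_word_size);
}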
> > (1) and (2) together have the effect of removing the > size-lock-in for > chunks. If fragmentation allows it, small chunks are > dynamically combined > to form larger chunks, and larger chunks are split on demand. > > -- > > What this patch does not: > > This is not a rewrite of the chunk allocator - most of the > mechanisms stay > intact. Specifically, chunk sizes remain unchanged, and so > do chunk > allocation processes (when do which class loaders get handed > which chunk > size). Almost everthing this patch does affects only > internal workings of > the ChunkManager. > > Also note that I refrained from doing any cleanups, since I > wanted > reviewers to be able to gauge this patch without filtering > noise. > Unfortunately this patch adds some complexity. But there are > many future > opportunities for code cleanup and simplification, some of > which we already > discussed in existing RFEs ([3], [4]). All of them are out > of the scope for > this particular patch. > > -- > > Details: > > Before the patch, the following rules held: > - All chunk sizes are multiples of the smallest chunk size > ("specialized > chunks") > - All chunk sizes of larger chunks are also clean multiples > of the next > smaller chunk size (e.g. for class space, the ratio of > specialized/small/medium chunks is 1:2:32) > - All chunk start addresses are aligned to the smallest > chunk size (more > or less accidentally, see metaspace_reserve_alignment). > The patch makes the last rule explicit and more strict: > - All (non-humongous) chunk start addresses are now aligned > to their own > chunk size. So, e.g. medium chunks are allocated at > addresses which are a > multiple of medium chunk size. This rule is not extended to > humongous > chunks, whose start addresses continue to be aligned to the > smallest chunk > size. > > The reason for this new alignment rule is that it makes it > cheap both to > find chunk predecessors of a chunk and to check which chunks > are free. > > When a class loader dies and its chunk is returned to the > freelist, all we > have is its address. In order to merge it with its neighbors > to form a > larger chunk, we need to find those neighbors, including > those preceding > the returned chunk. Prior to this patch that was not easy - > one would have > to iterate chunks starting at the beginning of the > VirtualSpaceNode. But > due to the new alignment rule, we now know where the > prospective larger > chunk must start - at the next lower > larger-chunk-size-aligned boundary. We > also know that currently a smaller chunk must start there (*). > > In order to check the free-ness of chunks quickly, each > VirtualSpaceNode > now keeps a bitmap which describes its occupancy. One bit in > this bitmap > corresponds to a range the size of the smallest chunk size > and starting at > an address aligned to the smallest chunk size. Because of > the alignment > rules above, such a range belongs to one single chunk. The > bit is 1 if the > associated chunk is in use by a class loader, 0 if it is free. > > When we have calculated the address range a prospective > larger chunk would > span, we now need to check if all chunks in that range are > free. Only then > we can merge them. We do that by querying the bitmap. Note > that the most > common use case here is forming medium chunks from smaller > chunks. 
With the > new alignment rules, the bitmap portion covering a medium > chunk now always > happens to be 16- or 32bit in size and is 16- or 32bit > aligned, so reading > the bitmap in many cases becomes a simple 16- or 32bit load. > > If the range is free, only then we need to iterate the > chunks in that > range: pull them from the freelist, combine them to one new > larger chunk, > re-add that one to the freelist. > > (*) Humongous chunks make this a bit more complicated. Since > the new > alignment rule does not extend to them, a humongous chunk > could still > straddle the lower or upper boundary of the prospective > larger chunk. So I > gave the occupancy map a second layer, which is used to mark > the start of > chunks. > An alternative approach could have been to make humongous > chunks size and > start address always a multiple of the largest non-humongous > chunk size > (medium chunks). That would have caused a bit of waste per > humongous chunk > (<64K) in exchange for simpler coding and a simpler > occupancy map. > > -- > > The patch shows its best results in scenarios where a lot of > smallish > class loaders are alive simultaneously. When dying, they > leave continuous > expanses of metaspace covered in small chunks, which can be > merged nicely. > However, if class loader life times vary more, we have more > interleaving of > dead and alive small chunks, and hence chunk merging does > not work as well > as it could. > > For an example of a pathological case like this see example > program: [5] > > Executed like this: "java -XX:CompressedClassSpaceSize=10M > -cp test3 > test3.Example2" the test will load 3000 small classes in > separate class > loaders, then throw them away and start loading large > classes. The small > classes will have flooded the metaspace with small chunks, > which are > unusable for the large classes. When executing with the > rather limited > CompressedClassSpaceSize=10M, we will run into an OOM after > loading about > 800 large classes, having used only 40% of the class space, > the rest is > wasted to unused small chunks. However, with our patch the > example program > will manage to allocate ~2900 large classes before running > into an OOM, and > class space will show almost no waste. > > Do demonstrate this, add -Xlog:gc+metaspace+freelist. After > running into > an OOM, statistics and an ASCII representation of the class > space will be > shown. The unpatched version will show large expanses of > unused small > chunks, the patched variant will show almost no waste. > > Note that the patch could be made more effective with a > different size > ratio between small and medium chunks: in class space, that > ratio is 1:16, > so 16 small chunks must happen to be free to form one larger > chunk. With a > smaller ratio the chance for coalescation would be larger. > So there may be > room for future improvement here: Since we now can merge and > split chunks > on demand, we could introduce more chunk sizes. Potentially > arriving at a > buddy-ish allocator style where we drop hard-wired chunk > sizes for a > dynamic model where the ratio between chunk sizes is always > 1:2 and we > could in theory have no limit to the chunk size? But this is > just a thought > and well out of the scope of this patch. > > -- > > What does this patch cost (memory): > > ? - the occupancy bitmap adds 1 byte per 4K metaspace. > ? - MetaChunk headers get larger, since we add an enum and > two bools to it. 
> Depending on what the c++ compiler does with that, chunk > headers grow by > one or two MetaWords, reducing the payload size by that amount. > - The new alignment rules mean we may need to create padding > chunks to > precede larger chunks. But since these padding chunks are > added to the > freelist, they should be used up before the need for new > padding chunks > arises. So, the maximally possible number of unused padding > chunks should > be limited by design to about 64K. > > The expectation is that the memory savings by this patch far > outweighs its > added memory costs. > > .. (performance): > > We did not see measurable drops in standard benchmarks > raising over the > normal noise. I also measured times for a program which > stresses metaspace > chunk coalescation, with the same result. > > I am open to suggestions what else I should measure, and/or > independent > measurements. > > -- > > Other details: > > I removed SpaceManager::get_small_chunk_and_allocate() to reduce > complexity somewhat, because it was made mostly obsolete by > this patch: > since small chunks are combined to larger chunks upon return > to the > freelist, in theory we should not have that many free small > chunks anymore > anyway. However, there may be still cases where we could > benefit from this > workaround, so I am asking your opinion on this one. > > About tests: There were two native tests - > ChunkManagerReturnTest and > TestVirtualSpaceNode (the former was added by me last year) > - which did not > make much sense anymore, since they relied heavily on > internal behavior > which was made unpredictable with this patch. > To make up for these lost tests,? I added a new gtest which > attempts to > stress the many combinations of allocation pattern but does > so from a layer > above the old tests. It now uses Metaspace::allocate() and > friends. By > using that point as entry for tests, I am less dependent on > implementation > internals and still cover a lot of scenarios. > > -- > > Review pointers: > > Good points to start are > - ChunkManager::return_single_chunk() - specifically, > ChunkManager::attempt_to_coalesce_around_chunk() - here we > merge chunks > upon return to the free list > - ChunkManager::free_chunks_get(): Here we now split large > chunks into > smaller chunks on demand > - VirtualSpaceNode::take_from_committed() : chunks are allocated > according to align rules now, padding chunks are handles > - The OccupancyMap class is the helper class implementing > the new > occupancy bitmap > > The rest is mostly chaff: helper functions, added tests and > verifications. > > -- > > Thanks and Best Regards, Thomas > > [1] https://bugs.openjdk.java.net/browse/JDK-8166690 > > [2] > http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November > > /000128.html > [3] https://bugs.openjdk.java.net/browse/JDK-8185034 > > [4] https://bugs.openjdk.java.net/browse/JDK-8176808 > > [5] > https://bugs.openjdk.java.net/secure/attachment/63532/test3.zip > > > > > From thomas.stuefe at gmail.com Thu Mar 8 17:18:07 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 18:18:07 +0100 Subject: RFR(L): 8198423: Improve metaspace chunk allocation (was: Proposal for improvements to the metaspace chunk allocator) In-Reply-To: References: <633ac65d-83af-68e0-ea84-d0e7da181871@oracle.com> Message-ID: Hey Eric, welcome to the party :) We were about to push the change (Coleen is running some final tests) but I am happy to wait a bit more, the more eyes the better. 
Further comments inline: On Thu, Mar 8, 2018 at 5:14 PM, Erik Helin wrote: > Alright, I know I am bit late to the party, but I have now refreshed my > knowledge of Metaspace enough to begin reviewing this. I will probably ask > a few questions to get a better understanding of your changes :) > > First of all, great work! The code is easy to read and the concepts are > clear, so thank you for that. > > Thanks! Still, code has gotten quite bloaty, and my change is not helping. I have some patches in the work to reduce complexity. An initial commend: > > 1592 // Chunks are born as in-use (see MetaChunk ctor). So, before > returning > 1593 // the padding chunk to its chunk manager, mark it as in use > (ChunkManager > 1594 // will assert that). > 1595 do_update_in_use_info_for_chunk(padding_chunk, true); > > This comment is slightly hard to read. I _think_ what you are meaning is > something like: > > // So, before returning the padding chunk to its chunk manger, > // mark it as in use in the the occupancy map. > > Is that correct? Correct. Yes, the comment is a bit convoluted. I meant: the chunk needs to appear to ChunkManager::return_chunk as a valid in-use chunks, because that it expects. > Besides updating the occupancy map, do_update_in_use_info_for_chunk will > also call MetaChunk::set_is_tagged_free, but that is not strictly needed > here, right? > > Yes, this is correct. > Thanks, > Erik > > On 02/28/2018 05:17 PM, Thomas St?fe wrote: > >> Hi Eric, no problem! >> >> Thanks, Thomas >> >> On Wed, Feb 28, 2018 at 4:28 PM, Erik Helin > > wrote: >> >> Hi Thomas, >> >> I will take a look at this, I just have been a bit busy lately >> (sorry for not responding earlier). >> >> Thanks, >> Erik >> >> >> On 02/26/2018 03:20 PM, Thomas St?fe wrote: >> >> Hi all, >> >> I know this patch is a bit larger, but may I please have reviews >> and/or >> other input? >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8198423 >> >> Latest version: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev/ >> > cation/2018-02-26/webrev/> >> >> For those who followed the mail thread, this is the incremental >> diff to the >> last changes (included feedback Goetz gave me on- and off-list): >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> ation/2018-02-26/webrev-incr/webrev/ >> > cation/2018-02-26/webrev-incr/webrev/> >> >> Thank you! >> >> Kind Regards, Thomas Stuefe >> >> >> >> On Thu, Feb 8, 2018 at 12:58 PM, Thomas St?fe >> > >> >> wrote: >> >> Hi, >> >> We would like to contribute a patch developed at SAP which >> has been live >> in our VM for some time. It improves the metaspace chunk >> allocation: >> reduces fragmentation and raises the chance of reusing free >> metaspace >> chunks. >> >> The patch: >> http://cr.openjdk.java.net/~stuefe/webrevs/metaspace-coalesc >> > > >> ation/2018-02-05--2/webrev/ >> >> In very short, this patch helps with a number of >> pathological cases where >> metaspace chunks are free but cannot be reused because they >> are of the >> wrong size. For example, the metaspace freelist could be >> full of small >> chunks, which would not be reusable if we need larger >> chunks. So, we could >> get metaspace OOMs even in situations where the metaspace >> was far from >> exhausted. Our patch adds the ability to split and merge >> metaspace chunks >> dynamically and thus remove the "size-lock-in" problem. >> >> Note that there have been other attempts to get a grip on >> this problem, >> see e.g. 
"SpaceManager::get_small_chunks_and_allocate()". >> But arguably >> our patch attempts a more complete solution. >> >> In 2016 I discussed the idea for this patch with some folks >> off-list, >> among them Jon Matsimutso. He then did advice me to create a >> JEP. So I did: >> [1]. However, meanwhile changes to the JEP process were >> discussed [2], and >> I am not sure anymore this patch needs even needs a JEP. It >> may be >> moderately complex and hence carries the risk inherent in >> any patch, but >> its effects would not be externally visible (if you discount >> seeing fewer >> metaspace OOMs). So, I'd prefer to handle this as a simple >> RFE. >> >> -- >> >> How this patch works: >> >> 1) When a class loader dies, its metaspace chunks are freed >> and returned >> to the freelist for reuse by the next class loader. With the >> patch, upon >> returning a chunk to the freelist, an attempt is made to >> merge it with its >> neighboring chunks - should they happen to be free too - to >> form a larger >> chunk. Which then is placed in the free list. >> >> As a result, the freelist should be populated by larger >> chunks at the >> expense of smaller chunks. In other words, all free chunks >> should always be >> as "coalesced as possible". >> >> 2) When a class loader needs a new chunk and a chunk of the >> requested size >> cannot be found in the free list, before carving out a new >> chunk from the >> virtual space, we first check if there is a larger chunk in >> the free list. >> If there is, that larger chunk is chopped up into n smaller >> chunks. One of >> them is returned to the callers, the others are re-added to >> the freelist. >> >> (1) and (2) together have the effect of removing the >> size-lock-in for >> chunks. If fragmentation allows it, small chunks are >> dynamically combined >> to form larger chunks, and larger chunks are split on demand. >> >> -- >> >> What this patch does not: >> >> This is not a rewrite of the chunk allocator - most of the >> mechanisms stay >> intact. Specifically, chunk sizes remain unchanged, and so >> do chunk >> allocation processes (when do which class loaders get handed >> which chunk >> size). Almost everthing this patch does affects only >> internal workings of >> the ChunkManager. >> >> Also note that I refrained from doing any cleanups, since I >> wanted >> reviewers to be able to gauge this patch without filtering >> noise. >> Unfortunately this patch adds some complexity. But there are >> many future >> opportunities for code cleanup and simplification, some of >> which we already >> discussed in existing RFEs ([3], [4]). All of them are out >> of the scope for >> this particular patch. >> >> -- >> >> Details: >> >> Before the patch, the following rules held: >> - All chunk sizes are multiples of the smallest chunk size >> ("specialized >> chunks") >> - All chunk sizes of larger chunks are also clean multiples >> of the next >> smaller chunk size (e.g. for class space, the ratio of >> specialized/small/medium chunks is 1:2:32) >> - All chunk start addresses are aligned to the smallest >> chunk size (more >> or less accidentally, see metaspace_reserve_alignment). >> The patch makes the last rule explicit and more strict: >> - All (non-humongous) chunk start addresses are now aligned >> to their own >> chunk size. So, e.g. medium chunks are allocated at >> addresses which are a >> multiple of medium chunk size. This rule is not extended to >> humongous >> chunks, whose start addresses continue to be aligned to the >> smallest chunk >> size. 
>> >> The reason for this new alignment rule is that it makes it >> cheap both to >> find chunk predecessors of a chunk and to check which chunks >> are free. >> >> When a class loader dies and its chunk is returned to the >> freelist, all we >> have is its address. In order to merge it with its neighbors >> to form a >> larger chunk, we need to find those neighbors, including >> those preceding >> the returned chunk. Prior to this patch that was not easy - >> one would have >> to iterate chunks starting at the beginning of the >> VirtualSpaceNode. But >> due to the new alignment rule, we now know where the >> prospective larger >> chunk must start - at the next lower >> larger-chunk-size-aligned boundary. We >> also know that currently a smaller chunk must start there (*). >> >> In order to check the free-ness of chunks quickly, each >> VirtualSpaceNode >> now keeps a bitmap which describes its occupancy. One bit in >> this bitmap >> corresponds to a range the size of the smallest chunk size >> and starting at >> an address aligned to the smallest chunk size. Because of >> the alignment >> rules above, such a range belongs to one single chunk. The >> bit is 1 if the >> associated chunk is in use by a class loader, 0 if it is free. >> >> When we have calculated the address range a prospective >> larger chunk would >> span, we now need to check if all chunks in that range are >> free. Only then >> we can merge them. We do that by querying the bitmap. Note >> that the most >> common use case here is forming medium chunks from smaller >> chunks. With the >> new alignment rules, the bitmap portion covering a medium >> chunk now always >> happens to be 16- or 32bit in size and is 16- or 32bit >> aligned, so reading >> the bitmap in many cases becomes a simple 16- or 32bit load. >> >> If the range is free, only then we need to iterate the >> chunks in that >> range: pull them from the freelist, combine them to one new >> larger chunk, >> re-add that one to the freelist. >> >> (*) Humongous chunks make this a bit more complicated. Since >> the new >> alignment rule does not extend to them, a humongous chunk >> could still >> straddle the lower or upper boundary of the prospective >> larger chunk. So I >> gave the occupancy map a second layer, which is used to mark >> the start of >> chunks. >> An alternative approach could have been to make humongous >> chunks size and >> start address always a multiple of the largest non-humongous >> chunk size >> (medium chunks). That would have caused a bit of waste per >> humongous chunk >> (<64K) in exchange for simpler coding and a simpler >> occupancy map. >> >> -- >> >> The patch shows its best results in scenarios where a lot of >> smallish >> class loaders are alive simultaneously. When dying, they >> leave continuous >> expanses of metaspace covered in small chunks, which can be >> merged nicely. >> However, if class loader life times vary more, we have more >> interleaving of >> dead and alive small chunks, and hence chunk merging does >> not work as well >> as it could. >> >> For an example of a pathological case like this see example >> program: [5] >> >> Executed like this: "java -XX:CompressedClassSpaceSize=10M >> -cp test3 >> test3.Example2" the test will load 3000 small classes in >> separate class >> loaders, then throw them away and start loading large >> classes. The small >> classes will have flooded the metaspace with small chunks, >> which are >> unusable for the large classes. 
When executing with the >> rather limited >> CompressedClassSpaceSize=10M, we will run into an OOM after >> loading about >> 800 large classes, having used only 40% of the class space, >> the rest is >> wasted to unused small chunks. However, with our patch the >> example program >> will manage to allocate ~2900 large classes before running >> into an OOM, and >> class space will show almost no waste. >> >> Do demonstrate this, add -Xlog:gc+metaspace+freelist. After >> running into >> an OOM, statistics and an ASCII representation of the class >> space will be >> shown. The unpatched version will show large expanses of >> unused small >> chunks, the patched variant will show almost no waste. >> >> Note that the patch could be made more effective with a >> different size >> ratio between small and medium chunks: in class space, that >> ratio is 1:16, >> so 16 small chunks must happen to be free to form one larger >> chunk. With a >> smaller ratio the chance for coalescation would be larger. >> So there may be >> room for future improvement here: Since we now can merge and >> split chunks >> on demand, we could introduce more chunk sizes. Potentially >> arriving at a >> buddy-ish allocator style where we drop hard-wired chunk >> sizes for a >> dynamic model where the ratio between chunk sizes is always >> 1:2 and we >> could in theory have no limit to the chunk size? But this is >> just a thought >> and well out of the scope of this patch. >> >> -- >> >> What does this patch cost (memory): >> >> - the occupancy bitmap adds 1 byte per 4K metaspace. >> - MetaChunk headers get larger, since we add an enum and >> two bools to it. >> Depending on what the c++ compiler does with that, chunk >> headers grow by >> one or two MetaWords, reducing the payload size by that >> amount. >> - The new alignment rules mean we may need to create padding >> chunks to >> precede larger chunks. But since these padding chunks are >> added to the >> freelist, they should be used up before the need for new >> padding chunks >> arises. So, the maximally possible number of unused padding >> chunks should >> be limited by design to about 64K. >> >> The expectation is that the memory savings by this patch far >> outweighs its >> added memory costs. >> >> .. (performance): >> >> We did not see measurable drops in standard benchmarks >> raising over the >> normal noise. I also measured times for a program which >> stresses metaspace >> chunk coalescation, with the same result. >> >> I am open to suggestions what else I should measure, and/or >> independent >> measurements. >> >> -- >> >> Other details: >> >> I removed SpaceManager::get_small_chunk_and_allocate() to >> reduce >> complexity somewhat, because it was made mostly obsolete by >> this patch: >> since small chunks are combined to larger chunks upon return >> to the >> freelist, in theory we should not have that many free small >> chunks anymore >> anyway. However, there may be still cases where we could >> benefit from this >> workaround, so I am asking your opinion on this one. >> >> About tests: There were two native tests - >> ChunkManagerReturnTest and >> TestVirtualSpaceNode (the former was added by me last year) >> - which did not >> make much sense anymore, since they relied heavily on >> internal behavior >> which was made unpredictable with this patch. >> To make up for these lost tests, I added a new gtest which >> attempts to >> stress the many combinations of allocation pattern but does >> so from a layer >> above the old tests. 
It now uses Metaspace::allocate() and >> friends. By >> using that point as entry for tests, I am less dependent on >> implementation >> internals and still cover a lot of scenarios. >> >> -- >> >> Review pointers: >> >> Good points to start are >> - ChunkManager::return_single_chunk() - specifically, >> ChunkManager::attempt_to_coalesce_around_chunk() - here we >> merge chunks >> upon return to the free list >> - ChunkManager::free_chunks_get(): Here we now split large >> chunks into >> smaller chunks on demand >> - VirtualSpaceNode::take_from_committed() : chunks are >> allocated >> according to align rules now, padding chunks are handles >> - The OccupancyMap class is the helper class implementing >> the new >> occupancy bitmap >> >> The rest is mostly chaff: helper functions, added tests and >> verifications. >> >> -- >> >> Thanks and Best Regards, Thomas >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8166690 >> >> [2] >> http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November >> > > >> /000128.html >> [3] https://bugs.openjdk.java.net/browse/JDK-8185034 >> >> [4] https://bugs.openjdk.java.net/browse/JDK-8176808 >> >> [5] >> https://bugs.openjdk.java.net/secure/attachment/63532/test3. >> zip >> > .zip> >> >> >> >> >> From edward.nevill at gmail.com Thu Mar 8 17:24:06 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Thu, 08 Mar 2018 17:24:06 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <5A9FEBD7.7040606@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> Message-ID: <1520529846.1085.9.camel@gmail.com> Hi, BugID: https://bugs.openjdk.java.net/browse/JDK-8199220 Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.02 I have updated the webrev to include the following changes - Changed the description in the hg log to 8199220: Zero build broken after 8195103 and 8191102 - Remove the guarantee in on_slowpath_allocation_exit +#else + guarantee(false, "How did we get here?"); +#endif This means on_slowpath_allocation_exit becomes a no op in C1/Zero I also rebased the webrev on jdk/hs rathat than jdk/jdk as this is a hotspot patch. While doing so I discovered more brokenness in jdk/hs caused by 8191102: Incorrect include file use in classLoader.hpp Summary: Move appropriate methods to .inline.hpp files. Create .inline.hpp files when needed. This moved methods to .inline.hpp files but failed to update the zero specific files. I have updated this webrev with the approriate includes. Could I have a sponsor for this please, Thanks, Ed. From glaubitz at physik.fu-berlin.de Thu Mar 8 17:48:54 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Thu, 8 Mar 2018 18:48:54 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520529846.1085.9.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> Message-ID: <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> On 03/08/2018 06:24 PM, Edward Nevill wrote: > Could I have a sponsor for this please, I can sponsor this for you if no one else steps up :-). I will have to review and also would want to test it on x86 and SPARC first. Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From coleen.phillimore at oracle.com Thu Mar 8 18:26:06 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Mar 2018 13:26:06 -0500 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code Message-ID: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes in the runtime code.? I decided to split this into 3 parts to divide the clicking.? See the bug for discussion of why we would like to remove this null class on most platforms in favor of the link time check to disallow Hotspot code from calling global operators new and delete (bug https://bugs.openjdk.java.net/browse/JDK-8198243) Tested with mach5 nightly tests with the full set of changes and mach5 tier1-2 with this set. open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8173070 I'll update the copyrights with hg commit. Thanks, Coleen From edward.nevill at gmail.com Thu Mar 8 18:32:35 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Thu, 08 Mar 2018 18:32:35 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> Message-ID: <1520533955.1085.11.camel@gmail.com> On Thu, 2018-03-08 at 18:48 +0100, John Paul Adrian Glaubitz wrote: > On 03/08/2018 06:24 PM, Edward Nevill wrote: > > Could I have a sponsor for this please, > > I can sponsor this for you if no one else steps up :-). > > I will have to review and also would want to test it on x86 > and SPARC first. > Thanks Adrian, Ed. From stefan.karlsson at oracle.com Thu Mar 8 18:47:23 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 8 Mar 2018 19:47:23 +0100 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> Message-ID: <56956b11-6566-2db3-b082-76fc45417879@oracle.com> Looks good. The comment in allocation.hpp is a bit weird, but Coleen has promised that it goes away in subsequent patches. StefanK On 2018-03-08 19:26, coleen.phillimore at oracle.com wrote: > > This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes in > the runtime code.? I decided to split this into 3 parts to divide the > clicking.? See the bug for discussion of why we would like to remove > this null class on most platforms in favor of the link time check to > disallow Hotspot code from calling global operators new and delete > (bug https://bugs.openjdk.java.net/browse/JDK-8198243) > > Tested with mach5 nightly tests with the full set of changes and mach5 > tier1-2 with this set. > > open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8173070 > > I'll update the copyrights with hg commit. 
> > Thanks, > Coleen From thomas.stuefe at gmail.com Thu Mar 8 19:10:32 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 20:10:32 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520533955.1085.11.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> <1520533955.1085.11.camel@gmail.com> Message-ID: Tried to test but get build errors: /shared/projects/openjdk/jdk-hs/source/src/hotspot/share/interpreter/bytecodeInterpreter.cpp: In static member function ?static void BytecodeInterpreter::runWithChecks(interpreterState)?: /shared/projects/openjdk/jdk-hs/source/src/hotspot/share/utilities/debug.hpp:44:10: error: conversion from ?oop? to ?bool? is ambiguous Is that just me? My configure line: CONFIGURE_COMMAND_LINE:=-with-boot-jdk=/shared/projects/openjdk/jdks/openjdk9 --with-debug-level=fastdebug --with-jvm-variants=zero --with-native-debug-symbols=internal --with-build-jdk=../output-release/images/jdk ..Thomas On Thu, Mar 8, 2018 at 7:32 PM, Edward Nevill wrote: > On Thu, 2018-03-08 at 18:48 +0100, John Paul Adrian Glaubitz wrote: > > On 03/08/2018 06:24 PM, Edward Nevill wrote: > > > Could I have a sponsor for this please, > > > > I can sponsor this for you if no one else steps up :-). > > > > I will have to review and also would want to test it on x86 > > and SPARC first. > > > > Thanks Adrian, > Ed. > > From coleen.phillimore at oracle.com Thu Mar 8 19:11:20 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Mar 2018 14:11:20 -0500 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: <56956b11-6566-2db3-b082-76fc45417879@oracle.com> References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> <56956b11-6566-2db3-b082-76fc45417879@oracle.com> Message-ID: On 3/8/18 1:47 PM, Stefan Karlsson wrote: > Looks good. The comment in allocation.hpp is a bit weird, but Coleen > has promised that it goes away in subsequent patches. Yes, the last version will be the gc version and that has ValueObj and the comment about ValueObj removed. Maybe I should have sent the whole thing out, but I wanted to spare people clicking pain (and webrev takes hours for the whole thing). thanks! Coleen > > StefanK > > On 2018-03-08 19:26, coleen.phillimore at oracle.com wrote: >> >> This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes in >> the runtime code.? I decided to split this into 3 parts to divide the >> clicking.? See the bug for discussion of why we would like to remove >> this null class on most platforms in favor of the link time check to >> disallow Hotspot code from calling global operators new and delete >> (bug https://bugs.openjdk.java.net/browse/JDK-8198243) >> >> Tested with mach5 nightly tests with the full set of changes and >> mach5 tier1-2 with this set. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8173070 >> >> I'll update the copyrights with hg commit. 
>> >> Thanks, >> Coleen > > From thomas.schatzl at oracle.com Thu Mar 8 19:21:21 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 08 Mar 2018 20:21:21 +0100 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: <56956b11-6566-2db3-b082-76fc45417879@oracle.com> References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> <56956b11-6566-2db3-b082-76fc45417879@oracle.com> Message-ID: <1520536881.2375.3.camel@oracle.com> Hi Coleen, On Thu, 2018-03-08 at 19:47 +0100, Stefan Karlsson wrote: > Looks good. The comment in allocation.hpp is a bit weird, but Coleen > has promised that it goes away in subsequent patches. there is also a comment referencing "ValueObj" in instanceKlass.hpp. I think just removing "ValueObjs embedded in klass." would be fine. I do not need to re-review for that change. Thanks a ton btw - these VALUE_OBJ_CLASS_SPEC decorators also mess up Oracle Studio code parsing (a bug) :) Thanks, Thomas > > StefanK > > On 2018-03-08 19:26, coleen.phillimore at oracle.com wrote: > > > > This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes > > in > > the runtime code. I decided to split this into 3 parts to divide > > the > > clicking. See the bug for discussion of why we would like to > > remove > > this null class on most platforms in favor of the link time check > > to > > disallow Hotspot code from calling global operators new and delete > > (bug https://bugs.openjdk.java.net/browse/JDK-8198243) > > > > Tested with mach5 nightly tests with the full set of changes and > > mach5 > > tier1-2 with this set. > > > > open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webre > > v > > bug link https://bugs.openjdk.java.net/browse/JDK-8173070 > > > > I'll update the copyrights with hg commit. > > > > Thanks, > > Coleen > > From coleen.phillimore at oracle.com Thu Mar 8 19:26:46 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Mar 2018 14:26:46 -0500 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: <1520536881.2375.3.camel@oracle.com> References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> <56956b11-6566-2db3-b082-76fc45417879@oracle.com> <1520536881.2375.3.camel@oracle.com> Message-ID: On 3/8/18 2:21 PM, Thomas Schatzl wrote: > Hi Coleen, > > On Thu, 2018-03-08 at 19:47 +0100, Stefan Karlsson wrote: >> Looks good. The comment in allocation.hpp is a bit weird, but Coleen >> has promised that it goes away in subsequent patches. > there is also a comment referencing "ValueObj" in instanceKlass.hpp. > I think just removing "ValueObjs embedded in klass." would be fine. > > I do not need to re-review for that change. Okay, I have fixed that comment.? Thanks for pointing that out! > > Thanks a ton btw - these VALUE_OBJ_CLASS_SPEC decorators also mess up > Oracle Studio code parsing (a bug) :) I'm glad you're happy with this. Coleen > > Thanks, > Thomas > >> StefanK >> >> On 2018-03-08 19:26, coleen.phillimore at oracle.com wrote: >>> This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes >>> in >>> the runtime code. I decided to split this into 3 parts to divide >>> the >>> clicking. 
See the bug for discussion of why we would like to >>> remove >>> this null class on most platforms in favor of the link time check >>> to >>> disallow Hotspot code from calling global operators new and delete >>> (bug https://bugs.openjdk.java.net/browse/JDK-8198243) >>> >>> Tested with mach5 nightly tests with the full set of changes and >>> mach5 >>> tier1-2 with this set. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webre >>> v >>> bug link https://bugs.openjdk.java.net/browse/JDK-8173070 >>> >>> I'll update the copyrights with hg commit. >>> >>> Thanks, >>> Coleen >> From thomas.stuefe at gmail.com Thu Mar 8 20:30:55 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 21:30:55 +0100 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> Message-ID: Hi Coleen, On Thu, Mar 8, 2018 at 7:26 PM, wrote: > > This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes in the > runtime code. I decided to split this into 3 parts to divide the > clicking. See the bug for discussion of why we would like to remove this > null class on most platforms in favor of the link time check to disallow > Hotspot code from calling global operators new and delete (bug > https://bugs.openjdk.java.net/browse/JDK-8198243) > > Tested with mach5 nightly tests with the full set of changes and mach5 > tier1-2 with this set. > > open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8173070 > > I'll update the copyrights with hg commit. > > Thanks, > Coleen > Looks mostly okay. Thank you for doing this, this will make C++ parsing in CDT easier. allocation.hpp: the comment around _ValueObj now reads strange. Also, just curious, do we still need _ValueObj? Kind Regards, Thomas From thomas.stuefe at gmail.com Thu Mar 8 20:31:58 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 21:31:58 +0100 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> Message-ID: On Thu, Mar 8, 2018 at 9:30 PM, Thomas St?fe wrote: > Hi Coleen, > > On Thu, Mar 8, 2018 at 7:26 PM, wrote: > >> >> This change removes VALUE_OBJ_CLASS_SPEC as subclass for classes in the >> runtime code. I decided to split this into 3 parts to divide the >> clicking. See the bug for discussion of why we would like to remove this >> null class on most platforms in favor of the link time check to disallow >> Hotspot code from calling global operators new and delete (bug >> https://bugs.openjdk.java.net/browse/JDK-8198243) >> >> Tested with mach5 nightly tests with the full set of changes and mach5 >> tier1-2 with this set. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8173070.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8173070 >> >> I'll update the copyrights with hg commit. >> >> Thanks, >> Coleen >> > > Looks mostly okay. Thank you for doing this, this will make C++ parsing in > CDT easier. > > allocation.hpp: the comment around _ValueObj now reads strange. Also, just > curious, do we still need _ValueObj? > > Oh, just read that others already commented on the comment. So, never mind... 
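A generic illustration of what a base class like this is for (made-up names; not the actual _ValueObj / allocation.hpp definitions): declaring class-level allocation operators that must never be called keeps instances of such classes off the C heap, which is roughly the rule the proposed link-time check against the global operators new and delete would enforce across the whole code base.

#include <cstddef>
#include <cstdlib>

// Illustrative only; not the real allocation.hpp code.
class EmbeddedOnly {
 public:
  void* operator new(size_t)  { abort(); return NULL; }  // must never be called
  void  operator delete(void*) { abort(); }               // must never be called
};

class ExampleValue : public EmbeddedOnly {
 private:
  int _x;
 public:
  ExampleValue() : _x(0) {}
};

void use_example() {
  ExampleValue on_stack;   // fine: automatic storage, or embedded in another object
  (void)on_stack;
  // ExampleValue* p = new ExampleValue();  // compiles, but aborts at run time;
  //                                        // a link-time ban on the global
  //                                        // operators catches misuse earlier
}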
> Kind Regards, Thomas > > From thomas.stuefe at gmail.com Thu Mar 8 20:43:22 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 8 Mar 2018 21:43:22 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: On Thu, Mar 8, 2018 at 4:19 PM, Vladimir Kozlov wrote: > On 3/8/18 1:22 AM, Thomas St?fe wrote: > >> Hi Stefan, >> >> thanks, this is a good cleanup. >> >> Sometimes I wish there were a method to automatically strip code from >> unnecessary includes. >> > > I also wish for that. Can IDE do it for you? > > Thanks, > Vladimir > > Not that I know, no. When I am feeling really patient I do it the hard way by removing all includes and re-adding them one by one. Same with friend definitions, btw. But its annoying work. ..Thomas > > >> Thanks, Thomas >> >> >> >> On Wed, Mar 7, 2018 at 11:33 PM, Stefan Karlsson < >> stefan.karlsson at oracle.com >> >>> wrote: >>> >> >> Hi all, >>> >>> Please review this small patch to fix some includes of >>> allocation.inline.hpp. >>> >>> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8199275 >>> >>> The changes are quite simple: >>> >>> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to >>> .cpp files, since they used functions from allocation.inline.hpp. >>> >>> 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp >>> files that used functions from allocation.inline.hpp >>> >>> The patch contains a few number added includes need after this >>> restructuring. >>> >>> Thanks, >>> StefanK >>> >>> From coleen.phillimore at oracle.com Thu Mar 8 20:49:42 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Mar 2018 15:49:42 -0500 Subject: RFR (tedious) 8173070: Remove ValueObj class for allocation subclassing for runtime code In-Reply-To: References: <2bbd91d0-136f-9484-9e17-aea7c69c8a91@oracle.com> Message-ID: <3417da2d-416d-7cba-988c-b5fbbcd369c1@oracle.com> On 3/8/18 3:31 PM, Thomas St?fe wrote: > > > On Thu, Mar 8, 2018 at 9:30 PM, Thomas St?fe > wrote: > > Hi Coleen, > > On Thu, Mar 8, 2018 at 7:26 PM, > wrote: > > > This change removes VALUE_OBJ_CLASS_SPEC as subclass for > classes in the runtime code.? I decided to split this into 3 > parts to divide the clicking.? See the bug for discussion of > why we would like to remove this null class on most platforms > in favor of the link time check to disallow Hotspot code from > calling global operators new and delete (bug > https://bugs.openjdk.java.net/browse/JDK-8198243 > ) > > Tested with mach5 nightly tests with the full set of changes > and mach5 tier1-2 with this set. > > open webrev at > http://cr.openjdk.java.net/~coleenp/8173070.01/webrev > > bug link https://bugs.openjdk.java.net/browse/JDK-8173070 > > > I'll update the copyrights with hg commit. > > Thanks, > Coleen > > > Looks mostly okay. Thank you for doing this, this will make C++ > parsing in CDT easier. > > allocation.hpp: the comment around _ValueObj now reads strange. > Also, just curious, do we still need _ValueObj? > > > Oh, just read that others already commented on the comment. So, never > mind... Thanks for the review.? Yes, I will remove ValueObj in the next (or maybe third) go around. 
Coleen > Kind Regards, Thomas > > From erik.joelsson at oracle.com Thu Mar 8 22:08:12 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 8 Mar 2018 14:08:12 -0800 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages Message-ID: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> The Jib artifact resolver is not very good at telling us why things go wrong. The reason is that it swallows exceptions. This patch changes the API from throwing a FileNotFoundException, which I don't really think fits correctly in all cases, to a new API specific exception. I have greped for all uses of this API in the tests and changed the exception type caught at the caller location. I verified that I didn't break anything by compiling all the affected test classes and by running some of them for a bit. With these changes it should be easier to diagnose problems with resolving artifacts in the future. Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html /Erik From george.triantafillou at oracle.com Thu Mar 8 23:15:30 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Thu, 8 Mar 2018 18:15:30 -0500 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages In-Reply-To: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> References: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> Message-ID: <3e1bfff3-9b5c-3ab2-c91d-53eb3a7a1b9f@oracle.com> Erik, Looks good. -George On 3/8/2018 5:08 PM, Erik Joelsson wrote: > The Jib artifact resolver is not very good at telling us why things go > wrong. The reason is that it swallows exceptions. This patch changes > the API from throwing a FileNotFoundException, which I don't really > think fits correctly in all cases, to a new API specific exception. > > I have greped for all uses of this API in the tests and changed the > exception type caught at the caller location. I verified that I didn't > break anything by compiling all the affected test classes and by > running some of them for a bit. > > With these changes it should be easier to diagnose problems with > resolving artifacts in the future. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 > > Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html > > /Erik > From igor.ignatyev at oracle.com Thu Mar 8 23:24:18 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 8 Mar 2018 15:24:18 -0800 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages In-Reply-To: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> References: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> Message-ID: Hi Erik, to avoid incompatibility, you could have just made ArtifactResolverException a subclass of java.io.FileNotFoundException. it seems you forgot to add ArtifactResolverException.java file to the repo. a minor nit: in JibArtifactManager::newInstance, you pass "Could not resolve " + JIB_SERVICE_FACTORY to ClassNotFoundException constructor. by the convention, the message in CNFE is the classname. -- Igor > On Mar 8, 2018, at 2:08 PM, Erik Joelsson wrote: > > The Jib artifact resolver is not very good at telling us why things go wrong. The reason is that it swallows exceptions. This patch changes the API from throwing a FileNotFoundException, which I don't really think fits correctly in all cases, to a new API specific exception. 
> > I have greped for all uses of this API in the tests and changed the exception type caught at the caller location. I verified that I didn't break anything by compiling all the affected test classes and by running some of them for a bit. > > With these changes it should be easier to diagnose problems with resolving artifacts in the future. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 > > Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html > > /Erik > From coleen.phillimore at oracle.com Thu Mar 8 23:28:31 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 8 Mar 2018 18:28:31 -0500 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520529846.1085.9.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> Message-ID: <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> http://cr.openjdk.java.net/~enevill/8199220/webrev.02/src/hotspot/share/gc/shared/cardTableModRefBS.cpp.udiff.html Can you add a comment on the #endif // COMPILER2 || JVMCI something like that? thanks, Coleen On 3/8/18 12:24 PM, Edward Nevill wrote: > Hi, > > BugID: https://bugs.openjdk.java.net/browse/JDK-8199220 > Webrev: http://cr.openjdk.java.net/~enevill/8199220/webrev.02 > > I have updated the webrev to include the following changes > > - Changed the description in the hg log to > > 8199220: Zero build broken after 8195103 and 8191102 > > - Remove the guarantee in on_slowpath_allocation_exit > > +#else > + guarantee(false, "How did we get here?"); > +#endif > > This means on_slowpath_allocation_exit becomes a no op in C1/Zero > > I also rebased the webrev on jdk/hs rathat than jdk/jdk as this is a hotspot patch. > > While doing so I discovered more brokenness in jdk/hs caused by > > 8191102: Incorrect include file use in classLoader.hpp > Summary: Move appropriate methods to .inline.hpp files. Create .inline.hpp files when needed. > > This moved methods to .inline.hpp files but failed to update the zero specific files. > > I have updated this webrev with the approriate includes. > > Could I have a sponsor for this please, > > Thanks, > Ed. > From david.holmes at oracle.com Thu Mar 8 23:32:54 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 9 Mar 2018 09:32:54 +1000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> Message-ID: <01013dbd-84b2-0b3e-a5cf-9e62e15e59eb@oracle.com> Hi Adrian, On 9/03/2018 3:48 AM, John Paul Adrian Glaubitz wrote: > On 03/08/2018 06:24 PM, Edward Nevill wrote: >> Could I have a sponsor for this please, > > I can sponsor this for you if no one else steps up :-). Ed's using "sponsor" in the hotspot-specific way meaning "someone from Oracle" - so that we can put shared code changes through our builds/tests prior to pushing. :) I'm putting this through our (non-zero) builds/tests now. Not that I expect any issues. From what Thomas wrote the zero part may still need some work. Thanks, David > I will have to review and also would want to test it on x86 > and SPARC first. 
> > Thanks, > Adrian > From erik.joelsson at oracle.com Fri Mar 9 00:06:47 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 8 Mar 2018 16:06:47 -0800 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages In-Reply-To: References: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> Message-ID: <5c0f92b5-4f0d-9910-2495-cc5ec6d23b21@oracle.com> On 2018-03-08 15:24, Igor Ignatyev wrote: > Hi Erik, Thanks for looking at this! > to avoid incompatibility, you could have just made ArtifactResolverException a subclass of java.io.FileNotFoundException. This is correct, but I don't think the exception should be of type FileNotFoundException. IMO, this is something different and trying to re-purpose an existing exception type is rarely a good idea. > it seems you forgot to add ArtifactResolverException.java file to the repo. Doh! Added. > a minor nit: in JibArtifactManager::newInstance, you pass "Could not resolve " + JIB_SERVICE_FACTORY to ClassNotFoundException constructor. by the convention, the message in CNFE is the classname. Right, good point, fixed. New Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.02/ /Erik > -- Igor > >> On Mar 8, 2018, at 2:08 PM, Erik Joelsson wrote: >> >> The Jib artifact resolver is not very good at telling us why things go wrong. The reason is that it swallows exceptions. This patch changes the API from throwing a FileNotFoundException, which I don't really think fits correctly in all cases, to a new API specific exception. >> >> I have greped for all uses of this API in the tests and changed the exception type caught at the caller location. I verified that I didn't break anything by compiling all the affected test classes and by running some of them for a bit. >> >> With these changes it should be easier to diagnose problems with resolving artifacts in the future. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 >> >> Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html >> >> /Erik >> From igor.ignatyev at oracle.com Fri Mar 9 00:12:08 2018 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 8 Mar 2018 16:12:08 -0800 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages In-Reply-To: <5c0f92b5-4f0d-9910-2495-cc5ec6d23b21@oracle.com> References: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> <5c0f92b5-4f0d-9910-2495-cc5ec6d23b21@oracle.com> Message-ID: Looks good to me. -- Igor > On Mar 8, 2018, at 4:06 PM, Erik Joelsson wrote: > > On 2018-03-08 15:24, Igor Ignatyev wrote: >> Hi Erik, > Thanks for looking at this! >> to avoid incompatibility, you could have just made ArtifactResolverException a subclass of java.io.FileNotFoundException. > This is correct, but I don't think the exception should be of type FileNotFoundException. IMO, this is something different and trying to re-purpose an existing exception type is rarely a good idea. >> it seems you forgot to add ArtifactResolverException.java file to the repo. > Doh! Added. >> a minor nit: in JibArtifactManager::newInstance, you pass "Could not resolve " + JIB_SERVICE_FACTORY to ClassNotFoundException constructor. by the convention, the message in CNFE is the classname. > Right, good point, fixed. > > New Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.02/ > > /Erik >> -- Igor >> >>> On Mar 8, 2018, at 2:08 PM, Erik Joelsson wrote: >>> >>> The Jib artifact resolver is not very good at telling us why things go wrong. 
The reason is that it swallows exceptions. This patch changes the API from throwing a FileNotFoundException, which I don't really think fits correctly in all cases, to a new API specific exception. >>> >>> I have greped for all uses of this API in the tests and changed the exception type caught at the caller location. I verified that I didn't break anything by compiling all the affected test classes and by running some of them for a bit. >>> >>> With these changes it should be easier to diagnose problems with resolving artifacts in the future. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 >>> >>> Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html >>> >>> /Erik >>> > From paul.sandoz at oracle.com Fri Mar 9 01:45:19 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Thu, 8 Mar 2018 17:45:19 -0800 Subject: 8199342 The constant pool forgets it has a Dynamic entry if there are overpass methods Message-ID: <146AEF5C-35A9-47D4-A2DA-14AB058A2D31@oracle.com> Hi, Please review the following patch: http://cr.openjdk.java.net/~psandoz/jdk/JDK-8199342-constant-dynamic-and-overpass-methods/webrev/ This fixes a crash due to an assert with debug builds. On class initialization we set a flag if the constant pool contains a Dynamic entry. If the class file is an interface and there are overpass methods then parts the constant pool gets re-written by copying the old pool to a new pool, but that process does not copy over the flag. Thanks, Paul. From erik.helin at oracle.com Fri Mar 9 05:57:43 2018 From: erik.helin at oracle.com (Erik Helin) Date: Fri, 9 Mar 2018 06:57:43 +0100 Subject: RFR: 8199275: Fix inclusions of allocation.inline.hpp In-Reply-To: References: <625a5e48-01be-d699-e6f4-41b84cffbbd6@oracle.com> Message-ID: <65934074-d775-2a05-2857-190a859df26b@oracle.com> On 03/08/2018 09:43 PM, Thomas St?fe wrote: > On Thu, Mar 8, 2018 at 4:19 PM, Vladimir Kozlov > wrote: > >> On 3/8/18 1:22 AM, Thomas St?fe wrote: >> >>> Hi Stefan, >>> >>> thanks, this is a good cleanup. >>> >>> Sometimes I wish there were a method to automatically strip code from >>> unnecessary includes. >>> >> >> I also wish for that. Can IDE do it for you? >> >> Thanks, >> Vladimir >> >> > Not that I know, no. But I know of one :) https://include-what-you-use.org/ iwyu was developed in-house at Google and then open sourced. I have never tried it with the HotSpot source code though, but it seems pretty easy to get started. Thanks, Erik > When I am feeling really patient I do it the hard way by removing all > includes and re-adding them one by one. > > Same with friend definitions, btw. > > But its annoying work. > > ..Thomas > > >> >> >>> Thanks, Thomas >>> >>> >>> >>> On Wed, Mar 7, 2018 at 11:33 PM, Stefan Karlsson < >>> stefan.karlsson at oracle.com >>> >>>> wrote: >>>> >>> >>> Hi all, >>>> >>>> Please review this small patch to fix some includes of >>>> allocation.inline.hpp. >>>> >>>> http://cr.openjdk.java.net/~stefank/8199275/webrev.01/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199275 >>>> >>>> The changes are quite simple: >>>> >>>> 1) SymbolHashMap::~SymbolHashMap and CDSOffsets::CDSOffsets were moved to >>>> .cpp files, since they used functions from allocation.inline.hpp. >>>> >>>> 2) includes of allocation.inline.hpp were added to .cpp and .inline.hpp >>>> files that used functions from allocation.inline.hpp >>>> >>>> The patch contains a few number added includes need after this >>>> restructuring. 
>>>> >>>> Thanks, >>>> StefanK >>>> >>>> From per.liden at oracle.com Fri Mar 9 06:32:28 2018 From: per.liden at oracle.com (Per Liden) Date: Fri, 9 Mar 2018 07:32:28 +0100 Subject: RFR: 8199328: Fix unsafe field accesses in heap dumper In-Reply-To: <5AA1599D.2080006@oracle.com> References: <5AA1599D.2080006@oracle.com> Message-ID: <0c97661f-ab63-5710-0d93-71843ebb2f5d@oracle.com> Thanks Thomas, Aleksey and Erik for reviewing. /Per On 2018-03-08 16:41, Erik ?sterlund wrote: > Hi Per, > > Looks good. > > Thanks, > /Erik > > On 2018-03-08 15:01, Per Liden wrote: >> The heap dumper, more specifically the >> DumperSupport::dump_field_value() function, is doing unsafe raw loads >> of fields in heap objects. Those loads should go thru the access API. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8199328 >> Webrev: http://cr.openjdk.java.net/~pliden/8199328/webrev.0/ >> >> Testing: manual dumping using jcmd GC.heap_dump, awaiting hs-tier1-3 >> results >> >> /Per > From glaubitz at physik.fu-berlin.de Fri Mar 9 07:03:14 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Mar 2018 08:03:14 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <01013dbd-84b2-0b3e-a5cf-9e62e15e59eb@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> <01013dbd-84b2-0b3e-a5cf-9e62e15e59eb@oracle.com> Message-ID: <2b8cd30e-ca14-be7e-e646-cd970ac7d514@physik.fu-berlin.de> On 03/09/2018 12:32 AM, David Holmes wrote: > Ed's using "sponsor" in the hotspot-specific way meaning "someone from Oracle" - > so that we can put shared code changes through our builds/tests prior to > pushing. :) Ok. I think it should be mentioned then because otherwise I could have pushed it as well. For me, "sponsor" just refers to a person with commit access, at least that's how we define that term in Debian. > I'm putting this through our (non-zero) builds/tests now. Not that I expect any issues. > > From what Thomas wrote the zero part may still need some work. Btw, during his presentation at FOSDEM, I think Mark talked about CI test machines that can be reached by non-Oracle OpenJDK members. Is that infrastructure already in place? And would that have been sufficient here in order for a non-Oracle sponsor to test and push the changes? Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From david.holmes at oracle.com Fri Mar 9 07:12:44 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 9 Mar 2018 17:12:44 +1000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <2b8cd30e-ca14-be7e-e646-cd970ac7d514@physik.fu-berlin.de> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> <01013dbd-84b2-0b3e-a5cf-9e62e15e59eb@oracle.com> <2b8cd30e-ca14-be7e-e646-cd970ac7d514@physik.fu-berlin.de> Message-ID: <0b25009b-28d3-1f40-0607-34a6181247dd@oracle.com> On 9/03/2018 5:03 PM, John Paul Adrian Glaubitz wrote: > On 03/09/2018 12:32 AM, David Holmes wrote: >> Ed's using "sponsor" in the hotspot-specific way meaning "someone from Oracle" - >> so that we can put shared code changes through our builds/tests prior to >> pushing. :) > > Ok. I think it should be mentioned then because otherwise I could have pushed > it as well. For me, "sponsor" just refers to a person with commit access, at > least that's how we define that term in Debian. Sure, that's why I clarified. >> I'm putting this through our (non-zero) builds/tests now. Not that I expect any issues. It all passed fine btw. >> >> From what Thomas wrote the zero part may still need some work. > > Btw, during his presentation at FOSDEM, I think Mark talked about CI test > machines that can be reached by non-Oracle OpenJDK members. Is that infrastructure > already in place? And would that have been sufficient here in order for > a non-Oracle sponsor to test and push the changes? Yes and No. Yes it's in place and is called the "submit" repo [1]. But No it wouldn't be sufficient here because the submit repo is based on jdk/jdk not jdk/hs, so passing there is no guarantee of passing on current jdk/hs. Hence we're still using Oracle-hotspot-sponsors to guide changes to shared code into the hs repo. Thanks, David [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html > Thanks, > Adrian > From edward.nevill at gmail.com Fri Mar 9 07:51:27 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Fri, 09 Mar 2018 07:51:27 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> <1520533955.1085.11.camel@gmail.com> Message-ID: <1520581887.2395.4.camel@gmail.com> On Thu, 2018-03-08 at 20:10 +0100, Thomas St?fe wrote: > Tried to test but get build errors: > > /shared/projects/openjdk/jdk-hs/source/src/hotspot/share/interpreter/bytecodeInterpreter.cpp: In static member function ?static void BytecodeInterpreter::runWithChecks(interpreterState)?: > /shared/projects/openjdk/jdk-hs/source/src/hotspot/share/utilities/debug.hpp:44:10: error: conversion from ?oop? to ?bool? is ambiguous > Pants. The debug Zero build is also broken. The above is trivial to fix but there is more brokenness afterwards. I won't get a chance to look at this again until after the weekend. Could we proceed with the patch for the release build. and fix the debug build later. Or perhaps someone else might like to fix the debug build and update my patch. 
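For reference, the "conversion from 'oop' to 'bool' is ambiguous" error quoted above is what you get when an oop is used directly in a boolean context in a debug configuration where oop is a wrapper class rather than a bare pointer: the class then has more than one conversion operator the compiler could use to reach bool. The snippet below is a stand-alone sketch of that shape and of the usual fix, an explicit NULL comparison; it is not HotSpot's real oop class.

#include <cstddef>

struct oopDesc {};

// Invented stand-in for a debug-build oop wrapper.
class oop_like {
  oopDesc* _o;
 public:
  oop_like(oopDesc* o = NULL) : _o(o) {}
  operator oopDesc*() const { return _o; }  // two user-defined conversions,
  operator void*()    const { return _o; }  // both reachable from "if (x)"
  bool operator!=(const void* p) const { return _o != p; }
};

int main() {
  oop_like obj;
  // if (obj) { }            // rejected: ambiguous conversion to bool
  if (obj != NULL) { }       // explicit comparison compiles cleanly, which is
                             // the usual fix in shared code
  return 0;
}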
BTW: The reason for the sudden interest in Zero is that I am looking at porting OpenJDK to riscv and I have an initial patch which builds OpenJDK for riscv. But that is dependant on Zero building to start with. It does seem that Zero has been lacking some TLC. All the best, Ed. From edward.nevill at gmail.com Fri Mar 9 07:53:39 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Fri, 09 Mar 2018 07:53:39 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> Message-ID: <1520582019.2395.6.camel@gmail.com> On Thu, 2018-03-08 at 18:28 -0500, coleen.phillimore at oracle.com wrote: > http://cr.openjdk.java.net/~enevill/8199220/webrev.02/src/hotspot/share/gc/shared/cardTableModRefBS.cpp.udiff.html > > Can you add a comment on the #endif // COMPILER2 || JVMCI > > something like that? > thanks, > Coleen > Sure. Would you like me to generate a new webrev. Or could David just add that to the commit? Thanks for your help, Ed. From magnus.ihse.bursie at oracle.com Fri Mar 9 07:53:24 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 9 Mar 2018 08:53:24 +0100 Subject: RFR: JDK-8199352: The Jib artifact resolver in test lib needs to print better error messages In-Reply-To: <5c0f92b5-4f0d-9910-2495-cc5ec6d23b21@oracle.com> References: <12e462a0-f0c7-c3eb-3e8f-8959d2fc355a@oracle.com> <5c0f92b5-4f0d-9910-2495-cc5ec6d23b21@oracle.com> Message-ID: > 9 mars 2018 kl. 01:06 skrev Erik Joelsson : > >> On 2018-03-08 15:24, Igor Ignatyev wrote: >> Hi Erik, > Thanks for looking at this! >> to avoid incompatibility, you could have just made ArtifactResolverException a subclass of java.io.FileNotFoundException. > This is correct, but I don't think the exception should be of type FileNotFoundException. IMO, this is something different and trying to re-purpose an existing exception type is rarely a good idea. I agree. >> it seems you forgot to add ArtifactResolverException.java file to the repo. > Doh! Added. >> a minor nit: in JibArtifactManager::newInstance, you pass "Could not resolve " + JIB_SERVICE_FACTORY to ClassNotFoundException constructor. by the convention, the message in CNFE is the classname. > Right, good point, fixed. > > New Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.02/ Looks good to me. /Magnus > > /Erik >> -- Igor >> >>> On Mar 8, 2018, at 2:08 PM, Erik Joelsson wrote: >>> >>> The Jib artifact resolver is not very good at telling us why things go wrong. The reason is that it swallows exceptions. This patch changes the API from throwing a FileNotFoundException, which I don't really think fits correctly in all cases, to a new API specific exception. >>> >>> I have greped for all uses of this API in the tests and changed the exception type caught at the caller location. I verified that I didn't break anything by compiling all the affected test classes and by running some of them for a bit. >>> >>> With these changes it should be easier to diagnose problems with resolving artifacts in the future. 
>>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8199352 >>> >>> Webrev: http://cr.openjdk.java.net/~erikj/8199352/webrev.01/index.html >>> >>> /Erik >>> > From glaubitz at physik.fu-berlin.de Fri Mar 9 08:22:19 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Fri, 9 Mar 2018 09:22:19 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520581887.2395.4.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <998cc912-d259-f59c-10c3-89b33f9fb90d@physik.fu-berlin.de> <1520533955.1085.11.camel@gmail.com> <1520581887.2395.4.camel@gmail.com> Message-ID: <91635bbb-a455-0c2b-f048-47a687fcff80@physik.fu-berlin.de> On 03/09/2018 08:51 AM, Edward Nevill wrote: > Pants. The debug Zero build is also broken. The above is trivial to fix but there is more brokenness afterwards. > > I won't get a chance to look at this again until after the weekend. I can maybe have a look at it over the weekend. I normally do regular Zero test builds on a plethora of architectures for Debian but I currently have little time. There is just so much other stuff that needs work, like LLVM to get Rust usable on more architectures. > BTW: The reason for the sudden interest in Zero is that I am looking at porting OpenJDK to riscv and I have an initial patch which builds OpenJDK for riscv. But that is dependant on Zero building to start with. Were there any particular changes necessary for riscv64 to get OpenJDK. Normally I just expect the autoconf defintions to be enough unless riscv64 does some crazy things regarding its stack layout or so. > It does seem that Zero has been lacking some TLC. I am normally taking care of Zero. I just ran out of time recently. I will try to pick it up again. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.osterlund at oracle.com Fri Mar 9 16:58:19 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 9 Mar 2018 17:58:19 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers Message-ID: <5AA2BD2B.2060100@oracle.com> Hi, The GC barriers for arraycopy stub routines are not as modular as they could be. They currently use switch statements to check which GC barrier set is being used, and call one or another barrier based on that, with registers already allocated in such a way that it can only be used for write barriers. My solution to the problem is to introduce a platform-specific GC barrier set code generator. The abstract super class is BarrierSetCodeGen, and you can get it from the active BarrierSet. A virtual call to the BarrierSetCodeGen generates the relevant GC barriers for the arraycopy stub routines. The BarrierSetCodeGen inheritance hierarchy exactly matches the corresponding BarrierSet inheritance hierarchy. In other words, every BarrierSet class has a corresponding BarrierSetCodeGen class. The various switch statements that generate different GC barriers depending on the enum type of the barrier set have been changed to call a corresponding virtual member function in the BarrierSetCodeGen class instead. Thanks to Martin Doerr and Roman Kennke for providing platform specific code for PPC, S390 and AArch64. 
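The shape of the refactoring can be sketched stand-alone like this. The class names follow the description above, but the bodies are invented: the real generator emits assembly through a MacroAssembler and takes registers and decorators as arguments, it does not print.

#include <cstdio>

class BarrierSetCodeGen {
 public:
  virtual ~BarrierSetCodeGen() {}
  // Called by the arraycopy stub generator before the copy loop is emitted.
  virtual void arraycopy_prologue(bool is_oop_array) {
    // default: no barrier needed
  }
  // Called after the copy loop.
  virtual void arraycopy_epilogue(bool is_oop_array) {}
};

class G1BarrierSetCodeGen : public BarrierSetCodeGen {
 public:
  virtual void arraycopy_prologue(bool is_oop_array) {
    if (is_oop_array) std::printf("emit G1 pre-barrier for the destination range\n");
  }
  virtual void arraycopy_epilogue(bool is_oop_array) {
    if (is_oop_array) std::printf("emit G1 post-barrier (card marking / enqueue)\n");
  }
};

// The stub generator no longer switches on the barrier set enum; it just
// calls through the virtual interface of whatever code generator is active.
void generate_arraycopy_stub(BarrierSetCodeGen* bs, bool is_oop_array) {
  bs->arraycopy_prologue(is_oop_array);
  std::printf("emit copy loop\n");
  bs->arraycopy_epilogue(is_oop_array);
}

int main() {
  G1BarrierSetCodeGen g1;
  generate_arraycopy_stub(&g1, /*is_oop_array=*/true);
  return 0;
}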
Webrev: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ CR: https://bugs.openjdk.java.net/browse/JDK-8198949 Thanks, /Erik From lois.foltan at oracle.com Fri Mar 9 18:59:43 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 9 Mar 2018 13:59:43 -0500 Subject: 8199342 The constant pool forgets it has a Dynamic entry if there are overpass methods In-Reply-To: <146AEF5C-35A9-47D4-A2DA-14AB058A2D31@oracle.com> References: <146AEF5C-35A9-47D4-A2DA-14AB058A2D31@oracle.com> Message-ID: <0055eebd-899c-0562-9252-c79aa1100825@oracle.com> Looks good Paul! Lois On 3/8/2018 8:45 PM, Paul Sandoz wrote: > Hi, > > Please review the following patch: > > http://cr.openjdk.java.net/~psandoz/jdk/JDK-8199342-constant-dynamic-and-overpass-methods/webrev/ > > > This fixes a crash due to an assert with debug builds. > > On class initialization we set a flag if the constant pool contains a > Dynamic entry. If the class file is an interface and there are > overpass methods then parts the constant pool gets re-written by > copying the old pool to a new pool, but that process does not copy > over the flag. > > Thanks, > Paul. From coleen.phillimore at oracle.com Fri Mar 9 19:09:10 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 9 Mar 2018 14:09:10 -0500 Subject: RFR (trivial) 8199283: Remove ValueObj class for allocation subclassing for compiler code Message-ID: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> There is not even very much clicking. open webrev at http://cr.openjdk.java.net/~coleenp/8199283.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8199283 Will test with tier1 and 2 and update copyrights before commit. Thanks, Coleen From stefan.karlsson at oracle.com Fri Mar 9 19:26:22 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 9 Mar 2018 20:26:22 +0100 Subject: RFR (trivial) 8199283: Remove ValueObj class for allocation subclassing for compiler code In-Reply-To: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> References: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> Message-ID: <0827ec51-b75c-267f-34a3-f8b143cc5a5a@oracle.com> Looks good. StefanK On 2018-03-09 20:09, coleen.phillimore at oracle.com wrote: > There is not even very much clicking. > > open webrev at http://cr.openjdk.java.net/~coleenp/8199283.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8199283 > > Will test with tier1 and 2 and update copyrights before commit. > > Thanks, > Coleen > > From thomas.schatzl at oracle.com Fri Mar 9 19:37:55 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 09 Mar 2018 20:37:55 +0100 Subject: RFR (trivial) 8199283: Remove ValueObj class for allocation subclassing for compiler code In-Reply-To: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> References: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> Message-ID: <1520624275.2293.0.camel@oracle.com> Hi, On Fri, 2018-03-09 at 14:09 -0500, coleen.phillimore at oracle.com wrote: > There is not even very much clicking. > > open webrev at http://cr.openjdk.java.net/~coleenp/8199283.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8199283 > > Will test with tier1 and 2 and update copyrights before commit. > looks good. 
Thomas From vladimir.kozlov at oracle.com Fri Mar 9 19:41:32 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 9 Mar 2018 11:41:32 -0800 Subject: URGENT [11] RFR(XS) 8199422: Hotspot build is broken after push of 8197235 Message-ID: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> http://cr.openjdk.java.net/~kvn/8199422/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8199422 I did not retest JDK-8197235 changes after JDK-8199275 and other changes were pushed which changed header files dependencies. Ran pre-integration testing. -- Thanks, Vladimir From shade at redhat.com Fri Mar 9 19:45:26 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Fri, 9 Mar 2018 20:45:26 +0100 Subject: URGENT [11] RFR(XS) 8199422: Hotspot build is broken after push of 8197235 In-Reply-To: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> References: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> Message-ID: <54225ddc-f0d3-bed9-8555-416d7744dabd@redhat.com> On 03/09/2018 08:41 PM, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8199422/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8199422 Looks good to me. -Aleksey From vladimir.kozlov at oracle.com Fri Mar 9 19:50:40 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 9 Mar 2018 11:50:40 -0800 Subject: URGENT [11] RFR(XS) 8199422: Hotspot build is broken after push of 8197235 In-Reply-To: <54225ddc-f0d3-bed9-8555-416d7744dabd@redhat.com> References: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> <54225ddc-f0d3-bed9-8555-416d7744dabd@redhat.com> Message-ID: <6ac5a090-43e8-9926-2ed8-e22c699a061d@oracle.com> Thank you, Aleksey Vladimir K On 3/9/18 11:45 AM, Aleksey Shipilev wrote: > On 03/09/2018 08:41 PM, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8199422/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8199422 > > Looks good to me. > > -Aleksey > From lois.foltan at oracle.com Fri Mar 9 19:58:38 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 9 Mar 2018 14:58:38 -0500 Subject: URGENT [11] RFR(XS) 8199422: Hotspot build is broken after push of 8197235 In-Reply-To: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> References: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> Message-ID: <4cfd1dd8-a53f-b015-ea3b-7bad1dfd5f1b@oracle.com> Looks good. Lois On 3/9/2018 2:41 PM, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8199422/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8199422 > > I did not retest JDK-8197235 changes after JDK-8199275 and other > changes were pushed which changed header files dependencies. > > Ran pre-integration testing. > From coleen.phillimore at oracle.com Fri Mar 9 19:59:41 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 9 Mar 2018 14:59:41 -0500 Subject: RFR (trivial) 8199283: Remove ValueObj class for allocation subclassing for compiler code In-Reply-To: <1520624275.2293.0.camel@oracle.com> References: <036e2ab1-8a8b-60db-19d1-7f8caf4096f3@oracle.com> <1520624275.2293.0.camel@oracle.com> Message-ID: <353745a9-f19c-2826-5f73-3c437f9f0019@oracle.com> Thanks Stefan and Thomas. Coleen On 3/9/18 2:37 PM, Thomas Schatzl wrote: > Hi, > > On Fri, 2018-03-09 at 14:09 -0500, coleen.phillimore at oracle.com wrote: >> There is not even very much clicking. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8199283.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8199283 >> >> Will test with tier1 and 2 and update copyrights before commit. >> > looks good. 
> > Thomas From vladimir.kozlov at oracle.com Fri Mar 9 20:05:26 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 9 Mar 2018 12:05:26 -0800 Subject: URGENT [11] RFR(XS) 8199422: Hotspot build is broken after push of 8197235 In-Reply-To: <4cfd1dd8-a53f-b015-ea3b-7bad1dfd5f1b@oracle.com> References: <638bde7f-edcf-8c49-8254-0872890082be@oracle.com> <4cfd1dd8-a53f-b015-ea3b-7bad1dfd5f1b@oracle.com> Message-ID: Thank you, Lois Vladimir On 3/9/18 11:58 AM, Lois Foltan wrote: > Looks good. > Lois > > On 3/9/2018 2:41 PM, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8199422/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8199422 >> >> I did not retest JDK-8197235 changes after JDK-8199275 and other >> changes were pushed which changed header files dependencies. >> >> Ran pre-integration testing. >> > From karen.kinnear at oracle.com Fri Mar 9 20:59:15 2018 From: karen.kinnear at oracle.com (Karen Kinnear) Date: Fri, 9 Mar 2018 15:59:15 -0500 Subject: 8199342 The constant pool forgets it has a Dynamic entry if there are overpass methods In-Reply-To: <0055eebd-899c-0562-9252-c79aa1100825@oracle.com> References: <146AEF5C-35A9-47D4-A2DA-14AB058A2D31@oracle.com> <0055eebd-899c-0562-9252-c79aa1100825@oracle.com> Message-ID: <89E7A2D9-02EA-475E-BA88-4CD938284617@oracle.com> Looks good. Thanks for catching this. And redefineclasses goes through the ClassFileParser so should set the flag there. thanks, Karen p.s. thank you for the test. A note - I believe this code path is called for any class file that requires an overpass due to default method processing. We create these to throw a number of exceptions, such as IncompatibleClassChangeError (e.g. diamond shape for default methods) or some of the AbstractMethodError cases. > On Mar 9, 2018, at 1:59 PM, Lois Foltan wrote: > > Looks good Paul! > Lois > > On 3/8/2018 8:45 PM, Paul Sandoz wrote: >> Hi, >> >> Please review the following patch: >> >> http://cr.openjdk.java.net/~psandoz/jdk/JDK-8199342-constant-dynamic-and-overpass-methods/webrev/ >> >> This fixes a crash due to an assert with debug builds. >> >> On class initialization we set a flag if the constant pool contains a Dynamic entry. If the class file is an interface and there are overpass methods then parts the constant pool gets re-written by copying the old pool to a new pool, but that process does not copy over the flag. >> >> Thanks, >> Paul. > From stewartd.qdt at qualcommdatacenter.com Fri Mar 9 22:20:39 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 9 Mar 2018 22:20:39 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Message-ID: Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. The bug report is filed at [2]. I am happy to modify the patch as necessary. 
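For readers unfamiliar with the flag classes involved: diagnostic options are refused by the VM unless -XX:+UnlockDiagnosticVMOptions has been seen first, which is exactly why the test needs the extra flag. The snippet below is an invented toy model of that gating rule, not HotSpot's actual argument parsing.

#include <cstdio>
#include <cstring>

struct Flag { const char* name; bool is_diagnostic; bool value; };

static Flag flags[] = {
  { "UnlockDiagnosticVMOptions", false, false },
  { "VerifyStringTableAtExit",   true,  false },  // diagnostic in the real VM
};

static bool set_flag(const char* name, bool value) {
  bool unlocked = flags[0].value;
  for (int i = 0; i < 2; i++) {
    Flag& f = flags[i];
    if (std::strcmp(f.name, name) == 0) {
      if (f.is_diagnostic && !unlocked) {
        std::printf("Error: VM option '%s' is diagnostic and must be enabled "
                    "via -XX:+UnlockDiagnosticVMOptions.\n", name);
        return false;  // mirrors the launcher error the test was hitting
      }
      f.value = value;
      return true;
    }
  }
  return false;
}

int main() {
  set_flag("VerifyStringTableAtExit", true);    // rejected
  set_flag("UnlockDiagnosticVMOptions", true);
  set_flag("VerifyStringTableAtExit", true);    // accepted
  return 0;
}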
Regards, Daniel Stewart [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 From vladimir.kozlov at oracle.com Fri Mar 9 22:30:08 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 9 Mar 2018 14:30:08 -0800 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: Message-ID: Looks good. Thanks, Vladimir On 3/9/18 2:20 PM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. > > > > This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. > > The bug report is filed at [2]. > > > > I am happy to modify the patch as necessary. > > > > Regards, > > > > Daniel Stewart > > > > [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ > > [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 > > > From zgu at redhat.com Fri Mar 9 22:34:46 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 9 Mar 2018 17:34:46 -0500 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: Message-ID: Looks good. It failed on Linux x64 also, thanks for fixing it. -Zhengyu On 03/09/2018 05:20 PM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. > > > > This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. > > The bug report is filed at [2]. > > > > I am happy to modify the patch as necessary. > > > > Regards, > > > > Daniel Stewart > > > > [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ > > [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 > > > From coleen.phillimore at oracle.com Sat Mar 10 00:55:12 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 9 Mar 2018 19:55:12 -0500 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: Message-ID: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> Looks good.? Thank you for fixing this. Can you hg commit the patch with us as reviewers and I'll push it? thanks, Coleen On 3/9/18 5:20 PM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. > > > > This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. > > The bug report is filed at [2]. > > > > I am happy to modify the patch as necessary. 
> > > > Regards, > > > > Daniel Stewart > > > > [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ > > [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 > > > From stewartd.qdt at qualcommdatacenter.com Sat Mar 10 04:23:04 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Sat, 10 Mar 2018 04:23:04 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> Message-ID: <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> I'd love to Coleen, but having never pushed before, I'm running into issues. It seems I haven't figured out the magic set of steps yet. I get that I am unable to lock jdk/hs as it is Read Only. I'm off for the next Thursday. So, if it can wait until then, I'm happy to keep trying to figure it out. If you'd like, you may go ahead and take the webrev. It seems that is what others have done for other patches I made. But either way I'll have to figure this out. Thanks, Daniel -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of coleen.phillimore at oracle.com Sent: Friday, March 9, 2018 7:55 PM To: hotspot-dev at openjdk.java.net Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Looks good.? Thank you for fixing this. Can you hg commit the patch with us as reviewers and I'll push it? thanks, Coleen On 3/9/18 5:20 PM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. > > > > This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. > > The bug report is filed at [2]. > > > > I am happy to modify the patch as necessary. > > > > Regards, > > > > Daniel Stewart > > > > [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ > > [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 > > > From coleen.phillimore at oracle.com Sat Mar 10 14:35:56 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Sat, 10 Mar 2018 09:35:56 -0500 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> Message-ID: <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> Hi I didn't mean that you should push.? I wanted you to do an hg commit and I would import the changeset and push for you.?? I don't see an openjdk author name for you.? Have you signed the contributor agreement? Thanks, Coleen On 3/9/18 11:23 PM, stewartd.qdt wrote: > I'd love to Coleen, but having never pushed before, I'm running into issues. It seems I haven't figured out the magic set of steps yet. I get that I am unable to lock jdk/hs as it is Read Only. > > I'm off for the next Thursday. So, if it can wait until then, I'm happy to keep trying to figure it out. If you'd like, you may go ahead and take the webrev. It seems that is what others have done for other patches I made. But either way I'll have to figure this out. 
> > Thanks, > Daniel > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of coleen.phillimore at oracle.com > Sent: Friday, March 9, 2018 7:55 PM > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > > Looks good.? Thank you for fixing this. > Can you hg commit the patch with us as reviewers and I'll push it? > thanks, > Coleen > > On 3/9/18 5:20 PM, stewartd.qdt wrote: >> Please review this webrev [1] which attempts to fix a test error in runtime/stringtable/StringTableVerifyTest.java. This test uses the flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and requires the flag -XX:+UnlockDiagnosticVMOptions. >> >> >> >> This test currently fails our JTReg testing on an AArch64 machine. This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >> >> The bug report is filed at [2]. >> >> >> >> I am happy to modify the patch as necessary. >> >> >> >> Regards, >> >> >> >> Daniel Stewart >> >> >> >> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >> >> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >> >> >> From david.holmes at oracle.com Mon Mar 12 03:57:09 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 12 Mar 2018 13:57:09 +1000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520582019.2395.6.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> Message-ID: <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> Hi Ed, On 9/03/2018 5:53 PM, Edward Nevill wrote: > On Thu, 2018-03-08 at 18:28 -0500, coleen.phillimore at oracle.com wrote: >> http://cr.openjdk.java.net/~enevill/8199220/webrev.02/src/hotspot/share/gc/shared/cardTableModRefBS.cpp.udiff.html >> >> Can you add a comment on the #endif // COMPILER2 || JVMCI >> >> something like that? >> thanks, >> Coleen >> > > Sure. Would you like me to generate a new webrev. Or could David just add that to the commit? Once we're certain this addresses all the issues it was intended to address (ref Thomas's email) you should generate a final changeset with the exact changes (ie Coleen's comment) and the final set of reviewers, and post the link. I'll take that re-run through our internal tests and then push. For next time the submit-hs repo can be used :) Thanks, David > Thanks for your help, > Ed. > From david.holmes at oracle.com Mon Mar 12 04:28:07 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 12 Mar 2018 14:28:07 +1000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> Message-ID: <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> Hi Coleen, Daniel, On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: > > Hi I didn't mean that you should push.? I wanted you to do an hg commit > and I would import the changeset and push for you.?? I don't see an > openjdk author name for you.? Have you signed the contributor agreement? Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). 
Daniel: as Coleen indicated you can't do the hg push as you are not a Committer, so just create the changeset using "hg commit" and ensure the commit message has the correct format [1] e.g. from a previous change of yours: 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java Summary: Modified test search strings to those guaranteed to exist in the passing cases. Reviewed-by: dholmes, jgeorge Thanks, David [1] http://openjdk.java.net/guide/producingChangeset.html#create > Thanks, > Coleen > > On 3/9/18 11:23 PM, stewartd.qdt wrote: >> I'd love to Coleen, but having never pushed before, I'm running into >> issues. It seems I haven't figured out the magic set of steps yet. I >> get that I am unable to lock jdk/hs as it is Read Only. >> >> I'm off for the next Thursday. So, if it can wait until then, I'm >> happy to keep trying to figure it out. If you'd like, you may go ahead >> and take the webrev. It seems that is what others have done for other >> patches I made. But either way I'll have to figure this out. >> >> Thanks, >> Daniel >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of coleen.phillimore at oracle.com >> Sent: Friday, March 9, 2018 7:55 PM >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Looks good.? Thank you for fixing this. >> Can you hg commit the patch with us as reviewers and I'll push it? >> thanks, >> Coleen >> >> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> runtime/stringtable/StringTableVerifyTest.java. This test uses the >>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>> >>> >>> >>> This test currently fails our JTReg testing on an AArch64 machine. >>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>> >>> The bug report is filed at [2]. >>> >>> >>> >>> I am happy to modify the patch as necessary. >>> >>> >>> >>> Regards, >>> >>> >>> >>> Daniel Stewart >>> >>> >>> >>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>> >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>> >>> >>> > From leo.korinth at oracle.com Mon Mar 12 13:20:13 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Mon, 12 Mar 2018 14:20:13 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes Message-ID: Hi, This fix is for all operating systems though the problem only seams to appear on windows. I am creating a proxy function for fopen (os::fopen_retain) that appends the non-standard "e" mode for linux and bsds. For windows the "N" mode is used. For other operating systems, I assume that I can use fcntl F_SETFD FD_CLOEXEC. I think this will work for AIX, Solaris and other operating systems that do not support the "e" flag. Feedback otherwise please! The reason that I use the mode "e" and not only fcntl for linux and bsds is threefold. First, I still need to use mode flags on windows as it does not support fcntl. Second, I probably save a system call. Third, the change will be applied directly, and there will be no point in time (between system calls) when the process can leak the file descriptor, so it is safer. The test case forks three VMs in a row. By doing so we know that the second VM is opened with a specific log file. 
The third VM should have less open file descriptors (as it is does not use logging) which is checked using a UnixOperatingSystemMXBean. This is not possible on windows, so on windows I try to rename the file, which will not work if the file is opened (the actual reason the bug was opened). The added test case shows that the bug fix closes the log file on windows. The VM on other operating systems closed the log file even before the fix. Maybe the test case should be moved to a different path? Bug: https://bugs.openjdk.java.net/browse/JDK-8176717 https://bugs.openjdk.java.net/browse/JDK-8176809 Webrev: http://cr.openjdk.java.net/~lkorinth/8176717/00/ Testing: hs-tier1, hs-tier2 and TestInheritFD.java (on 64-bit linux, solaris, windows and mac) Thanks, Leo From christoph.langer at sap.com Mon Mar 12 13:38:20 2018 From: christoph.langer at sap.com (Langer, Christoph) Date: Mon, 12 Mar 2018 13:38:20 +0000 Subject: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: <9be5e688c8ab43a1ad6f6bb0d9bc8fdb@sap.com> Hi Leo, in general I think the idea is good. Speaking for AIX, the e flag is not supported but the fcntl FD_CLOEXEC should do it. I'm wondering why you modify "src/jdk.attach/share/classes/module-info.java" and add a dependency to jdk.management? I think this should probably go into your test (if required)... Best regards Christoph > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On > Behalf Of Leo Korinth > Sent: Montag, 12. M?rz 2018 14:20 > To: hotspot-dev at openjdk.java.net > Subject: RFR: 8176717: GC log file handle leaked to child processes > > Hi, > > This fix is for all operating systems though the problem only seams to > appear on windows. > > I am creating a proxy function for fopen (os::fopen_retain) that appends > the non-standard "e" mode for linux and bsds. For windows the "N" mode > is used. For other operating systems, I assume that I can use fcntl > F_SETFD FD_CLOEXEC. I think this will work for AIX, Solaris and other > operating systems that do not support the "e" flag. Feedback otherwise > please! > > The reason that I use the mode "e" and not only fcntl for linux and bsds > is threefold. First, I still need to use mode flags on windows as it > does not support fcntl. Second, I probably save a system call. Third, > the change will be applied directly, and there will be no point in time > (between system calls) when the process can leak the file descriptor, so > it is safer. > > The test case forks three VMs in a row. By doing so we know that the > second VM is opened with a specific log file. The third VM should have > less open file descriptors (as it is does not use logging) which is > checked using a UnixOperatingSystemMXBean. This is not possible on > windows, so on windows I try to rename the file, which will not work if > the file is opened (the actual reason the bug was opened). > > The added test case shows that the bug fix closes the log file on > windows. The VM on other operating systems closed the log file even > before the fix. > > Maybe the test case should be moved to a different path? 
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8176717 > https://bugs.openjdk.java.net/browse/JDK-8176809 > > Webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/00/ > > Testing: > hs-tier1, hs-tier2 and TestInheritFD.java > (on 64-bit linux, solaris, windows and mac) > > Thanks, > Leo From per.liden at oracle.com Mon Mar 12 14:13:26 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 12 Mar 2018 15:13:26 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <5AA2BD2B.2060100@oracle.com> References: <5AA2BD2B.2060100@oracle.com> Message-ID: <2cab0fef-0fb2-da2b-d249-803ca4e7df78@oracle.com> Hi Erik, Nice patch, a few comments below. General ------- May I suggest that we name the CodeGen classes CodeGen, like G1BarrierSetCodeGen instead of G1BSCodeGen. The names become a bit longer but I think the relationship with the barrier set them becomes more clear. src/hotspot/cpu/sparc/stubGenerator_XXX.cpp ------------------------------------------- Most of the subGenerator files have a sequence, similar to this: ... BarrierSetCodeGen *bs = Universe::heap()->barrier_set()->code_gen(); DecoratorSet decorators = ARRAYCOPY_DISJOINT; BasicType type = is_oop ? T_OBJECT : T_INT; if (dest_uninitialized) { decorators |= AS_DEST_NOT_INITIALIZED; } if (aligned) { decorators |= ARRAYCOPY_ALIGNED; } bs->arraycopy_prologue(_masm, decorators, type, from, to, count); ... Could we re-group block to be more like this, where we keep the setup of "decorators" grouped together, and move down the others to where they are first used? Just to make it a bit easier to read. ... DecoratorSet decorators = ARRAYCOPY_DISJOINT; if (dest_uninitialized) { decorators |= AS_DEST_NOT_INITIALIZED; } if (aligned) { decorators |= ARRAYCOPY_ALIGNED; } BasicType type = is_oop ? T_OBJECT : T_INT; BarrierSetCodeGen *bs = Universe::heap()->barrier_set()->code_gen(); bs->arraycopy_prologue(_masm, decorators, type, from, to, count); ... src/hotspot/share/gc/shared/barrierSet.cpp ------------------------------------------ Instead of BarrierSet::initialize() and BarrierSet::make_code_gen(), could we initialize the _code_gen pointer by having the concrete barrier set create it in its constructor and pass it up the chain to BarrierSet. For G1, it would look something like this: G1BarrierSet::G1BarrierSet(G1CardTable* card_table) : CardTableModRefBS(card_table, BarrierSet::FakeRtti(BarrierSet::G1BarrierSet), BarrierSetCodeGen::create(), _dcqs(JavaThread::dirty_card_queue_set()) {} Where BarrierSetCodeGen::create() would check the necessary conditions if a CodeGen class should be created, and if so, create a T and return a BarrierSetCodeGen*. And in the future, we can have a BarrierSetCodeGenC1::create() which does the same thing for C1, etc. I think this also means that BarrierSet::_code_gen can be made private instead of protected. How does that sound? src/hotspot/cpu/sparc/gc/g1/g1BSCodeGen_sparc.cpp ------------------------------------------------- It looks like the fast-path, for when the mark queue is in-active, has been accidentally dropped here? 
src/hotspot/share/gc/g1/g1BarrierSet.cpp ---------------------------------------- void G1BarrierSet::write_ref_array_pre_oop_entry(oop* dst, size_t length) { assert(length <= (size_t)max_intx, "count too large"); G1BarrierSet *bs = barrier_set_cast(BarrierSet::barrier_set()); bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); } max_inx in the assert above can be larger than int, but we later cast length to int when we later call write_ref_array_pre(), which looks dangerous. I'd suggest that we change write_ref_array_pre() and write_ref_array_pre_work() to take a size_t instead of an int and remove the assert. Also, the call to: bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); could be shortened to: bs->write_ref_array_pre(dst, (int)length, false); void G1BarrierSet::write_ref_array_pre_narrow_oop_entry(narrowOop* dst, size_t length) { assert(length <= (size_t)max_intx, "count too large"); G1BarrierSet *bs = barrier_set_cast(BarrierSet::barrier_set()); bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); } Same comments as above. void G1BarrierSet::write_ref_array_post_entry(HeapWord* dst, size_t length) { G1BarrierSet *bs = barrier_set_cast(BarrierSet::barrier_set()); bs->G1BarrierSet::write_ref_array(dst, (int)length); } write_ref_array() takes a size_t but we cast length to an int, which is wrong. This was only a half review. I haven't looked through the x86-specific stuff in detail yet. I'll follow up on that tomorrow. cheers, Per On 03/09/2018 05:58 PM, Erik ?sterlund wrote: > Hi, > > The GC barriers for arraycopy stub routines are not as modular as they > could be. They currently use switch statements to check which GC barrier > set is being used, and call one or another barrier based on that, with > registers already allocated in such a way that it can only be used for > write barriers. > > My solution to the problem is to introduce a platform-specific GC > barrier set code generator. The abstract super class is > BarrierSetCodeGen, and you can get it from the active BarrierSet. A > virtual call to the BarrierSetCodeGen generates the relevant GC barriers > for the arraycopy stub routines. > > The BarrierSetCodeGen inheritance hierarchy exactly matches the > corresponding BarrierSet inheritance hierarchy. In other words, every > BarrierSet class has a corresponding BarrierSetCodeGen class. > > The various switch statements that generate different GC barriers > depending on the enum type of the barrier set have been changed to call > a corresponding virtual member function in the BarrierSetCodeGen class > instead. > > Thanks to Martin Doerr and Roman Kennke for providing platform specific > code for PPC, S390 and AArch64. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198949 > > Thanks, > /Erik From leo.korinth at oracle.com Mon Mar 12 14:20:59 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Mon, 12 Mar 2018 15:20:59 +0100 Subject: 8176717: GC log file handle leaked to child processes In-Reply-To: <9be5e688c8ab43a1ad6f6bb0d9bc8fdb@sap.com> References: <9be5e688c8ab43a1ad6f6bb0d9bc8fdb@sap.com> Message-ID: <2c952d02-da9c-1705-3e09-b2ad27ad84bb@oracle.com> On 12/03/18 14:38, Langer, Christoph wrote: > Hi Leo, > > in general I think the idea is good. Speaking for AIX, the e flag is not supported but the fcntl FD_CLOEXEC should do it. > > I'm wondering why you modify "src/jdk.attach/share/classes/module-info.java" and add a dependency to jdk.management? 
I think this should probably go into your test (if required)... Yes this is so very very wrong, *thank you so much* for spotting it. Thanks, Leo > > Best regards > Christoph > >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of Leo Korinth >> Sent: Montag, 12. M?rz 2018 14:20 >> To: hotspot-dev at openjdk.java.net >> Subject: RFR: 8176717: GC log file handle leaked to child processes >> >> Hi, >> >> This fix is for all operating systems though the problem only seams to >> appear on windows. >> >> I am creating a proxy function for fopen (os::fopen_retain) that appends >> the non-standard "e" mode for linux and bsds. For windows the "N" mode >> is used. For other operating systems, I assume that I can use fcntl >> F_SETFD FD_CLOEXEC. I think this will work for AIX, Solaris and other >> operating systems that do not support the "e" flag. Feedback otherwise >> please! >> >> The reason that I use the mode "e" and not only fcntl for linux and bsds >> is threefold. First, I still need to use mode flags on windows as it >> does not support fcntl. Second, I probably save a system call. Third, >> the change will be applied directly, and there will be no point in time >> (between system calls) when the process can leak the file descriptor, so >> it is safer. >> >> The test case forks three VMs in a row. By doing so we know that the >> second VM is opened with a specific log file. The third VM should have >> less open file descriptors (as it is does not use logging) which is >> checked using a UnixOperatingSystemMXBean. This is not possible on >> windows, so on windows I try to rename the file, which will not work if >> the file is opened (the actual reason the bug was opened). >> >> The added test case shows that the bug fix closes the log file on >> windows. The VM on other operating systems closed the log file even >> before the fix. >> >> Maybe the test case should be moved to a different path? >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8176717 >> https://bugs.openjdk.java.net/browse/JDK-8176809 >> >> Webrev: >> http://cr.openjdk.java.net/~lkorinth/8176717/00/ >> >> Testing: >> hs-tier1, hs-tier2 and TestInheritFD.java >> (on 64-bit linux, solaris, windows and mac) >> >> Thanks, >> Leo From thomas.stuefe at gmail.com Mon Mar 12 14:29:34 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Mar 2018 15:29:34 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: Hi Leo, This seems weird. This would affect numerous open() calls, not just this GC log, I cannot imagine the correct fix is to change all of them. In fact, on Posix platforms we close all file descriptors except the Pipe ones before between fork() and exec() - see unix/native/libjava/ childproc.c. Such code is missing on Windows - see windows/native/libjava/ProcessImpl_md.c . There, we do not have fork/exec, but CreateProcess(), and whether we inherit handles or not is controlled via an argument to CreateProcess(). But that flag is TRUE, so child processes inherit handles. 
331 if (!CreateProcessW( 332 NULL, /* executable name */ 333 (LPWSTR)pcmd, /* command line */ 334 NULL, /* process security attribute */ 335 NULL, /* thread security attribute */ 336 TRUE, /* inherits system handles */ <<<<<< 337 processFlag, /* selected based on exe type */ 338 (LPVOID)penvBlock,/* environment block */ 339 (LPCWSTR)pdir, /* change to the new current directory */ 340 &si, /* (in) startup information */ 341 &pi)) /* (out) process information */ 342 { 343 win32Error(env, L"CreateProcess"); 344 } Maybe this is the real error we should fix? Make Windows Runtime.exec behave like the Posix variant by closing all file descriptors upon CreateProcessW? (This seems more of a core-libs question.) Kind Regards, Thomas On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth wrote: > Hi, > > This fix is for all operating systems though the problem only seams to > appear on windows. > > I am creating a proxy function for fopen (os::fopen_retain) that appends > the non-standard "e" mode for linux and bsds. For windows the "N" mode is > used. For other operating systems, I assume that I can use fcntl F_SETFD > FD_CLOEXEC. I think this will work for AIX, Solaris and other operating > systems that do not support the "e" flag. Feedback otherwise please! > > The reason that I use the mode "e" and not only fcntl for linux and bsds > is threefold. First, I still need to use mode flags on windows as it does > not support fcntl. Second, I probably save a system call. Third, the change > will be applied directly, and there will be no point in time (between > system calls) when the process can leak the file descriptor, so it is safer. > > The test case forks three VMs in a row. By doing so we know that the > second VM is opened with a specific log file. The third VM should have less > open file descriptors (as it is does not use logging) which is checked > using a UnixOperatingSystemMXBean. This is not possible on windows, so on > windows I try to rename the file, which will not work if the file is opened > (the actual reason the bug was opened). > > The added test case shows that the bug fix closes the log file on windows. > The VM on other operating systems closed the log file even before the fix. > > Maybe the test case should be moved to a different path? > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8176717 > https://bugs.openjdk.java.net/browse/JDK-8176809 > > Webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/00/ > > Testing: > hs-tier1, hs-tier2 and TestInheritFD.java > (on 64-bit linux, solaris, windows and mac) > > Thanks, > Leo > From leo.korinth at oracle.com Mon Mar 12 15:54:57 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Mon, 12 Mar 2018 16:54:57 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: On 12/03/18 15:29, Thomas St?fe wrote: > Hi Leo, > > This seems weird. > > This would affect numerous open() calls, not just this GC log, I cannot > imagine the correct fix is to change all of them. Sorry, I do not understand what you mean with "numerous open()". This fix will only affect logging -- or am I missing something? os::open does roughly what I try to do in os::fopen_retain. > > In fact, on Posix platforms we close all file descriptors except the > Pipe ones before between fork() and exec() - see unix/native/libjava/ > childproc.c. Yes, that is why my test case did not fail before the fix on unix-like systems. 
I do not know why it is not handled in Windows (possibly a bug, possibly to keep old behaviour???), I had planned to ask that as a follow up question later, maybe open a bug report if it was not for keeping old behaviour. Even though childproc.c does close the file handler, I think it is much nicer to open them with FD_CLOEXEC (in addition to let childproc.c close it). os::open does so, and I would like to handle ::fopen the same way as ::open with a proxy call that ensures that the VM process will retain the file descriptor it opens (in HotSpot at least). > Such code is missing on Windows - see > windows/native/libjava/ProcessImpl_md.c ?. There, we do not have > fork/exec, but CreateProcess(), and whether we inherit handles or not is > controlled via an argument to CreateProcess(). But that flag is TRUE, so > child processes inherit handles. > > 331 ? ? ? ? ? ? ? ? ? ?if (!CreateProcessW( > 332 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* executable name */ > 333 ? ? ? ? ? ? ? ? ? ? ? ?(LPWSTR)pcmd, ? ? /* command line */ > 334 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* process security > attribute */ > 335 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* thread security attribute */ > 336 ? ? ? ? ? ? ? ? ? ? ? ?TRUE, ? ? ? ? ? ? /* inherits system handles > */ ? ? ? ? ?<<<<<< > 337 ? ? ? ? ? ? ? ? ? ? ? ?processFlag, ? ? ?/* selected based on exe > type */ > 338 ? ? ? ? ? ? ? ? ? ? ? ?(LPVOID)penvBlock,/* environment block */ > 339 ? ? ? ? ? ? ? ? ? ? ? ?(LPCWSTR)pdir, ? ?/* change to the new > current directory */ > 340 ? ? ? ? ? ? ? ? ? ? ? ?&si, ? ? ? ? ? ? ?/* (in) ?startup information */ > 341 ? ? ? ? ? ? ? ? ? ? ? ?&pi)) ? ? ? ? ? ? /* (out) process information */ > 342 ? ? ? ? ? ? ? ? ? ?{ > 343 ? ? ? ? ? ? ? ? ? ? ? ?win32Error(env, L"CreateProcess"); > 344 ? ? ? ? ? ? ? ? ? ?} > > Maybe this is the real error we should fix? Make Windows Runtime.exec > behave like the Posix variant by closing all file descriptors upon > CreateProcess > > (This seems more of a core-libs question.) I think it is both a core-libs question and a hotspot question. I firmly believe we should retain file descriptors with help of FD_CLOEXEC and its variants in HotSpot. I am unsure (and have no opinion) what to do in core-libs, maybe there is a deeper thought behind line 336? Some reasons for this: - if a process is forked using JNI, it would still be good if the hotspot descriptors would not leak. - if (I have no idea if this is true) the behaviour in core-libs can not be changed because the behaviour is already wildly (ab)used, this is still a correct fix. Remember this will only close file descriptors opened by HotSpot code, and at the moment only logging code. - this will fix the issue in the bug report, and give time for core-libs to consider what is correct (and what can be changed without breaking applications). Thanks, Leo > > Kind Regards, Thomas > > > On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth > wrote: > > Hi, > > This fix is for all operating systems though the problem only seams > to appear on windows. > > I am creating a proxy function for fopen (os::fopen_retain) that > appends the non-standard "e" mode for linux and bsds. For windows > the "N" mode is used. For other operating systems, I assume that I > can use fcntl F_SETFD FD_CLOEXEC. I think this will work for AIX, > Solaris and other operating systems that do not support the "e" > flag. Feedback otherwise please! > > The reason that I use the mode "e" and not only fcntl for linux and > bsds is threefold. 
First, I still need to use mode flags on windows > as it does not support fcntl. Second, I probably save a system call. > Third, the change will be applied directly, and there will be no > point in time (between system calls) when the process can leak the > file descriptor, so it is safer. > > The test case forks three VMs in a row. By doing so we know that the > second VM is opened with a specific log file. The third VM should > have less open file descriptors (as it is does not use logging) > which is checked using a UnixOperatingSystemMXBean. This is not > possible on windows, so on windows I try to rename the file, which > will not work if the file is opened (the actual reason the bug was > opened). > > The added test case shows that the bug fix closes the log file on > windows. The VM on other operating systems closed the log file even > before the fix. > > Maybe the test case should be moved to a different path? > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8176717 > > https://bugs.openjdk.java.net/browse/JDK-8176809 > > > Webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/00/ > > > Testing: > hs-tier1, hs-tier2 and TestInheritFD.java > (on 64-bit linux, solaris, windows and mac) > > Thanks, > Leo > > From erik.osterlund at oracle.com Mon Mar 12 16:02:45 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 12 Mar 2018 17:02:45 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <2cab0fef-0fb2-da2b-d249-803ca4e7df78@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <2cab0fef-0fb2-da2b-d249-803ca4e7df78@oracle.com> Message-ID: <5AA6A4A5.6030007@oracle.com> Hi Per, Thank you for reviewing this. New full webrev: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.01/ Incremental: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00_01/ On 2018-03-12 15:13, Per Liden wrote: > Hi Erik, > > Nice patch, a few comments below. > > General > ------- > May I suggest that we name the CodeGen classes > CodeGen, like G1BarrierSetCodeGen instead of > G1BSCodeGen. The names become a bit longer but I think the > relationship with the barrier set them becomes more clear. Fixed. > > src/hotspot/cpu/sparc/stubGenerator_XXX.cpp > ------------------------------------------- > Most of the subGenerator files have a sequence, similar to this: > > ... > BarrierSetCodeGen *bs = Universe::heap()->barrier_set()->code_gen(); > DecoratorSet decorators = ARRAYCOPY_DISJOINT; > BasicType type = is_oop ? T_OBJECT : T_INT; > if (dest_uninitialized) { > decorators |= AS_DEST_NOT_INITIALIZED; > } > if (aligned) { > decorators |= ARRAYCOPY_ALIGNED; > } > bs->arraycopy_prologue(_masm, decorators, type, from, to, count); > ... > > Could we re-group block to be more like this, where we keep the setup > of "decorators" grouped together, and move down the others to where > they are first used? Just to make it a bit easier to read. > > ... > DecoratorSet decorators = ARRAYCOPY_DISJOINT; > if (dest_uninitialized) { > decorators |= AS_DEST_NOT_INITIALIZED; > } > if (aligned) { > decorators |= ARRAYCOPY_ALIGNED; > } > > BasicType type = is_oop ? T_OBJECT : T_INT; > BarrierSetCodeGen *bs = Universe::heap()->barrier_set()->code_gen(); > bs->arraycopy_prologue(_masm, decorators, type, from, to, count); > ... Fixed. 
> src/hotspot/share/gc/shared/barrierSet.cpp > ------------------------------------------ > Instead of BarrierSet::initialize() and BarrierSet::make_code_gen(), > could we initialize the _code_gen pointer by having the concrete > barrier set create it in its constructor and pass it up the chain to > BarrierSet. For G1, it would look something like this: > > G1BarrierSet::G1BarrierSet(G1CardTable* card_table) : > CardTableModRefBS(card_table, > BarrierSet::FakeRtti(BarrierSet::G1BarrierSet), > BarrierSetCodeGen::create(), > _dcqs(JavaThread::dirty_card_queue_set()) {} > > Where BarrierSetCodeGen::create() would check the necessary > conditions if a CodeGen class should be created, and if so, create a T > and return a BarrierSetCodeGen*. And in the future, we can have a > BarrierSetCodeGenC1::create() which does the same thing for C1, etc. > > I think this also means that BarrierSet::_code_gen can be made private > instead of protected. > > How does that sound? I tried a variation of this. Instead of BarrierSetCodeGen::create(), I use BarrierSet::make_code_gen(). The reason is that we might not even have a concrete code gen class available if we are running on Zero, but then we have a forward declaration instead. The BarrierSet make function checks accordingly whether the code generator should be instantiated or not (depending on whether we are compiling with Zero or not). I hope this is kind of what you had in mind. > > src/hotspot/cpu/sparc/gc/g1/g1BSCodeGen_sparc.cpp > ------------------------------------------------- > It looks like the fast-path, for when the mark queue is in-active, has > been accidentally dropped here? Oops. Fixed. > > src/hotspot/share/gc/g1/g1BarrierSet.cpp > ---------------------------------------- > > void G1BarrierSet::write_ref_array_pre_oop_entry(oop* dst, size_t > length) { > assert(length <= (size_t)max_intx, "count too large"); > G1BarrierSet *bs = > barrier_set_cast(BarrierSet::barrier_set()); > bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); > } > > max_inx in the assert above can be larger than int, but we later cast > length to int when we later call write_ref_array_pre(), which looks > dangerous. I'd suggest that we change write_ref_array_pre() and > write_ref_array_pre_work() to take a size_t instead of an int and > remove the assert. Fixed. > Also, the call to: > bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); > > could be shortened to: > bs->write_ref_array_pre(dst, (int)length, false); > > > void G1BarrierSet::write_ref_array_pre_narrow_oop_entry(narrowOop* > dst, size_t length) { > assert(length <= (size_t)max_intx, "count too large"); > G1BarrierSet *bs = > barrier_set_cast(BarrierSet::barrier_set()); > bs->G1BarrierSet::write_ref_array_pre(dst, (int)length, false); > } > > Same comments as above. > > > void G1BarrierSet::write_ref_array_post_entry(HeapWord* dst, size_t > length) { > G1BarrierSet *bs = > barrier_set_cast(BarrierSet::barrier_set()); > bs->G1BarrierSet::write_ref_array(dst, (int)length); > } > > write_ref_array() takes a size_t but we cast length to an int, which > is wrong. Fixed. > This was only a half review. I haven't looked through the x86-specific > stuff in detail yet. I'll follow up on that tomorrow. Thank you for the review. Thanks, /Erik > cheers, > Per > > On 03/09/2018 05:58 PM, Erik ?sterlund wrote: >> Hi, >> >> The GC barriers for arraycopy stub routines are not as modular as >> they could be. 
They currently use switch statements to check which GC >> barrier set is being used, and call one or another barrier based on >> that, with registers already allocated in such a way that it can only >> be used for write barriers. >> >> My solution to the problem is to introduce a platform-specific GC >> barrier set code generator. The abstract super class is >> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >> virtual call to the BarrierSetCodeGen generates the relevant GC >> barriers for the arraycopy stub routines. >> >> The BarrierSetCodeGen inheritance hierarchy exactly matches the >> corresponding BarrierSet inheritance hierarchy. In other words, every >> BarrierSet class has a corresponding BarrierSetCodeGen class. >> >> The various switch statements that generate different GC barriers >> depending on the enum type of the barrier set have been changed to >> call a corresponding virtual member function in the BarrierSetCodeGen >> class instead. >> >> Thanks to Martin Doerr and Roman Kennke for providing platform >> specific code for PPC, S390 and AArch64. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8198949 >> >> Thanks, >> /Erik From leo.korinth at oracle.com Mon Mar 12 16:13:32 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Mon, 12 Mar 2018 17:13:32 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> On 12/03/18 14:20, Leo Korinth wrote: > Hi, > > This fix is for all operating systems though the problem only seams to > appear on windows. > > I am creating a proxy function for fopen (os::fopen_retain) that appends > the non-standard "e" mode for linux and bsds. For windows the "N" mode > is used. For other operating systems, I assume that I can use fcntl > F_SETFD FD_CLOEXEC. I think this will work for AIX, Solaris and other > operating systems that do not support the "e" flag. Feedback otherwise > please! > > The reason that I use the mode "e" and not only fcntl for linux and bsds > is threefold. First, I still need to use mode flags on windows as it > does not support fcntl. Second, I probably save a system call. Third, > the change will be applied directly, and there will be no point in time > (between system calls) when the process can leak the file descriptor, so > it is safer. > > The test case forks three VMs in a row. By doing so we know that the > second VM is opened with a specific log file. The third VM should have > less open file descriptors (as it is does not use logging) which is > checked using a UnixOperatingSystemMXBean. This is not possible on > windows, so on windows I try to rename the file, which will not work if > the file is opened (the actual reason the bug was opened). > > The added test case shows that the bug fix closes the log file on > windows. The VM on other operating systems closed the log file even > before the fix. > > Maybe the test case should be moved to a different path? 
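To make the os::fopen_retain proposal above concrete, a minimal sketch of such a close-on-exec fopen proxy is shown below. The helper name is hypothetical and the real patch additionally uses the "N" flag on Windows; this sketch only shows the glibc/BSD "e" path plus the fcntl fallback mentioned for the other Posix platforms:

  #include <fcntl.h>
  #include <stdio.h>

  // Open a file so that the descriptor is not inherited by child processes.
  // glibc and the BSDs honour the "e" mode letter (O_CLOEXEC); the explicit
  // fcntl() call is a fallback for libcs that ignore unknown mode letters.
  static FILE* open_retained(const char* path, const char* base_mode) {
    char mode[8];
    snprintf(mode, sizeof(mode), "%se", base_mode);
    FILE* f = fopen(path, mode);
    if (f != NULL) {
      int flags = fcntl(fileno(f), F_GETFD);
      if (flags != -1) {
        fcntl(fileno(f), F_SETFD, flags | FD_CLOEXEC);
      }
    }
    return f;
  }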
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8176717 > https://bugs.openjdk.java.net/browse/JDK-8176809 > > Webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/00/ New webrev (only change is removing module-info.java change): http://cr.openjdk.java.net/~lkorinth/8176717/01/ Thanks, Leo > > Testing: > hs-tier1, hs-tier2 and TestInheritFD.java > (on 64-bit linux, solaris, windows and mac) > > Thanks, > Leo From thomas.stuefe at gmail.com Mon Mar 12 16:48:58 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Mar 2018 17:48:58 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: Hi Leo, On Mon, Mar 12, 2018 at 4:54 PM, Leo Korinth wrote: > > > On 12/03/18 15:29, Thomas St?fe wrote: > >> Hi Leo, >> >> This seems weird. >> >> This would affect numerous open() calls, not just this GC log, I cannot >> imagine the correct fix is to change all of them. >> > > Sorry, I do not understand what you mean with "numerous open()". This fix > will only affect logging -- or am I missing something? os::open does > roughly what I try to do in os::fopen_retain. > > Sorry, I spoke unclear. What I meant was I would expect the problem you found in gc logging to be present for every raw open()/fopen()/CreateFile() call in the VM and in the JDK, which are quite a few. I wondered why we do not see more problems like this. > >> In fact, on Posix platforms we close all file descriptors except the Pipe >> ones before between fork() and exec() - see unix/native/libjava/ >> childproc.c. >> > > Yes, that is why my test case did not fail before the fix on unix-like > systems. I do not know why it is not handled in Windows (possibly a bug, > possibly to keep old behaviour???), I had planned to ask that as a follow > up question later, maybe open a bug report if it was not for keeping old > behaviour. Even though childproc.c does close the file handler, I think it > is much nicer to open them with FD_CLOEXEC (in addition to let childproc.c > close it). os::open does so, and I would like to handle ::fopen the same > way as ::open with a proxy call that ensures that the VM process will > retain the file descriptor it opens (in HotSpot at least). > > Such code is missing on Windows - see windows/native/libjava/ProcessImpl_md.c >> . There, we do not have fork/exec, but CreateProcess(), and whether we >> inherit handles or not is controlled via an argument to CreateProcess(). >> But that flag is TRUE, so child processes inherit handles. >> >> 331 if (!CreateProcessW( >> 332 NULL, /* executable name */ >> 333 (LPWSTR)pcmd, /* command line */ >> 334 NULL, /* process security >> attribute */ >> 335 NULL, /* thread security attribute >> */ >> 336 TRUE, /* inherits system handles >> */ <<<<<< >> 337 processFlag, /* selected based on exe >> type */ >> 338 (LPVOID)penvBlock,/* environment block */ >> 339 (LPCWSTR)pdir, /* change to the new current >> directory */ >> 340 &si, /* (in) startup information >> */ >> 341 &pi)) /* (out) process information >> */ >> 342 { >> 343 win32Error(env, L"CreateProcess"); >> 344 } >> >> Maybe this is the real error we should fix? Make Windows Runtime.exec >> behave like the Posix variant by closing all file descriptors upon >> CreateProcess > >> (This seems more of a core-libs question.) >> > > I think it is both a core-libs question and a hotspot question. I firmly > believe we should retain file descriptors with help of FD_CLOEXEC and its > variants in HotSpot. 
I am unsure (and have no opinion) what to do in > core-libs, maybe there is a deeper thought behind line 336? > > Some reasons for this: > > - if a process is forked using JNI, it would still be good if the hotspot > descriptors would not leak. > > - if (I have no idea if this is true) the behaviour in core-libs can not > be changed because the behaviour is already wildly (ab)used, this is still > a correct fix. Remember this will only close file descriptors opened by > HotSpot code, and at the moment only logging code. > > - this will fix the issue in the bug report, and give time for core-libs > to consider what is correct (and what can be changed without breaking > applications). > > Thanks, > Leo > > yes, you convinced me. 1 We should fix raw open() calls, because if native code forks via a different code paths than java Runtime.exec(), we run into the same problem. Your patch fixes one instance of the problem. 2 And we should fix Windows Runtime.exec() to the same behaviour as on Posix. I can see this being backward-compatible-problematic, but it certainly would be the right thing to do. Would love to know what core-libs says. Okay, about your change: I dislike that we add a new function, especially a first class open function, to the os namespace. How about this instead: since we know that os::open() does the right thing on all platforms, why can we not just use os::open() instead? Afterwards call fdopen() to wrap a FILE structure around it, respectively call "FILE* os::open(int fd, const char* mode)" , which seems to be just a wrapped fdopen(). That way you can get what you want with less change and without introducing a new API. Kind Regards, Thomas > >> Kind Regards, Thomas >> >> >> On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth > > wrote: >> >> Hi, >> >> This fix is for all operating systems though the problem only seams >> to appear on windows. >> >> I am creating a proxy function for fopen (os::fopen_retain) that >> appends the non-standard "e" mode for linux and bsds. For windows >> the "N" mode is used. For other operating systems, I assume that I >> can use fcntl F_SETFD FD_CLOEXEC. I think this will work for AIX, >> Solaris and other operating systems that do not support the "e" >> flag. Feedback otherwise please! >> >> The reason that I use the mode "e" and not only fcntl for linux and >> bsds is threefold. First, I still need to use mode flags on windows >> as it does not support fcntl. Second, I probably save a system call. >> Third, the change will be applied directly, and there will be no >> point in time (between system calls) when the process can leak the >> file descriptor, so it is safer. >> >> The test case forks three VMs in a row. By doing so we know that the >> second VM is opened with a specific log file. The third VM should >> have less open file descriptors (as it is does not use logging) >> which is checked using a UnixOperatingSystemMXBean. This is not >> possible on windows, so on windows I try to rename the file, which >> will not work if the file is opened (the actual reason the bug was >> opened). >> >> The added test case shows that the bug fix closes the log file on >> windows. The VM on other operating systems closed the log file even >> before the fix. >> >> Maybe the test case should be moved to a different path? 
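Thomas's alternative above - reuse os::open() and only wrap a FILE* around the descriptor - corresponds, in plain Posix terms, to something like the sketch below. The helper name is made up for illustration and it deliberately hard-wires append mode; mapping the full set of fopen() mode strings ("w", "w+", text/binary on Windows, ...) onto open() flags is exactly the part discussed further down the thread:

  #include <fcntl.h>
  #include <stdio.h>

  // Rough Posix equivalent of fopen(path, "a"), but going through open(2) so
  // that close-on-exec is requested atomically at open time.
  static FILE* fopen_append_cloexec(const char* path) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0666);
    if (fd == -1) {
      return NULL;
    }
    return fdopen(fd, "a");  // hand the descriptor over to stdio
  }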
>> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8176717 >> >> https://bugs.openjdk.java.net/browse/JDK-8176809 >> >> >> Webrev: >> http://cr.openjdk.java.net/~lkorinth/8176717/00/ >> >> >> Testing: >> hs-tier1, hs-tier2 and TestInheritFD.java >> (on 64-bit linux, solaris, windows and mac) >> >> Thanks, >> Leo >> >> >> From edward.nevill at gmail.com Mon Mar 12 19:27:10 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Mon, 12 Mar 2018 19:27:10 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> Message-ID: <1520882830.11566.12.camel@gmail.com> On Mon, 2018-03-12 at 13:57 +1000, David Holmes wrote: > Hi Ed, > > > Once we're certain this addresses all the issues it was intended to > address (ref Thomas's email) you should generate a final changeset with > the exact changes (ie Coleen's comment) and the final set of reviewers, > and post the link. I'll take that re-run through our internal tests and > then push. > > Hi David, Thanks for your patience. New webrev here http://cr.openjdk.java.net/~enevill/8199220/webrev.03 I have updated the webrev to build the debug version of zero which has been broken since Nov 20, 2017 by change 8189871. https://bugs.openjdk.java.net/browse/JDK-8189871 This caused the error /home/ed/openjdk/hs/src/hotspot/share/utilities/debug.hpp:184:29: error: incomplete type ?STATIC_ASSERT_FAILURE? used in nested name specifier I have also addressed Coleen's comment. Build tested zero release/debug and server release/debug, Thanks for you help, Ed. From volker.simonis at gmail.com Mon Mar 12 19:34:26 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 12 Mar 2018 20:34:26 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 Message-ID: Hi, can I please have a review and a sponsor for the following fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ https://bugs.openjdk.java.net/browse/JDK-8199472 The number changes files is "M" but the fix is actually "S" :) Here come the gory details: Change "8199319: Remove handles.inline.hpp include from reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu 16.04 with gcc 5.4.0). If you configure with "--disable-precompiled-headers" you will get a whole lot of undefined reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. It seems that newer versions of GCC (and possibly other compilers as well) don't emit any code for inline functions if these functions can be inlined at all potential call sites. The problem in this special case is that "Handle::Handle(Thread*, oopDesc*)" is not declared "inline" in "handles.hpp", but its definition in "handles.inline.hpp" is declared "inline". This leads to a situation, where compilation units which only include "handles.hpp" will emit a call to "Handle::Handle(Thread*, oopDesc*)" while compilation units which include "handles.inline.hpp" will try to inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining attempts are successful, no instance of "Handle::Handle(Thread*, oopDesc*)" will be generated in any of the object files. This will lead to the link errors listed in the . 
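The effect Volker describes is easy to reproduce outside of HotSpot; a minimal illustration of the pattern, with hypothetical names rather than the real handles.hpp code:

  // foo.hpp
  struct Foo {
    Foo(int x);        // declared here, but NOT marked inline
    int _x;
  };

  // foo.inline.hpp
  #include "foo.hpp"
  inline Foo::Foo(int x) : _x(x) {}   // the definition is declared inline

  // a.cpp includes only foo.hpp: the compiler emits an external call to
  // Foo::Foo(int) and expects some object file to provide the symbol.
  // b.cpp includes foo.inline.hpp: every call gets inlined, and because the
  // definition is inline the compiler is free to emit no out-of-line copy.
  // Result: no object file defines Foo::Foo(int), and the link fails with
  // an undefined reference - exactly the failure mode described above.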
The quick fix for this issue is to include "handles.inline.hpp" into all the compilation units with undefined references (listed below). The correct fix (realized in this RFR) is to declare "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will lead to warnings (which are treated as errors) if the inline definition is not available at a call site and will avoid linking error due to compiler optimizations. Unfortunately this requires a whole lot of follow-up changes, because "handles.hpp" defines some derived classes of "Handle" which all have implicitly inline constructors which all reference the base class "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors of the derived classes have to be explicitly declared inline in "handles.hpp" and their implementation has to be moved to "handles.inline.hpp". This change again triggers other changes for all files which relayed on the derived Handle classes having inline constructors... Thank you and best regards, Volker From leo.korinth at oracle.com Mon Mar 12 19:40:30 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Mon, 12 Mar 2018 20:40:30 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: Message-ID: <8ad5b65b-9596-bbca-1f50-27c81d8d65a1@oracle.com> On 12/03/18 17:48, Thomas St?fe wrote: > Hi Leo, > > On Mon, Mar 12, 2018 at 4:54 PM, Leo Korinth > wrote: > > > > On 12/03/18 15:29, Thomas St?fe wrote: > > Hi Leo, > > This seems weird. > > This would affect numerous open() calls, not just this GC log, I > cannot imagine the correct fix is to change all of them. > > > Sorry, I do not understand what you mean with "numerous open()". > This fix will only affect logging -- or am I missing something? > os::open does roughly what I try to do in os::fopen_retain. > > > Sorry, I spoke unclear. What I meant was I would expect the problem you > found in gc logging to be present for every raw > open()/fopen()/CreateFile() call in the VM and in the JDK, which are > quite a few. I wondered why we do not see more problems like this. Oh, now I see, I just *assumed* os::open was used everywhere when in fact it is only used in two places where the file in addition seems to be closed fast afterwards. I should assume less... I guess leaking file descriptors is not that big of a problem. It seems to *have* been a problem on Solaris (which seems to be the reason for os::open) but as the unix file descriptors are closed by core-libs before exec is mostly a windows problem. On windows it is also more of a problem because open files are harder to rename or remove, and therefore the bug report. > > > > In fact, on Posix platforms we close all file descriptors except > the Pipe ones before between fork() and exec() - see > unix/native/libjava/ childproc.c. > > > Yes, that is why my test case did not fail before the fix on > unix-like systems. I do not know why it is not handled in Windows > (possibly a bug, possibly to keep old behaviour???), I had planned > to ask that as a follow up question later, maybe open a bug report > if it was not for keeping old behaviour. Even though childproc.c > does close the file handler, I think it is much nicer to open them > with FD_CLOEXEC (in addition to let childproc.c close it). os::open > does so, and I would like to handle ::fopen the same way as ::open > with a proxy call that ensures that the VM process will retain the > file descriptor it opens (in HotSpot at least). 
> > Such code is missing on Windows - see > windows/native/libjava/ProcessImpl_md.c ?. There, we do not have > fork/exec, but CreateProcess(), and whether we inherit handles > or not is controlled via an argument to CreateProcess(). But > that flag is TRUE, so child processes inherit handles. > > 331 ? ? ? ? ? ? ? ? ? ?if (!CreateProcessW( > 332 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* executable name */ > 333 ? ? ? ? ? ? ? ? ? ? ? ?(LPWSTR)pcmd, ? ? /* command line */ > 334 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* process security > attribute */ > 335 ? ? ? ? ? ? ? ? ? ? ? ?NULL, ? ? ? ? ? ? /* thread security > attribute */ > 336 ? ? ? ? ? ? ? ? ? ? ? ?TRUE, ? ? ? ? ? ? /* inherits system > handles */ ? ? ? ? ?<<<<<< > 337 ? ? ? ? ? ? ? ? ? ? ? ?processFlag, ? ? ?/* selected based > on exe type */ > 338 ? ? ? ? ? ? ? ? ? ? ? ?(LPVOID)penvBlock,/* environment block */ > 339 ? ? ? ? ? ? ? ? ? ? ? ?(LPCWSTR)pdir, ? ?/* change to the > new current directory */ > 340 ? ? ? ? ? ? ? ? ? ? ? ?&si, ? ? ? ? ? ? ?/* (in) ?startup > information */ > 341 ? ? ? ? ? ? ? ? ? ? ? ?&pi)) ? ? ? ? ? ? /* (out) process > information */ > 342 ? ? ? ? ? ? ? ? ? ?{ > 343 ? ? ? ? ? ? ? ? ? ? ? ?win32Error(env, L"CreateProcess"); > 344 ? ? ? ? ? ? ? ? ? ?} > > Maybe this is the real error we should fix? Make Windows > Runtime.exec behave like the Posix variant by closing all file > descriptors upon CreateProcess > > (This seems more of a core-libs question.) > > > I think it is both a core-libs question and a hotspot question. I > firmly believe we should retain file descriptors with help of > FD_CLOEXEC and its variants in HotSpot. I am unsure (and have no > opinion) what to do in core-libs, maybe there is a deeper thought > behind line 336? > > Some reasons for this: > > - if a process is forked using JNI, it would still be good if the > hotspot descriptors would not leak. > > - if (I have no idea if this is true) the behaviour in core-libs can > not be changed because the behaviour is already wildly (ab)used, > this is still a correct fix. Remember this will only close file > descriptors opened by HotSpot code, and at the moment only logging code. > > - this will fix the issue in the bug report, and give time for > core-libs to consider what is correct (and what can be changed > without breaking applications). > > Thanks, > Leo > > > yes, you convinced me. > > 1 We should fix raw open() calls, because if native code forks via a > different code paths than java Runtime.exec(), we run into the same > problem. Your patch fixes one instance of the problem. Yes, I agree. I now understand that ::open() is much more used in the code base. > > 2 And we should fix Windows Runtime.exec() to the same behaviour as on > Posix. I can see this being backward-compatible-problematic, but it > certainly would be the right thing to do. Would love to know what > core-libs says. Possibly (I am intentionally dodging this question) > > Okay, about your change: I dislike that we add a new function, > especially a first class open function, to the os namespace. How about > this instead: since we know that os::open() does the right thing on all > platforms, why can we not just use os::open() instead? Afterwards call > fdopen() to wrap a FILE structure around it, respectively call "FILE* > os::open(int fd, const char* mode)" , which seems to be just a wrapped > fdopen(). That way you can get what you want with less change and > without introducing a new API. Yes, that might be a better solution. 
I did consider it, but was afraid that, for example the (significant) "w"/"w+" differences in semantics would matter. or that: os::open(os::open(path, WINDOWS_ONLY(_)O_CREAT|WINDOWS_ONLY(_)O_TRUNC, flags, mode) mode2) ...or something similar for fopen(path, "w"), would not be exactly the same. For example it would set the file to binary mode on windows. Maybe it is exactly the same otherwise? For me, the equality in semantics are not obvious. Also, now when I realized there is only two users of os::open I am less sure it always does the right thing... I prefer the os::fopen_retain way. Thanks, Leo > > Kind Regards, Thomas > > > > Kind Regards, Thomas > > > On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth > > >> > wrote: > > ? ? Hi, > > ? ? This fix is for all operating systems though the problem > only seams > ? ? to appear on windows. > > ? ? I am creating a proxy function for fopen (os::fopen_retain) > that > ? ? appends the non-standard "e" mode for linux and bsds. For > windows > ? ? the "N" mode is used. For other operating systems, I assume > that I > ? ? can use fcntl F_SETFD FD_CLOEXEC. I think this will work > for AIX, > ? ? Solaris and other operating systems that do not support the "e" > ? ? flag. Feedback otherwise please! > > ? ? The reason that I use the mode "e" and not only fcntl for > linux and > ? ? bsds is threefold. First, I still need to use mode flags on > windows > ? ? as it does not support fcntl. Second, I probably save a > system call. > ? ? Third, the change will be applied directly, and there will > be no > ? ? point in time (between system calls) when the process can > leak the > ? ? file descriptor, so it is safer. > > ? ? The test case forks three VMs in a row. By doing so we know > that the > ? ? second VM is opened with a specific log file. The third VM > should > ? ? have less open file descriptors (as it is does not use logging) > ? ? which is checked using a UnixOperatingSystemMXBean. This is not > ? ? possible on windows, so on windows I try to rename the > file, which > ? ? will not work if the file is opened (the actual reason the > bug was > ? ? opened). > > ? ? The added test case shows that the bug fix closes the log > file on > ? ? windows. The VM on other operating systems closed the log > file even > ? ? before the fix. > > ? ? Maybe the test case should be moved to a different path? > > ? ? Bug: > https://bugs.openjdk.java.net/browse/JDK-8176717 > > ? ? > > https://bugs.openjdk.java.net/browse/JDK-8176809 > > ? ? > > > ? ? Webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/00/ > > ? ? > > > ? ? Testing: > ? ? hs-tier1, hs-tier2 and TestInheritFD.java > ? ? (on 64-bit linux, solaris, windows and mac) > > ? ? Thanks, > ? ? Leo > > > From glaubitz at physik.fu-berlin.de Mon Mar 12 19:40:53 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 12 Mar 2018 20:40:53 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520882830.11566.12.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> Message-ID: Hi Edward! On 03/12/2018 08:27 PM, Edward Nevill wrote: > Thanks for your patience. 
New webrev here > > http://cr.openjdk.java.net/~enevill/8199220/webrev.03 > > I have updated the webrev to build the debug version of zero which has been broken since Nov 20, 2017 by change 8189871. > > https://bugs.openjdk.java.net/browse/JDK-8189871 > > This caused the error > > /home/ed/openjdk/hs/src/hotspot/share/utilities/debug.hpp:184:29: error: incomplete type ?STATIC_ASSERT_FAILURE? used in nested name specifier > > I have also addressed Coleen's comment. > > Build tested zero release/debug and server release/debug, Thanks for your work on Zero and fixing the issues. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From stefan.karlsson at oracle.com Mon Mar 12 19:41:41 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 12 Mar 2018 20:41:41 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: Message-ID: Looks good to me. If you are open for suggestions, I would like to suggest that we move allocate_instance_handle to instanceKlass.cpp instead of instanceKlass.inline.hpp, and get rid of the extra added includes to instanceKlass.inline.hpp. Thanks, StefanK On 2018-03-12 20:34, Volker Simonis wrote: > Hi, > > can I please have a review and a sponsor for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ > https://bugs.openjdk.java.net/browse/JDK-8199472 > > The number changes files is "M" but the fix is actually "S" :) > > Here come the gory details: > > Change "8199319: Remove handles.inline.hpp include from > reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu > 16.04 with gcc 5.4.0). If you configure with > "--disable-precompiled-headers" you will get a whole lot of undefined > reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. > > It seems that newer versions of GCC (and possibly other compilers as > well) don't emit any code for inline functions if these functions can > be inlined at all potential call sites. > > The problem in this special case is that "Handle::Handle(Thread*, > oopDesc*)" is not declared "inline" in "handles.hpp", but its > definition in "handles.inline.hpp" is declared "inline". This leads to > a situation, where compilation units which only include "handles.hpp" > will emit a call to "Handle::Handle(Thread*, oopDesc*)" while > compilation units which include "handles.inline.hpp" will try to > inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining > attempts are successful, no instance of "Handle::Handle(Thread*, > oopDesc*)" will be generated in any of the object files. This will > lead to the link errors listed in the . > > The quick fix for this issue is to include "handles.inline.hpp" into > all the compilation units with undefined references (listed below). > > The correct fix (realized in this RFR) is to declare > "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will > lead to warnings (which are treated as errors) if the inline > definition is not available at a call site and will avoid linking > error due to compiler optimizations. Unfortunately this requires a > whole lot of follow-up changes, because "handles.hpp" defines some > derived classes of "Handle" which all have implicitly inline > constructors which all reference the base class > "Handle::Handle(Thread*, oopDesc*)" constructor. 
So the constructors > of the derived classes have to be explicitly declared inline in > "handles.hpp" and their implementation has to be moved to > "handles.inline.hpp". This change again triggers other changes for all > files which relayed on the derived Handle classes having inline > constructors... > > Thank you and best regards, > Volker From coleen.phillimore at oracle.com Mon Mar 12 19:42:55 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 12 Mar 2018 15:42:55 -0400 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: Message-ID: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> Hi this looks good except: http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html Can you move this a function in instanceKlass.cpp and would this eliminate the changes that add include instanceKlass.inline.hpp ? If Stefan is not still online, I'll sponsor this for you. I have a follow-on related change https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly expanding due to transitive includes that I hope you can help me test out (when I get it to compile on solaris). Thanks, Coleen On 3/12/18 3:34 PM, Volker Simonis wrote: > Hi, > > can I please have a review and a sponsor for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ > https://bugs.openjdk.java.net/browse/JDK-8199472 > > The number changes files is "M" but the fix is actually "S" :) > > Here come the gory details: > > Change "8199319: Remove handles.inline.hpp include from > reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu > 16.04 with gcc 5.4.0). If you configure with > "--disable-precompiled-headers" you will get a whole lot of undefined > reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. > > It seems that newer versions of GCC (and possibly other compilers as > well) don't emit any code for inline functions if these functions can > be inlined at all potential call sites. > > The problem in this special case is that "Handle::Handle(Thread*, > oopDesc*)" is not declared "inline" in "handles.hpp", but its > definition in "handles.inline.hpp" is declared "inline". This leads to > a situation, where compilation units which only include "handles.hpp" > will emit a call to "Handle::Handle(Thread*, oopDesc*)" while > compilation units which include "handles.inline.hpp" will try to > inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining > attempts are successful, no instance of "Handle::Handle(Thread*, > oopDesc*)" will be generated in any of the object files. This will > lead to the link errors listed in the . > > The quick fix for this issue is to include "handles.inline.hpp" into > all the compilation units with undefined references (listed below). > > The correct fix (realized in this RFR) is to declare > "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will > lead to warnings (which are treated as errors) if the inline > definition is not available at a call site and will avoid linking > error due to compiler optimizations. Unfortunately this requires a > whole lot of follow-up changes, because "handles.hpp" defines some > derived classes of "Handle" which all have implicitly inline > constructors which all reference the base class > "Handle::Handle(Thread*, oopDesc*)" constructor. 
So the constructors > of the derived classes have to be explicitly declared inline in > "handles.hpp" and their implementation has to be moved to > "handles.inline.hpp". This change again triggers other changes for all > files which relayed on the derived Handle classes having inline > constructors... > > Thank you and best regards, > Volker From thomas.stuefe at gmail.com Mon Mar 12 19:54:53 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Mar 2018 19:54:53 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520882830.11566.12.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> Message-ID: Hi Edward, Thanks a lot for the fixing work! However, I am not so sure about the change to debug.hpp. Is the point of the Static assert thing not The missing Specialization? In which case the compile error you saw there was a static assert firing... I may be wrong, maybe Erik could clarify? Otherwise the change looks good. Thank you. ..Thomas On Mon 12. Mar 2018 at 20:27, Edward Nevill wrote: > On Mon, 2018-03-12 at 13:57 +1000, David Holmes wrote: > > Hi Ed, > > > > > > Once we're certain this addresses all the issues it was intended to > > address (ref Thomas's email) you should generate a final changeset with > > the exact changes (ie Coleen's comment) and the final set of reviewers, > > and post the link. I'll take that re-run through our internal tests and > > then push. > > > > > > Hi David, > > Thanks for your patience. New webrev here > > http://cr.openjdk.java.net/~enevill/8199220/webrev.03 > > I have updated the webrev to build the debug version of zero which has > been broken since Nov 20, 2017 by change 8189871. > > https://bugs.openjdk.java.net/browse/JDK-8189871 > > This caused the error > > /home/ed/openjdk/hs/src/hotspot/share/utilities/debug.hpp:184:29: error: > incomplete type ?STATIC_ASSERT_FAILURE? used in nested name specifier > > I have also addressed Coleen's comment. > > Build tested zero release/debug and server release/debug, > > Thanks for you help, > Ed. > > From erik.osterlund at oracle.com Mon Mar 12 20:37:17 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 12 Mar 2018 21:37:17 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> Message-ID: Hi Thomas, Yes your intuition is correct. The point is indeed the missing specialization that triggers a compilation error when the condition for the STATIC_ASSERT is false. So the proposed change to debug.hpp will make all STATIC_ASSERTs pass. Looks like the triggered assert should be dealt with instead. Thanks, /Erik On 2018-03-12 20:54, Thomas St?fe wrote: > Hi Edward, > > Thanks a lot for the fixing work! > > However, I am not so sure about the change to debug.hpp. Is the point > of the Static assert thing not The missing Specialization? 
In > which case the compile error you saw there was a static assert firing... > I may be wrong, maybe Erik could clarify? > > Otherwise the change looks good. Thank you. > > ..Thomas > > > On Mon 12. Mar 2018 at 20:27, Edward Nevill > wrote: > > On Mon, 2018-03-12 at 13:57 +1000, David Holmes wrote: > > Hi Ed, > > > > > > Once we're certain this addresses all the issues it was intended to > > address (ref Thomas's email) you should generate a final > changeset with > > the exact changes (ie Coleen's comment) and the final set of > reviewers, > > and post the link. I'll take that re-run through our internal > tests and > > then push. > > > > > > Hi David, > > Thanks for your patience. New webrev here > > http://cr.openjdk.java.net/~enevill/8199220/webrev.03 > > > I have updated the webrev to build the debug version of zero which > has been broken since Nov 20, 2017 by change 8189871. > > https://bugs.openjdk.java.net/browse/JDK-8189871 > > This caused the error > > /home/ed/openjdk/hs/src/hotspot/share/utilities/debug.hpp:184:29: > error: incomplete type ?STATIC_ASSERT_FAILURE? used in > nested name specifier > > I have also addressed Coleen's comment. > > Build tested zero release/debug and server release/debug, > > Thanks for you help, > Ed. > From edward.nevill at gmail.com Mon Mar 12 20:49:53 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Mon, 12 Mar 2018 20:49:53 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> Message-ID: <1520887793.11566.16.camel@gmail.com> On Mon, 2018-03-12 at 19:54 +0000, Thomas St?fe wrote: > Hi Edward, > > Thanks a lot for the fixing work! > > However, I am not so sure about the change to debug.hpp. Is the point of the Static assert thing not The missing Specialization? In which case the compile error you saw there was a static assert firing... > I may be wrong, maybe Erik could clarify? > > Otherwise the change looks good. Thank you. > Yes, of course, I see the purpose of the STATIC_ASSERT now. Kind of obvious from the name. The failure is in template static void verify_types(){ // If this fails to compile, then you have sent in something that is // not recognized as a valid primitive type to a primitive Access function. STATIC_ASSERT((HasDecorator::value || // oops have already been validated (IsPointer::value || IsIntegral::value) || IsFloatingPoint::value)); // not allowed primitive type } and the error is /home/ed/openjdk/hs/src/hotspot/share/oops/access.inline.hpp: In instantiation of ?void AccessInternal::verify_types() [with long unsigned int decorators = 4096; T = volatile oop]?: I will continue too look at this but would appreciate some help. Thanks, Ed. 
From thomas.stuefe at gmail.com Mon Mar 12 21:32:23 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 12 Mar 2018 21:32:23 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520887793.11566.16.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> Message-ID: Reminds me of : http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html Could this be the same issue? On Mon 12. Mar 2018 at 21:49, Edward Nevill wrote: > On Mon, 2018-03-12 at 19:54 +0000, Thomas St?fe wrote: > > Hi Edward, > > > > Thanks a lot for the fixing work! > > > > However, I am not so sure about the change to debug.hpp. Is the point of > the Static assert thing not The missing Specialization? In which > case the compile error you saw there was a static assert firing... > > I may be wrong, maybe Erik could clarify? > > > > Otherwise the change looks good. Thank you. > > > > Yes, of course, I see the purpose of the STATIC_ASSERT now. Kind of > obvious from the name. > > The failure is in > > template > static void verify_types(){ > // If this fails to compile, then you have sent in something that is > // not recognized as a valid primitive type to a primitive Access > function. > STATIC_ASSERT((HasDecorator::value > || // oops have already been validated > (IsPointer::value || IsIntegral::value) || > IsFloatingPoint::value)); // not allowed primitive > type > } > > and the error is > > /home/ed/openjdk/hs/src/hotspot/share/oops/access.inline.hpp: In > instantiation of ?void AccessInternal::verify_types() [with long unsigned > int decorators = 4096; T = volatile oop]?: > > I will continue too look at this but would appreciate some help. Vague sense of deja vu: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html Could this be the same issue? > > > Thanks, > Ed. > > From jesper.wilhelmsson at oracle.com Tue Mar 13 01:06:50 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 13 Mar 2018 02:06:50 +0100 Subject: New submit repo for hotspot changes Message-ID: Hi all HotSpot developers! There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. http://hg.openjdk.java.net/jdk/submit-hs/ The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. The following is not new, but I list it here for completeness. In order to push a change to HotSpot: 0. you must be a Committer in the JDK project. 1. you need a JBS issue for tracking. 2. your change must have been available for review at least 24 hours. 3. your change must have been approved by two Committers out of which at least one is also a Reviewer. 4. 
your change must have passed through the hs tier 1 testing provided by the submit-hs repository with zero failures. 5. you must be available the next few hours, and the next day and ready to follow up with any fix needed in case your change causes problems in later tiers. A change that causes failures in later tiers may be backed out if a fix can not be provided fast enough, or if the developer is not responsive when noticed about the failure. Note that 5 above should be interpreted as "it is a really bad idea to push a change the last thing you do before bedtime, or the day before going on vacation". There is a notion of trivial changes that can be pushed sooner than 24 hours. It should be clearly stated in the review mail that the intention is to push as a trivial change. How to actually define "trivial" is decided on a case-by-case basis but in general it would be things like fixing a comment, or moving code without changing it. Backing out a change is also considered trivial as the change itself in that case is generated by mercurial. One of these days I'll figure out how to put this stuff on the OpenJDK wiki. Thanks, /Jesper [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html From gromero at linux.vnet.ibm.com Tue Mar 13 01:13:14 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Mon, 12 Mar 2018 22:13:14 -0300 Subject: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 Message-ID: <5AA725AA.7010202@linux.vnet.ibm.com> Hi. Paul, I just saw today your bug on JBS... https://bugs.openjdk.java.net/browse/JDK-8198794 Thanks for reporting and debugging it. It looks like the issue boils down to the fact that although 'numa_all_nodes_ptr' was introduced with libnuma API v2, 'numa_nodes_ptr' was only introduced later on libnuma v2.0.9, so it's not present in libnuma 2.0.3 which dates back to Jun 2009 [1]. I agree with your initial patch that a reasonable way to address it for archs like x86_64 is to use 'numa_all_nodes_ptr' as a surrogate for 'numa_nodes_ptr' (PowerPC needs 'numa_nodes_ptr' anyway and will have to stick with libnuma 2.0.9 and above because it's not unusual to have non-configured nodes on PPC64 and nodes can be non-contiguous as well). I just think it's better to handle it inside isnode_in_existing_nodes() interface, which is where such a information is needed in the end. In that sense, if you agree could you please check if the following webrev fixes the issue for you? It must also apply ok for jdk8u: bug : https://bugs.openjdk.java.net/browse/JDK-8198794 webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ If it does solve your issue, I will kindly ask for another Reviewer. Thank you. Best regards, Gustavo [1] http://cr.openjdk.java.net/~gromero/misc/numa_all_nodes_ptr_VS_numa_nodes_ptr.txt From thomas.stuefe at gmail.com Tue Mar 13 05:50:21 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 06:50:21 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: Hi Jesper, just a small question, how close will the syncing between submit-hs and jdk-hs be? Best Regards, Thomas On Tue, Mar 13, 2018 at 2:06 AM, wrote: > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created > a while ago [1], and the usage is the same, but this one is based on and > synched with the jdk/hs forest. 
This means that it should now be possible > for any contributor to run all the required tests for hotspot pushes > (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of > a failure, but if all tests pass (and you fulfill the other criteria below) > you will be ready to push your change. We do no longer require an Oracle > sponsor to push changes to HotSpot. > > The following is not new, but I list it here for completeness. > > In order to push a change to HotSpot: > 0. you must be a Committer in the JDK project. > 1. you need a JBS issue for tracking. > 2. your change must have been available for review at least 24 hours. > 3. your change must have been approved by two Committers out of which at > least one is also a Reviewer. > 4. your change must have passed through the hs tier 1 testing provided by > the submit-hs repository with zero failures. > 5. you must be available the next few hours, and the next day and ready to > follow up with any fix needed in case your change causes problems in later > tiers. > > A change that causes failures in later tiers may be backed out if a fix > can not be provided fast enough, or if the developer is not responsive when > noticed about the failure. > > Note that 5 above should be interpreted as "it is a really bad idea to > push a change the last thing you do before bedtime, or the day before going > on vacation". > > There is a notion of trivial changes that can be pushed sooner than 24 > hours. It should be clearly stated in the review mail that the intention is > to push as a trivial change. How to actually define "trivial" is decided on > a case-by-case basis but in general it would be things like fixing a > comment, or moving code without changing it. Backing out a change is also > considered trivial as the change itself in that case is generated by > mercurial. > > One of these days I'll figure out how to put this stuff on the OpenJDK > wiki. > > Thanks, > /Jesper > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018- > January/000566.html > > From thomas.stuefe at gmail.com Tue Mar 13 06:10:23 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 07:10:23 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> Message-ID: On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe wrote: > Reminds me of : > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017- > November/029289.html > > Could this be the same issue? > Hi Edward, I saw that the mail thread I linked yesterday night was cut off in the archives - the interesting part was off-list, some more in-depth explanation by Erik. Since it may be the same issue here, I'll paste Erik's off-list answer: .... In particular, the problem is that SUPPORTS_NATIVE_CX8 does not seem to be set on zero, despite actually running on a 64 bit machine. 
As a consequence, when you perform atomics, my Access API is trying to be clever and see if it is compile-time certain that we are not performing wide atomics with this mechanism: // This metafunction returns whether it is possible for a type T to require // locking to support wide atomics or not. template #ifdef SUPPORTS_NATIVE_CX8 struct PossiblyLockedAccess: public IntegralConstant {}; #else struct PossiblyLockedAccess: public IntegralConstant 4)> {}; #endif What happens in this case with zero is that if we do not have SUPPORTS_NATIVE_CX8, then it will expand a path where it tries to emulate wide atomics with a lock. But that wide atomic stuff assumes that we are really handling a 64 bit integer, *not* an oop. Because oops should *never* require wide atomics. The fix for this is either: 1) Make sure SUPPORTS_NATIVE_CX8 is set in globalDefinitions for zero when running on a 64 bit machine, or 2) Change the metafunction that switches to: // This metafunction returns whether it is possible for a type T to require // locking to support wide atomics or not. template #ifdef SUPPORTS_NATIVE_CX8 struct PossiblyLockedAccess: public IntegralConstant {}; #else struct PossiblyLockedAccess: public IntegralConstant sizeof(void*))> {}; #endif ...so that pointer sized values are never considered for wide atomics. Arguably, it is really SUPPORTS_NATIVE_CX8 that should be set. A good trick is to change the ifdefs. Instead of #ifdef SUPPORTS_NATIVE_CX8, it is better to always explicit define it to 0 or 1, and then use #if SUPPORTS_NATIVE_CX8 instead. We do this with e.g. INCLUDE_ALL_GCS Then if it was not defined, the compiler will die a bit earlier and tell you that you are referring to something that does not exist, instead of assuming that if it has not been defined, it means it is not supported. Since we had a number of header file shuffling arounds lately, this may be a regression and SUPPORTS_NATIVE_CX8 got lost? However, this is just a hunch - may be a different problem. Beware of wild geese. Kind Regards, Thomas > > On Mon 12. Mar 2018 at 21:49, Edward Nevill > wrote: > >> On Mon, 2018-03-12 at 19:54 +0000, Thomas St?fe wrote: >> > Hi Edward, >> > >> > Thanks a lot for the fixing work! >> > >> > However, I am not so sure about the change to debug.hpp. Is the point >> of the Static assert thing not The missing Specialization? In which >> case the compile error you saw there was a static assert firing... >> > I may be wrong, maybe Erik could clarify? >> > >> > Otherwise the change looks good. Thank you. >> > >> >> Yes, of course, I see the purpose of the STATIC_ASSERT now. Kind of >> obvious from the name. >> >> The failure is in >> >> template >> static void verify_types(){ >> // If this fails to compile, then you have sent in something that is >> // not recognized as a valid primitive type to a primitive Access >> function. >> STATIC_ASSERT((HasDecorator> INTERNAL_VALUE_IS_OOP>::value || // oops have already been validated >> (IsPointer::value || IsIntegral::value) || >> IsFloatingPoint::value)); // not allowed primitive >> type >> } >> >> and the error is >> >> /home/ed/openjdk/hs/src/hotspot/share/oops/access.inline.hpp: In >> instantiation of ?void AccessInternal::verify_types() [with long unsigned >> int decorators = 4096; T = volatile oop]?: >> >> I will continue too look at this but would appreciate some help. > > >> >> Thanks, >> Ed. 
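The archive also stripped the template parameter lists from the two PossiblyLockedAccess snippets quoted above. Reconstructed from the surrounding text they read roughly as follows - treat the exact spelling as a best guess rather than the literal access.inline.hpp source:

  // This metafunction returns whether it is possible for a type T to require
  // locking to support wide atomics or not.
  template <typename T>
  #ifdef SUPPORTS_NATIVE_CX8
  struct PossiblyLockedAccess: public IntegralConstant<bool, false> {};
  #else
  struct PossiblyLockedAccess: public IntegralConstant<bool, (sizeof(T) > 4)> {};
  #endif

  // Erik's suggested change to the #else branch, so that pointer-sized
  // values (and hence oops) are never candidates for the locked path:
  // struct PossiblyLockedAccess
  //     : public IntegralConstant<bool, (sizeof(T) > sizeof(void*))> {};

One practical note on the always-define-to-0-or-1 style mentioned at the end: a bare #if on an undefined identifier silently evaluates to 0 by default, so the early error described here relies on the build enabling a warning such as -Wundef (together with -Werror).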
>> >> From thomas.stuefe at gmail.com Tue Mar 13 06:46:31 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 07:46:31 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: <8ad5b65b-9596-bbca-1f50-27c81d8d65a1@oracle.com> References: <8ad5b65b-9596-bbca-1f50-27c81d8d65a1@oracle.com> Message-ID: Hi Leo, On Mon, Mar 12, 2018 at 8:40 PM, Leo Korinth wrote: > > > On 12/03/18 17:48, Thomas St?fe wrote: > >> Hi Leo, >> >> On Mon, Mar 12, 2018 at 4:54 PM, Leo Korinth > > wrote: >> >> >> >> On 12/03/18 15:29, Thomas St?fe wrote: >> >> Hi Leo, >> >> This seems weird. >> >> This would affect numerous open() calls, not just this GC log, I >> cannot imagine the correct fix is to change all of them. >> >> >> Sorry, I do not understand what you mean with "numerous open()". >> This fix will only affect logging -- or am I missing something? >> os::open does roughly what I try to do in os::fopen_retain. >> >> >> Sorry, I spoke unclear. What I meant was I would expect the problem you >> found in gc logging to be present for every raw open()/fopen()/CreateFile() >> call in the VM and in the JDK, which are quite a few. I wondered why we do >> not see more problems like this. >> > > Oh, now I see, I just *assumed* os::open was used everywhere when in fact > it is only used in two places where the file in addition seems to be closed > fast afterwards. I should assume less... > > I guess leaking file descriptors is not that big of a problem. It seems to > *have* been a problem on Solaris (which seems to be the reason for > os::open) but as the unix file descriptors are closed by core-libs before > exec is mostly a windows problem. > > On windows it is also more of a problem because open files are harder to > rename or remove, and therefore the bug report. > > > >> >> >> In fact, on Posix platforms we close all file descriptors except >> the Pipe ones before between fork() and exec() - see >> unix/native/libjava/ childproc.c. >> >> >> Yes, that is why my test case did not fail before the fix on >> unix-like systems. I do not know why it is not handled in Windows >> (possibly a bug, possibly to keep old behaviour???), I had planned >> to ask that as a follow up question later, maybe open a bug report >> if it was not for keeping old behaviour. Even though childproc.c >> does close the file handler, I think it is much nicer to open them >> with FD_CLOEXEC (in addition to let childproc.c close it). os::open >> does so, and I would like to handle ::fopen the same way as ::open >> with a proxy call that ensures that the VM process will retain the >> file descriptor it opens (in HotSpot at least). >> >> Such code is missing on Windows - see >> windows/native/libjava/ProcessImpl_md.c . There, we do not have >> fork/exec, but CreateProcess(), and whether we inherit handles >> or not is controlled via an argument to CreateProcess(). But >> that flag is TRUE, so child processes inherit handles. 
>> >> 331 if (!CreateProcessW( >> 332 NULL, /* executable name */ >> 333 (LPWSTR)pcmd, /* command line */ >> 334 NULL, /* process security >> attribute */ >> 335 NULL, /* thread security >> attribute */ >> 336 TRUE, /* inherits system >> handles */ <<<<<< >> 337 processFlag, /* selected based >> on exe type */ >> 338 (LPVOID)penvBlock,/* environment block >> */ >> 339 (LPCWSTR)pdir, /* change to the >> new current directory */ >> 340 &si, /* (in) startup >> information */ >> 341 &pi)) /* (out) process >> information */ >> 342 { >> 343 win32Error(env, L"CreateProcess"); >> 344 } >> >> Maybe this is the real error we should fix? Make Windows >> Runtime.exec behave like the Posix variant by closing all file >> descriptors upon CreateProcess > >> (This seems more of a core-libs question.) >> >> >> I think it is both a core-libs question and a hotspot question. I >> firmly believe we should retain file descriptors with help of >> FD_CLOEXEC and its variants in HotSpot. I am unsure (and have no >> opinion) what to do in core-libs, maybe there is a deeper thought >> behind line 336? >> >> Some reasons for this: >> >> - if a process is forked using JNI, it would still be good if the >> hotspot descriptors would not leak. >> >> - if (I have no idea if this is true) the behaviour in core-libs can >> not be changed because the behaviour is already wildly (ab)used, >> this is still a correct fix. Remember this will only close file >> descriptors opened by HotSpot code, and at the moment only logging >> code. >> >> - this will fix the issue in the bug report, and give time for >> core-libs to consider what is correct (and what can be changed >> without breaking applications). >> >> Thanks, >> Leo >> >> >> yes, you convinced me. >> >> 1 We should fix raw open() calls, because if native code forks via a >> different code paths than java Runtime.exec(), we run into the same >> problem. Your patch fixes one instance of the problem. >> > Yes, I agree. I now understand that ::open() is much more used in the code > base. > >> >> 2 And we should fix Windows Runtime.exec() to the same behaviour as on >> Posix. I can see this being backward-compatible-problematic, but it >> certainly would be the right thing to do. Would love to know what core-libs >> says. >> > > Possibly (I am intentionally dodging this question) > >> >> Okay, about your change: I dislike that we add a new function, especially >> a first class open function, to the os namespace. How about this instead: >> since we know that os::open() does the right thing on all platforms, why >> can we not just use os::open() instead? Afterwards call fdopen() to wrap a >> FILE structure around it, respectively call "FILE* os::open(int fd, const >> char* mode)" , which seems to be just a wrapped fdopen(). That way you can >> get what you want with less change and without introducing a new API. >> > > Yes, that might be a better solution. I did consider it, but was afraid > that, for example the (significant) "w"/"w+" differences in semantics would > matter. or that: > > os::open(os::open(path, WINDOWS_ONLY(_)O_CREAT|WINDOWS_ONLY(_)O_TRUNC, > flags, mode) mode2) > > ...or something similar for fopen(path, "w"), would not be exactly the > same. For example it would set the file to binary mode on windows. Maybe it > is exactly the same otherwise? For me, the equality in semantics are not > obvious. > > Also, now when I realized there is only two users of os::open I am less > sure it always does the right thing... > > I prefer the os::fopen_retain way. 
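For illustration only, a rough sketch of the kind of wrapper being discussed; the name fopen_no_inherit and its structure are hypothetical, not the proposed os::fopen_retain patch. It appends the close-on-exec mode character where the C library understands one ("e" on glibc/BSD, "N" on the Windows CRT) and falls back to fcntl() otherwise; on a libc whose fopen() rejects unknown mode characters you would pass the plain mode and rely on fcntl() alone:

  #include <stdio.h>
  #ifndef _WIN32
  #include <fcntl.h>
  #endif

  // Hypothetical sketch: open a FILE* whose descriptor is not inherited by
  // child processes. "e" (glibc/BSD) and "N" (Windows CRT) request this at
  // open time; fcntl(F_SETFD, FD_CLOEXEC) is the fallback for libcs that
  // silently ignore unknown mode characters.
  static FILE* fopen_no_inherit(const char* path, const char* mode) {
    char m[8];
  #ifdef _WIN32
    snprintf(m, sizeof(m), "%sN", mode);
    return fopen(path, m);
  #else
    snprintf(m, sizeof(m), "%se", mode);
    FILE* f = fopen(path, m);
    if (f != NULL) {
      int fd = fileno(f);
      int flags = fcntl(fd, F_GETFD);
      if (flags != -1) {
        fcntl(fd, F_SETFD, flags | FD_CLOEXEC);   // no-op if "e" already worked
      }
    }
    return f;
  #endif
  }

The reason for putting the flag in the mode string rather than relying on fcntl() alone, as noted above, is that there is then no window between the open and the fcntl() call in which a concurrent fork could inherit the descriptor.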
> > I agree with you on the proposed fix: to open the file - at least on windows - with the inherit flag turned off. I still disagree with you about the way this is done. I am not a big fan of "one trick APIs" being dropped into the os namespace for one singular purpose - I think we had recently a similar discussion about an snprintf variant specific for logging only.

Just counting, here are a couple of variants I would prefer:

1) keep the API local to logging, do not make it global. It is logging specific, after all.

2) Or even easier, just do this (logFileOutput.cpp):

const char* const LogFileOutput::FileOpenMode = WINDOWS_ONLY("aN") NOT_WINDOWS("a");

that would fix windows. The other platforms do not have the problem if spawning via Runtime.exec(), and the problem of native-forking-and-handle-leaking is, while possible, rather theoretical.

3) If you really want a new global API, rename the thing to just "os::fopen()". Because after all you want to wrap a generic fopen() and forbid handle inheritance, yes? This is the same thing the os::open() sister function does too, so if you think you need that, give it a first class name :) And we could use tests then too (I think we have gtests for os::open()). In that case I also dislike the many ifdefs, so if you keep the function in its current form, I'd prefer it fanned out for different platforms, like os::open() does it.

Just my 5c, and tastes differ, so I'll wait what others say. I'll cc Markus as the UL owner.

Oh, I also think the bug description is a bit misleading, since this is about the UL file handle in general, not only the gc log. And might it make sense to post this in hotspot-runtime, not hotspot-dev?

Thanks and Best Regards, Thomas

> Thanks, > Leo > > >> Kind Regards, Thomas >> >> >> >> Kind Regards, Thomas >> >> >> On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth wrote: >> >> Hi, >> >> This fix is for all operating systems though the problem only seams >> to appear on windows. >> >> I am creating a proxy function for fopen (os::fopen_retain) that >> appends the non-standard "e" mode for linux and bsds. For windows >> the "N" mode is used. For other operating systems, I assume that I >> can use fcntl F_SETFD FD_CLOEXEC. I think this will work >> for AIX, >> Solaris and other operating systems that do not support the "e" >> flag. Feedback otherwise please! >> >> The reason that I use the mode "e" and not only fcntl for >> linux and >> bsds is threefold. First, I still need to use mode flags on >> windows >> as it does not support fcntl. Second, I probably save a >> system call. >> Third, the change will be applied directly, and there will >> be no >> point in time (between system calls) when the process can >> leak the >> file descriptor, so it is safer. >> >> The test case forks three VMs in a row. By doing so we know >> that the >> second VM is opened with a specific log file. The third VM >> should >> have less open file descriptors (as it is does not use >> logging) >> which is checked using a UnixOperatingSystemMXBean. This is >> not >> possible on windows, so on windows I try to rename the >> file, which >> will not work if the file is opened (the actual reason the >> bug was >> opened). >> >> The added test case shows that the bug fix closes the log >> file on >> windows. The VM on other operating systems closed the log >> file even >> before the fix. >> >> Maybe the test case should be moved to a different path?
>> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8176717 >> >> > > >> https://bugs.openjdk.java.net/browse/JDK-8176809 >> >> > > >> >> Webrev: >> http://cr.openjdk.java.net/~lkorinth/8176717/00/ >> >> > > >> >> Testing: >> hs-tier1, hs-tier2 and TestInheritFD.java >> (on 64-bit linux, solaris, windows and mac) >> >> Thanks, >> Leo >> >> >> >> From volker.simonis at gmail.com Tue Mar 13 08:24:35 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 13 Mar 2018 09:24:35 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: Thanks Jesper, that's really good news! Regards, Volker On Tue, Mar 13, 2018 at 2:06 AM, wrote: > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. > > The following is not new, but I list it here for completeness. > > In order to push a change to HotSpot: > 0. you must be a Committer in the JDK project. > 1. you need a JBS issue for tracking. > 2. your change must have been available for review at least 24 hours. > 3. your change must have been approved by two Committers out of which at least one is also a Reviewer. > 4. your change must have passed through the hs tier 1 testing provided by the submit-hs repository with zero failures. > 5. you must be available the next few hours, and the next day and ready to follow up with any fix needed in case your change causes problems in later tiers. > > A change that causes failures in later tiers may be backed out if a fix can not be provided fast enough, or if the developer is not responsive when noticed about the failure. > > Note that 5 above should be interpreted as "it is a really bad idea to push a change the last thing you do before bedtime, or the day before going on vacation". > > There is a notion of trivial changes that can be pushed sooner than 24 hours. It should be clearly stated in the review mail that the intention is to push as a trivial change. How to actually define "trivial" is decided on a case-by-case basis but in general it would be things like fixing a comment, or moving code without changing it. Backing out a change is also considered trivial as the change itself in that case is generated by mercurial. > > One of these days I'll figure out how to put this stuff on the OpenJDK wiki. > > Thanks, > /Jesper > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html > From glaubitz at physik.fu-berlin.de Tue Mar 13 08:27:50 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 13 Mar 2018 09:27:50 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: <8b7dfb28-8ad0-e5c4-789c-8810de9acc5e@physik.fu-berlin.de> On 03/13/2018 02:06 AM, jesper.wilhelmsson at oracle.com wrote: > There is now a new submit repo available. 
It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. Woohoo, this is very cool \o/. Thanks for implementing this! Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From volker.simonis at gmail.com Tue Mar 13 09:12:48 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 13 Mar 2018 10:12:48 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> Message-ID: Hi Coleen, Stefan, sure I'm open for suggestions :) As you both ask for the same thing, I'll prepare a new webrev with allocate_instance_handle moved to instanceKlass.cpp. In my initial patch I just didn't wanted to change the current inlining behaviour but if you both think that allocate_instance_handle is not performance critical I'm happy to clean that up. With the brand new submit-hs repo posted by Jesper just a few hours ago, I'll be also able to push this myself, so no more need for a sponsor :) Thanks, Volker On Mon, Mar 12, 2018 at 8:42 PM, wrote: > > Hi this looks good except: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html > > Can you move this a function in instanceKlass.cpp and would this eliminate > the changes that add include instanceKlass.inline.hpp ? > > If Stefan is not still online, I'll sponsor this for you. > > I have a follow-on related change > https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly expanding > due to transitive includes that I hope you can help me test out (when I get > it to compile on solaris). > > Thanks, > Coleen > > > > On 3/12/18 3:34 PM, Volker Simonis wrote: >> >> Hi, >> >> can I please have a review and a sponsor for the following fix: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >> https://bugs.openjdk.java.net/browse/JDK-8199472 >> >> The number changes files is "M" but the fix is actually "S" :) >> >> Here come the gory details: >> >> Change "8199319: Remove handles.inline.hpp include from >> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >> 16.04 with gcc 5.4.0). If you configure with >> "--disable-precompiled-headers" you will get a whole lot of undefined >> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >> >> It seems that newer versions of GCC (and possibly other compilers as >> well) don't emit any code for inline functions if these functions can >> be inlined at all potential call sites. >> >> The problem in this special case is that "Handle::Handle(Thread*, >> oopDesc*)" is not declared "inline" in "handles.hpp", but its >> definition in "handles.inline.hpp" is declared "inline". 
This leads to >> a situation, where compilation units which only include "handles.hpp" >> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >> compilation units which include "handles.inline.hpp" will try to >> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >> attempts are successful, no instance of "Handle::Handle(Thread*, >> oopDesc*)" will be generated in any of the object files. This will >> lead to the link errors listed in the . >> >> The quick fix for this issue is to include "handles.inline.hpp" into >> all the compilation units with undefined references (listed below). >> >> The correct fix (realized in this RFR) is to declare >> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >> lead to warnings (which are treated as errors) if the inline >> definition is not available at a call site and will avoid linking >> error due to compiler optimizations. Unfortunately this requires a >> whole lot of follow-up changes, because "handles.hpp" defines some >> derived classes of "Handle" which all have implicitly inline >> constructors which all reference the base class >> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >> of the derived classes have to be explicitly declared inline in >> "handles.hpp" and their implementation has to be moved to >> "handles.inline.hpp". This change again triggers other changes for all >> files which relayed on the derived Handle classes having inline >> constructors... >> >> Thank you and best regards, >> Volker > > From stefan.karlsson at oracle.com Tue Mar 13 09:16:19 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 13 Mar 2018 10:16:19 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> Message-ID: <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Hi Volker, On 2018-03-13 10:12, Volker Simonis wrote: > Hi Coleen, Stefan, > > sure I'm open for suggestions :) > > As you both ask for the same thing, I'll prepare a new webrev with > allocate_instance_handle moved to instanceKlass.cpp. In my initial > patch I just didn't wanted to change the current inlining behaviour > but if you both think that allocate_instance_handle is not performance > critical I'm happy to clean that up. I don't think it's critical to get it inlined. With that said, I think the compiler will inline allocate_instance into allocate_instance_handle, so you'll most likely only get one call anyway. > With the brand new submit-hs repo posted by Jesper just a few hours > ago, I'll be also able to push this myself, so no more need for a > sponsor :) Yay! StefanK > > Thanks, > Volker > > > On Mon, Mar 12, 2018 at 8:42 PM, wrote: >> >> Hi this looks good except: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >> >> Can you move this a function in instanceKlass.cpp and would this eliminate >> the changes that add include instanceKlass.inline.hpp ? >> >> If Stefan is not still online, I'll sponsor this for you. >> >> I have a follow-on related change >> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly expanding >> due to transitive includes that I hope you can help me test out (when I get >> it to compile on solaris). 
>> >> Thanks, >> Coleen >> >> >> >> On 3/12/18 3:34 PM, Volker Simonis wrote: >>> >>> Hi, >>> >>> can I please have a review and a sponsor for the following fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>> >>> The number changes files is "M" but the fix is actually "S" :) >>> >>> Here come the gory details: >>> >>> Change "8199319: Remove handles.inline.hpp include from >>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>> 16.04 with gcc 5.4.0). If you configure with >>> "--disable-precompiled-headers" you will get a whole lot of undefined >>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>> >>> It seems that newer versions of GCC (and possibly other compilers as >>> well) don't emit any code for inline functions if these functions can >>> be inlined at all potential call sites. >>> >>> The problem in this special case is that "Handle::Handle(Thread*, >>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>> definition in "handles.inline.hpp" is declared "inline". This leads to >>> a situation, where compilation units which only include "handles.hpp" >>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>> compilation units which include "handles.inline.hpp" will try to >>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>> attempts are successful, no instance of "Handle::Handle(Thread*, >>> oopDesc*)" will be generated in any of the object files. This will >>> lead to the link errors listed in the . >>> >>> The quick fix for this issue is to include "handles.inline.hpp" into >>> all the compilation units with undefined references (listed below). >>> >>> The correct fix (realized in this RFR) is to declare >>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>> lead to warnings (which are treated as errors) if the inline >>> definition is not available at a call site and will avoid linking >>> error due to compiler optimizations. Unfortunately this requires a >>> whole lot of follow-up changes, because "handles.hpp" defines some >>> derived classes of "Handle" which all have implicitly inline >>> constructors which all reference the base class >>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>> of the derived classes have to be explicitly declared inline in >>> "handles.hpp" and their implementation has to be moved to >>> "handles.inline.hpp". This change again triggers other changes for all >>> files which relayed on the derived Handle classes having inline >>> constructors... >>> >>> Thank you and best regards, >>> Volker >> >> From rkennke at redhat.com Tue Mar 13 09:26:36 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 13 Mar 2018 10:26:36 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <5AA2BD2B.2060100@oracle.com> References: <5AA2BD2B.2060100@oracle.com> Message-ID: Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: > Hi, > > The GC barriers for arraycopy stub routines are not as modular as they > could be. They currently use switch statements to check which GC barrier > set is being used, and call one or another barrier based on that, with > registers already allocated in such a way that it can only be used for > write barriers. > > My solution to the problem is to introduce a platform-specific GC > barrier set code generator. The abstract super class is > BarrierSetCodeGen, and you can get it from the active BarrierSet. 
A > virtual call to the BarrierSetCodeGen generates the relevant GC barriers > for the arraycopy stub routines. > > The BarrierSetCodeGen inheritance hierarchy exactly matches the > corresponding BarrierSet inheritance hierarchy. In other words, every > BarrierSet class has a corresponding BarrierSetCodeGen class. > > The various switch statements that generate different GC barriers > depending on the enum type of the barrier set have been changed to call > a corresponding virtual member function in the BarrierSetCodeGen class > instead. > > Thanks to Martin Doerr and Roman Kennke for providing platform specific > code for PPC, S390 and AArch64. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ > > CR: > https://bugs.openjdk.java.net/browse/JDK-8198949 > > Thanks, > /Erik I looked over x86, aarch64 and shared code (in webrev.01), and it looks good to me! As I commented earlier in private, I would find it useful if the barriers could 'take over' the whole arraycopy, for example to do the pre- and post-barrier and arraycopy in one pass, instead of 3. However, let's keep that for later. Awesome work, thank you! Cheers, Roman From erik.osterlund at oracle.com Tue Mar 13 09:47:24 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 13 Mar 2018 10:47:24 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: References: <5AA2BD2B.2060100@oracle.com> Message-ID: <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> Hi Roman, Thanks for the review. /Erik On 2018-03-13 10:26, Roman Kennke wrote: > Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >> Hi, >> >> The GC barriers for arraycopy stub routines are not as modular as they >> could be. They currently use switch statements to check which GC barrier >> set is being used, and call one or another barrier based on that, with >> registers already allocated in such a way that it can only be used for >> write barriers. >> >> My solution to the problem is to introduce a platform-specific GC >> barrier set code generator. The abstract super class is >> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >> virtual call to the BarrierSetCodeGen generates the relevant GC barriers >> for the arraycopy stub routines. >> >> The BarrierSetCodeGen inheritance hierarchy exactly matches the >> corresponding BarrierSet inheritance hierarchy. In other words, every >> BarrierSet class has a corresponding BarrierSetCodeGen class. >> >> The various switch statements that generate different GC barriers >> depending on the enum type of the barrier set have been changed to call >> a corresponding virtual member function in the BarrierSetCodeGen class >> instead. >> >> Thanks to Martin Doerr and Roman Kennke for providing platform specific >> code for PPC, S390 and AArch64. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8198949 >> >> Thanks, >> /Erik > > I looked over x86, aarch64 and shared code (in webrev.01), and it looks > good to me! > > As I commented earlier in private, I would find it useful if the > barriers could 'take over' the whole arraycopy, for example to do the > pre- and post-barrier and arraycopy in one pass, instead of 3. However, > let's keep that for later. > > Awesome work, thank you! 
> > Cheers, > Roman > > From edward.nevill at gmail.com Tue Mar 13 10:05:54 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Tue, 13 Mar 2018 10:05:54 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> Message-ID: <1520935554.25609.2.camel@gmail.com> On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: > > > On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe wrote: > > Reminds me of : > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html > > > > Could this be the same issue? > > > It is indeed exactly the same issue. Was this issue ever resolved? I cannot find a JBS report or hg patch. Many thanks, Ed. From leo.korinth at oracle.com Tue Mar 13 10:02:09 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Tue, 13 Mar 2018 11:02:09 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: <8ad5b65b-9596-bbca-1f50-27c81d8d65a1@oracle.com> Message-ID: > I agree with you on the proposed fix: to open the file - at least on > windows - with the inherit flag turned off. I still disagree with you > about the way this is done. I am not a bit fan on "one trick APIs" being > dropped into the os namespace for one singular purpose - I think we had > recently a similar discussion about an snprintf variant specific for > logging only. > > Just counting are a couple of variants I would prefer: > > 1) keep the API locally to logging, do not make it global. It is logging > specific, after all. But is it really logging specific? I tried to do it as generic as possible. The _only_ behaviour I am forcing is that the descriptor should not leak (a property that arguably ought to be forced globally). I feel it is _more_ generic than os::open --- I am for example not forcing binary mode on anyone. > > 2) Or even easier, just do this (logFileOutput.cpp): > > const char* const LogFileOutput::FileOpenMode = WINDOWS_ONLY("aN") > NOT_WINDOWS("a"); > > that would fix windows. The other platforms do not have the problem if > spawning via Runtime.exec(), and the problem of > native-forking-and-handle-leaking is, while possible, rather theoretical. > > 2) If you really want a new global API, rename the thing to just > "os::fopen()". Because after all you want to wrap a generic fopen() and > forbid handle inheritance, yes? This is the same thing the os::open() > sister function does too, so if you think you need that, give it a first > class name :) And we could use tests then too (I think we have gtests > for os::open()). In that case I also dislike the many ifdefs, so if you > keep the function in its current form, I'd prefer it fanned out for > different platforms, like os::open() does it. Maybe the name os::fopen is better, I was afraid that it would give the wrongful impression that it was just a platform agnostic ::fopen. Just like os::open might not give the impression that it opens files in binary mode on windows. > > Just my 5c, and tastes differ, so I'll wait what others say. I'll cc > Markus as the UL owner. Thank you for the feedback! 
Lets see if the functionality is needed or wanted outside logging, if not I will remove it from "os" and just inline its use... Thanks, Leo > Oh, I also think the bug desciption is a bit misleading, since this is > about the UL file handle in general, not only gc log. And mayit make > sense to post this in hotspot-runtime, not hotspot-dev? > > Thanks and Best Regards, Thomas From thomas.stuefe at gmail.com Tue Mar 13 10:24:20 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 11:24:20 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520935554.25609.2.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> <1520935554.25609.2.camel@gmail.com> Message-ID: On Tue, Mar 13, 2018 at 11:05 AM, Edward Nevill wrote: > On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: > > > > > > On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe > wrote: > > > Reminds me of : > > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017- > November/029289.html > > > > > > Could this be the same issue? > > > > > > > It is indeed exactly the same issue. > > Was this issue ever resolved? I cannot find a JBS report or hg patch. > > Many thanks, > Ed. > > ... oh... I think Erik thought I was going to fix it, and I was counting on Erik... :-) So, maybe it was never fixed. Adrian is the defacto maintainer of zero currently (at least he is the most active), but I think he may only build release? ..Thomas From adinn at redhat.com Tue Mar 13 10:54:29 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 13 Mar 2018 10:54:29 +0000 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: On 13/03/18 01:06, jesper.wilhelmsson at oracle.com wrote: > Hi all HotSpot developers! > > There is now a new submit repo available. > . . . > Thanks, /Jesper Great news! Thanks very much, Jesper. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From shade at redhat.com Tue Mar 13 11:05:49 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 13 Mar 2018 12:05:49 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set Message-ID: g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be available if barrier set does not support it. Reliably asserts with Epsilon. 
Bug: https://bugs.openjdk.java.net/browse/JDK-8199511 Fix: http://cr.openjdk.java.net/~shade/8199511/webrev.01/ This is arch-specific fix: - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id - c1_Runtime1_arm: added check block for *both* g1_{pre|post}_slow_id - c1_Runtime1_ppc: already implemented - c1_Runtime1_s390: already implemented - c1_Runtime1_sparc: already implemented - c1_Runtime1_x86: copy-pasted the check block from g1_pre_barrier_slow_id Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) Thanks, -Aleksey From shade at redhat.com Tue Mar 13 11:09:25 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 13 Mar 2018 12:09:25 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: <1af24623-0bd6-6820-0e2b-924233e5803f@redhat.com> On 03/13/2018 02:06 AM, jesper.wilhelmsson at oracle.com wrote: > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ The mirror tarball for this new repo: https://builds.shipilev.net/workspaces/jdk-submit-hs.tar.xz As usual, this works: $ wget https://builds.shipilev.net/workspaces/jdk-submit-hs.tar.xz -O - | tar xJf -; \ cd jdk-submit-hs; \ hg pull; hg up; Thanks, -Aleksey From rkennke at redhat.com Tue Mar 13 11:09:56 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 13 Mar 2018 12:09:56 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: References: Message-ID: Am 13.03.2018 um 12:05 schrieb Aleksey Shipilev: > g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be > available if barrier set does not support it. Reliably asserts with Epsilon. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199511 > > Fix: > http://cr.openjdk.java.net/~shade/8199511/webrev.01/ > > This is arch-specific fix: > - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id > - c1_Runtime1_arm: added check block for *both* g1_{pre|post}_slow_id > - c1_Runtime1_ppc: already implemented > - c1_Runtime1_s390: already implemented > - c1_Runtime1_sparc: already implemented > - c1_Runtime1_x86: copy-pasted the check block from g1_pre_barrier_slow_id > > Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) > > Thanks, > -Aleksey > I just stumbled over the same thing. Patch looks good to me! Thanks, Roman From coleen.phillimore at oracle.com Tue Mar 13 11:50:12 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Mar 2018 07:50:12 -0400 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files Message-ID: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Summary: interfaceSupport.hpp is an inline file so moved to interfaceSupport.inline.hpp and stopped including it in .hpp files 90% of this change is renaming interfaceSupport.hpp to interfaceSupport.inline.hpp.?? I tried to see if all of these files needed this header and the answer was yes.?? A surprising (to me!) number of files have thread state transitions. 
Some of interesting part of this change is adding ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for VM_ENTRY.? whitebox.inline.hpp was added for the same reason. jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes interfaceSupport.inline.hpp, and is only included in cpp files. The rest of the changes were to add back includes that are not pulled in by header files including interfaceSupport.hpp, like gcLocker.hpp and of course handles.inline.hpp. This probably overlaps some of Volker's patch.? Can this be tested on other platforms that we don't have? Hopefully, at the end of all this we have more clean header files so that transitive includes don't make the jvm build on one platform but not the next.? I think that's the goal of all of this work. This was tested with Oracle platforms (linux-x64, solaris-sparcv9, macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this locally without precompiled headers (my default setting of course) on linux-x64. bug link https://bugs.openjdk.java.net/browse/JDK-8199263 local webrev at http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev Thanks to Stefan for his help with this. Thanks, Coleen From erik.helin at oracle.com Tue Mar 13 12:38:57 2018 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 13 Mar 2018 13:38:57 +0100 Subject: New submit repo for hotspot changes In-Reply-To: <1af24623-0bd6-6820-0e2b-924233e5803f@redhat.com> References: <1af24623-0bd6-6820-0e2b-924233e5803f@redhat.com> Message-ID: <953dffd2-9189-28f9-0db9-d1ad5f48cc56@oracle.com> On 03/13/2018 12:09 PM, Aleksey Shipilev wrote: > On 03/13/2018 02:06 AM, jesper.wilhelmsson at oracle.com wrote: >> Hi all HotSpot developers! >> >> There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. >> >> http://hg.openjdk.java.net/jdk/submit-hs/ > > The mirror tarball for this new repo: > https://builds.shipilev.net/workspaces/jdk-submit-hs.tar.xz > > As usual, this works: > > $ wget https://builds.shipilev.net/workspaces/jdk-submit-hs.tar.xz -O - | tar xJf -; \ > cd jdk-submit-hs; \ > hg pull; hg up; Thanks Aleksey for mirroring, but if you are already working in jdk/hs, then you can just add the jdk/hs-submit repo as an additional remote: $ printf "[paths]\nsubmit = ssh://$(hg config ui.username)@hg.openjdk.java.net/jdk/hs-submit\n" >> $(hg root)/.hg/hgrc This way, when you want to test your work, just push to the submit path: $ hg push --new-branch submit To avoid forgetting to append the 'submit' path (and thereby pushing to the main repository), it is probably best to create an alias: $ printf "[alias]\nsubmit = push --new-branch submit" >> $HOME/.hgrc This way you can just type: $ hg submit to run your current branch through the jdk/hs-submit system. Thanks, Erik > Thanks, > -Aleksey > > From stefan.karlsson at oracle.com Tue Mar 13 12:55:54 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 13 Mar 2018 13:55:54 +0100 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: Hi Coleen, Not sure why this is a Pre-RFR instead of a RFR. 
Most of this looks good to me. I'd prefer if you removed the .inline.hpp files from precompiled.hpp. We could also do it as a separate cleanup if you don't want to retest this patch. Thanks, StefanK On 2018-03-13 12:50, coleen.phillimore at oracle.com wrote: > Summary: interfaceSupport.hpp is an inline file so moved to > interfaceSupport.inline.hpp and stopped including it in .hpp files > > 90% of this change is renaming interfaceSupport.hpp to > interfaceSupport.inline.hpp.?? I tried to see if all of these files > needed this header and the answer was yes.?? A surprising (to me!) > number of files have thread state transitions. > Some of interesting part of this change is adding ciUtilities.inline.hpp > to include interfaceSupport.inline.hpp for VM_ENTRY. whitebox.inline.hpp > was added for the same reason. > jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes > interfaceSupport.inline.hpp, and is only included in cpp files. > The rest of the changes were to add back includes that are not pulled in > by header files including interfaceSupport.hpp, like gcLocker.hpp and of > course handles.inline.hpp. > > This probably overlaps some of Volker's patch.? Can this be tested on > other platforms that we don't have? > > Hopefully, at the end of all this we have more clean header files so > that transitive includes don't make the jvm build on one platform but > not the next.? I think that's the goal of all of this work. > > This was tested with Oracle platforms (linux-x64, solaris-sparcv9, > macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this > locally without precompiled headers (my default setting of course) on > linux-x64. > > bug link https://bugs.openjdk.java.net/browse/JDK-8199263 > local webrev at > http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev > > Thanks to Stefan for his help with this. > > Thanks, > Coleen > > From coleen.phillimore at oracle.com Tue Mar 13 13:01:13 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Mar 2018 09:01:13 -0400 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: Sorry, this is the correct webrev: http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html Coleen On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: > Summary: interfaceSupport.hpp is an inline file so moved to > interfaceSupport.inline.hpp and stopped including it in .hpp files > > 90% of this change is renaming interfaceSupport.hpp to > interfaceSupport.inline.hpp.?? I tried to see if all of these files > needed this header and the answer was yes.?? A surprising (to me!) > number of files have thread state transitions. > Some of interesting part of this change is adding > ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for > VM_ENTRY.? whitebox.inline.hpp was added for the same reason. > jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes > interfaceSupport.inline.hpp, and is only included in cpp files. > The rest of the changes were to add back includes that are not pulled > in by header files including interfaceSupport.hpp, like gcLocker.hpp > and of course handles.inline.hpp. > > This probably overlaps some of Volker's patch.? Can this be tested on > other platforms that we don't have? 
> > Hopefully, at the end of all this we have more clean header files so > that transitive includes don't make the jvm build on one platform but > not the next.? I think that's the goal of all of this work. > > This was tested with Oracle platforms (linux-x64, solaris-sparcv9, > macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this > locally without precompiled headers (my default setting of course) on > linux-x64. > > bug link https://bugs.openjdk.java.net/browse/JDK-8199263 > local webrev at > http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev > > Thanks to Stefan for his help with this. > > Thanks, > Coleen > > From goetz.lindenmaier at sap.com Tue Mar 13 13:16:33 2018 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 13 Mar 2018 13:16:33 +0000 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: Hi This is great I appreciate this quick progress a lot!! Thanks to Jesper and anybody else involved in enabling this. The rules make complete sense to me. Maybe simple build fixes should be considered as trivial, too. (Like adding missing #endif). Best regards, Goetz. > -----Original Message----- > From: jdk-dev [mailto:jdk-dev-bounces at openjdk.java.net] On Behalf Of > jesper.wilhelmsson at oracle.com > Sent: Dienstag, 13. M?rz 2018 02:07 > To: HotSpot Open Source Developers > Cc: jdk-dev > Subject: New submit repo for hotspot changes > > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created a > while ago [1], and the usage is the same, but this one is based on and > synched with the jdk/hs forest. This means that it should now be possible for > any contributor to run all the required tests for hotspot pushes (referred to > as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of a > failure, but if all tests pass (and you fulfill the other criteria below) you will be > ready to push your change. We do no longer require an Oracle sponsor to > push changes to HotSpot. > > The following is not new, but I list it here for completeness. > > In order to push a change to HotSpot: > 0. you must be a Committer in the JDK project. > 1. you need a JBS issue for tracking. > 2. your change must have been available for review at least 24 hours. > 3. your change must have been approved by two Committers out of which at > least one is also a Reviewer. > 4. your change must have passed through the hs tier 1 testing provided by > the submit-hs repository with zero failures. > 5. you must be available the next few hours, and the next day and ready to > follow up with any fix needed in case your change causes problems in later > tiers. > > A change that causes failures in later tiers may be backed out if a fix can not > be provided fast enough, or if the developer is not responsive when noticed > about the failure. > > Note that 5 above should be interpreted as "it is a really bad idea to push a > change the last thing you do before bedtime, or the day before going on > vacation". > > There is a notion of trivial changes that can be pushed sooner than 24 hours. > It should be clearly stated in the review mail that the intention is to push as a > trivial change. How to actually define "trivial" is decided on a case-by-case > basis but in general it would be things like fixing a comment, or moving code > without changing it. 
Backing out a change is also considered trivial as the > change itself in that case is generated by mercurial. > > One of these days I'll figure out how to put this stuff on the OpenJDK wiki. > > Thanks, > /Jesper > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018- > January/000566.html From shade at redhat.com Tue Mar 13 14:02:12 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 13 Mar 2018 15:02:12 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: References: Message-ID: On 03/13/2018 12:05 PM, Aleksey Shipilev wrote: > g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be > available if barrier set does not support it. Reliably asserts with Epsilon. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199511 > > Fix: > http://cr.openjdk.java.net/~shade/8199511/webrev.01/ > > This is arch-specific fix: > - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id > - c1_Runtime1_arm: added check block for *both* g1_{pre|post}_slow_id > - c1_Runtime1_ppc: already implemented > - c1_Runtime1_s390: already implemented > - c1_Runtime1_sparc: already implemented > - c1_Runtime1_x86: copy-pasted the check block from g1_pre_barrier_slow_id > > Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) submit-hs result came clean (apart from two known failures). Do I need a sponsor for this? Definitely need more reviews. -Aleksey From coleen.phillimore at oracle.com Tue Mar 13 14:00:44 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Mar 2018 10:00:44 -0400 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: On 3/13/18 8:55 AM, Stefan Karlsson wrote: > Hi Coleen, > > Not sure why this is a Pre-RFR instead of a RFR. Most of this looks > good to me. It's Pre-RFR because I want more testing on other platforms that we don't have the capability to test. Thanks for reviewing this! > > I'd prefer if you removed the .inline.hpp files from precompiled.hpp. > We could also do it as a separate cleanup if you don't want to retest > this patch. Let's do that separately.? I didn't know what we wanted to do for precompiled.hpp honestly. thanks, Coleen > > Thanks, > StefanK > > > On 2018-03-13 12:50, coleen.phillimore at oracle.com wrote: >> Summary: interfaceSupport.hpp is an inline file so moved to >> interfaceSupport.inline.hpp and stopped including it in .hpp files >> >> 90% of this change is renaming interfaceSupport.hpp to >> interfaceSupport.inline.hpp.?? I tried to see if all of these files >> needed this header and the answer was yes.?? A surprising (to me!) >> number of files have thread state transitions. >> Some of interesting part of this change is adding >> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >> VM_ENTRY. whitebox.inline.hpp was added for the same reason. >> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >> interfaceSupport.inline.hpp, and is only included in cpp files. >> The rest of the changes were to add back includes that are not pulled >> in by header files including interfaceSupport.hpp, like gcLocker.hpp >> and of course handles.inline.hpp. >> >> This probably overlaps some of Volker's patch.? Can this be tested on >> other platforms that we don't have? 
>> >> Hopefully, at the end of all this we have more clean header files so >> that transitive includes don't make the jvm build on one platform but >> not the next.? I think that's the goal of all of this work. >> >> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >> locally without precompiled headers (my default setting of course) on >> linux-x64. >> >> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >> local webrev at >> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >> >> Thanks to Stefan for his help with this. >> >> Thanks, >> Coleen >> >> From rkennke at redhat.com Tue Mar 13 14:08:00 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 13 Mar 2018 15:08:00 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: References: Message-ID: <84c3a9e1-9bb4-a2a6-8f37-8ba1ed33e475@redhat.com> Am 13.03.2018 um 15:02 schrieb Aleksey Shipilev: > On 03/13/2018 12:05 PM, Aleksey Shipilev wrote: >> g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be >> available if barrier set does not support it. Reliably asserts with Epsilon. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199511 >> >> Fix: >> http://cr.openjdk.java.net/~shade/8199511/webrev.01/ >> >> This is arch-specific fix: >> - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id >> - c1_Runtime1_arm: added check block for *both* g1_{pre|post}_slow_id >> - c1_Runtime1_ppc: already implemented >> - c1_Runtime1_s390: already implemented >> - c1_Runtime1_sparc: already implemented >> - c1_Runtime1_x86: copy-pasted the check block from g1_pre_barrier_slow_id >> >> Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) > > submit-hs result came clean (apart from two known failures). > > Do I need a sponsor for this? Jesper said this: "you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot." > Definitely need more reviews. Yes :-) From hohensee at amazon.com Tue Mar 13 14:50:48 2018 From: hohensee at amazon.com (Hohensee, Paul) Date: Tue, 13 Mar 2018 14:50:48 +0000 Subject: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: <5AA725AA.7010202@linux.vnet.ibm.com> References: <5AA725AA.7010202@linux.vnet.ibm.com> Message-ID: <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> Looks good to me. Thanks, Paul ?On 3/12/18, 6:13 PM, "Gustavo Romero" wrote: Hi. Paul, I just saw today your bug on JBS... https://bugs.openjdk.java.net/browse/JDK-8198794 Thanks for reporting and debugging it. It looks like the issue boils down to the fact that although 'numa_all_nodes_ptr' was introduced with libnuma API v2, 'numa_nodes_ptr' was only introduced later on libnuma v2.0.9, so it's not present in libnuma 2.0.3 which dates back to Jun 2009 [1]. I agree with your initial patch that a reasonable way to address it for archs like x86_64 is to use 'numa_all_nodes_ptr' as a surrogate for 'numa_nodes_ptr' (PowerPC needs 'numa_nodes_ptr' anyway and will have to stick with libnuma 2.0.9 and above because it's not unusual to have non-configured nodes on PPC64 and nodes can be non-contiguous as well). I just think it's better to handle it inside isnode_in_existing_nodes() interface, which is where such a information is needed in the end. 
In that sense, if you agree could you please check if the following webrev fixes the issue for you? It must also apply ok for jdk8u: bug : https://bugs.openjdk.java.net/browse/JDK-8198794 webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ If it does solve your issue, I will kindly ask for another Reviewer. Thank you. Best regards, Gustavo [1] http://cr.openjdk.java.net/~gromero/misc/numa_all_nodes_ptr_VS_numa_nodes_ptr.txt From jesper.wilhelmsson at oracle.com Tue Mar 13 15:02:15 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 13 Mar 2018 16:02:15 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: <0F6C62A5-372D-4687-BC01-202C5034A562@oracle.com> The sync is done via an hg hook that pushes directly to the submit-hs repo as soon as anything enters hs, so it would be a matter of minutes at most. /Jesper > On 13 Mar 2018, at 06:50, Thomas St?fe wrote: > > Hi Jesper, > > just a small question, how close will the syncing between submit-hs and jdk-hs be? > > Best Regards, Thomas > > On Tue, Mar 13, 2018 at 2:06 AM, > wrote: > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. > > The following is not new, but I list it here for completeness. > > In order to push a change to HotSpot: > 0. you must be a Committer in the JDK project. > 1. you need a JBS issue for tracking. > 2. your change must have been available for review at least 24 hours. > 3. your change must have been approved by two Committers out of which at least one is also a Reviewer. > 4. your change must have passed through the hs tier 1 testing provided by the submit-hs repository with zero failures. > 5. you must be available the next few hours, and the next day and ready to follow up with any fix needed in case your change causes problems in later tiers. > > A change that causes failures in later tiers may be backed out if a fix can not be provided fast enough, or if the developer is not responsive when noticed about the failure. > > Note that 5 above should be interpreted as "it is a really bad idea to push a change the last thing you do before bedtime, or the day before going on vacation". > > There is a notion of trivial changes that can be pushed sooner than 24 hours. It should be clearly stated in the review mail that the intention is to push as a trivial change. How to actually define "trivial" is decided on a case-by-case basis but in general it would be things like fixing a comment, or moving code without changing it. Backing out a change is also considered trivial as the change itself in that case is generated by mercurial. > > One of these days I'll figure out how to put this stuff on the OpenJDK wiki. 
> > Thanks, > /Jesper > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html > > From jesper.wilhelmsson at oracle.com Tue Mar 13 15:09:36 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 13 Mar 2018 16:09:36 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: <93469198-D937-4B42-B0A8-FCA45525BCE5@oracle.com> There were several people involved in this decision. Iris Clark and Christian T?rnqvist did the actual job, I did only send out the email. :-) /Jesper > On 13 Mar 2018, at 14:16, Lindenmaier, Goetz wrote: > > Hi > > This is great I appreciate this quick progress a lot!! > > Thanks to Jesper and anybody else involved in enabling this. > The rules make complete sense to me. > Maybe simple build fixes should be considered as trivial, too. > (Like adding missing #endif). > > Best regards, > Goetz. > >> -----Original Message----- >> From: jdk-dev [mailto:jdk-dev-bounces at openjdk.java.net] On Behalf Of >> jesper.wilhelmsson at oracle.com >> Sent: Dienstag, 13. M?rz 2018 02:07 >> To: HotSpot Open Source Developers >> Cc: jdk-dev >> Subject: New submit repo for hotspot changes >> >> Hi all HotSpot developers! >> >> There is now a new submit repo available. It is similar to the one created a >> while ago [1], and the usage is the same, but this one is based on and >> synched with the jdk/hs forest. This means that it should now be possible for >> any contributor to run all the required tests for hotspot pushes (referred to >> as hs tier 1) on the latest version of the HotSpot source code. >> >> http://hg.openjdk.java.net/jdk/submit-hs/ >> >> The results will still be returned in a mail with limited usage in case of a >> failure, but if all tests pass (and you fulfill the other criteria below) you will be >> ready to push your change. We do no longer require an Oracle sponsor to >> push changes to HotSpot. >> >> The following is not new, but I list it here for completeness. >> >> In order to push a change to HotSpot: >> 0. you must be a Committer in the JDK project. >> 1. you need a JBS issue for tracking. >> 2. your change must have been available for review at least 24 hours. >> 3. your change must have been approved by two Committers out of which at >> least one is also a Reviewer. >> 4. your change must have passed through the hs tier 1 testing provided by >> the submit-hs repository with zero failures. >> 5. you must be available the next few hours, and the next day and ready to >> follow up with any fix needed in case your change causes problems in later >> tiers. >> >> A change that causes failures in later tiers may be backed out if a fix can not >> be provided fast enough, or if the developer is not responsive when noticed >> about the failure. >> >> Note that 5 above should be interpreted as "it is a really bad idea to push a >> change the last thing you do before bedtime, or the day before going on >> vacation". >> >> There is a notion of trivial changes that can be pushed sooner than 24 hours. >> It should be clearly stated in the review mail that the intention is to push as a >> trivial change. How to actually define "trivial" is decided on a case-by-case >> basis but in general it would be things like fixing a comment, or moving code >> without changing it. Backing out a change is also considered trivial as the >> change itself in that case is generated by mercurial. >> >> One of these days I'll figure out how to put this stuff on the OpenJDK wiki. 
>> >> Thanks, >> /Jesper >> >> [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018- >> January/000566.html > From mark.reinhold at oracle.com Tue Mar 13 15:17:17 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Tue, 13 Mar 2018 08:17:17 -0700 Subject: New submit repo for hotspot changes In-Reply-To: <0F6C62A5-372D-4687-BC01-202C5034A562@oracle.com> References: <0F6C62A5-372D-4687-BC01-202C5034A562@oracle.com> Message-ID: <20180313081717.253828242@eggemoggin.niobe.net> 2018/3/13 8:02:15 -0700, jesper.wilhelmsson at oracle.com: > The sync is done via an hg hook that pushes directly to the submit-hs > repo as soon as anything enters hs, so it would be a matter of minutes > at most. /Jesper It should typically be nearly instantaneous. When you push a changeset to jdk/hs, the Mercurial server runs a post-transaction hook that pushes the changeset over to jdk/hs-submit. That's why you'll now see a bit more output when you push to jdk/hs: $ hg push pushing to ssh://hg.openjdk.java.net/jdk/hs searching for changes remote: adding changesets remote: adding manifests remote: adding file changes remote: added 1 changesets with 5 changes to 5 files # new output here: remote: pushing to /hg/jdk/submit-hs remote: searching for changes remote: adding changesets remote: adding manifests remote: adding file changes remote: added 1 changesets with 5 changes to 5 files remote: branch-only check passed remote: notifying jdk-submit-changes at openjdk.java.net remote: pushed update to /hg/jdk/submit-hs remote: notifying jdk-all-changes at openjdk.java.net, jdk-hs-changes at openjdk.java.net $ - Mark From rkennke at redhat.com Tue Mar 13 15:48:32 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 13 Mar 2018 16:48:32 +0100 Subject: New submit repo for hotspot changes In-Reply-To: References: Message-ID: <57ea7be3-4393-69a2-4659-74f961bc16f9@redhat.com> Hi Jesper and all, this is very great news! Thank you! I have questions: - In case of a failure, that is not obviously related (probably spurious unrelated test failure), is it possible to re-submit a test job? And how? - Related to this, how would a new revision of a change be submitted? Do the change in the same branch and push? Would that be picked up and re-tested? Thanks, Roman > Hi all HotSpot developers! > > There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. > > http://hg.openjdk.java.net/jdk/submit-hs/ > > The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. > > The following is not new, but I list it here for completeness. > > In order to push a change to HotSpot: > 0. you must be a Committer in the JDK project. > 1. you need a JBS issue for tracking. > 2. your change must have been available for review at least 24 hours. > 3. your change must have been approved by two Committers out of which at least one is also a Reviewer. > 4. your change must have passed through the hs tier 1 testing provided by the submit-hs repository with zero failures. > 5. 
you must be available the next few hours, and the next day and ready to follow up with any fix needed in case your change causes problems in later tiers. > > A change that causes failures in later tiers may be backed out if a fix can not be provided fast enough, or if the developer is not responsive when noticed about the failure. > > Note that 5 above should be interpreted as "it is a really bad idea to push a change the last thing you do before bedtime, or the day before going on vacation". > > There is a notion of trivial changes that can be pushed sooner than 24 hours. It should be clearly stated in the review mail that the intention is to push as a trivial change. How to actually define "trivial" is decided on a case-by-case basis but in general it would be things like fixing a comment, or moving code without changing it. Backing out a change is also considered trivial as the change itself in that case is generated by mercurial. > > One of these days I'll figure out how to put this stuff on the OpenJDK wiki. > > Thanks, > /Jesper > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html > From christian.tornqvist at oracle.com Tue Mar 13 15:58:14 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 13 Mar 2018 11:58:14 -0400 Subject: New submit repo for hotspot changes In-Reply-To: <57ea7be3-4393-69a2-4659-74f961bc16f9@redhat.com> References: <57ea7be3-4393-69a2-4659-74f961bc16f9@redhat.com> Message-ID: Hi Roman, > On Mar 13, 2018, at 11:48 32AM, Roman Kennke wrote: > > Hi Jesper and all, > > this is very great news! Thank you! > > I have questions: > - In case of a failure, that is not obviously related (probably spurious > unrelated test failure), is it possible to re-submit a test job? And how? Easiest way is to just make a new change in the branch, I suggest pulling new changes in from the default branch every time you make changes to your branch as well. We can trigger a new build/test run but that requires help from someone inside of Oracle. > - Related to this, how would a new revision of a change be submitted? Do > the change in the same branch and push? Would that be picked up and > re-tested? Yes, it?ll monitor the branch and start building and testing for every change made in there. So you can re-use the same branch for multiple test runs. Thanks, Christian > > Thanks, Roman > > >> Hi all HotSpot developers! >> >> There is now a new submit repo available. It is similar to the one created a while ago [1], and the usage is the same, but this one is based on and synched with the jdk/hs forest. This means that it should now be possible for any contributor to run all the required tests for hotspot pushes (referred to as hs tier 1) on the latest version of the HotSpot source code. >> >> http://hg.openjdk.java.net/jdk/submit-hs/ >> >> The results will still be returned in a mail with limited usage in case of a failure, but if all tests pass (and you fulfill the other criteria below) you will be ready to push your change. We do no longer require an Oracle sponsor to push changes to HotSpot. >> >> The following is not new, but I list it here for completeness. >> >> In order to push a change to HotSpot: >> 0. you must be a Committer in the JDK project. >> 1. you need a JBS issue for tracking. >> 2. your change must have been available for review at least 24 hours. >> 3. your change must have been approved by two Committers out of which at least one is also a Reviewer. >> 4. 
your change must have passed through the hs tier 1 testing provided by the submit-hs repository with zero failures. >> 5. you must be available the next few hours, and the next day and ready to follow up with any fix needed in case your change causes problems in later tiers. >> >> A change that causes failures in later tiers may be backed out if a fix can not be provided fast enough, or if the developer is not responsive when noticed about the failure. >> >> Note that 5 above should be interpreted as "it is a really bad idea to push a change the last thing you do before bedtime, or the day before going on vacation". >> >> There is a notion of trivial changes that can be pushed sooner than 24 hours. It should be clearly stated in the review mail that the intention is to push as a trivial change. How to actually define "trivial" is decided on a case-by-case basis but in general it would be things like fixing a comment, or moving code without changing it. Backing out a change is also considered trivial as the change itself in that case is generated by mercurial. >> >> One of these days I'll figure out how to put this stuff on the OpenJDK wiki. >> >> Thanks, >> /Jesper >> >> [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000566.html >> > > From thomas.stuefe at gmail.com Tue Mar 13 16:06:45 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 17:06:45 +0100 Subject: New submit repo for hotspot changes In-Reply-To: <0F6C62A5-372D-4687-BC01-202C5034A562@oracle.com> References: <0F6C62A5-372D-4687-BC01-202C5034A562@oracle.com> Message-ID: Thank you Jesper! On Tue, Mar 13, 2018 at 4:02 PM, wrote: > The sync is done via an hg hook that pushes directly to the submit-hs repo > as soon as anything enters hs, so it would be a matter of minutes at most. > /Jesper > > > On 13 Mar 2018, at 06:50, Thomas St?fe wrote: > > Hi Jesper, > > just a small question, how close will the syncing between submit-hs and > jdk-hs be? > > Best Regards, Thomas > > On Tue, Mar 13, 2018 at 2:06 AM, wrote: > >> Hi all HotSpot developers! >> >> There is now a new submit repo available. It is similar to the one >> created a while ago [1], and the usage is the same, but this one is based >> on and synched with the jdk/hs forest. This means that it should now be >> possible for any contributor to run all the required tests for hotspot >> pushes (referred to as hs tier 1) on the latest version of the HotSpot >> source code. >> >> http://hg.openjdk.java.net/jdk/submit-hs/ >> >> The results will still be returned in a mail with limited usage in case >> of a failure, but if all tests pass (and you fulfill the other criteria >> below) you will be ready to push your change. We do no longer require an >> Oracle sponsor to push changes to HotSpot. >> >> The following is not new, but I list it here for completeness. >> >> In order to push a change to HotSpot: >> 0. you must be a Committer in the JDK project. >> 1. you need a JBS issue for tracking. >> 2. your change must have been available for review at least 24 hours. >> 3. your change must have been approved by two Committers out of which at >> least one is also a Reviewer. >> 4. your change must have passed through the hs tier 1 testing provided by >> the submit-hs repository with zero failures. >> 5. you must be available the next few hours, and the next day and ready >> to follow up with any fix needed in case your change causes problems in >> later tiers. 
>> >> A change that causes failures in later tiers may be backed out if a fix >> can not be provided fast enough, or if the developer is not responsive when >> noticed about the failure. >> >> Note that 5 above should be interpreted as "it is a really bad idea to >> push a change the last thing you do before bedtime, or the day before going >> on vacation". >> >> There is a notion of trivial changes that can be pushed sooner than 24 >> hours. It should be clearly stated in the review mail that the intention is >> to push as a trivial change. How to actually define "trivial" is decided on >> a case-by-case basis but in general it would be things like fixing a >> comment, or moving code without changing it. Backing out a change is also >> considered trivial as the change itself in that case is generated by >> mercurial. >> >> One of these days I'll figure out how to put this stuff on the OpenJDK >> wiki. >> >> Thanks, >> /Jesper >> >> [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/ >> 000566.html >> >> > > From stefan.johansson at oracle.com Tue Mar 13 17:03:49 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Tue, 13 Mar 2018 18:03:49 +0100 Subject: RFR: 8199533: ProblemList tests failing after JDK-8153333 Message-ID: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> Hi, Please review this change to add some tests to the problem list which are failing after JDK-8153333. Links JBS: https://bugs.openjdk.java.net/browse/JDK-8199533 Webrev: http://cr.openjdk.java.net/~sjohanss/8199533/00 Summary: After JDK-8153333 some tests have started failing when run with a collector not having any concurrent phases, for example with -XX:+UseParallelGC. Testing: Locally verified that the tests are excluded. Cheers, Stefan From christian.tornqvist at oracle.com Tue Mar 13 17:07:37 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 13 Mar 2018 13:07:37 -0400 Subject: RFR: 8199533: ProblemList tests failing after JDK-8153333 In-Reply-To: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> References: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> Message-ID: Hi Stefan > On Mar 13, 2018, at 1:03 49PM, Stefan Johansson wrote: > > Hi, > > Please review this change to add some tests to the problem list which are failing after JDK-8153333. > > Links > JBS: https://bugs.openjdk.java.net/browse/JDK-8199533 > Webrev: http://cr.openjdk.java.net/~sjohanss/8199533/00 > > Summary: > After JDK-8153333 some tests have started failing when run with a collector not having any concurrent phases, for example with -XX:+UseParallelGC. There should be a bug filed for fixing this, and that bug id should be next to the test name in the ProblemList.txt. It should also have generic-all added after the bug id, just like the other entries in ProblemList.txt. Thanks, Christian > > Testing: > Locally verified that the tests are excluded. > > Cheers, > Stefan From volker.simonis at gmail.com Tue Mar 13 17:13:45 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 13 Mar 2018 18:13:45 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: Hi, please find the new webrev here: http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ I've moved allocate_instance_handle to instanceKlass.cpp as requested and updated some copyrights. 
The change is currently running through the new submit-hs repo testing. If you're OK with the new version and the tests succeed I'll push the change tomorrow. Best regards, Volker On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson wrote: > Hi Volker, > > On 2018-03-13 10:12, Volker Simonis wrote: >> >> Hi Coleen, Stefan, >> >> sure I'm open for suggestions :) >> >> As you both ask for the same thing, I'll prepare a new webrev with >> allocate_instance_handle moved to instanceKlass.cpp. In my initial >> patch I just didn't wanted to change the current inlining behaviour >> but if you both think that allocate_instance_handle is not performance >> critical I'm happy to clean that up. > > > > I don't think it's critical to get it inlined. With that said, I think the > compiler will inline allocate_instance into allocate_instance_handle, so > you'll most likely only get one call anyway. > >> With the brand new submit-hs repo posted by Jesper just a few hours >> ago, I'll be also able to push this myself, so no more need for a >> sponsor :) > > > Yay! > > StefanK > > >> >> Thanks, >> Volker >> >> >> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>> >>> >>> Hi this looks good except: >>> >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>> >>> Can you move this a function in instanceKlass.cpp and would this >>> eliminate >>> the changes that add include instanceKlass.inline.hpp ? >>> >>> If Stefan is not still online, I'll sponsor this for you. >>> >>> I have a follow-on related change >>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>> expanding >>> due to transitive includes that I hope you can help me test out (when I >>> get >>> it to compile on solaris). >>> >>> Thanks, >>> Coleen >>> >>> >>> >>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> can I please have a review and a sponsor for the following fix: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>> >>>> The number changes files is "M" but the fix is actually "S" :) >>>> >>>> Here come the gory details: >>>> >>>> Change "8199319: Remove handles.inline.hpp include from >>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>> 16.04 with gcc 5.4.0). If you configure with >>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>> >>>> It seems that newer versions of GCC (and possibly other compilers as >>>> well) don't emit any code for inline functions if these functions can >>>> be inlined at all potential call sites. >>>> >>>> The problem in this special case is that "Handle::Handle(Thread*, >>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>> a situation, where compilation units which only include "handles.hpp" >>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>> compilation units which include "handles.inline.hpp" will try to >>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>> oopDesc*)" will be generated in any of the object files. This will >>>> lead to the link errors listed in the . >>>> >>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>> all the compilation units with undefined references (listed below). 
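(To make the failure mode concrete, here is a minimal stand-alone sketch of the pattern being described; Foo and FooFixed are made-up names for this example, not the real handles.hpp / handles.inline.hpp code:)

  // foo.hpp, stand-in for handles.hpp: the declaration is NOT marked inline
  class Foo {
    int _v;
   public:
    Foo(int v);                        // plain declaration
  };

  // foo.inline.hpp, stand-in for handles.inline.hpp: the definition IS inline
  inline Foo::Foo(int v) : _v(v) { }

  // a.cpp includes only foo.hpp and therefore emits a call to Foo::Foo(int).
  // b.cpp includes foo.inline.hpp; a newer GCC may inline every call site it
  // sees and emit no out-of-line copy of the constructor at all, leaving the
  // call from a.o unresolved at link time.
  //
  // Declaring the constructor inline in the .hpp as well turns the mistake
  // into a compile-time "used but never defined" diagnostic instead:
  class FooFixed {
    int _v;
   public:
    inline FooFixed(int v);            // declared inline in the .hpp ...
  };
  inline FooFixed::FooFixed(int v) : _v(v) { }  // ... defined in the .inline.hpp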
>>>> >>>> The correct fix (realized in this RFR) is to declare >>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>> lead to warnings (which are treated as errors) if the inline >>>> definition is not available at a call site and will avoid linking >>>> error due to compiler optimizations. Unfortunately this requires a >>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>> derived classes of "Handle" which all have implicitly inline >>>> constructors which all reference the base class >>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>> of the derived classes have to be explicitly declared inline in >>>> "handles.hpp" and their implementation has to be moved to >>>> "handles.inline.hpp". This change again triggers other changes for all >>>> files which relayed on the derived Handle classes having inline >>>> constructors... >>>> >>>> Thank you and best regards, >>>> Volker >>> >>> >>> > From stefan.johansson at oracle.com Tue Mar 13 17:18:57 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Tue, 13 Mar 2018 18:18:57 +0100 Subject: RFR: 8199533: ProblemList tests failing after JDK-8153333 In-Reply-To: References: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> Message-ID: Thanks for the review, On 2018-03-13 18:07, Christian Tornqvist wrote: > Hi Stefan > >> On Mar 13, 2018, at 1:03 49PM, Stefan Johansson wrote: >> >> Hi, >> >> Please review this change to add some tests to the problem list which are failing after JDK-8153333. >> >> Links >> JBS: https://bugs.openjdk.java.net/browse/JDK-8199533 >> Webrev: http://cr.openjdk.java.net/~sjohanss/8199533/00 >> >> Summary: >> After JDK-8153333 some tests have started failing when run with a collector not having any concurrent phases, for example with -XX:+UseParallelGC. > There should be a bug filed for fixing this, and that bug id should be next to the test name in the ProblemList.txt. It should also have generic-all added after the bug id, just like the other entries in ProblemList.txt. Thanks for pointing this out, new webrevs: Full: http://cr.openjdk.java.net/~sjohanss/8199533/01/ Inc: http://cr.openjdk.java.net/~sjohanss/8199533/00-01/ Should I push this right away, or can it wait until tomorrow? Cheers, Stefan > Thanks, > Christian > >> Testing: >> Locally verified that the tests are excluded. >> >> Cheers, >> Stefan From christian.tornqvist at oracle.com Tue Mar 13 17:22:15 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 13 Mar 2018 13:22:15 -0400 Subject: RFR: 8199533: ProblemList tests failing after JDK-8153333 In-Reply-To: References: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> Message-ID: <2C111003-77E8-4561-BC49-AC355EF7FC34@oracle.com> Looks good, thanks for doing this! This is a trivial change that you should push right away :) Thanks, Christian > On Mar 13, 2018, at 1:18 57PM, Stefan Johansson wrote: > > Thanks for the review, > > On 2018-03-13 18:07, Christian Tornqvist wrote: >> Hi Stefan >> >>> On Mar 13, 2018, at 1:03 49PM, Stefan Johansson wrote: >>> >>> Hi, >>> >>> Please review this change to add some tests to the problem list which are failing after JDK-8153333. >>> >>> Links >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8199533 >>> Webrev: http://cr.openjdk.java.net/~sjohanss/8199533/00 >>> >>> Summary: >>> After JDK-8153333 some tests have started failing when run with a collector not having any concurrent phases, for example with -XX:+UseParallelGC. 
>> There should be a bug filed for fixing this, and that bug id should be next to the test name in the ProblemList.txt. It should also have generic-all added after the bug id, just like the other entries in ProblemList.txt. > Thanks for pointing this out, new webrevs: > Full: http://cr.openjdk.java.net/~sjohanss/8199533/01/ > Inc: http://cr.openjdk.java.net/~sjohanss/8199533/00-01/ > > Should I push this right away, or can it wait until tomorrow? > > Cheers, > Stefan > >> Thanks, >> Christian >> >>> Testing: >>> Locally verified that the tests are excluded. >>> >>> Cheers, >>> Stefan From stefan.johansson at oracle.com Tue Mar 13 17:24:42 2018 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Tue, 13 Mar 2018 18:24:42 +0100 Subject: RFR: 8199533: ProblemList tests failing after JDK-8153333 In-Reply-To: <2C111003-77E8-4561-BC49-AC355EF7FC34@oracle.com> References: <180e867b-2169-8a0d-f1e0-8923f4c533ef@oracle.com> <2C111003-77E8-4561-BC49-AC355EF7FC34@oracle.com> Message-ID: <435a2204-5195-a9d4-ce27-7f55785022b8@oracle.com> Thanks, I will push then. Cheers, Stefan On 2018-03-13 18:22, Christian Tornqvist wrote: > Looks good, thanks for doing this! This is a trivial change that you > should push right away :) > > Thanks, > Christian > >> On Mar 13, 2018, at 1:18 57PM, Stefan Johansson >> > wrote: >> >> Thanks for the review, >> >> On 2018-03-13 18:07, Christian Tornqvist wrote: >>> Hi Stefan >>> >>>> On Mar 13, 2018, at 1:03 49PM, Stefan Johansson >>>> > >>>> wrote: >>>> >>>> Hi, >>>> >>>> Please review this change to add some tests to the problem list >>>> which are failing after JDK-8153333. >>>> >>>> Links >>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8199533 >>>> Webrev: http://cr.openjdk.java.net/~sjohanss/8199533/00 >>>> >>>> >>>> Summary: >>>> After JDK-8153333 some tests have started failing when run with a >>>> collector not having any concurrent phases, for example with >>>> -XX:+UseParallelGC. >>> There should be a bug filed for fixing this, and that bug id should >>> be next to the test name in the ProblemList.txt. It should also have >>> generic-all added after the bug id, just like the other entries in >>> ProblemList.txt. >> Thanks for pointing this out, new webrevs: >> Full:http://cr.openjdk.java.net/~sjohanss/8199533/01/ >> >> Inc:http://cr.openjdk.java.net/~sjohanss/8199533/00-01/ >> >> >> Should I push this right away, or can it wait until tomorrow? >> >> Cheers, >> Stefan >> >>> Thanks, >>> Christian >>> >>>> Testing: >>>> Locally verified that the tests are excluded. >>>> >>>> Cheers, >>>> Stefan > From paul.sandoz at oracle.com Tue Mar 13 17:43:58 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 13 Mar 2018 10:43:58 -0700 Subject: RFR 8197944 Condy tests fails on Windows Message-ID: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> Hi, The recent push for https://bugs.openjdk.java.net/browse/JDK-8199342 The constant pool forgets it has a Dynamic entry if there are overpass methods resulted in a test failure on windows that i failed to observe from the test reports prior to pushing. This patch fixes that failure, and fixes other related tests that failed on windows for the same reason, which were placed on the problem list. The problematic tests open a file for debugging purposes and do not close it. This causes the test infrastructure on windows to fail as it cannot remove the file (i dunno if this is something that can be independently fixed). I verified (more carefully this time) that a mach5 build and test on windows passes. Paul. 
diff -r 74518f9ca4b4 test/jdk/ProblemList.txt --- a/test/jdk/ProblemList.txt Thu Mar 08 14:33:57 2018 -0800 +++ b/test/jdk/ProblemList.txt Tue Mar 13 10:31:38 2018 -0700 @@ -493,9 +493,6 @@ java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all -java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all -java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all - ############################################################################ # jdk_instrument diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java --- a/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Thu Mar 08 14:33:57 2018 -0800 +++ b/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Tue Mar 13 10:31:38 2018 -0700 @@ -34,16 +34,11 @@ import jdk.experimental.bytecode.BasicClassBuilder; import jdk.experimental.bytecode.Flag; import jdk.experimental.bytecode.TypedCodeBuilder; -import org.testng.Assert; import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; -import java.io.File; -import java.io.FileOutputStream; import java.lang.invoke.MethodHandles; import java.lang.invoke.MethodType; -import java.lang.reflect.Method; -import java.util.concurrent.atomic.AtomicInteger; @Test public class CondyInterfaceWithOverpassMethods { @@ -93,9 +88,6 @@ )) .build(); - // For debugging purposes - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); - gc = MethodHandles.lookup().defineClass(byteArray); } diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java --- a/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Thu Mar 08 14:33:57 2018 -0800 +++ b/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Tue Mar 13 10:31:38 2018 -0700 @@ -39,8 +39,6 @@ import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; -import java.io.File; -import java.io.FileOutputStream; import java.lang.invoke.MethodHandles; import java.lang.invoke.MethodType; import java.lang.reflect.InvocationTargetException; @@ -217,9 +215,6 @@ )) .build(); - // For debugging purposes - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); - gc = MethodHandles.lookup().defineClass(byteArray); } diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java --- a/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Thu Mar 08 14:33:57 2018 -0800 +++ b/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Tue Mar 13 10:31:38 2018 -0700 @@ -39,8 +39,6 @@ import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; -import java.io.File; -import java.io.FileOutputStream; import java.lang.invoke.MethodHandles; import java.lang.invoke.MethodType; import java.lang.reflect.Method; @@ -218,9 +216,6 @@ )) .build(); - // For debugging purposes - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); - gc = MethodHandles.lookup().defineClass(byteArray); } From vladimir.kozlov at oracle.com Tue Mar 13 18:18:07 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 13 Mar 2018 11:18:07 -0700 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: CI and Jvmci changes looks good to me. 
Thanks, vladimir On 3/13/18 4:50 AM, coleen.phillimore at oracle.com wrote: > Summary: interfaceSupport.hpp is an inline file so moved to > interfaceSupport.inline.hpp and stopped including it in .hpp files > > 90% of this change is renaming interfaceSupport.hpp to > interfaceSupport.inline.hpp.?? I tried to see if all of these files > needed this header and the answer was yes.?? A surprising (to me!) > number of files have thread state transitions. > Some of interesting part of this change is adding ciUtilities.inline.hpp > to include interfaceSupport.inline.hpp for VM_ENTRY. whitebox.inline.hpp > was added for the same reason. > jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes > interfaceSupport.inline.hpp, and is only included in cpp files. > The rest of the changes were to add back includes that are not pulled in > by header files including interfaceSupport.hpp, like gcLocker.hpp and of > course handles.inline.hpp. > > This probably overlaps some of Volker's patch.? Can this be tested on > other platforms that we don't have? > > Hopefully, at the end of all this we have more clean header files so > that transitive includes don't make the jvm build on one platform but > not the next.? I think that's the goal of all of this work. > > This was tested with Oracle platforms (linux-x64, solaris-sparcv9, > macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this > locally without precompiled headers (my default setting of course) on > linux-x64. > > bug link https://bugs.openjdk.java.net/browse/JDK-8199263 > local webrev at > http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev > > Thanks to Stefan for his help with this. > > Thanks, > Coleen > > From edward.nevill at gmail.com Tue Mar 13 18:30:57 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Tue, 13 Mar 2018 18:30:57 +0000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> <1520935554.25609.2.camel@gmail.com> Message-ID: <1520965857.8311.12.camel@gmail.com> On Tue, 2018-03-13 at 11:24 +0100, Thomas St?fe wrote: > > > On Tue, Mar 13, 2018 at 11:05 AM, Edward Nevill wrote: > > On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: > > > > > > > > > On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe wrote: > > > > Reminds me of : > > > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html > > > > > > > > Could this be the same issue? > > > > > > > > > > > It is indeed exactly the same issue. > > > > Was this issue ever resolved? I cannot find a JBS report or hg patch. > > > > Many thanks, > > Ed. > > > > ... oh... > > I think Erik thought I was going to fix it, and I was counting on Erik... :-) So, maybe it was never fixed. Adrian is the defacto maintainer of zero currently (at least he is the most active), but I think he may only build release? > New webrev http://cr.openjdk.java.net/~enevill/8199220/webrev.04 The simplest solution seemed to be to add SUPPORTS_NATIVE_CX8 to Zero as follows +#ifdef _LP64 +#define SUPPORTS_NATIVE_CX8 +#endif I have build fastdebug versions on x86 and aarch64 to test two different 64 bit systems. Does it look OK now? Thanks, Ed. 
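(For context on what the define buys: SUPPORTS_NATIVE_CX8 advertises that the platform has a native 8-byte compare-and-swap, which 64-bit targets generally do, hence the _LP64 guard above. Below is a stand-alone sketch of the kind of guard that consumes such a define; try_lock_free_add and the GCC __sync builtin are illustrative stand-ins for the example, not HotSpot's Atomic:: wrappers:)

  #include <stdint.h>

  #if defined(_LP64)
  #define SUPPORTS_NATIVE_CX8
  #endif

  // Bump a 64-bit counter without taking a lock. Only attempted when a
  // native 8-byte compare-and-swap exists; otherwise the caller has to use
  // a lock-protected update instead (omitted here).
  bool try_lock_free_add(volatile uint64_t* addr, uint64_t inc) {
  #ifdef SUPPORTS_NATIVE_CX8
    uint64_t old = *addr;
    return __sync_bool_compare_and_swap(addr, old, old + inc);
  #else
    (void)addr; (void)inc;
    return false;                      // no native 8-byte CAS assumed here
  #endif
  }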
From thomas.stuefe at gmail.com Tue Mar 13 18:32:43 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 13 Mar 2018 19:32:43 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> <1520935554.25609.2.camel@gmail.com> <1520965857.8311.12.camel@gmail.com> Message-ID: Looks good. Thanks for fixing zero! Best Regards, Thomas On Mar 13, 2018 19:30, "Edward Nevill" wrote: On Tue, 2018-03-13 at 11:24 +0100, Thomas St?fe wrote: > > > On Tue, Mar 13, 2018 at 11:05 AM, Edward Nevill wrote: > > On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: > > > > > > > > > On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe < thomas.stuefe at gmail.com> wrote: > > > > Reminds me of : > > > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017- November/029289.html > > > > > > > > Could this be the same issue? > > > > > > > > > > > It is indeed exactly the same issue. > > > > Was this issue ever resolved? I cannot find a JBS report or hg patch. > > > > Many thanks, > > Ed. > > > > ... oh... > > I think Erik thought I was going to fix it, and I was counting on Erik... :-) So, maybe it was never fixed. Adrian is the defacto maintainer of zero currently (at least he is the most active), but I think he may only build release? > New webrev http://cr.openjdk.java.net/~enevill/8199220/webrev.04 The simplest solution seemed to be to add SUPPORTS_NATIVE_CX8 to Zero as follows +#ifdef _LP64 +#define SUPPORTS_NATIVE_CX8 +#endif I have build fastdebug versions on x86 and aarch64 to test two different 64 bit systems. Does it look OK now? Thanks, Ed. From gromero at linux.vnet.ibm.com Tue Mar 13 18:35:14 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Tue, 13 Mar 2018 15:35:14 -0300 Subject: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> References: <5AA725AA.7010202@linux.vnet.ibm.com> <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> Message-ID: Hi, On 03/13/2018 11:50 AM, Hohensee, Paul wrote: > Looks good to me. Thanks for reviewing it. @David, do you mind to review that small change (maybe it should be marked as XS actually...) regarding libnuma since you reviewed the previous ones? bug : https://bugs.openjdk.java.net/browse/JDK-8198794 webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ Regards, Gustavo > Paul > > ?On 3/12/18, 6:13 PM, "Gustavo Romero" wrote: > > Hi. > > Paul, I just saw today your bug on JBS... > https://bugs.openjdk.java.net/browse/JDK-8198794 > > Thanks for reporting and debugging it. > > It looks like the issue boils down to the fact that although > 'numa_all_nodes_ptr' was introduced with libnuma API v2, 'numa_nodes_ptr' > was only introduced later on libnuma v2.0.9, so it's not present in libnuma > 2.0.3 which dates back to Jun 2009 [1]. 
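(Roughly, the fallback under discussion confines itself to the one query that needs the node set. A simplified stand-alone sketch follows; the g_numa_* globals are made up for the example, and the real os_linux.cpp code resolves the libnuma symbols dynamically rather than linking them directly:)

  #include <numa.h>          // struct bitmask, numa_bitmask_isbitset

  // Filled in during initialization; g_numa_nodes may legitimately stay NULL
  // when the installed libnuma predates 2.0.9 and has no numa_nodes_ptr.
  static struct bitmask* g_numa_nodes     = NULL;
  static struct bitmask* g_numa_all_nodes = NULL;   // available in every v2 API

  static bool isnode_in_existing_nodes(unsigned int n) {
    if (g_numa_nodes != NULL) {
      return numa_bitmask_isbitset(g_numa_nodes, n) != 0;
    }
    // Old libnuma: fall back to the "all nodes" mask, an acceptable stand-in
    // on x86_64 where the configured nodes are contiguous.
    return g_numa_all_nodes != NULL && numa_bitmask_isbitset(g_numa_all_nodes, n) != 0;
  }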
I agree with your initial patch > that a reasonable way to address it for archs like x86_64 is to use > 'numa_all_nodes_ptr' as a surrogate for 'numa_nodes_ptr' (PowerPC needs > 'numa_nodes_ptr' anyway and will have to stick with libnuma 2.0.9 and above > because it's not unusual to have non-configured nodes on PPC64 and nodes > can be non-contiguous as well). > > I just think it's better to handle it inside isnode_in_existing_nodes() > interface, which is where such a information is needed in the end. In that > sense, if you agree could you please check if the following webrev fixes > the issue for you? It must also apply ok for jdk8u: > > bug : https://bugs.openjdk.java.net/browse/JDK-8198794 > webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ > > If it does solve your issue, I will kindly ask for another Reviewer. > > Thank you. > > > Best regards, > Gustavo > > [1] http://cr.openjdk.java.net/~gromero/misc/numa_all_nodes_ptr_VS_numa_nodes_ptr.txt > > > From stefan.karlsson at oracle.com Tue Mar 13 18:37:10 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 13 Mar 2018 19:37:10 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: <23670adb-6574-400f-b5f9-fd954f7ec7ae@oracle.com> On 2018-03-13 18:13, Volker Simonis wrote: > Hi, > > please find the new webrev here: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ > > I've moved allocate_instance_handle to instanceKlass.cpp as requested > and updated some copyrights. The change is currently running through > the new submit-hs repo testing. > > If you're OK with the new version and the tests succeed I'll push the > change tomorrow. Looks good to me. StefanK > > Best regards, > Volker > > > On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson > wrote: >> Hi Volker, >> >> On 2018-03-13 10:12, Volker Simonis wrote: >>> Hi Coleen, Stefan, >>> >>> sure I'm open for suggestions :) >>> >>> As you both ask for the same thing, I'll prepare a new webrev with >>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>> patch I just didn't wanted to change the current inlining behaviour >>> but if you both think that allocate_instance_handle is not performance >>> critical I'm happy to clean that up. >> >> >> I don't think it's critical to get it inlined. With that said, I think the >> compiler will inline allocate_instance into allocate_instance_handle, so >> you'll most likely only get one call anyway. >> >>> With the brand new submit-hs repo posted by Jesper just a few hours >>> ago, I'll be also able to push this myself, so no more need for a >>> sponsor :) >> >> Yay! >> >> StefanK >> >> >>> Thanks, >>> Volker >>> >>> >>> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>>> >>>> Hi this looks good except: >>>> >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>> >>>> Can you move this a function in instanceKlass.cpp and would this >>>> eliminate >>>> the changes that add include instanceKlass.inline.hpp ? >>>> >>>> If Stefan is not still online, I'll sponsor this for you. >>>> >>>> I have a follow-on related change >>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>> expanding >>>> due to transitive includes that I hope you can help me test out (when I >>>> get >>>> it to compile on solaris). 
>>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>>> >>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>> >>>>> Hi, >>>>> >>>>> can I please have a review and a sponsor for the following fix: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>> >>>>> The number changes files is "M" but the fix is actually "S" :) >>>>> >>>>> Here come the gory details: >>>>> >>>>> Change "8199319: Remove handles.inline.hpp include from >>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>> 16.04 with gcc 5.4.0). If you configure with >>>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>> >>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>> well) don't emit any code for inline functions if these functions can >>>>> be inlined at all potential call sites. >>>>> >>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>>> a situation, where compilation units which only include "handles.hpp" >>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>> compilation units which include "handles.inline.hpp" will try to >>>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>> oopDesc*)" will be generated in any of the object files. This will >>>>> lead to the link errors listed in the . >>>>> >>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>> all the compilation units with undefined references (listed below). >>>>> >>>>> The correct fix (realized in this RFR) is to declare >>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>>> lead to warnings (which are treated as errors) if the inline >>>>> definition is not available at a call site and will avoid linking >>>>> error due to compiler optimizations. Unfortunately this requires a >>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>> derived classes of "Handle" which all have implicitly inline >>>>> constructors which all reference the base class >>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>> of the derived classes have to be explicitly declared inline in >>>>> "handles.hpp" and their implementation has to be moved to >>>>> "handles.inline.hpp". This change again triggers other changes for all >>>>> files which relayed on the derived Handle classes having inline >>>>> constructors... >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>> >>>> From vladimir.kozlov at oracle.com Tue Mar 13 18:37:08 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 13 Mar 2018 11:37:08 -0700 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: References: Message-ID: <574bdf2f-8b6b-4d49-05c4-b8fe7f98b04b@oracle.com> Looks good to me but someone from GC should look on it too. Thanks, Vladimir On 3/13/18 4:05 AM, Aleksey Shipilev wrote: > g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be > available if barrier set does not support it. Reliably asserts with Epsilon. 
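(In outline, the "check block" mentioned below amounts to a guard at the top of the stub generator: if the active barrier set is not the G1 card-table one, emit a fallback stub and return early, so the code never asks for a card table base that does not exist. A stand-alone sketch of that shape, with made-up BarrierKind/StubAssembler stand-ins rather than the real c1_Runtime1 code:)

  enum class BarrierKind { G1CardTable, None /* e.g. Epsilon */ };

  struct StubAssembler { };            // stand-in for the real assembler state

  static BarrierKind active_barrier_kind = BarrierKind::None;

  static void emit_unimplemented_entry(StubAssembler*) { /* stub that traps */ }
  static void emit_g1_pre_barrier(StubAssembler*)      { /* touches card table */ }

  void generate_g1_pre_barrier_slow(StubAssembler* sasm) {
    if (active_barrier_kind != BarrierKind::G1CardTable) {
      // No card table to point at: generate the fallback stub and return,
      // instead of asserting while fetching the card table address.
      emit_unimplemented_entry(sasm);
      return;
    }
    emit_g1_pre_barrier(sasm);
  }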
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199511 > > Fix: > http://cr.openjdk.java.net/~shade/8199511/webrev.01/ > > This is arch-specific fix: > - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id > - c1_Runtime1_arm: added check block for *both* g1_{pre|post}_slow_id > - c1_Runtime1_ppc: already implemented > - c1_Runtime1_s390: already implemented > - c1_Runtime1_sparc: already implemented > - c1_Runtime1_x86: copy-pasted the check block from g1_pre_barrier_slow_id > > Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) > > Thanks, > -Aleksey > From coleen.phillimore at oracle.com Tue Mar 13 19:17:09 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Mar 2018 15:17:09 -0400 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: <7c09fc5a-8e9d-ce79-7ec0-3cfcc934eb49@oracle.com> Thank you Vladimir! Coleen On 3/13/18 2:18 PM, Vladimir Kozlov wrote: > CI and Jvmci changes looks good to me. > > Thanks, > vladimir > > On 3/13/18 4:50 AM, coleen.phillimore at oracle.com wrote: >> Summary: interfaceSupport.hpp is an inline file so moved to >> interfaceSupport.inline.hpp and stopped including it in .hpp files >> >> 90% of this change is renaming interfaceSupport.hpp to >> interfaceSupport.inline.hpp.?? I tried to see if all of these files >> needed this header and the answer was yes.?? A surprising (to me!) >> number of files have thread state transitions. >> Some of interesting part of this change is adding >> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >> VM_ENTRY. whitebox.inline.hpp was added for the same reason. >> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >> interfaceSupport.inline.hpp, and is only included in cpp files. >> The rest of the changes were to add back includes that are not pulled >> in by header files including interfaceSupport.hpp, like gcLocker.hpp >> and of course handles.inline.hpp. >> >> This probably overlaps some of Volker's patch.? Can this be tested on >> other platforms that we don't have? >> >> Hopefully, at the end of all this we have more clean header files so >> that transitive includes don't make the jvm build on one platform but >> not the next.? I think that's the goal of all of this work. >> >> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >> locally without precompiled headers (my default setting of course) on >> linux-x64. >> >> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >> local webrev at >> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >> >> Thanks to Stefan for his help with this. 
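(For anyone wondering what a "thread state transition" is in this context: interfaceSupport.inline.hpp is where the VM_ENTRY/JRT_ENTRY style entry macros and scope helpers such as ThreadInVMfromNative live; they flip the thread's state on the way in and restore it on the way out, which is why so many .cpp files turn out to need the inline header. A simplified stand-alone sketch of the idea, with made-up Toy* names rather than the real HotSpot classes:)

  enum ToyThreadState { _toy_thread_in_native, _toy_thread_in_vm };

  struct ToyThread { ToyThreadState state; };

  // Scope guard: switch the thread into the VM on construction and back to
  // native on destruction (the real helpers also cooperate with safepoints
  // while doing so).
  class ToyThreadInVMfromNative {
    ToyThread* _t;
   public:
    explicit ToyThreadInVMfromNative(ToyThread* t) : _t(t) {
      _t->state = _toy_thread_in_vm;
    }
    ~ToyThreadInVMfromNative() {
      _t->state = _toy_thread_in_native;
    }
  };

  void some_native_entry(ToyThread* t) {
    ToyThreadInVMfromNative guard(t);  // transition in ...
    // ... code that may touch VM-internal state runs here ...
  }                                    // ... transition back out at scope exit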
>> >> Thanks, >> Coleen >> >> From erik.osterlund at oracle.com Tue Mar 13 19:20:27 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Tue, 13 Mar 2018 20:20:27 +0100 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520965857.8311.12.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> <1520935554.25609.2.camel@gmail.com> <1520965857.8311.12.camel@gmail.com> Message-ID: <3BFD0C75-F039-4DEB-A6DA-1C178368C047@oracle.com> Hi Edward, Looks good. Thanks, /Erik > On 13 Mar 2018, at 19:30, Edward Nevill wrote: > >> On Tue, 2018-03-13 at 11:24 +0100, Thomas St?fe wrote: >> >> >>> On Tue, Mar 13, 2018 at 11:05 AM, Edward Nevill wrote: >>>> On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: >>>> >>>> >>>>> On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe wrote: >>>>> Reminds me of : >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html >>>>> >>>>> Could this be the same issue? >>>>> >>>> >>> >>> It is indeed exactly the same issue. >>> >>> Was this issue ever resolved? I cannot find a JBS report or hg patch. >>> >>> Many thanks, >>> Ed. >>> >> >> ... oh... >> >> I think Erik thought I was going to fix it, and I was counting on Erik... :-) So, maybe it was never fixed. Adrian is the defacto maintainer of zero currently (at least he is the most active), but I think he may only build release? >> > > New webrev > > http://cr.openjdk.java.net/~enevill/8199220/webrev.04 > > The simplest solution seemed to be to add SUPPORTS_NATIVE_CX8 to Zero as follows > > +#ifdef _LP64 > +#define SUPPORTS_NATIVE_CX8 > +#endif > > I have build fastdebug versions on x86 and aarch64 to test two different 64 bit systems. > > Does it look OK now? > > Thanks, > Ed. > From david.holmes at oracle.com Tue Mar 13 21:46:22 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Mar 2018 07:46:22 +1000 Subject: RFR 8197944 Condy tests fails on Windows In-Reply-To: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> References: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> Message-ID: <1bcae4cc-43b5-80ea-d3b5-91fdbc19d426@oracle.com> Hi Paul, Fix seems fine. Though you could have just closed the file and kept the debugging capability. Thanks, David On 14/03/2018 3:43 AM, Paul Sandoz wrote: > Hi, > > The recent push for > > https://bugs.openjdk.java.net/browse/JDK-8199342 > The constant pool forgets it has a Dynamic entry if there are overpass methods > > resulted in a test failure on windows that i failed to observe from the test reports prior to pushing. > > This patch fixes that failure, and fixes other related tests that failed on windows for the same reason, which were placed on the problem list. > > The problematic tests open a file for debugging purposes and do not close it. This causes the test infrastructure on windows to fail as it cannot remove the file (i dunno if this is something that can be independently fixed). > > I verified (more carefully this time) that a mach5 build and test on windows passes. > > Paul. 
> > diff -r 74518f9ca4b4 test/jdk/ProblemList.txt > --- a/test/jdk/ProblemList.txt Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/ProblemList.txt Tue Mar 13 10:31:38 2018 -0700 > @@ -493,9 +493,6 @@ > > java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all > > -java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all > -java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all > - > ############################################################################ > > # jdk_instrument > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java > --- a/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Tue Mar 13 10:31:38 2018 -0700 > @@ -34,16 +34,11 @@ > import jdk.experimental.bytecode.BasicClassBuilder; > import jdk.experimental.bytecode.Flag; > import jdk.experimental.bytecode.TypedCodeBuilder; > -import org.testng.Assert; > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > -import java.lang.reflect.Method; > -import java.util.concurrent.atomic.AtomicInteger; > > @Test > public class CondyInterfaceWithOverpassMethods { > @@ -93,9 +88,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java > --- a/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Tue Mar 13 10:31:38 2018 -0700 > @@ -39,8 +39,6 @@ > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > import java.lang.reflect.InvocationTargetException; > @@ -217,9 +215,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java > --- a/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Tue Mar 13 10:31:38 2018 -0700 > @@ -39,8 +39,6 @@ > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > import java.lang.reflect.Method; > @@ -218,9 +216,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > From coleen.phillimore at oracle.com Tue Mar 13 21:47:40 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 13 Mar 2018 17:47:40 -0400 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> 
<95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: This looks good to me too. Thanks, Coleen ps. can you test out my patch on ppc and the others for 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html On 3/13/18 1:13 PM, Volker Simonis wrote: > Hi, > > please find the new webrev here: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ > > I've moved allocate_instance_handle to instanceKlass.cpp as requested > and updated some copyrights. The change is currently running through > the new submit-hs repo testing. > > If you're OK with the new version and the tests succeed I'll push the > change tomorrow. > > Best regards, > Volker > > > On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson > wrote: >> Hi Volker, >> >> On 2018-03-13 10:12, Volker Simonis wrote: >>> Hi Coleen, Stefan, >>> >>> sure I'm open for suggestions :) >>> >>> As you both ask for the same thing, I'll prepare a new webrev with >>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>> patch I just didn't wanted to change the current inlining behaviour >>> but if you both think that allocate_instance_handle is not performance >>> critical I'm happy to clean that up. >> >> >> I don't think it's critical to get it inlined. With that said, I think the >> compiler will inline allocate_instance into allocate_instance_handle, so >> you'll most likely only get one call anyway. >> >>> With the brand new submit-hs repo posted by Jesper just a few hours >>> ago, I'll be also able to push this myself, so no more need for a >>> sponsor :) >> >> Yay! >> >> StefanK >> >> >>> Thanks, >>> Volker >>> >>> >>> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>>> >>>> Hi this looks good except: >>>> >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>> >>>> Can you move this a function in instanceKlass.cpp and would this >>>> eliminate >>>> the changes that add include instanceKlass.inline.hpp ? >>>> >>>> If Stefan is not still online, I'll sponsor this for you. >>>> >>>> I have a follow-on related change >>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>> expanding >>>> due to transitive includes that I hope you can help me test out (when I >>>> get >>>> it to compile on solaris). >>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>>> >>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>> >>>>> Hi, >>>>> >>>>> can I please have a review and a sponsor for the following fix: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>> >>>>> The number changes files is "M" but the fix is actually "S" :) >>>>> >>>>> Here come the gory details: >>>>> >>>>> Change "8199319: Remove handles.inline.hpp include from >>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>> 16.04 with gcc 5.4.0). If you configure with >>>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>> >>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>> well) don't emit any code for inline functions if these functions can >>>>> be inlined at all potential call sites. 
>>>>> >>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>>> a situation, where compilation units which only include "handles.hpp" >>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>> compilation units which include "handles.inline.hpp" will try to >>>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>> oopDesc*)" will be generated in any of the object files. This will >>>>> lead to the link errors listed in the . >>>>> >>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>> all the compilation units with undefined references (listed below). >>>>> >>>>> The correct fix (realized in this RFR) is to declare >>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>>> lead to warnings (which are treated as errors) if the inline >>>>> definition is not available at a call site and will avoid linking >>>>> error due to compiler optimizations. Unfortunately this requires a >>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>> derived classes of "Handle" which all have implicitly inline >>>>> constructors which all reference the base class >>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>> of the derived classes have to be explicitly declared inline in >>>>> "handles.hpp" and their implementation has to be moved to >>>>> "handles.inline.hpp". This change again triggers other changes for all >>>>> files which relayed on the derived Handle classes having inline >>>>> constructors... >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>> >>>> From paul.sandoz at oracle.com Tue Mar 13 21:50:07 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 13 Mar 2018 14:50:07 -0700 Subject: RFR 8197944 Condy tests fails on Windows In-Reply-To: <1bcae4cc-43b5-80ea-d3b5-91fdbc19d426@oracle.com> References: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> <1bcae4cc-43b5-80ea-d3b5-91fdbc19d426@oracle.com> Message-ID: <7FCFE0CF-34E9-414F-8713-5A2837CA3DB6@oracle.com> > On Mar 13, 2018, at 2:46 PM, David Holmes wrote: > > Hi Paul, > > Fix seems fine. Though you could have just closed the file and kept the debugging capability. > Thanks. The debugging capability is not consistently applied to all tests, so i just removed it. Paul. From david.holmes at oracle.com Tue Mar 13 22:00:02 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Mar 2018 08:00:02 +1000 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: On 14/03/2018 12:00 AM, coleen.phillimore at oracle.com wrote: > On 3/13/18 8:55 AM, Stefan Karlsson wrote: >> I'd prefer if you removed the .inline.hpp files from precompiled.hpp. >> We could also do it as a separate cleanup if you don't want to retest >> this patch. > Let's do that separately.? I didn't know what we wanted to do for > precompiled.hpp honestly. I'd like to understand how .inline.hpp files work with PCH. If they can be precompiled then I would think we want to keep them in precompiled.hpp. If pre-compiling is meaningless for .inline.hpp then we should remove them as clutter. 
I don't expect precompiled.hpp to be subject to the ".hpp can't include .inline.hpp" rule. I hope to look at this soon as well. Thanks, David > thanks, > Coleen >> >> Thanks, >> StefanK >> >> >> On 2018-03-13 12:50, coleen.phillimore at oracle.com wrote: >>> Summary: interfaceSupport.hpp is an inline file so moved to >>> interfaceSupport.inline.hpp and stopped including it in .hpp files >>> >>> 90% of this change is renaming interfaceSupport.hpp to >>> interfaceSupport.inline.hpp. I tried to see if all of these files >>> needed this header and the answer was yes. A surprising (to me!) >>> number of files have thread state transitions. >>> One of the interesting parts of this change is adding >>> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >>> VM_ENTRY. whitebox.inline.hpp was added for the same reason. >>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >>> interfaceSupport.inline.hpp, and is only included in cpp files. >>> The rest of the changes were to add back includes that are not pulled >>> in by header files including interfaceSupport.hpp, like gcLocker.hpp >>> and of course handles.inline.hpp. >>> >>> This probably overlaps some of Volker's patch. Can this be tested on >>> other platforms that we don't have? >>> >>> Hopefully, at the end of all this we have more clean header files so >>> that transitive includes don't make the jvm build on one platform but >>> not the next. I think that's the goal of all of this work. >>> >>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >>> macosx-x64, windows-x64) in the mach5 tier1 and 2. I built this >>> locally without precompiled headers (my default setting of course) on >>> linux-x64. >>> >>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>> local webrev at >>> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>> >>> Thanks to Stefan for his help with this. >>> >>> Thanks, >>> Coleen >>> >>> > From lois.foltan at oracle.com Tue Mar 13 22:41:35 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 13 Mar 2018 18:41:35 -0400 Subject: RFR 8197944 Condy tests fails on Windows In-Reply-To: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> References: <6B43DC97-2AAD-4866-8FCE-BAFAD5BB5454@oracle.com> Message-ID: <846dfb2b-c962-19c9-56af-f71f1451b5a3@oracle.com> Looks good! Lois On 3/13/2018 1:43 PM, Paul Sandoz wrote: > Hi, > > The recent push for > > https://bugs.openjdk.java.net/browse/JDK-8199342 > The constant pool forgets it has a Dynamic entry if there are overpass methods > > resulted in a test failure on windows that i failed to observe from the test reports prior to pushing. > > This patch fixes that failure, and fixes other related tests that failed on windows for the same reason, which were placed on the problem list. > > The problematic tests open a file for debugging purposes and do not close it. This causes the test infrastructure on windows to fail as it cannot remove the file (i dunno if this is something that can be independently fixed). > > I verified (more carefully this time) that a mach5 build and test on windows passes. > > Paul.
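For reference, the "just close it" alternative mentioned earlier in the thread would amount to something like the following inside the tests' class-generation code, reusing the existing genClassName/byteArray locals (a sketch only; the patch below simply drops the debug dump instead):

// Hypothetical alternative (not the pushed fix): keep the debug dump but
// release the file handle so the Windows test harness can delete the file.
try (FileOutputStream fos = new FileOutputStream(new File(genClassName + ".class"))) {
    fos.write(byteArray);
}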
> > diff -r 74518f9ca4b4 test/jdk/ProblemList.txt > --- a/test/jdk/ProblemList.txt Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/ProblemList.txt Tue Mar 13 10:31:38 2018 -0700 > @@ -493,9 +493,6 @@ > > java/lang/String/nativeEncoding/StringPlatformChars.java 8182569 windows-all,solaris-all > > -java/lang/invoke/condy/CondyRepeatFailedResolution.java 8197944 windows-all > -java/lang/invoke/condy/CondyReturnPrimitiveTest.java 8197944 windows-all > - > ############################################################################ > > # jdk_instrument > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java > --- a/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyInterfaceWithOverpassMethods.java Tue Mar 13 10:31:38 2018 -0700 > @@ -34,16 +34,11 @@ > import jdk.experimental.bytecode.BasicClassBuilder; > import jdk.experimental.bytecode.Flag; > import jdk.experimental.bytecode.TypedCodeBuilder; > -import org.testng.Assert; > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > -import java.lang.reflect.Method; > -import java.util.concurrent.atomic.AtomicInteger; > > @Test > public class CondyInterfaceWithOverpassMethods { > @@ -93,9 +88,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java > --- a/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyRepeatFailedResolution.java Tue Mar 13 10:31:38 2018 -0700 > @@ -39,8 +39,6 @@ > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > import java.lang.reflect.InvocationTargetException; > @@ -217,9 +215,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > > diff -r 74518f9ca4b4 test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java > --- a/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Thu Mar 08 14:33:57 2018 -0800 > +++ b/test/jdk/java/lang/invoke/condy/CondyReturnPrimitiveTest.java Tue Mar 13 10:31:38 2018 -0700 > @@ -39,8 +39,6 @@ > import org.testng.annotations.BeforeClass; > import org.testng.annotations.Test; > > -import java.io.File; > -import java.io.FileOutputStream; > import java.lang.invoke.MethodHandles; > import java.lang.invoke.MethodType; > import java.lang.reflect.Method; > @@ -218,9 +216,6 @@ > )) > .build(); > > - // For debugging purposes > - new FileOutputStream(new File(genClassName + ".class")).write(byteArray); > - > gc = MethodHandles.lookup().defineClass(byteArray); > } > From david.holmes at oracle.com Wed Mar 14 00:05:40 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Mar 2018 10:05:40 +1000 Subject: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: References: <5AA725AA.7010202@linux.vnet.ibm.com> 
<3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> Message-ID: <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> On 14/03/2018 4:35 AM, Gustavo Romero wrote: > Hi, > > On 03/13/2018 11:50 AM, Hohensee, Paul wrote: >> Looks good to me. > > Thanks for reviewing it. > > > @David, do you mind to review that small change (maybe it > should be marked as XS actually...) regarding libnuma > since you reviewed the previous ones? > > bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 > webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ Seems okay. Couple of grammar nits with the mega comment: // it can exist nodes it -> there // are besides that non-contiguous. "are besides that" -> "may be" Thanks, David > > Regards, > Gustavo > >> Paul >> >> ?On 3/12/18, 6:13 PM, "Gustavo Romero" >> wrote: >> >> ???? Hi. >> ???? Paul, I just saw today your bug on JBS... >> ???? https://bugs.openjdk.java.net/browse/JDK-8198794 >> ???? Thanks for reporting and debugging it. >> ???? It looks like the issue boils down to the fact that although >> ???? 'numa_all_nodes_ptr' was introduced with libnuma API v2, >> 'numa_nodes_ptr' >> ???? was only introduced later on libnuma v2.0.9, so it's not present >> in libnuma >> ???? 2.0.3 which dates back to Jun 2009 [1]. I agree with your initial >> patch >> ???? that a reasonable way to address it for archs like x86_64 is to use >> ???? 'numa_all_nodes_ptr' as a surrogate for 'numa_nodes_ptr' (PowerPC >> needs >> ???? 'numa_nodes_ptr' anyway and will have to stick with libnuma 2.0.9 >> and above >> ???? because it's not unusual to have non-configured nodes on PPC64 >> and nodes >> ???? can be non-contiguous as well). >> ???? I just think it's better to handle it inside >> isnode_in_existing_nodes() >> ???? interface, which is where such a information is needed in the >> end. In that >> ???? sense, if you agree could you please check if the following >> webrev fixes >> ???? the issue for you? It must also apply ok for jdk8u: >> ???? bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >> ???? webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ >> ???? If it does solve your issue, I will kindly ask for another Reviewer. >> ???? Thank you. >> ???? Best regards, >> ???? Gustavo >> ???? [1] >> http://cr.openjdk.java.net/~gromero/misc/numa_all_nodes_ptr_VS_numa_nodes_ptr.txt >> >> > From david.holmes at oracle.com Wed Mar 14 00:52:50 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Mar 2018 10:52:50 +1000 Subject: RFR: 8199220: Zero build broken after 8195103 and 8191102 (was RFR: 8199220: Zero build broken) In-Reply-To: <1520965857.8311.12.camel@gmail.com> References: <1520422853.24302.6.camel@gmail.com> <5A9FEBD7.7040606@oracle.com> <1520529846.1085.9.camel@gmail.com> <03f59f60-dda2-bc0a-3929-115f7a0c4ca2@oracle.com> <1520582019.2395.6.camel@gmail.com> <8a76582b-bde0-b281-adae-833f7f861feb@oracle.com> <1520882830.11566.12.camel@gmail.com> <1520887793.11566.16.camel@gmail.com> <1520935554.25609.2.camel@gmail.com> <1520965857.8311.12.camel@gmail.com> Message-ID: Looks fine. Do you still want a sponsor or are you going to use submit-hs before pushing yourself? 
Thanks, David On 14/03/2018 4:30 AM, Edward Nevill wrote: > On Tue, 2018-03-13 at 11:24 +0100, Thomas St?fe wrote: >> >> >> On Tue, Mar 13, 2018 at 11:05 AM, Edward Nevill wrote: >>> On Tue, 2018-03-13 at 07:10 +0100, Thomas St?fe wrote: >>>> >>>> >>>> On Mon, Mar 12, 2018 at 10:32 PM, Thomas St?fe wrote: >>>>> Reminds me of : >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-November/029289.html >>>>> >>>>> Could this be the same issue? >>>>> >>>> >>> >>> It is indeed exactly the same issue. >>> >>> Was this issue ever resolved? I cannot find a JBS report or hg patch. >>> >>> Many thanks, >>> Ed. >>> >> >> ... oh... >> >> I think Erik thought I was going to fix it, and I was counting on Erik... :-) So, maybe it was never fixed. Adrian is the defacto maintainer of zero currently (at least he is the most active), but I think he may only build release? >> > > New webrev > > http://cr.openjdk.java.net/~enevill/8199220/webrev.04 > > The simplest solution seemed to be to add SUPPORTS_NATIVE_CX8 to Zero as follows > > +#ifdef _LP64 > +#define SUPPORTS_NATIVE_CX8 > +#endif > > I have build fastdebug versions on x86 and aarch64 to test two different 64 bit systems. > > Does it look OK now? > > Thanks, > Ed. > From yasuenag at gmail.com Wed Mar 14 01:33:46 2018 From: yasuenag at gmail.com (Yasumasa Suenaga) Date: Wed, 14 Mar 2018 10:33:46 +0900 Subject: Build failure w/ GCC 7.3.1 Message-ID: Hi all. I encountered build failure with GCC 7.3.1 on Fedora 27 x86_64 as below: ------------------- Building target 'images' in configuration 'linux-x86_64-normal-server-fastdebug' In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function 'static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)': /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: candidate: template template static bool RawAccessBarrier::arraycopy(arrayOop, arrayOop, T*, T*, size_t) static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); ^~~~~~~~~ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: template argument deduction/substitution failed: In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: note: mismatched types 'T*' and 'long unsigned int' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: error: no matching function for call to 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: candidate: template template static bool RawAccessBarrier::arraycopy(arrayOop, arrayOop, T*, T*, size_t) static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); ^~~~~~~~~ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: template argument deduction/substitution failed: In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: note: mismatched types 'T*' and 'long unsigned int' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function 'static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)': /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: candidate: template template static bool RawAccessBarrier::arraycopy(arrayOop, arrayOop, T*, T*, size_t) static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); ^~~~~~~~~ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: template argument deduction/substitution failed: In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: note: mismatched types 'T*' and 'long unsigned int' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: error: no matching function for call to 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, from 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: candidate: template template static bool RawAccessBarrier::arraycopy(arrayOop, arrayOop, T*, T*, size_t) static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); ^~~~~~~~~ /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: template argument deduction/substitution failed: In file included from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, from /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: note: mismatched types 'T*' and 'long unsigned int' return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ gmake[3]: *** [lib/CompileJvm.gmk:214: /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/objs/precompiled/precompiled.hpp.gch] Error 1 gmake[3]: *** Waiting for unfinished jobs.... gmake[3]: *** [lib/CompileGtest.gmk:67: /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/gtest/objs/precompiled/precompiled.hpp.gch] Error 1 gmake[2]: *** [make/Main.gmk:267: hotspot-server-libs] Error 2 ERROR: Build failed for target 'images' in configuration 'linux-x86_64-normal-server-fastdebug' (exit code 2) ------------------- Do someone work for this issue? 
IMHO we can avoid this with following patch: ------------------- diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.hpp --- a/src/hotspot/share/oops/accessBackend.hpp Tue Mar 13 15:29:55 2018 -0700 +++ b/src/hotspot/share/oops/accessBackend.hpp Wed Mar 14 10:28:27 2018 +0900 @@ -384,7 +384,6 @@ template static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); - static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, HeapWord* src, HeapWord* dst, size_t length); static void clone(oop src, oop dst, size_t size); diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.inline.hpp --- a/src/hotspot/share/oops/accessBackend.inline.hpp Tue Mar 13 15:29:55 2018 -0700 +++ b/src/hotspot/share/oops/accessBackend.inline.hpp Wed Mar 14 10:28:27 2018 +0900 @@ -122,17 +122,6 @@ } template -inline bool RawAccessBarrier::oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, HeapWord* src, HeapWord* dst, size_t length) { - bool needs_oop_compress = HasDecorator::value && - HasDecorator::value; - if (needs_oop_compress) { - return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); - } else { - return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); - } -} - -template template inline typename EnableIf< HasDecorator::value, T>::type ------------------- Thanks, Yasumasa From david.holmes at oracle.com Wed Mar 14 01:48:21 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 14 Mar 2018 11:48:21 +1000 Subject: Build failure w/ GCC 7.3.1 In-Reply-To: References: Message-ID: <87cc1a91-3a34-aa2c-fb66-9965c9049a31@oracle.com> Looks related to: "8198445: Access API for primitive/native arraycopy" We don't see it locally so may be gcc version specific. David On 14/03/2018 11:33 AM, Yasumasa Suenaga wrote: > Hi all. 
> > I encountered build failure with GCC 7.3.1 on Fedora 27 x86_64 as below: > ------------------- > Building target 'images' in configuration 'linux-x86_64-normal-server-fastdebug' > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: > In static member function 'static bool > RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, > HeapWord*, HeapWord*, size_t)': > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, > size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: > In static member function 'static bool > RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, > HeapWord*, HeapWord*, size_t)': > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, > size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > gmake[3]: *** [lib/CompileJvm.gmk:214: > /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/objs/precompiled/precompiled.hpp.gch] > Error 1 > gmake[3]: *** Waiting for unfinished jobs.... > gmake[3]: *** [lib/CompileGtest.gmk:67: > /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/gtest/objs/precompiled/precompiled.hpp.gch] > Error 1 > gmake[2]: *** [make/Main.gmk:267: hotspot-server-libs] Error 2 > > ERROR: Build failed for target 'images' in configuration > 'linux-x86_64-normal-server-fastdebug' (exit code 2) > ------------------- > > Do someone work for this issue? > IMHO we can avoid this with following patch: > ------------------- > diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.hpp > --- a/src/hotspot/share/oops/accessBackend.hpp Tue Mar 13 15:29:55 2018 -0700 > +++ b/src/hotspot/share/oops/accessBackend.hpp Wed Mar 14 10:28:27 2018 +0900 > @@ -384,7 +384,6 @@ > > template > static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, T* > src, T* dst, size_t length); > - static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, > HeapWord* src, HeapWord* dst, size_t length); > > static void clone(oop src, oop dst, size_t size); > > diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.inline.hpp > --- a/src/hotspot/share/oops/accessBackend.inline.hpp Tue Mar 13 > 15:29:55 2018 -0700 > +++ b/src/hotspot/share/oops/accessBackend.inline.hpp Wed Mar 14 > 10:28:27 2018 +0900 > @@ -122,17 +122,6 @@ > } > > template > -inline bool RawAccessBarrier::oop_arraycopy(arrayOop > src_obj, arrayOop dst_obj, HeapWord* src, HeapWord* dst, size_t > length) { > - bool needs_oop_compress = HasDecorator INTERNAL_CONVERT_COMPRESSED_OOP>::value && > - HasDecorator INTERNAL_RT_USE_COMPRESSED_OOPS>::value; > - if (needs_oop_compress) { > - return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > - } else { > - return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > - } > -} > - > -template > template > inline typename EnableIf< > HasDecorator::value, T>::type > ------------------- > > > Thanks, > > Yasumasa > From shade at redhat.com Wed Mar 14 08:34:02 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 14 Mar 2018 09:34:02 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: <574bdf2f-8b6b-4d49-05c4-b8fe7f98b04b@oracle.com> References: <574bdf2f-8b6b-4d49-05c4-b8fe7f98b04b@oracle.com> Message-ID: <24a3ccb8-53db-850d-c115-8a50821a030f@redhat.com> Thank you, Vladimir! Any non-Red Hat GC people around? -Aleksey On 03/13/2018 07:37 PM, Vladimir Kozlov wrote: > Looks good to me but someone from GC should look on it too. 
> > Thanks, > Vladimir > > On 3/13/18 4:05 AM, Aleksey Shipilev wrote: >> g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be >> available if barrier set does not support it. Reliably asserts with Epsilon. >> >> Bug: >> ?? https://bugs.openjdk.java.net/browse/JDK-8199511 >> >> Fix: >> ?? http://cr.openjdk.java.net/~shade/8199511/webrev.01/ >> >> This is arch-specific fix: >> ?? - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id >> ?? - c1_Runtime1_arm:???? added check block for *both* g1_{pre|post}_slow_id >> ?? - c1_Runtime1_ppc:???? already implemented >> ?? - c1_Runtime1_s390:??? already implemented >> ?? - c1_Runtime1_sparc:?? already implemented >> ?? - c1_Runtime1_x86:???? copy-pasted the check block from g1_pre_barrier_slow_id >> >> Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) >> >> Thanks, >> -Aleksey >> From per.liden at oracle.com Wed Mar 14 08:47:56 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 14 Mar 2018 09:47:56 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: <24a3ccb8-53db-850d-c115-8a50821a030f@redhat.com> References: <574bdf2f-8b6b-4d49-05c4-b8fe7f98b04b@oracle.com> <24a3ccb8-53db-850d-c115-8a50821a030f@redhat.com> Message-ID: <66744075-f828-f771-7c05-2d1fba92981c@oracle.com> Looks good. For x86 we have the exact same patch in the ZGC repo (not sure why we haven't upstreamed that already, so good that you're doing it) /Per On 03/14/2018 09:34 AM, Aleksey Shipilev wrote: > Thank you, Vladimir! > > Any non-Red Hat GC people around? > > -Aleksey > > On 03/13/2018 07:37 PM, Vladimir Kozlov wrote: >> Looks good to me but someone from GC should look on it too. >> >> Thanks, >> Vladimir >> >> On 3/13/18 4:05 AM, Aleksey Shipilev wrote: >>> g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be >>> available if barrier set does not support it. Reliably asserts with Epsilon. >>> >>> Bug: >>> ?? https://bugs.openjdk.java.net/browse/JDK-8199511 >>> >>> Fix: >>> ?? http://cr.openjdk.java.net/~shade/8199511/webrev.01/ >>> >>> This is arch-specific fix: >>> ?? - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id >>> ?? - c1_Runtime1_arm:???? added check block for *both* g1_{pre|post}_slow_id >>> ?? - c1_Runtime1_ppc:???? already implemented >>> ?? - c1_Runtime1_s390:??? already implemented >>> ?? - c1_Runtime1_sparc:?? already implemented >>> ?? - c1_Runtime1_x86:???? copy-pasted the check block from g1_pre_barrier_slow_id >>> >>> Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) >>> >>> Thanks, >>> -Aleksey >>> > > From stefan.karlsson at oracle.com Wed Mar 14 08:47:44 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 14 Mar 2018 09:47:44 +0100 Subject: RFR (XS) 8199511: Do not generate g1_{pre|post}_barrier_slow_id without CardTable-enabled barrier set In-Reply-To: <24a3ccb8-53db-850d-c115-8a50821a030f@redhat.com> References: <574bdf2f-8b6b-4d49-05c4-b8fe7f98b04b@oracle.com> <24a3ccb8-53db-850d-c115-8a50821a030f@redhat.com> Message-ID: On 2018-03-14 09:34, Aleksey Shipilev wrote: > Thank you, Vladimir! > > Any non-Red Hat GC people around? This x86 part looks good. We have the same code in ZGC. I talked to Erik ? and he has patches to change this, so that we don't unnecessarily generate barriers for GCs that are not run. 
Thanks, StefanK > > -Aleksey > > On 03/13/2018 07:37 PM, Vladimir Kozlov wrote: >> Looks good to me but someone from GC should look on it too. >> >> Thanks, >> Vladimir >> >> On 3/13/18 4:05 AM, Aleksey Shipilev wrote: >>> g1_{pre|post}_barrier_slow_id generation reaches for card table address, but it might not be >>> available if barrier set does not support it. Reliably asserts with Epsilon. >>> >>> Bug: >>> ?? https://bugs.openjdk.java.net/browse/JDK-8199511 >>> >>> Fix: >>> ?? http://cr.openjdk.java.net/~shade/8199511/webrev.01/ >>> >>> This is arch-specific fix: >>> ?? - c1_Runtime1_aarch64: copy-pasted the check block from g1_pre_barrier_slow_id >>> ?? - c1_Runtime1_arm:???? added check block for *both* g1_{pre|post}_slow_id >>> ?? - c1_Runtime1_ppc:???? already implemented >>> ?? - c1_Runtime1_s390:??? already implemented >>> ?? - c1_Runtime1_sparc:?? already implemented >>> ?? - c1_Runtime1_x86:???? copy-pasted the check block from g1_pre_barrier_slow_id >>> >>> Testing: x86_64 build, Epsilon tests, (running with submit-hs repo now) >>> >>> Thanks, >>> -Aleksey >>> > > From volker.simonis at gmail.com Wed Mar 14 10:30:02 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 14 Mar 2018 11:30:02 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: On Tue, Mar 13, 2018 at 10:47 PM, wrote: > > This looks good to me too. > Thanks, > Coleen > Thanks! > ps. can you test out my patch on ppc and the others for 8199263: Split > interfaceSupport.hpp to not require including .inline.hpp files > http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html > Sure, I'll do it today and let you know... > > On 3/13/18 1:13 PM, Volker Simonis wrote: > > Hi, > > please find the new webrev here: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ > > I've moved allocate_instance_handle to instanceKlass.cpp as requested > and updated some copyrights. The change is currently running through > the new submit-hs repo testing. > > If you're OK with the new version and the tests succeed I'll push the > change tomorrow. > > Best regards, > Volker > > > On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson > wrote: > > Hi Volker, > > On 2018-03-13 10:12, Volker Simonis wrote: > > Hi Coleen, Stefan, > > sure I'm open for suggestions :) > > As you both ask for the same thing, I'll prepare a new webrev with > allocate_instance_handle moved to instanceKlass.cpp. In my initial > patch I just didn't wanted to change the current inlining behaviour > but if you both think that allocate_instance_handle is not performance > critical I'm happy to clean that up. > > > I don't think it's critical to get it inlined. With that said, I think the > compiler will inline allocate_instance into allocate_instance_handle, so > you'll most likely only get one call anyway. > > With the brand new submit-hs repo posted by Jesper just a few hours > ago, I'll be also able to push this myself, so no more need for a > sponsor :) > > Yay! > > StefanK > > > Thanks, > Volker > > > On Mon, Mar 12, 2018 at 8:42 PM, wrote: > > Hi this looks good except: > > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html > > Can you move this a function in instanceKlass.cpp and would this > eliminate > the changes that add include instanceKlass.inline.hpp ? 
> > If Stefan is not still online, I'll sponsor this for you. > > I have a follow-on related change > https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly > expanding > due to transitive includes that I hope you can help me test out (when I > get > it to compile on solaris). > > Thanks, > Coleen > > > > On 3/12/18 3:34 PM, Volker Simonis wrote: > > Hi, > > can I please have a review and a sponsor for the following fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ > https://bugs.openjdk.java.net/browse/JDK-8199472 > > The number changes files is "M" but the fix is actually "S" :) > > Here come the gory details: > > Change "8199319: Remove handles.inline.hpp include from > reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu > 16.04 with gcc 5.4.0). If you configure with > "--disable-precompiled-headers" you will get a whole lot of undefined > reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. > > It seems that newer versions of GCC (and possibly other compilers as > well) don't emit any code for inline functions if these functions can > be inlined at all potential call sites. > > The problem in this special case is that "Handle::Handle(Thread*, > oopDesc*)" is not declared "inline" in "handles.hpp", but its > definition in "handles.inline.hpp" is declared "inline". This leads to > a situation, where compilation units which only include "handles.hpp" > will emit a call to "Handle::Handle(Thread*, oopDesc*)" while > compilation units which include "handles.inline.hpp" will try to > inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining > attempts are successful, no instance of "Handle::Handle(Thread*, > oopDesc*)" will be generated in any of the object files. This will > lead to the link errors listed in the . > > The quick fix for this issue is to include "handles.inline.hpp" into > all the compilation units with undefined references (listed below). > > The correct fix (realized in this RFR) is to declare > "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will > lead to warnings (which are treated as errors) if the inline > definition is not available at a call site and will avoid linking > error due to compiler optimizations. Unfortunately this requires a > whole lot of follow-up changes, because "handles.hpp" defines some > derived classes of "Handle" which all have implicitly inline > constructors which all reference the base class > "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors > of the derived classes have to be explicitly declared inline in > "handles.hpp" and their implementation has to be moved to > "handles.inline.hpp". This change again triggers other changes for all > files which relayed on the derived Handle classes having inline > constructors... > > Thank you and best regards, > Volker > > > From stefan.karlsson at oracle.com Wed Mar 14 09:52:38 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 14 Mar 2018 10:52:38 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: Hi Volker, On 2018-03-13 18:13, Volker Simonis wrote: > Hi, > > please find the new webrev here: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ > > I've moved allocate_instance_handle to instanceKlass.cpp as requested > and updated some copyrights. 
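For reference, the moved definition amounts to roughly the following out-of-line body in instanceKlass.cpp (a sketch of the obvious one-liner; the exact code is whatever the v2 webrev contains):

// Out-of-line now, so callers only need instanceKlass.hpp rather than an
// .inline.hpp (sketch; may differ in detail from the webrev).
instanceHandle InstanceKlass::allocate_instance_handle(TRAPS) {
  return instanceHandle(THREAD, allocate_instance(THREAD));
}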
The change is currently running through > the new submit-hs repo testing. > > If you're OK with the new version and the tests succeed I'll push the > change tomorrow. The submit job failed because of missing handles.inline.hpp includes in our closed JFR code. I've created closed patch to solve that. I can push both of these patches, unless you really want to push the open part yourself. Thanks, StefanK > > Best regards, > Volker > > > On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson > wrote: >> Hi Volker, >> >> On 2018-03-13 10:12, Volker Simonis wrote: >>> >>> Hi Coleen, Stefan, >>> >>> sure I'm open for suggestions :) >>> >>> As you both ask for the same thing, I'll prepare a new webrev with >>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>> patch I just didn't wanted to change the current inlining behaviour >>> but if you both think that allocate_instance_handle is not performance >>> critical I'm happy to clean that up. >> >> >> >> I don't think it's critical to get it inlined. With that said, I think the >> compiler will inline allocate_instance into allocate_instance_handle, so >> you'll most likely only get one call anyway. >> >>> With the brand new submit-hs repo posted by Jesper just a few hours >>> ago, I'll be also able to push this myself, so no more need for a >>> sponsor :) >> >> >> Yay! >> >> StefanK >> >> >>> >>> Thanks, >>> Volker >>> >>> >>> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>>> >>>> >>>> Hi this looks good except: >>>> >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>> >>>> Can you move this a function in instanceKlass.cpp and would this >>>> eliminate >>>> the changes that add include instanceKlass.inline.hpp ? >>>> >>>> If Stefan is not still online, I'll sponsor this for you. >>>> >>>> I have a follow-on related change >>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>> expanding >>>> due to transitive includes that I hope you can help me test out (when I >>>> get >>>> it to compile on solaris). >>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>>> >>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> can I please have a review and a sponsor for the following fix: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>> >>>>> The number changes files is "M" but the fix is actually "S" :) >>>>> >>>>> Here come the gory details: >>>>> >>>>> Change "8199319: Remove handles.inline.hpp include from >>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>> 16.04 with gcc 5.4.0). If you configure with >>>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>> >>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>> well) don't emit any code for inline functions if these functions can >>>>> be inlined at all potential call sites. >>>>> >>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>>> a situation, where compilation units which only include "handles.hpp" >>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>> compilation units which include "handles.inline.hpp" will try to >>>>> inline "Handle::Handle(Thread*, oopDesc*)". 
If all the inlining >>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>> oopDesc*)" will be generated in any of the object files. This will >>>>> lead to the link errors listed in the . >>>>> >>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>> all the compilation units with undefined references (listed below). >>>>> >>>>> The correct fix (realized in this RFR) is to declare >>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>>> lead to warnings (which are treated as errors) if the inline >>>>> definition is not available at a call site and will avoid linking >>>>> error due to compiler optimizations. Unfortunately this requires a >>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>> derived classes of "Handle" which all have implicitly inline >>>>> constructors which all reference the base class >>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>> of the derived classes have to be explicitly declared inline in >>>>> "handles.hpp" and their implementation has to be moved to >>>>> "handles.inline.hpp". This change again triggers other changes for all >>>>> files which relayed on the derived Handle classes having inline >>>>> constructors... >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>> >>>> >>>> >> From volker.simonis at gmail.com Wed Mar 14 10:42:55 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 14 Mar 2018 11:42:55 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: Ahh, just wanted to send you a mail to ask about the build failures of my submit-hs job on Solaris which I can't reproduce on our local machines :) Yes, please go ahead and push my change. I'm sure I'll find another one which I can finally push myself :) Thanks, Volker On Wed, Mar 14, 2018 at 10:52 AM, Stefan Karlsson wrote: > Hi Volker, > > On 2018-03-13 18:13, Volker Simonis wrote: >> >> Hi, >> >> please find the new webrev here: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ >> >> I've moved allocate_instance_handle to instanceKlass.cpp as requested >> and updated some copyrights. The change is currently running through >> the new submit-hs repo testing. >> >> If you're OK with the new version and the tests succeed I'll push the >> change tomorrow. > > > The submit job failed because of missing handles.inline.hpp includes in our > closed JFR code. I've created closed patch to solve that. I can push both of > these patches, unless you really want to push the open part yourself. > > Thanks, > StefanK > > >> >> Best regards, >> Volker >> >> >> On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson >> wrote: >>> >>> Hi Volker, >>> >>> On 2018-03-13 10:12, Volker Simonis wrote: >>>> >>>> >>>> Hi Coleen, Stefan, >>>> >>>> sure I'm open for suggestions :) >>>> >>>> As you both ask for the same thing, I'll prepare a new webrev with >>>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>>> patch I just didn't wanted to change the current inlining behaviour >>>> but if you both think that allocate_instance_handle is not performance >>>> critical I'm happy to clean that up. >>> >>> >>> >>> >>> I don't think it's critical to get it inlined. 
With that said, I think >>> the >>> compiler will inline allocate_instance into allocate_instance_handle, so >>> you'll most likely only get one call anyway. >>> >>>> With the brand new submit-hs repo posted by Jesper just a few hours >>>> ago, I'll be also able to push this myself, so no more need for a >>>> sponsor :) >>> >>> >>> >>> Yay! >>> >>> StefanK >>> >>> >>>> >>>> Thanks, >>>> Volker >>>> >>>> >>>> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>>>> >>>>> >>>>> >>>>> Hi this looks good except: >>>>> >>>>> >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>>> >>>>> Can you move this a function in instanceKlass.cpp and would this >>>>> eliminate >>>>> the changes that add include instanceKlass.inline.hpp ? >>>>> >>>>> If Stefan is not still online, I'll sponsor this for you. >>>>> >>>>> I have a follow-on related change >>>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>>> expanding >>>>> due to transitive includes that I hope you can help me test out (when I >>>>> get >>>>> it to compile on solaris). >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> >>>>> >>>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> can I please have a review and a sponsor for the following fix: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>>> >>>>>> The number changes files is "M" but the fix is actually "S" :) >>>>>> >>>>>> Here come the gory details: >>>>>> >>>>>> Change "8199319: Remove handles.inline.hpp include from >>>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>>> 16.04 with gcc 5.4.0). If you configure with >>>>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>>> >>>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>>> well) don't emit any code for inline functions if these functions can >>>>>> be inlined at all potential call sites. >>>>>> >>>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>>>> a situation, where compilation units which only include "handles.hpp" >>>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>>> compilation units which include "handles.inline.hpp" will try to >>>>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>>> oopDesc*)" will be generated in any of the object files. This will >>>>>> lead to the link errors listed in the . >>>>>> >>>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>>> all the compilation units with undefined references (listed below). >>>>>> >>>>>> The correct fix (realized in this RFR) is to declare >>>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>>>> lead to warnings (which are treated as errors) if the inline >>>>>> definition is not available at a call site and will avoid linking >>>>>> error due to compiler optimizations. 
Unfortunately this requires a >>>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>>> derived classes of "Handle" which all have implicitly inline >>>>>> constructors which all reference the base class >>>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>>> of the derived classes have to be explicitly declared inline in >>>>>> "handles.hpp" and their implementation has to be moved to >>>>>> "handles.inline.hpp". This change again triggers other changes for all >>>>>> files which relayed on the derived Handle classes having inline >>>>>> constructors... >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>> >>>>> >>>>> >>>>> >>> > From rkennke at redhat.com Wed Mar 14 11:27:19 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 14 Mar 2018 12:27:19 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5A956F7E.5090205@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> Message-ID: <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> So where are we with this change? There's not many places where I can think of possible performance problems. Probably the most crucial ones are the oop/narrowOop/object iterators that are used by GC. OopClosure and subclasses get pointers to oop/narrowOop.. it shouldn't make a difference. Then there's ObjectClosure which receives an oop. Does it make a difference there? Maybe write a little benchmark that fills the heap with many small objects, and runs an empty ObjectClosure over it? If it doesn't show up there, I'm almost sure it's not going to show up anywhere else... Roman > Hi Coleen, > > Thanks for the review. > > On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: >> >> Hi Erik, >> >> This looks great.?? I assume that the generated code (for these >> classes vs. oopDesc* and juint) comes out the same? > > I assume so too. Or at least that the performance does not regress. > Maybe I run some benchmarks to be sure since the question has been asked. > > Thanks, > /Erik > >> thanks, >> Coleen >> >> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>> Hi, >>> >>> Making oop sometimes map to class types and sometimes to primitives >>> comes with some unfortunate problems. Advantages of making them >>> always have their own type include: >>> >>> 1) Not getting compilation errors in configuration X but not Y >>> 2) Making it easier to adopt existing code to use Shenandoah equals >>> barriers >>> 3) Recognize oops and narrowOops safely in template >>> >>> Therefore, I would like to make both oop and narrowOop always map to >>> a class type consistently. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>> >>> Thanks, >>> /Erik >> > From stefan.karlsson at oracle.com Wed Mar 14 12:14:00 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 14 Mar 2018 13:14:00 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> Message-ID: <11db1111-c331-7f1a-6f92-207bb1b5aac7@oracle.com> On 2018-03-14 11:42, Volker Simonis wrote: > Ahh, just wanted to send you a mail to ask about the build failures of > my submit-hs job on Solaris which I can't reproduce on our local > machines :) > > Yes, please go ahead and push my change. 
I'm sure I'll find another > one which I can finally push myself :) :) I've pushed the change now. StefanK > > Thanks, > Volker > > > On Wed, Mar 14, 2018 at 10:52 AM, Stefan Karlsson > wrote: >> Hi Volker, >> >> On 2018-03-13 18:13, Volker Simonis wrote: >>> >>> Hi, >>> >>> please find the new webrev here: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ >>> >>> I've moved allocate_instance_handle to instanceKlass.cpp as requested >>> and updated some copyrights. The change is currently running through >>> the new submit-hs repo testing. >>> >>> If you're OK with the new version and the tests succeed I'll push the >>> change tomorrow. >> >> >> The submit job failed because of missing handles.inline.hpp includes in our >> closed JFR code. I've created closed patch to solve that. I can push both of >> these patches, unless you really want to push the open part yourself. >> >> Thanks, >> StefanK >> >> >>> >>> Best regards, >>> Volker >>> >>> >>> On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> On 2018-03-13 10:12, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi Coleen, Stefan, >>>>> >>>>> sure I'm open for suggestions :) >>>>> >>>>> As you both ask for the same thing, I'll prepare a new webrev with >>>>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>>>> patch I just didn't wanted to change the current inlining behaviour >>>>> but if you both think that allocate_instance_handle is not performance >>>>> critical I'm happy to clean that up. >>>> >>>> >>>> >>>> >>>> I don't think it's critical to get it inlined. With that said, I think >>>> the >>>> compiler will inline allocate_instance into allocate_instance_handle, so >>>> you'll most likely only get one call anyway. >>>> >>>>> With the brand new submit-hs repo posted by Jesper just a few hours >>>>> ago, I'll be also able to push this myself, so no more need for a >>>>> sponsor :) >>>> >>>> >>>> >>>> Yay! >>>> >>>> StefanK >>>> >>>> >>>>> >>>>> Thanks, >>>>> Volker >>>>> >>>>> >>>>> On Mon, Mar 12, 2018 at 8:42 PM, wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi this looks good except: >>>>>> >>>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>>>> >>>>>> Can you move this a function in instanceKlass.cpp and would this >>>>>> eliminate >>>>>> the changes that add include instanceKlass.inline.hpp ? >>>>>> >>>>>> If Stefan is not still online, I'll sponsor this for you. >>>>>> >>>>>> I have a follow-on related change >>>>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>>>> expanding >>>>>> due to transitive includes that I hope you can help me test out (when I >>>>>> get >>>>>> it to compile on solaris). >>>>>> >>>>>> Thanks, >>>>>> Coleen >>>>>> >>>>>> >>>>>> >>>>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> can I please have a review and a sponsor for the following fix: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>>>> >>>>>>> The number changes files is "M" but the fix is actually "S" :) >>>>>>> >>>>>>> Here come the gory details: >>>>>>> >>>>>>> Change "8199319: Remove handles.inline.hpp include from >>>>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>>>> 16.04 with gcc 5.4.0). 
If you configure with >>>>>>> "--disable-precompiled-headers" you will get a whole lot of undefined >>>>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>>>> >>>>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>>>> well) don't emit any code for inline functions if these functions can >>>>>>> be inlined at all potential call sites. >>>>>>> >>>>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>>>> definition in "handles.inline.hpp" is declared "inline". This leads to >>>>>>> a situation, where compilation units which only include "handles.hpp" >>>>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>>>> compilation units which include "handles.inline.hpp" will try to >>>>>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>>>> oopDesc*)" will be generated in any of the object files. This will >>>>>>> lead to the link errors listed in the . >>>>>>> >>>>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>>>> all the compilation units with undefined references (listed below). >>>>>>> >>>>>>> The correct fix (realized in this RFR) is to declare >>>>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This will >>>>>>> lead to warnings (which are treated as errors) if the inline >>>>>>> definition is not available at a call site and will avoid linking >>>>>>> error due to compiler optimizations. Unfortunately this requires a >>>>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>>>> derived classes of "Handle" which all have implicitly inline >>>>>>> constructors which all reference the base class >>>>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>>>> of the derived classes have to be explicitly declared inline in >>>>>>> "handles.hpp" and their implementation has to be moved to >>>>>>> "handles.inline.hpp". This change again triggers other changes for all >>>>>>> files which relayed on the derived Handle classes having inline >>>>>>> constructors... >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>> >>>>>> >>>>>> >>>>>> >>>> >> From coleen.phillimore at oracle.com Wed Mar 14 12:48:40 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 14 Mar 2018 08:48:40 -0400 Subject: Pre-RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> Message-ID: <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> Hi, this is broken with the inline Handle constructor, so please disregard for now.? I have to add more handles.inline.hpp includes since they're not transitively included by interfaceSupport.hpp. thanks, Coleen On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: > Sorry, this is the correct webrev: > http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html > > Coleen > > On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >> Summary: interfaceSupport.hpp is an inline file so moved to >> interfaceSupport.inline.hpp and stopped including it in .hpp files >> >> 90% of this change is renaming interfaceSupport.hpp to >> interfaceSupport.inline.hpp.?? I tried to see if all of these files >> needed this header and the answer was yes.?? A surprising (to me!) 
>> number of files have thread state transitions. >> Some of interesting part of this change is adding >> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >> VM_ENTRY.? whitebox.inline.hpp was added for the same reason. >> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >> interfaceSupport.inline.hpp, and is only included in cpp files. >> The rest of the changes were to add back includes that are not pulled >> in by header files including interfaceSupport.hpp, like gcLocker.hpp >> and of course handles.inline.hpp. >> >> This probably overlaps some of Volker's patch.? Can this be tested on >> other platforms that we don't have? >> >> Hopefully, at the end of all this we have more clean header files so >> that transitive includes don't make the jvm build on one platform but >> not the next.? I think that's the goal of all of this work. >> >> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >> locally without precompiled headers (my default setting of course) on >> linux-x64. >> >> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >> local webrev at >> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >> >> Thanks to Stefan for his help with this. >> >> Thanks, >> Coleen >> >> > From erik.osterlund at oracle.com Wed Mar 14 12:46:27 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 14 Mar 2018 13:46:27 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> Message-ID: <5AA919A3.7000606@oracle.com> Hi Roman, Sorry for the delay. Here is a status update: 1. I looked at the generated machine code from a bunch of different compilers and found that it was horrible. 2. Found that the biggest reason it was horrible was due to unfortunate uses of volatile in oop. Was easy enough to solve: Full webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.02/ Incremental: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01_02/ 3. Found that even after solving the accidental volatile problems, oops sent as value as arguments to functions were sent on the stack (not register), because oop is not a POD, and this is seemingly a strict ABI requirement of C++. Optimizing it would be an ABI violation and is therefore not done. 4. Found that oop is inherently not going to be a POD unless we rewrite a *lot* of code in hotspot due to having e.g. volatile copy constructor (which can not be auto generated) and a popular user defined oopDesc* constructor. 5. Got sad that C++ really has to send wrapper objects as simple as oop through the stack on callsites and dropped this as I did not know how I felt about it any longer. You can pick this up where I left if you want to and check if the performance impact due to the suboptimal machine code is something we should be scared of or not. If there is reason to be scared, I wonder if LTO mechanisms can solve part of this and a whole bunch of unnecessary use of .inline.hpp files at the same time. Thanks, /Erik On 2018-03-14 12:27, Roman Kennke wrote: > So where are we with this change? > > There's not many places where I can think of possible performance > problems. 
Probably the most crucial ones are the oop/narrowOop/object > iterators that are used by GC. OopClosure and subclasses get pointers to > oop/narrowOop.. it shouldn't make a difference. Then there's > ObjectClosure which receives an oop. Does it make a difference there? > Maybe write a little benchmark that fills the heap with many small > objects, and runs an empty ObjectClosure over it? If it doesn't show up > there, I'm almost sure it's not going to show up anywhere else... > > Roman > >> Hi Coleen, >> >> Thanks for the review. >> >> On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: >>> Hi Erik, >>> >>> This looks great. I assume that the generated code (for these >>> classes vs. oopDesc* and juint) comes out the same? >> I assume so too. Or at least that the performance does not regress. >> Maybe I run some benchmarks to be sure since the question has been asked. >> >> Thanks, >> /Erik >> >>> thanks, >>> Coleen >>> >>> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> Making oop sometimes map to class types and sometimes to primitives >>>> comes with some unfortunate problems. Advantages of making them >>>> always have their own type include: >>>> >>>> 1) Not getting compilation errors in configuration X but not Y >>>> 2) Making it easier to adopt existing code to use Shenandoah equals >>>> barriers >>>> 3) Recognize oops and narrowOops safely in template >>>> >>>> Therefore, I would like to make both oop and narrowOop always map to >>>> a class type consistently. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>>> >>>> Thanks, >>>> /Erik > From rkennke at redhat.com Wed Mar 14 13:19:22 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 14 Mar 2018 14:19:22 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5AA919A3.7000606@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> <5AA919A3.7000606@oracle.com> Message-ID: <18aaac14-7b9b-9940-29e2-5a1a260a904b@redhat.com> Alright, thank you for the explanations. The main reason why I wanted this change was so that we could overload == (i.e. equality comparison of oops), and redirect it through BarrierSet or Access API. Since this is not possible on pointers, e.g. oopDesc*, which is what oop is typedef'd to in release builds, the next reasonable option is to provide an explicit static method in oopDesc, e.g. oopDesc::equals(oop, oop) (and narrowOop version) which would then call into BarrierSet or Access APIs. This would not be unprecedented: we already have oopDesc::is_null(oop) and oopDesc::compare(oop, oop). In Shenandoah land, we already know all the places where to put oopDesc::equals() instead of ==, and we do have some code in oopsHierarchy to overload == in fastdebug builds and verify to not call naked == on oops. Would that be a reasonable way forward? If yes, then I can provide an RFR soon. WDYT? Roman > 1. I looked at the generated machine code from a bunch of different > compilers and found that it was horrible. > 2. Found that the biggest reason it was horrible was due to unfortunate > uses of volatile in oop. Was easy enough to solve: > > Full webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.02/ > Incremental: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01_02/ > > 3. 
Found that even after solving the accidental volatile problems, oops > sent as value as arguments to functions were sent on the stack (not > register), because oop is not a POD, and this is seemingly a strict ABI > requirement of C++. Optimizing it would be an ABI violation and is > therefore not done. > 4. Found that oop is inherently not going to be a POD unless we rewrite > a *lot* of code in hotspot due to having e.g. volatile copy constructor > (which can not be auto generated) and a popular user defined oopDesc* > constructor. > 5. Got sad that C++ really has to send wrapper objects as simple as oop > through the stack on callsites and dropped this as I did not know how I > felt about it any longer. > > You can pick this up where I left if you want to and check if the > performance impact due to the suboptimal machine code is something we > should be scared of or not. If there is reason to be scared, I wonder if > LTO mechanisms can solve part of this and a whole bunch of unnecessary > use of .inline.hpp files at the same time. > > Thanks, > /Erik > > On 2018-03-14 12:27, Roman Kennke wrote: >> So where are we with this change? >> >> There's not many places where I can think of possible performance >> problems. Probably the most crucial ones are the oop/narrowOop/object >> iterators that are used by GC. OopClosure and subclasses get pointers to >> oop/narrowOop.. it shouldn't make a difference. Then there's >> ObjectClosure which receives an oop. Does it make a difference there? >> Maybe write a little benchmark that fills the heap with many small >> objects, and runs an empty ObjectClosure over it? If it doesn't show up >> there, I'm almost sure it's not going to show up anywhere else... >> >> Roman >> >>> Hi Coleen, >>> >>> Thanks for the review. >>> >>> On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: >>>> Hi Erik, >>>> >>>> This looks great.?? I assume that the generated code (for these >>>> classes vs. oopDesc* and juint) comes out the same? >>> I assume so too. Or at least that the performance does not regress. >>> Maybe I run some benchmarks to be sure since the question has been >>> asked. >>> >>> Thanks, >>> /Erik >>> >>>> thanks, >>>> Coleen >>>> >>>> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>>>> Hi, >>>>> >>>>> Making oop sometimes map to class types and sometimes to primitives >>>>> comes with some unfortunate problems. Advantages of making them >>>>> always have their own type include: >>>>> >>>>> 1) Not getting compilation errors in configuration X but not Y >>>>> 2) Making it easier to adopt existing code to use Shenandoah equals >>>>> barriers >>>>> 3) Recognize oops and narrowOops safely in template >>>>> >>>>> Therefore, I would like to make both oop and narrowOop always map to >>>>> a class type consistently. >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>>>> >>>>> Bug: >>>>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>>>> >>>>> Thanks, >>>>> /Erik >> > From volker.simonis at gmail.com Wed Mar 14 13:23:43 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 14 Mar 2018 14:23:43 +0100 Subject: RFR(S/M): 8199472: Fix non-PCH build after JDK-8199319 In-Reply-To: <11db1111-c331-7f1a-6f92-207bb1b5aac7@oracle.com> References: <1fe1b023-b507-3706-b9ec-aff9d8283377@oracle.com> <95ab46a3-796a-6a4a-bfb2-423c0d5c0794@oracle.com> <11db1111-c331-7f1a-6f92-207bb1b5aac7@oracle.com> Message-ID: Cool! 
Thanks a lot, Volker On Wed, Mar 14, 2018 at 1:14 PM, Stefan Karlsson wrote: > On 2018-03-14 11:42, Volker Simonis wrote: >> >> Ahh, just wanted to send you a mail to ask about the build failures of >> my submit-hs job on Solaris which I can't reproduce on our local >> machines :) >> >> Yes, please go ahead and push my change. I'm sure I'll find another >> one which I can finally push myself :) > > > :) I've pushed the change now. > > StefanK > > >> >> Thanks, >> Volker >> >> >> On Wed, Mar 14, 2018 at 10:52 AM, Stefan Karlsson >> wrote: >>> >>> Hi Volker, >>> >>> On 2018-03-13 18:13, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> please find the new webrev here: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472.v2/ >>>> >>>> I've moved allocate_instance_handle to instanceKlass.cpp as requested >>>> and updated some copyrights. The change is currently running through >>>> the new submit-hs repo testing. >>>> >>>> If you're OK with the new version and the tests succeed I'll push the >>>> change tomorrow. >>> >>> >>> >>> The submit job failed because of missing handles.inline.hpp includes in >>> our >>> closed JFR code. I've created closed patch to solve that. I can push both >>> of >>> these patches, unless you really want to push the open part yourself. >>> >>> Thanks, >>> StefanK >>> >>> >>>> >>>> Best regards, >>>> Volker >>>> >>>> >>>> On Tue, Mar 13, 2018 at 10:16 AM, Stefan Karlsson >>>> wrote: >>>>> >>>>> >>>>> Hi Volker, >>>>> >>>>> On 2018-03-13 10:12, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi Coleen, Stefan, >>>>>> >>>>>> sure I'm open for suggestions :) >>>>>> >>>>>> As you both ask for the same thing, I'll prepare a new webrev with >>>>>> allocate_instance_handle moved to instanceKlass.cpp. In my initial >>>>>> patch I just didn't wanted to change the current inlining behaviour >>>>>> but if you both think that allocate_instance_handle is not performance >>>>>> critical I'm happy to clean that up. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> I don't think it's critical to get it inlined. With that said, I think >>>>> the >>>>> compiler will inline allocate_instance into allocate_instance_handle, >>>>> so >>>>> you'll most likely only get one call anyway. >>>>> >>>>>> With the brand new submit-hs repo posted by Jesper just a few hours >>>>>> ago, I'll be also able to push this myself, so no more need for a >>>>>> sponsor :) >>>>> >>>>> >>>>> >>>>> >>>>> Yay! >>>>> >>>>> StefanK >>>>> >>>>> >>>>>> >>>>>> Thanks, >>>>>> Volker >>>>>> >>>>>> >>>>>> On Mon, Mar 12, 2018 at 8:42 PM, >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi this looks good except: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/src/hotspot/share/oops/instanceKlass.inline.hpp.udiff.html >>>>>>> >>>>>>> Can you move this a function in instanceKlass.cpp and would this >>>>>>> eliminate >>>>>>> the changes that add include instanceKlass.inline.hpp ? >>>>>>> >>>>>>> If Stefan is not still online, I'll sponsor this for you. >>>>>>> >>>>>>> I have a follow-on related change >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8199263 which is quickly >>>>>>> expanding >>>>>>> due to transitive includes that I hope you can help me test out (when >>>>>>> I >>>>>>> get >>>>>>> it to compile on solaris). 
>>>>>>> >>>>>>> Thanks, >>>>>>> Coleen >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 3/12/18 3:34 PM, Volker Simonis wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> can I please have a review and a sponsor for the following fix: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199472/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8199472 >>>>>>>> >>>>>>>> The number changes files is "M" but the fix is actually "S" :) >>>>>>>> >>>>>>>> Here come the gory details: >>>>>>>> >>>>>>>> Change "8199319: Remove handles.inline.hpp include from >>>>>>>> reflectionUtils.hpp" breaks the non-PCH build (at least on Ubuntu >>>>>>>> 16.04 with gcc 5.4.0). If you configure with >>>>>>>> "--disable-precompiled-headers" you will get a whole lot of >>>>>>>> undefined >>>>>>>> reference for "Handle::Handle(Thread*, oopDesc*)" - see bug report. >>>>>>>> >>>>>>>> It seems that newer versions of GCC (and possibly other compilers as >>>>>>>> well) don't emit any code for inline functions if these functions >>>>>>>> can >>>>>>>> be inlined at all potential call sites. >>>>>>>> >>>>>>>> The problem in this special case is that "Handle::Handle(Thread*, >>>>>>>> oopDesc*)" is not declared "inline" in "handles.hpp", but its >>>>>>>> definition in "handles.inline.hpp" is declared "inline". This leads >>>>>>>> to >>>>>>>> a situation, where compilation units which only include >>>>>>>> "handles.hpp" >>>>>>>> will emit a call to "Handle::Handle(Thread*, oopDesc*)" while >>>>>>>> compilation units which include "handles.inline.hpp" will try to >>>>>>>> inline "Handle::Handle(Thread*, oopDesc*)". If all the inlining >>>>>>>> attempts are successful, no instance of "Handle::Handle(Thread*, >>>>>>>> oopDesc*)" will be generated in any of the object files. This will >>>>>>>> lead to the link errors listed in the . >>>>>>>> >>>>>>>> The quick fix for this issue is to include "handles.inline.hpp" into >>>>>>>> all the compilation units with undefined references (listed below). >>>>>>>> >>>>>>>> The correct fix (realized in this RFR) is to declare >>>>>>>> "Handle::Handle(Thread*, oopDesc*)" inline in "handles.hpp". This >>>>>>>> will >>>>>>>> lead to warnings (which are treated as errors) if the inline >>>>>>>> definition is not available at a call site and will avoid linking >>>>>>>> error due to compiler optimizations. Unfortunately this requires a >>>>>>>> whole lot of follow-up changes, because "handles.hpp" defines some >>>>>>>> derived classes of "Handle" which all have implicitly inline >>>>>>>> constructors which all reference the base class >>>>>>>> "Handle::Handle(Thread*, oopDesc*)" constructor. So the constructors >>>>>>>> of the derived classes have to be explicitly declared inline in >>>>>>>> "handles.hpp" and their implementation has to be moved to >>>>>>>> "handles.inline.hpp". This change again triggers other changes for >>>>>>>> all >>>>>>>> files which relayed on the derived Handle classes having inline >>>>>>>> constructors... 
>>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>> >>> > From erik.osterlund at oracle.com Wed Mar 14 13:47:21 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 14 Mar 2018 14:47:21 +0100 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <18aaac14-7b9b-9940-29e2-5a1a260a904b@redhat.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> <5AA919A3.7000606@oracle.com> <18aaac14-7b9b-9940-29e2-5a1a260a904b@redhat.com> Message-ID: <5AA927E9.6040803@oracle.com> Hi Roman, On 2018-03-14 14:19, Roman Kennke wrote: > Alright, thank you for the explanations. > > The main reason why I wanted this change was so that we could overload > == (i.e. equality comparison of oops), and redirect it through > BarrierSet or Access API. Yes, this was precisely why I wanted this. > Since this is not possible on pointers, e.g. oopDesc*, which is what oop > is typedef'd to in release builds, the next reasonable option is to > provide an explicit static method in oopDesc, e.g. oopDesc::equals(oop, > oop) (and narrowOop version) which would then call into BarrierSet or > Access APIs. > > This would not be unprecedented: we already have oopDesc::is_null(oop) > and oopDesc::compare(oop, oop). > > In Shenandoah land, we already know all the places where to put > oopDesc::equals() instead of ==, and we do have some code in > oopsHierarchy to overload == in fastdebug builds and verify to not call > naked == on oops. > > Would that be a reasonable way forward? If yes, then I can provide an > RFR soon. > > WDYT? Admittedly, that does make me a little bit sad. And I still wonder if LTO techniques could save the day. But if it can't then I don't have any other better ideas than to explicitly call some equals function everywhere we compare oops. Thanks, /Erik > Roman > >> 1. I looked at the generated machine code from a bunch of different >> compilers and found that it was horrible. >> 2. Found that the biggest reason it was horrible was due to unfortunate >> uses of volatile in oop. Was easy enough to solve: >> >> Full webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.02/ >> Incremental: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01_02/ >> >> 3. Found that even after solving the accidental volatile problems, oops >> sent as value as arguments to functions were sent on the stack (not >> register), because oop is not a POD, and this is seemingly a strict ABI >> requirement of C++. Optimizing it would be an ABI violation and is >> therefore not done. >> 4. Found that oop is inherently not going to be a POD unless we rewrite >> a *lot* of code in hotspot due to having e.g. volatile copy constructor >> (which can not be auto generated) and a popular user defined oopDesc* >> constructor. >> 5. Got sad that C++ really has to send wrapper objects as simple as oop >> through the stack on callsites and dropped this as I did not know how I >> felt about it any longer. >> >> You can pick this up where I left if you want to and check if the >> performance impact due to the suboptimal machine code is something we >> should be scared of or not. If there is reason to be scared, I wonder if >> LTO mechanisms can solve part of this and a whole bunch of unnecessary >> use of .inline.hpp files at the same time. 
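(To make the explicit equals idea concrete, here is a rough sketch of the kind of static helper being discussed. The barrier_resolve hook below is a placeholder for whatever BarrierSet/Access entry point a real patch would call, so this is an assumption, not the proposed implementation:)

// oop.hpp (sketch)
class oopDesc {
 public:
  static oop barrier_resolve(oop obj);  // placeholder hook, hypothetical name

  static bool equals(oop o1, oop o2) {
    // A plain collector compares the pointers directly; a collector such
    // as Shenandoah would resolve both operands through its barrier
    // before the comparison.
    return barrier_resolve(o1) == barrier_resolve(o2);
  }
};

// Call sites would then change from   if (a == b) ...
// to                                  if (oopDesc::equals(a, b)) ...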
>> >> Thanks, >> /Erik >> >> On 2018-03-14 12:27, Roman Kennke wrote: >>> So where are we with this change? >>> >>> There's not many places where I can think of possible performance >>> problems. Probably the most crucial ones are the oop/narrowOop/object >>> iterators that are used by GC. OopClosure and subclasses get pointers to >>> oop/narrowOop.. it shouldn't make a difference. Then there's >>> ObjectClosure which receives an oop. Does it make a difference there? >>> Maybe write a little benchmark that fills the heap with many small >>> objects, and runs an empty ObjectClosure over it? If it doesn't show up >>> there, I'm almost sure it's not going to show up anywhere else... >>> >>> Roman >>> >>>> Hi Coleen, >>>> >>>> Thanks for the review. >>>> >>>> On 2018-02-26 20:55, coleen.phillimore at oracle.com wrote: >>>>> Hi Erik, >>>>> >>>>> This looks great. I assume that the generated code (for these >>>>> classes vs. oopDesc* and juint) comes out the same? >>>> I assume so too. Or at least that the performance does not regress. >>>> Maybe I run some benchmarks to be sure since the question has been >>>> asked. >>>> >>>> Thanks, >>>> /Erik >>>> >>>>> thanks, >>>>> Coleen >>>>> >>>>> On 2/26/18 8:32 AM, Erik ?sterlund wrote: >>>>>> Hi, >>>>>> >>>>>> Making oop sometimes map to class types and sometimes to primitives >>>>>> comes with some unfortunate problems. Advantages of making them >>>>>> always have their own type include: >>>>>> >>>>>> 1) Not getting compilation errors in configuration X but not Y >>>>>> 2) Making it easier to adopt existing code to use Shenandoah equals >>>>>> barriers >>>>>> 3) Recognize oops and narrowOops safely in template >>>>>> >>>>>> Therefore, I would like to make both oop and narrowOop always map to >>>>>> a class type consistently. >>>>>> >>>>>> Webrev: >>>>>> http://cr.openjdk.java.net/~eosterlund/8198561/webrev.00/ >>>>>> >>>>>> Bug: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8198561 >>>>>> >>>>>> Thanks, >>>>>> /Erik > From gromero at linux.vnet.ibm.com Wed Mar 14 15:07:45 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Wed, 14 Mar 2018 12:07:45 -0300 Subject: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> References: <5AA725AA.7010202@linux.vnet.ibm.com> <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> Message-ID: Hi David, On 03/13/2018 09:05 PM, David Holmes wrote: >> bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >> webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ > > Seems okay. Couple of grammar nits with the mega comment: > > // it can exist nodes > > it -> there > > // are besides that non-contiguous. > > "are besides that" -> "may be" Fixed. webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ Thanks a lot for reviewing it. Could somebody sponsor that change please? 
Regards, Gustavo From coleen.phillimore at oracle.com Wed Mar 14 16:01:27 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 14 Mar 2018 12:01:27 -0400 Subject: RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> Message-ID: I had to include and out-line some functions that create Handles, where handles.inline.hpp is not transitively included via interfaceSupport.hpp anymore.? Thanks to Stefan for finding these for me. This is the incremental webrev: http://cr.openjdk.java.net/~coleenp/8199263.03.incr/webrev/index.html And full webrev: http://cr.openjdk.java.net/~coleenp/8199263.03/webrev/index.html This passes mach5 tier1 on Oracle platforms. Thanks, Coleen On 3/14/18 8:48 AM, coleen.phillimore at oracle.com wrote: > > Hi, this is broken with the inline Handle constructor, so please > disregard for now.? I have to add more handles.inline.hpp includes > since they're not transitively included by interfaceSupport.hpp. > thanks, > Coleen > > On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: >> Sorry, this is the correct webrev: >> http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html >> >> Coleen >> >> On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >>> Summary: interfaceSupport.hpp is an inline file so moved to >>> interfaceSupport.inline.hpp and stopped including it in .hpp files >>> >>> 90% of this change is renaming interfaceSupport.hpp to >>> interfaceSupport.inline.hpp.?? I tried to see if all of these files >>> needed this header and the answer was yes.?? A surprising (to me!) >>> number of files have thread state transitions. >>> Some of interesting part of this change is adding >>> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >>> VM_ENTRY.? whitebox.inline.hpp was added for the same reason. >>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >>> interfaceSupport.inline.hpp, and is only included in cpp files. >>> The rest of the changes were to add back includes that are not >>> pulled in by header files including interfaceSupport.hpp, like >>> gcLocker.hpp and of course handles.inline.hpp. >>> >>> This probably overlaps some of Volker's patch.? Can this be tested >>> on other platforms that we don't have? >>> >>> Hopefully, at the end of all this we have more clean header files so >>> that transitive includes don't make the jvm build on one platform >>> but not the next.? I think that's the goal of all of this work. >>> >>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >>> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >>> locally without precompiled headers (my default setting of course) >>> on linux-x64. >>> >>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>> local webrev at >>> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>> >>> Thanks to Stefan for his help with this. 
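(As an illustration of the out-lining of Handle-creating functions mentioned at the top of this mail; the file and function names below are made up for the example:)

// fooSupport.hpp (sketch) - declaration only, no handles.inline.hpp needed here
class FooSupport {
 public:
  static Handle new_handle_for(Thread* thread, oop obj);  // defined out of line
};

// fooSupport.cpp (sketch) - the one place that needs the inline Handle constructor
#include "runtime/handles.inline.hpp"

Handle FooSupport::new_handle_for(Thread* thread, oop obj) {
  return Handle(thread, obj);
}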
>>> >>> Thanks, >>> Coleen >>> >>> >> > From robbin.ehn at oracle.com Wed Mar 14 16:04:51 2018 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Wed, 14 Mar 2018 17:04:51 +0100 Subject: RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> Message-ID: <2292a82e-f306-9978-f39e-2036f4f716ca@oracle.com> Looks good, thanks for fixing. /Robbin On 2018-03-14 17:01, coleen.phillimore at oracle.com wrote: > > I had to include and out-line some functions that create Handles, where handles.inline.hpp is not transitively included via interfaceSupport.hpp anymore.? Thanks to Stefan for finding these for me. > > This is the incremental webrev: > > http://cr.openjdk.java.net/~coleenp/8199263.03.incr/webrev/index.html > > And full webrev: > > http://cr.openjdk.java.net/~coleenp/8199263.03/webrev/index.html > > This passes mach5 tier1 on Oracle platforms. > > Thanks, > Coleen > > On 3/14/18 8:48 AM, coleen.phillimore at oracle.com wrote: >> >> Hi, this is broken with the inline Handle constructor, so please disregard for now.? I have to add more handles.inline.hpp includes since they're not transitively included by interfaceSupport.hpp. >> thanks, >> Coleen >> >> On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: >>> Sorry, this is the correct webrev: >>> http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html >>> >>> Coleen >>> >>> On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >>>> Summary: interfaceSupport.hpp is an inline file so moved to interfaceSupport.inline.hpp and stopped including it in .hpp files >>>> >>>> 90% of this change is renaming interfaceSupport.hpp to interfaceSupport.inline.hpp.?? I tried to see if all of these files needed this header and the answer was yes.?? A surprising (to me!) number of files have thread state transitions. >>>> Some of interesting part of this change is adding ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for VM_ENTRY.? whitebox.inline.hpp was added for the same reason. >>>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes interfaceSupport.inline.hpp, and is only included in cpp files. >>>> The rest of the changes were to add back includes that are not pulled in by header files including interfaceSupport.hpp, like gcLocker.hpp and of course handles.inline.hpp. >>>> >>>> This probably overlaps some of Volker's patch.? Can this be tested on other platforms that we don't have? >>>> >>>> Hopefully, at the end of all this we have more clean header files so that transitive includes don't make the jvm build on one platform but not the next.? I think that's the goal of all of this work. >>>> >>>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this locally without precompiled headers (my default setting of course) on linux-x64. >>>> >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>>> local webrev at http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>>> >>>> Thanks to Stefan for his help with this. 
>>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>> >> > From coleen.phillimore at oracle.com Wed Mar 14 16:23:35 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 14 Mar 2018 12:23:35 -0400 Subject: RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <2292a82e-f306-9978-f39e-2036f4f716ca@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> <2292a82e-f306-9978-f39e-2036f4f716ca@oracle.com> Message-ID: <9243e705-f467-a9be-6c23-1ecf4d503c67@oracle.com> Thank you for reviewing, Robbin! Coleen On 3/14/18 12:04 PM, Robbin Ehn wrote: > Looks good, thanks for fixing. > > /Robbin > > On 2018-03-14 17:01, coleen.phillimore at oracle.com wrote: >> >> I had to include and out-line some functions that create Handles, >> where handles.inline.hpp is not transitively included via >> interfaceSupport.hpp anymore.? Thanks to Stefan for finding these for >> me. >> >> This is the incremental webrev: >> >> http://cr.openjdk.java.net/~coleenp/8199263.03.incr/webrev/index.html >> >> And full webrev: >> >> http://cr.openjdk.java.net/~coleenp/8199263.03/webrev/index.html >> >> This passes mach5 tier1 on Oracle platforms. >> >> Thanks, >> Coleen >> >> On 3/14/18 8:48 AM, coleen.phillimore at oracle.com wrote: >>> >>> Hi, this is broken with the inline Handle constructor, so please >>> disregard for now.? I have to add more handles.inline.hpp includes >>> since they're not transitively included by interfaceSupport.hpp. >>> thanks, >>> Coleen >>> >>> On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: >>>> Sorry, this is the correct webrev: >>>> http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html >>>> >>>> Coleen >>>> >>>> On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >>>>> Summary: interfaceSupport.hpp is an inline file so moved to >>>>> interfaceSupport.inline.hpp and stopped including it in .hpp files >>>>> >>>>> 90% of this change is renaming interfaceSupport.hpp to >>>>> interfaceSupport.inline.hpp.?? I tried to see if all of these >>>>> files needed this header and the answer was yes.?? A surprising >>>>> (to me!) number of files have thread state transitions. >>>>> Some of interesting part of this change is adding >>>>> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >>>>> VM_ENTRY. whitebox.inline.hpp was added for the same reason. >>>>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it >>>>> includes interfaceSupport.inline.hpp, and is only included in cpp >>>>> files. >>>>> The rest of the changes were to add back includes that are not >>>>> pulled in by header files including interfaceSupport.hpp, like >>>>> gcLocker.hpp and of course handles.inline.hpp. >>>>> >>>>> This probably overlaps some of Volker's patch.? Can this be tested >>>>> on other platforms that we don't have? >>>>> >>>>> Hopefully, at the end of all this we have more clean header files >>>>> so that transitive includes don't make the jvm build on one >>>>> platform but not the next.? I think that's the goal of all of this >>>>> work. >>>>> >>>>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >>>>> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >>>>> locally without precompiled headers (my default setting of course) >>>>> on linux-x64. 
>>>>> >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>>>> local webrev at >>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>>>> >>>>> Thanks to Stefan for his help with this. >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> >>>> >>> >> From mark.reinhold at oracle.com Wed Mar 14 15:39:53 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Wed, 14 Mar 2018 08:39:53 -0700 (PDT) Subject: JEP 328: Flight Recorder Message-ID: <20180314153953.2EE4217F75D@eggemoggin.niobe.net> New JEP Candidate: http://openjdk.java.net/jeps/328 - Mark From jesper.wilhelmsson at oracle.com Wed Mar 14 21:00:28 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Wed, 14 Mar 2018 22:00:28 +0100 Subject: Merging jdk/hs with jdk/jdk Message-ID: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> All, Over the last couple of years we have left behind a graph of integration forests where each component in the JVM had its own line of development. Today all HotSpot development is done in the same repository, jdk/hs [1]. As a result of merging we have seen several positive effects, ranging from less confusion around where and how to do things, and reduced time for fixes to propagate, to significantly better cooperation between the components, and improved quality of the product. We would like to improve further and therefore we suggest to merge jdk/hs into jdk/jdk [2]. As before, we expect this change to build a stronger team spirit between the merged areas, and contribute to less confusion - especially around ramp down phases and similar. We also expect further improvements in quality as changes that cause problems in a different area are found faster and can be dealt with immediately. In the same way as we did in the past, we suggest to try this out as an experiment for at least two weeks (giving us some time to adapt in case of issues). Monitoring and evaluation of the new structure will take place continuously, with an option to revert back if things do not work out. The experiment would keep going for at least a few months, after which we will evaluate it and depending on the results consider making it the new standard. If so, the jdk/hs forest will eventually be retired. As part of this merge we can also retire the newly setup submit-hs [3] repository and do all testing using the submit repo based on jdk/jdk [4]. Much like what we have done in the past we would leave the jdk/hs forest around until we see if the experiment works out. We would also lock it down so that no accidental pushes are made to it. Once the jdk/hs forest is locked down, any work in flight based on it would have to be rebased on jdk/jdk. We tried this approach during the last few months of JDK 10 development and it worked out fine there. Please let us know if you have any feedback or questions! 
Thanks, /Jesper [1] http://hg.openjdk.java.net/jdk/hs [2] http://hg.openjdk.java.net/jdk/jdk [3] http://hg.openjdk.java.net/jdk/submit-hs [4] http://hg.openjdk.java.net/jdk/submit From kim.barrett at oracle.com Wed Mar 14 21:34:29 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 14 Mar 2018 17:34:29 -0400 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <5AA919A3.7000606@oracle.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> <5AA919A3.7000606@oracle.com> Message-ID: <2AD7E29E-F5B8-4A79-A85B-74B07DE2B7E8@oracle.com> > On Mar 14, 2018, at 8:46 AM, Erik ?sterlund wrote: > > Hi Roman, > > Sorry for the delay. Here is a status update: > > 1. I looked at the generated machine code from a bunch of different compilers and found that it was horrible. > 2. Found that the biggest reason it was horrible was due to unfortunate uses of volatile in oop. Was easy enough to solve: > > Full webrev: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.02/ > Incremental: http://cr.openjdk.java.net/~eosterlund/8198561/webrev.01_02/ > > 3. Found that even after solving the accidental volatile problems, oops sent as value as arguments to functions were sent on the stack (not register), because oop is not a POD, and this is seemingly a strict ABI requirement of C++. Optimizing it would be an ABI violation and is therefore not done. > 4. Found that oop is inherently not going to be a POD unless we rewrite a *lot* of code in hotspot due to having e.g. volatile copy constructor (which can not be auto generated) and a popular user defined oopDesc* constructor. > 5. Got sad that C++ really has to send wrapper objects as simple as oop through the stack on callsites and dropped this as I did not know how I felt about it any longer. > > You can pick this up where I left if you want to and check if the performance impact due to the suboptimal machine code is something we should be scared of or not. If there is reason to be scared, I wonder if LTO mechanisms can solve part of this and a whole bunch of unnecessary use of .inline.hpp files at the same time. I wonder if the situation changes at all with C++11, where these classes could probably be made ?standard-layout? (if they aren't already). From kim.barrett at oracle.com Wed Mar 14 21:41:09 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 14 Mar 2018 17:41:09 -0400 Subject: RFR (S): 8198561: Make oop and narrowOop always have their own class type In-Reply-To: <18aaac14-7b9b-9940-29e2-5a1a260a904b@redhat.com> References: <5A940C84.7040508@oracle.com> <004fcd31-f9c1-5eb2-3d48-34571a1f3cee@oracle.com> <5A956F7E.5090205@oracle.com> <674301c0-1f4a-cd0b-15c8-b2bc51a804e0@redhat.com> <5AA919A3.7000606@oracle.com> <18aaac14-7b9b-9940-29e2-5a1a260a904b@redhat.com> Message-ID: <371C0B23-3A68-40B8-8A09-4C92C6BDAE54@oracle.com> > On Mar 14, 2018, at 9:19 AM, Roman Kennke wrote: > > Alright, thank you for the explanations. > > The main reason why I wanted this change was so that we could overload > == (i.e. equality comparison of oops), and redirect it through > BarrierSet or Access API. > > Since this is not possible on pointers, e.g. oopDesc*, which is what oop > is typedef'd to in release builds, the next reasonable option is to > provide an explicit static method in oopDesc, e.g. 
oopDesc::equals(oop, > oop) (and narrowOop version) which would then call into BarrierSet or > Access APIs. > > This would not be unprecedented: we already have oopDesc::is_null(oop) > and oopDesc::compare(oop, oop). > > In Shenandoah land, we already know all the places where to put > oopDesc::equals() instead of ==, and we do have some code in > oopsHierarchy to overload == in fastdebug builds and verify to not call > naked == on oops. > > Would that be a reasonable way forward? If yes, then I can provide an > RFR soon. > > WDYT? This is the direction Coleen and I were thinking things would go when we removed operator! and the non-equality comparison operators from the class implementation of oop: https://bugs.openjdk.java.net/browse/JDK-8196199 Remove miscellaneous oop comparison operators From irogers at google.com Thu Mar 15 01:00:13 2018 From: irogers at google.com (Ian Rogers) Date: Thu, 15 Mar 2018 01:00:13 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> Message-ID: An old data point on how large a critical region should be comes from java.nio.Bits. In JDK 9 the code migrated into unsafe, but in JDK 8 the copies within a critical region were bounded to copying at most 1MB: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/native/java/nio/Bits.c#l88 This is inconsistent with Deflater and ObjectOutputStream, which both allow unlimited arrays and thereby critical region sizes. In JDK 9 the copies starve the garbage collector in nio Bits too as there is no 1MB limit to the copy sizes: http://hg.openjdk.java.net/jdk/jdk/rev/f70e100d3195 which came from: https://bugs.openjdk.java.net/browse/JDK-8149596 Perhaps this is a regression not demonstrated due to the testing challenge. There is a time to safepoint discussion thread related to this here: https://groups.google.com/d/msg/mechanical-sympathy/f3g8pry-o1A/x6NptTDslcIJ "silent killer" It doesn't seem unreasonable to have the loops for the copies occur in 1MB chunks, but JDK-8149596 lost this and so I'm confused about what the HotSpot standpoint is. In a way criticals are better than unsafe as they may pin the memory and not starve GC, which Shenandoah does. Thanks, Ian On Wed, Mar 7, 2018 at 10:16 AM Ian Rogers wrote: > Thanks Martin! Profiling shows most of the time spent in this code is in > the call to libz's deflate. I worry that increasing the buffer size > increases that work and holds the critical lock for longer. Profiling > likely won't show this issue as there needs to be contention on the GC > locker. > > In HotSpot: > > http://hg.openjdk.java.net/jdk/jdk/file/2854589fd853/src/hotspot/share/gc/shared/gcLocker.hpp#l34 > "Avoid calling these if at all possible" could be taken to suggest that > JNI critical regions should also be avoided if at all possible. I think > HotSpot and the JDK are out of step if this is the case and there could be > work done to remove JNI critical regions from the JDK and replace either > with Java code (JITs are better now) or with Get/Set...ArrayRegion. This > does appear to be an O(1) to O(n) transition so perhaps the HotSpot folks > could speak to it.
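(For reference, the bounded copy that the old JDK 8 Bits.c did has roughly the following shape. This is a sketch rather than the actual Bits.c code, but it shows how each critical section, and therefore any GC locker stall it can cause, is limited to about 1MB:)

#include <jni.h>
#include <string.h>

static const jsize MBYTE = 1024 * 1024;

// Copy 'len' bytes starting at 'off' out of 'array' into 'dst',
// re-entering the critical region for every 1MB chunk so that a pending
// GC is never blocked for the time it takes to copy the whole array.
static void copy_from_byte_array(JNIEnv* env, jbyteArray array,
                                 jsize off, jsize len, void* dst) {
  char* out = static_cast<char*>(dst);
  while (len > 0) {
    jsize chunk = len > MBYTE ? MBYTE : len;
    jbyte* base = static_cast<jbyte*>(env->GetPrimitiveArrayCritical(array, NULL));
    if (base == NULL) return;  // OutOfMemoryError is pending
    memcpy(out, base + off, (size_t)chunk);
    env->ReleasePrimitiveArrayCritical(array, base, JNI_ABORT);  // read-only use
    off += chunk;
    out += chunk;
    len -= chunk;
  }
}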
> > Thanks, > Ian > > > On Tue, Mar 6, 2018 at 6:44 PM Martin Buchholz > wrote: > >> Thanks Ian and Sherman for the excellent presentation and memories of >> ancient efforts. >> >> Yes, Sherman, I still have vague memory that attempts to touch any >> implementation detail in this area was asking for trouble and someone would >> complain. I was happy to let you deal with those problems! >> >> There's a continual struggle in the industry to enable more checking at >> test time, and -Xcheck:jni does look like it should be possible to >> routinely turn on for running all tests. (Google tests run with a time >> limit, and so any low-level performance regression immediately causes test >> failures, for better or worse) >> >> Our problem reduces to accessing a primitive array slice from native >> code. The only way to get O(1) access is via GetPrimitiveArrayCritical, >> BUT when it fails you have to pay for a copy of the entire array. An >> obvious solution is to introduce a slice variant GetPrimitiveArrayRegionCritical >> that would only degrade to a copy of the slice. Offhand that seems >> relatively easy to implement though we would hold our noses at adding yet >> more *Critical* functions to the JNI spec. In spirit though it's a >> straightforward generalization. >> >> Implementing Deflater in pure Java seems very reasonable and we've had >> good success with "nearby" code, but we likely cannot reuse the GNU >> Classpath code. >> >> Thanks for pointing out >> JDK-6311046: -Xcheck:jni should support checking of >> GetPrimitiveArrayCritical >> which went into jdk8 in u40. >> >> We can probably be smarter about choosing a better buffer size, e.g. in >> ZipOutputStream. >> >> Here's an idea: In code like this >> try (DeflaterOutputStream dout = new DeflaterOutputStream(deflated)) { >> dout.write(inflated, 0, inflated.length); >> } >> when the DeflaterOutputStream is given an input that is clearly too large >> for the current buffer size, reorganize internals dynamically to use a much >> bigger buffer size. >> >> It's possible (but hard work!) to adjust algorithms based on whether >> critical array access is available. It would be nice if we could get the >> JVM to tell us (but it might depend, e.g. on the size of the array). >> > From thomas.schatzl at oracle.com Thu Mar 15 08:25:48 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 15 Mar 2018 09:25:48 +0100 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> Message-ID: <1521102348.2448.25.camel@oracle.com> Hi, On Thu, 2018-03-15 at 01:00 +0000, Ian Rogers wrote: > An old data point on how large a critical region should be comes from > java.nio.Bits. In JDK 9 the code migrated into unsafe, but in JDK 8 > the copies within a critical region were bound at most copying 1MB: > http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/ > native/java/nio/Bits.c#l88 This is inconsistent with Deflater and > ObjectOutputStream which both allow unlimited arrays and thereby > critical region sizes. 
> > In JDK 9 the copies starve the garbage collector in nio Bits too as > there is no 1MB limit to the copy sizes: > http://hg.openjdk.java.net/jdk/jdk/rev/f70e100d3195 > which came from: > https://bugs.openjdk.java.net/browse/JDK-8149596 > > Perhaps this is a regression not demonstrated due to the testing > challenge. > [...] > It doesn't seem unreasonable to have the loops for the copies occur > in 1MB chunks, but JDK-8149596 lost this and so I'm confused about what > the HotSpot standpoint is. Please file a bug (seems to be a core-libs/java.nio regression?), preferably with some kind of regression test. Also file enhancements (I would guess) for the other cases allowing unlimited arrays. Long TTSP is a performance bug like any other. > In a way criticals are better than unsafe as they may > pin the memory and not starve GC, which Shenandoah does. (Region based) Object pinning has its own share of problems:
- only (relatively) easily implemented in region based collectors
- may slow down pauses a bit in the presence of pinned regions/objects (for non-concurrent copying collectors)
- excessive use of pinning may cause OOME and VM exit probably earlier than the gc locker. GC locker seems to provide a more gradual degradation.
E.g. pinning regions typically makes these regions unavailable for allocation. I.e. you still should not use it for many, very long-lived objects. Of course this somewhat depends on the sophistication of the implementation. I think region based pinning would be a good addition to other collectors than Shenandoah too. It has been on our minds for a long time, but there are so many other more important issues :), so of course we are eager to see contributions in this area. ;) If you are interested in working on this, please ping us on hotspot-gc-dev for implementation ideas to get you jump-started. Thanks, Thomas From erik.osterlund at oracle.com Thu Mar 15 09:24:01 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Mar 2018 10:24:01 +0100 Subject: Build failure w/ GCC 7.3.1 In-Reply-To: References: Message-ID: <5AAA3BB1.4020309@oracle.com> Hi Yasumasa, The problem is that the HeapWord* overload of arraycopy in the RawAccessBarrier calls arraycopy with 3 arguments instead of the expected 5 (it used to be 3, but changed very recently to 5). This builds anyway for us because this path is never currently expanded as all calls to arraycopy with HeapWord* arguments go through HeapAccess (as opposed to RawAccess) which always resolves whether compressed oops is used or not, and hence on that layer always knows if it is oop* or narrowOop*. It seems like the newer GCC version detects this error despite the path not being expanded. I am working on a fix and refactoring of this code. Ideally I want the HeapWord* detection on arraycopy to be done at an earlier level to be more symmetric with how it is done for other accesses in this class. Thanks, /Erik On 2018-03-14 02:33, Yasumasa Suenaga wrote: > Hi all.
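(Stripped of the HotSpot specifics, the mismatch has roughly the shape below. The names are simplified stand-ins, not the real accessBackend code, and the five-argument call is only meant to show what a matching call would look like, not the actual fix Erik is working on. The quoted report and build log follow.)

#include <cstddef>

template <int decorators>
struct RawBarrier {
  template <typename T>
  static bool arraycopy(void* src_obj, void* dst_obj,
                        T* src, T* dst, size_t length);

  static bool heapword_arraycopy(void* src_obj, void* dst_obj,
                                 void* src, void* dst, size_t length) {
    // The failing form forwards only three arguments, e.g.
    //   return arraycopy(static_cast<int*>(src), static_cast<int*>(dst), length);
    // which cannot match the five-parameter template above and is what the
    // "no matching function" errors in the log below complain about.
    return arraycopy(src_obj, dst_obj,
                     static_cast<int*>(src), static_cast<int*>(dst), length);
  }
};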
> > I encountered build failure with GCC 7.3.1 on Fedora 27 x86_64 as below: > ------------------- > Building target 'images' in configuration 'linux-x86_64-normal-server-fastdebug' > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: > In static member function 'static bool > RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, > HeapWord*, HeapWord*, size_t)': > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, > size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp: > In static member function 'static bool > RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, > HeapWord*, HeapWord*, size_t)': > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, > size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > error: no matching function for call to > 'RawAccessBarrier::arraycopy(oop*, oop*, size_t&)' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/gc/shared/gcLocker.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/runtime/interfaceSupport.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/prims/methodHandles.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciMethod.hpp:33, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/code/debugInfoRec.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciEnv.hpp:31, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciUtilities.hpp:28, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciNullObject.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciConstant.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/ci/ciArray.hpp:29, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:35: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: candidate: template template T> static bool RawAccessBarrier::arraycopy(arrayOop, > arrayOop, T*, T*, size_t) > static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, > T* dst, size_t length); > ^~~~~~~~~ > 
/home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.hpp:343:15: > note: template argument deduction/substitution failed: > In file included from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/access.inline.hpp:35:0, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/oop.inline.hpp:32, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/markOop.inline.hpp:30, > from > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/precompiled/precompiled.hpp:153: > /home/ysuenaga/OpenJDK/jdk-hs/src/hotspot/share/oops/accessBackend.inline.hpp:131:86: > note: mismatched types 'T*' and 'long unsigned int' > return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > > ^ > gmake[3]: *** [lib/CompileJvm.gmk:214: > /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/objs/precompiled/precompiled.hpp.gch] > Error 1 > gmake[3]: *** Waiting for unfinished jobs.... > gmake[3]: *** [lib/CompileGtest.gmk:67: > /home/ysuenaga/OpenJDK/jdk-hs/build/linux-x86_64-normal-server-fastdebug/hotspot/variant-server/libjvm/gtest/objs/precompiled/precompiled.hpp.gch] > Error 1 > gmake[2]: *** [make/Main.gmk:267: hotspot-server-libs] Error 2 > > ERROR: Build failed for target 'images' in configuration > 'linux-x86_64-normal-server-fastdebug' (exit code 2) > ------------------- > > Do someone work for this issue? > IMHO we can avoid this with following patch: > ------------------- > diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.hpp > --- a/src/hotspot/share/oops/accessBackend.hpp Tue Mar 13 15:29:55 2018 -0700 > +++ b/src/hotspot/share/oops/accessBackend.hpp Wed Mar 14 10:28:27 2018 +0900 > @@ -384,7 +384,6 @@ > > template > static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, T* > src, T* dst, size_t length); > - static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj, > HeapWord* src, HeapWord* dst, size_t length); > > static void clone(oop src, oop dst, size_t size); > > diff -r 98e7a2c315a9 src/hotspot/share/oops/accessBackend.inline.hpp > --- a/src/hotspot/share/oops/accessBackend.inline.hpp Tue Mar 13 > 15:29:55 2018 -0700 > +++ b/src/hotspot/share/oops/accessBackend.inline.hpp Wed Mar 14 > 10:28:27 2018 +0900 > @@ -122,17 +122,6 @@ > } > > template > -inline bool RawAccessBarrier::oop_arraycopy(arrayOop > src_obj, arrayOop dst_obj, HeapWord* src, HeapWord* dst, size_t > length) { > - bool needs_oop_compress = HasDecorator INTERNAL_CONVERT_COMPRESSED_OOP>::value && > - HasDecorator INTERNAL_RT_USE_COMPRESSED_OOPS>::value; > - if (needs_oop_compress) { > - return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > - } else { > - return arraycopy(reinterpret_cast(src), > reinterpret_cast(dst), length); > - } > -} > - > -template > template > inline typename EnableIf< > HasDecorator::value, T>::type > ------------------- > > > Thanks, > > Yasumasa From stefan.karlsson at oracle.com Thu Mar 15 12:10:29 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 15 Mar 2018 13:10:29 +0100 Subject: RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> Message-ID: <38519338-a2d1-b924-227e-73286b26c3a3@oracle.com> Looks good. 
StefanK On 2018-03-14 17:01, coleen.phillimore at oracle.com wrote: > > I had to include and out-line some functions that create Handles, where > handles.inline.hpp is not transitively included via interfaceSupport.hpp > anymore.? Thanks to Stefan for finding these for me. > > This is the incremental webrev: > > http://cr.openjdk.java.net/~coleenp/8199263.03.incr/webrev/index.html > > And full webrev: > > http://cr.openjdk.java.net/~coleenp/8199263.03/webrev/index.html > > This passes mach5 tier1 on Oracle platforms. > > Thanks, > Coleen > > On 3/14/18 8:48 AM, coleen.phillimore at oracle.com wrote: >> >> Hi, this is broken with the inline Handle constructor, so please >> disregard for now.? I have to add more handles.inline.hpp includes >> since they're not transitively included by interfaceSupport.hpp. >> thanks, >> Coleen >> >> On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: >>> Sorry, this is the correct webrev: >>> http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html >>> >>> Coleen >>> >>> On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >>>> Summary: interfaceSupport.hpp is an inline file so moved to >>>> interfaceSupport.inline.hpp and stopped including it in .hpp files >>>> >>>> 90% of this change is renaming interfaceSupport.hpp to >>>> interfaceSupport.inline.hpp.?? I tried to see if all of these files >>>> needed this header and the answer was yes.?? A surprising (to me!) >>>> number of files have thread state transitions. >>>> Some of interesting part of this change is adding >>>> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >>>> VM_ENTRY.? whitebox.inline.hpp was added for the same reason. >>>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it includes >>>> interfaceSupport.inline.hpp, and is only included in cpp files. >>>> The rest of the changes were to add back includes that are not >>>> pulled in by header files including interfaceSupport.hpp, like >>>> gcLocker.hpp and of course handles.inline.hpp. >>>> >>>> This probably overlaps some of Volker's patch.? Can this be tested >>>> on other platforms that we don't have? >>>> >>>> Hopefully, at the end of all this we have more clean header files so >>>> that transitive includes don't make the jvm build on one platform >>>> but not the next.? I think that's the goal of all of this work. >>>> >>>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >>>> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >>>> locally without precompiled headers (my default setting of course) >>>> on linux-x64. >>>> >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>>> local webrev at >>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>>> >>>> Thanks to Stefan for his help with this. >>>> >>>> Thanks, >>>> Coleen >>>> >>>> >>> >> > From coleen.phillimore at oracle.com Thu Mar 15 12:11:21 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 15 Mar 2018 08:11:21 -0400 Subject: RFR (L) 8199263: Split interfaceSupport.hpp to not require including .inline.hpp files In-Reply-To: <38519338-a2d1-b924-227e-73286b26c3a3@oracle.com> References: <0503632f-1ed7-8fa9-0540-4d21edc3bb33@oracle.com> <465b8e02-e234-f9c8-4a9b-a20a691ae8d8@oracle.com> <38519338-a2d1-b924-227e-73286b26c3a3@oracle.com> Message-ID: Thanks Stefan. Coleen On 3/15/18 8:10 AM, Stefan Karlsson wrote: > Looks good. 
> > StefanK > > On 2018-03-14 17:01, coleen.phillimore at oracle.com wrote: >> >> I had to include and out-line some functions that create Handles, >> where handles.inline.hpp is not transitively included via >> interfaceSupport.hpp anymore.? Thanks to Stefan for finding these for >> me. >> >> This is the incremental webrev: >> >> http://cr.openjdk.java.net/~coleenp/8199263.03.incr/webrev/index.html >> >> And full webrev: >> >> http://cr.openjdk.java.net/~coleenp/8199263.03/webrev/index.html >> >> This passes mach5 tier1 on Oracle platforms. >> >> Thanks, >> Coleen >> >> On 3/14/18 8:48 AM, coleen.phillimore at oracle.com wrote: >>> >>> Hi, this is broken with the inline Handle constructor, so please >>> disregard for now.? I have to add more handles.inline.hpp includes >>> since they're not transitively included by interfaceSupport.hpp. >>> thanks, >>> Coleen >>> >>> On 3/13/18 9:01 AM, coleen.phillimore at oracle.com wrote: >>>> Sorry, this is the correct webrev: >>>> http://cr.openjdk.java.net/~coleenp/8199263.02/webrev/index.html >>>> >>>> Coleen >>>> >>>> On 3/13/18 7:50 AM, coleen.phillimore at oracle.com wrote: >>>>> Summary: interfaceSupport.hpp is an inline file so moved to >>>>> interfaceSupport.inline.hpp and stopped including it in .hpp files >>>>> >>>>> 90% of this change is renaming interfaceSupport.hpp to >>>>> interfaceSupport.inline.hpp.?? I tried to see if all of these >>>>> files needed this header and the answer was yes.?? A surprising >>>>> (to me!) number of files have thread state transitions. >>>>> Some of interesting part of this change is adding >>>>> ciUtilities.inline.hpp to include interfaceSupport.inline.hpp for >>>>> VM_ENTRY. whitebox.inline.hpp was added for the same reason. >>>>> jvmtiEnter.hpp was renamed jvmtiEnter.inline.hpp because it >>>>> includes interfaceSupport.inline.hpp, and is only included in cpp >>>>> files. >>>>> The rest of the changes were to add back includes that are not >>>>> pulled in by header files including interfaceSupport.hpp, like >>>>> gcLocker.hpp and of course handles.inline.hpp. >>>>> >>>>> This probably overlaps some of Volker's patch.? Can this be tested >>>>> on other platforms that we don't have? >>>>> >>>>> Hopefully, at the end of all this we have more clean header files >>>>> so that transitive includes don't make the jvm build on one >>>>> platform but not the next.? I think that's the goal of all of this >>>>> work. >>>>> >>>>> This was tested with Oracle platforms (linux-x64, solaris-sparcv9, >>>>> macosx-x64, windows-x64) in the mach5 tier1 and 2.?? I built this >>>>> locally without precompiled headers (my default setting of course) >>>>> on linux-x64. >>>>> >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8199263 >>>>> local webrev at >>>>> http://oklahoma.us.oracle.com/~cphillim/webrev/8199263.02/webrev >>>>> >>>>> Thanks to Stefan for his help with this. >>>>> >>>>> Thanks, >>>>> Coleen >>>>> >>>>> >>>> >>> >> From david.holmes at oracle.com Thu Mar 15 12:27:22 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 15 Mar 2018 22:27:22 +1000 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: References: Message-ID: Moving to hotspot-dev On 15/03/2018 8:51 PM, ashutosh mehra wrote: > When I run jdk-10+46 build in a docker container, I don't see MaxHeapSize > being adjusted based on container memory limit. 
> > Command to run docker container with 2G memory, 2 CPUs: > $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v > /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 > > Once inside the container ran the following command: > > # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions > -XX:+PrintFlagsFinal -version | grep MaxHeapSize > > Output: > size_t MaxHeapSize = 2015363072 > {product} {ergonomic} > openjdk version "10" 2018-03-20 > OpenJDK Runtime Environment 18.3 (build 10+46) > OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > > When I used -Xlog:os+container=trace option, I get following information: > > # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version > > [0.001s][trace][os,container] OSContainer::init: Initializing Container > Support > [0.001s][debug][os,container] Required cgroup subsystems not found > openjdk version "10" 2018-03-20 > OpenJDK Runtime Environment 18.3 (build 10+46) > OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > > Following subsystems are present in the docker container: > # ls /sys/fs/cgroup/ > blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb > memory net_cls net_prio net_prio,net_cls perf_event pids systemd > > As far as I understand, JVM is using only memory, cpu and cpuset subsystems > which are present in my system. Not sure why is it reporting "Required > cgroup subsystems not found". > > Any idea what could be wrong here? Are there other debug options to figure > out what is going wrong? What does /proc/self/mountinfo show? David > Regards, > Ashutosh Mehra > From matthias.baesken at sap.com Thu Mar 15 12:55:42 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 15 Mar 2018 12:55:42 +0000 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: References: Message-ID: > Following subsystems are present in the docker container: > # ls /sys/fs/cgroup/ > blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb > memory net_cls net_prio net_prio,net_cls perf_event pids systemd > > As far as I understand, JVM is using only memory, cpu and cpuset > subsystems Hi, in jdk10 "cpu,cpuacct" Is checked as well , see http://hg.openjdk.java.net/jdk/jdk10/file/b09e56145e11/src/hotspot/os/linux/osContainer_linux.cpp ( but unfortunately not cpuacct,cpu ). This might cause the message : > [0.001s][debug][os,container] Required cgroup subsystems not found In jdk11 both "cpu,cpuacct" and "cpuacct,cpu" are checked . Jdk11 has also a bit better logging , so I think it would tell you what subsystem is not found . Could you maybe rerun with jdk11 ? Thanks, Matthias > -----Original Message----- > From: jdk-dev [mailto:jdk-dev-bounces at openjdk.java.net] On Behalf Of > ashutosh mehra > Sent: Donnerstag, 15. M?rz 2018 11:52 > To: jdk-dev at openjdk.java.net > Subject: container support not enabled due to required cgroup subsystems > not found > > When I run jdk-10+46 build in a docker container, I don't see MaxHeapSize > being adjusted based on container memory limit. 
> > Command to run docker container with 2G memory, 2 CPUs: > $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v > /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 > > Once inside the container ran the following command: > > # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions > -XX:+PrintFlagsFinal -version | grep MaxHeapSize > > Output: > size_t MaxHeapSize = 2015363072 > {product} {ergonomic} > openjdk version "10" 2018-03-20 > OpenJDK Runtime Environment 18.3 (build 10+46) > OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > > When I used -Xlog:os+container=trace option, I get following information: > > # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version > > [0.001s][trace][os,container] OSContainer::init: Initializing Container > Support > [0.001s][debug][os,container] Required cgroup subsystems not found > openjdk version "10" 2018-03-20 > OpenJDK Runtime Environment 18.3 (build 10+46) > OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > > Following subsystems are present in the docker container: > # ls /sys/fs/cgroup/ > blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb > memory net_cls net_prio net_prio,net_cls perf_event pids systemd > > As far as I understand, JVM is using only memory, cpu and cpuset > subsystems > which are present in my system. Not sure why is it reporting "Required > cgroup subsystems not found". > > Any idea what could be wrong here? Are there other debug options to figure > out what is going wrong? > > Regards, > Ashutosh Mehra From bob.vandette at oracle.com Thu Mar 15 13:01:12 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Mar 2018 09:01:12 -0400 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: References: Message-ID: <10DB1EE1-C1DC-44C7-B75B-9E3DE0DDB1A6@oracle.com> > On Mar 15, 2018, at 8:55 AM, Baesken, Matthias wrote: > >> Following subsystems are present in the docker container: >> # ls /sys/fs/cgroup/ >> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >> >> As far as I understand, JVM is using only memory, cpu and cpuset >> subsystems > > Hi, in jdk10 > > "cpu,cpuacct? > His system has ?cpu? and ?cpuacct? links so that shouldn?t be the issue. Bob. > > Is checked as well , see > > http://hg.openjdk.java.net/jdk/jdk10/file/b09e56145e11/src/hotspot/os/linux/osContainer_linux.cpp > > ( but unfortunately not cpuacct,cpu ). > This might cause the message : > >> [0.001s][debug][os,container] Required cgroup subsystems not found > > > In jdk11 both "cpu,cpuacct" and "cpuacct,cpu" are checked . > Jdk11 has also a bit better logging , so I think it would tell you what subsystem is not found . > Could you maybe rerun with jdk11 ? > > > Thanks, Matthias > > > >> -----Original Message----- >> From: jdk-dev [mailto:jdk-dev-bounces at openjdk.java.net] On Behalf Of >> ashutosh mehra >> Sent: Donnerstag, 15. M?rz 2018 11:52 >> To: jdk-dev at openjdk.java.net >> Subject: container support not enabled due to required cgroup subsystems >> not found >> >> When I run jdk-10+46 build in a docker container, I don't see MaxHeapSize >> being adjusted based on container memory limit. 
>> >> Command to run docker container with 2G memory, 2 CPUs: >> $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v >> /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 >> >> Once inside the container ran the following command: >> >> # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions >> -XX:+PrintFlagsFinal -version | grep MaxHeapSize >> >> Output: >> size_t MaxHeapSize = 2015363072 >> {product} {ergonomic} >> openjdk version "10" 2018-03-20 >> OpenJDK Runtime Environment 18.3 (build 10+46) >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> >> When I used -Xlog:os+container=trace option, I get following information: >> >> # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version >> >> [0.001s][trace][os,container] OSContainer::init: Initializing Container >> Support >> [0.001s][debug][os,container] Required cgroup subsystems not found >> openjdk version "10" 2018-03-20 >> OpenJDK Runtime Environment 18.3 (build 10+46) >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> >> Following subsystems are present in the docker container: >> # ls /sys/fs/cgroup/ >> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >> >> As far as I understand, JVM is using only memory, cpu and cpuset >> subsystems >> which are present in my system. Not sure why is it reporting "Required >> cgroup subsystems not found". >> >> Any idea what could be wrong here? Are there other debug options to figure >> out what is going wrong? >> >> Regards, >> Ashutosh Mehra From bob.vandette at oracle.com Thu Mar 15 12:52:40 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Mar 2018 08:52:40 -0400 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: References: Message-ID: <76752255-B9D9-4E24-BFFD-032DFB356FE3@oracle.com> Yes, please send us the contents of /proc/self/mountinfo. What Linux operating system distro are you running? What kernel version? You might want to try the JDK 11 Early Access build. It provides a bit more logging for container detection. You will at least see which subsystem was not located. http://jdk.java.net/11/ Are you running cgroups version 1 or 2? Bob. > On Mar 15, 2018, at 8:27 AM, David Holmes wrote: > > Moving to hotspot-dev > > On 15/03/2018 8:51 PM, ashutosh mehra wrote: >> When I run jdk-10+46 build in a docker container, I don't see MaxHeapSize >> being adjusted based on container memory limit. 
>> Command to run docker container with 2G memory, 2 CPUs: >> $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v >> /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 >> Once inside the container ran the following command: >> # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions >> -XX:+PrintFlagsFinal -version | grep MaxHeapSize >> Output: >> size_t MaxHeapSize = 2015363072 >> {product} {ergonomic} >> openjdk version "10" 2018-03-20 >> OpenJDK Runtime Environment 18.3 (build 10+46) >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> When I used -Xlog:os+container=trace option, I get following information: >> # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version >> [0.001s][trace][os,container] OSContainer::init: Initializing Container >> Support >> [0.001s][debug][os,container] Required cgroup subsystems not found >> openjdk version "10" 2018-03-20 >> OpenJDK Runtime Environment 18.3 (build 10+46) >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> Following subsystems are present in the docker container: >> # ls /sys/fs/cgroup/ >> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >> As far as I understand, JVM is using only memory, cpu and cpuset subsystems >> which are present in my system. Not sure why is it reporting "Required >> cgroup subsystems not found". >> Any idea what could be wrong here? Are there other debug options to figure >> out what is going wrong? > > What does /proc/self/mountinfo show? > > David > >> Regards, >> Ashutosh Mehra From erik.osterlund at oracle.com Thu Mar 15 13:06:40 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Mar 2018 14:06:40 +0100 Subject: RFR: 8199685: Access arraycopy build failure with GCC 7.3.1 Message-ID: <5AAA6FE0.3020109@oracle.com> Hi, Newer compilers are not happy with arraycopy since recent changes made RawAccessBarrier accept 5 arguments instead of 3, and there was an unused path in Access for HeapWord* arraycopy, that was not expanded, that still tried to call an internal arraycopy function with 3 arguments. The problematic unexpanded overload in RawAccessBarrier for HeapWord* addresses has been removed. Instead, the HeapWord logic for performing Raw oop arraycopy when it is not known whether compressed oops is used or not has been moved to an earlier stage (reduce_types) to better reflect how this logic is consistently handled for the other accessors with the same HeapWord* logic, to make the code more symmetric. 
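For anyone who wants the mismatch spelled out, here is a reduced sketch; it is illustrative only, the class name and the HeapWord/narrowOop stand-in typedefs are placeholders and not the real access backend types. The surviving arraycopy template takes five arguments, while the removed HeapWord* overload still forwarded only three, which is what GCC 7.3.1 reports. The live call below exists only to keep the sketch compiling; the actual change deletes the overload and resolves the HeapWord* case earlier, as described above.

#include <cstddef>

class arrayOopDesc;                  // opaque, pointer-only in this sketch
typedef arrayOopDesc* arrayOop;
typedef unsigned int  narrowOopStandIn;
typedef char          HeapWordStandIn;

template <int decorators>
struct RawBarrierSketch {
  // The form that remains: five arguments, T deduced at the call site.
  template <typename T>
  static bool arraycopy(arrayOop src_obj, arrayOop dst_obj,
                        T* src, T* dst, size_t length) {
    (void)src_obj; (void)dst_obj; (void)src; (void)dst; (void)length;
    return true;
  }

  static bool oop_arraycopy(arrayOop src_obj, arrayOop dst_obj,
                            HeapWordStandIn* src, HeapWordStandIn* dst,
                            size_t length) {
    // The removed overload forwarded only three arguments, roughly:
    //   return arraycopy(reinterpret_cast<narrowOopStandIn*>(src),
    //                    reinterpret_cast<narrowOopStandIn*>(dst), length);
    // which no longer matches the five-argument template above.
    return arraycopy(src_obj, dst_obj,
                     reinterpret_cast<narrowOopStandIn*>(src),
                     reinterpret_cast<narrowOopStandIn*>(dst), length);
  }
};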
Bug: https://bugs.openjdk.java.net/browse/JDK-8199685 Webrev: http://cr.openjdk.java.net/~eosterlund/8199685/webrev.00/ Thanks, /Erik From bob.vandette at oracle.com Thu Mar 15 13:09:47 2018 From: bob.vandette at oracle.com (Bob Vandette) Date: Thu, 15 Mar 2018 09:09:47 -0400 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: <10DB1EE1-C1DC-44C7-B75B-9E3DE0DDB1A6@oracle.com> References: <10DB1EE1-C1DC-44C7-B75B-9E3DE0DDB1A6@oracle.com> Message-ID: <8CA38DEF-3DE2-4C91-B1F9-76217C4DA4F7@oracle.com> > On Mar 15, 2018, at 9:01 AM, Bob Vandette wrote: > > > >> On Mar 15, 2018, at 8:55 AM, Baesken, Matthias wrote: >> >>> Following subsystems are present in the docker container: >>> # ls /sys/fs/cgroup/ >>> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >>> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >>> >>> As far as I understand, JVM is using only memory, cpu and cpuset >>> subsystems >> >> Hi, in jdk10 >> >> "cpu,cpuacct? >> > > His system has ?cpu? and ?cpuacct? links so that shouldn?t be the issue. I take that back. If mountinfo only has a cpuacct,cpu entry, we will fail detection in JDK10. Bob. > > Bob. > > >> >> Is checked as well , see >> >> http://hg.openjdk.java.net/jdk/jdk10/file/b09e56145e11/src/hotspot/os/linux/osContainer_linux.cpp >> >> ( but unfortunately not cpuacct,cpu ). >> This might cause the message : >> >>> [0.001s][debug][os,container] Required cgroup subsystems not found >> >> >> In jdk11 both "cpu,cpuacct" and "cpuacct,cpu" are checked . >> Jdk11 has also a bit better logging , so I think it would tell you what subsystem is not found . >> Could you maybe rerun with jdk11 ? >> >> >> Thanks, Matthias >> >> >> >>> -----Original Message----- >>> From: jdk-dev [mailto:jdk-dev-bounces at openjdk.java.net] On Behalf Of >>> ashutosh mehra >>> Sent: Donnerstag, 15. M?rz 2018 11:52 >>> To: jdk-dev at openjdk.java.net >>> Subject: container support not enabled due to required cgroup subsystems >>> not found >>> >>> When I run jdk-10+46 build in a docker container, I don't see MaxHeapSize >>> being adjusted based on container memory limit. 
>>> >>> Command to run docker container with 2G memory, 2 CPUs: >>> $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v >>> /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 >>> >>> Once inside the container ran the following command: >>> >>> # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions >>> -XX:+PrintFlagsFinal -version | grep MaxHeapSize >>> >>> Output: >>> size_t MaxHeapSize = 2015363072 >>> {product} {ergonomic} >>> openjdk version "10" 2018-03-20 >>> OpenJDK Runtime Environment 18.3 (build 10+46) >>> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >>> >>> When I used -Xlog:os+container=trace option, I get following information: >>> >>> # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version >>> >>> [0.001s][trace][os,container] OSContainer::init: Initializing Container >>> Support >>> [0.001s][debug][os,container] Required cgroup subsystems not found >>> openjdk version "10" 2018-03-20 >>> OpenJDK Runtime Environment 18.3 (build 10+46) >>> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >>> >>> Following subsystems are present in the docker container: >>> # ls /sys/fs/cgroup/ >>> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >>> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >>> >>> As far as I understand, JVM is using only memory, cpu and cpuset >>> subsystems >>> which are present in my system. Not sure why is it reporting "Required >>> cgroup subsystems not found". >>> >>> Any idea what could be wrong here? Are there other debug options to figure >>> out what is going wrong? >>> >>> Regards, >>> Ashutosh Mehra > From gromero at linux.vnet.ibm.com Thu Mar 15 13:11:41 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Thu, 15 Mar 2018 10:11:41 -0300 Subject: [PING] Re: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: References: <5AA725AA.7010202@linux.vnet.ibm.com> <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> Message-ID: <7116aeef-f2e3-c84f-a509-835b71195d10@linux.vnet.ibm.com> Hi, Could somebody please sponsor the following small change? bug : https://bugs.openjdk.java.net/browse/JDK-8198794 webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ It's already reviewed by two Reviewers: dholmes and phh. Thank you! Regards, Gustavo On 03/14/2018 12:07 PM, Gustavo Romero wrote: > Hi David, > > On 03/13/2018 09:05 PM, David Holmes wrote: >>> bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >>> webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ >> >> Seems okay. Couple of grammar nits with the mega comment: >> >> // it can exist nodes >> >> it -> there >> >> // are besides that non-contiguous. >> >> "are besides that" -> "may be" > > Fixed. > > webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ > > Thanks a lot for reviewing it. > > > Could somebody sponsor that change please? 
> > > Regards, > Gustavo > From rkennke at redhat.com Thu Mar 15 13:17:21 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 15 Mar 2018 14:17:21 +0100 Subject: RFR: 8199685: Access arraycopy build failure with GCC 7.3.1 In-Reply-To: <5AAA6FE0.3020109@oracle.com> References: <5AAA6FE0.3020109@oracle.com> Message-ID: <01ee4b05-66da-cfbe-d9d7-e2f4a700b53b@redhat.com> Am 15.03.2018 um 14:06 schrieb Erik ?sterlund: > Hi, > > Newer compilers are not happy with arraycopy since recent changes made > RawAccessBarrier accept 5 arguments instead of 3, and there was an > unused path in Access for HeapWord* arraycopy, that was not expanded, > that still tried to call an internal arraycopy function with 3 arguments. > > The problematic unexpanded overload in RawAccessBarrier for HeapWord* > addresses has been removed. Instead, the HeapWord logic for performing > Raw oop arraycopy when it is not known whether compressed oops is used > or not has been moved to an earlier stage (reduce_types) to better > reflect how this logic is consistently handled for the other accessors > with the same HeapWord* logic, to make the code more symmetric. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199685 > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199685/webrev.00/ > > Thanks, > /Erik Looks ok. From erik.osterlund at oracle.com Thu Mar 15 13:41:44 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Mar 2018 14:41:44 +0100 Subject: RFR: 8199685: Access arraycopy build failure with GCC 7.3.1 In-Reply-To: <01ee4b05-66da-cfbe-d9d7-e2f4a700b53b@redhat.com> References: <5AAA6FE0.3020109@oracle.com> <01ee4b05-66da-cfbe-d9d7-e2f4a700b53b@redhat.com> Message-ID: <5AAA7818.4010700@oracle.com> Hi Roman, Thank you for the review. /Erik On 2018-03-15 14:17, Roman Kennke wrote: > Am 15.03.2018 um 14:06 schrieb Erik ?sterlund: >> Hi, >> >> Newer compilers are not happy with arraycopy since recent changes made >> RawAccessBarrier accept 5 arguments instead of 3, and there was an >> unused path in Access for HeapWord* arraycopy, that was not expanded, >> that still tried to call an internal arraycopy function with 3 arguments. >> >> The problematic unexpanded overload in RawAccessBarrier for HeapWord* >> addresses has been removed. Instead, the HeapWord logic for performing >> Raw oop arraycopy when it is not known whether compressed oops is used >> or not has been moved to an earlier stage (reduce_types) to better >> reflect how this logic is consistently handled for the other accessors >> with the same HeapWord* logic, to make the code more symmetric. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199685 >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199685/webrev.00/ >> >> Thanks, >> /Erik > Looks ok. > From claes.redestad at oracle.com Thu Mar 15 14:18:03 2018 From: claes.redestad at oracle.com (Claes Redestad) Date: Thu, 15 Mar 2018 15:18:03 +0100 Subject: Merging jdk/hs with jdk/jdk In-Reply-To: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> References: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> Message-ID: <867a073a-a4fe-df7c-c37f-a764365fc9d3@oracle.com> A very welcome change! I'm sure there will be some challenges, but my expectation is that we'll be able to better focus our resources and detect performance regressions much sooner, which greatly increases the chance there'll be time to fix them before the next train leaves. So, can we put this in effect immediately? 
:-) /Claes On 2018-03-14 22:00, jesper.wilhelmsson at oracle.com wrote: > All, > > Over the last couple of years we have left behind a graph of > integration forests where each component in the JVM had its own > line of development. Today all HotSpot development is done in the > same repository, jdk/hs [1]. As a result of merging we have seen > several positive effects, ranging from less confusion around > where and how to do things, and reduced time for fixes to > propagate, to significantly better cooperation between the > components, and improved quality of the product. We would like to > improve further and therefore we suggest to merge jdk/hs into > jdk/jdk [2]. > > As before, we expect this change to build a stronger team spirit > between the merged areas, and contribute to less confusion - > especially around ramp down phases and similar. We also expect > further improvements in quality as changes that cause problems in > a different area are found faster and can be dealt with > immediately. > > In the same way as we did in the past, we suggest to try this out > as an experiment for at least two weeks (giving us some time to > adapt in case of issues). Monitoring and evaluation of the new > structure will take place continuously, with an option to revert > back if things do not work out. The experiment would keep going > for at least a few months, after which we will evaluate it and > depending on the results consider making it the new standard. If > so, the jdk/hs forest will eventually be retired. As part of this > merge we can also retire the newly setup submit-hs [3] repository > and do all testing using the submit repo based on jdk/jdk [4]. > > Much like what we have done in the past we would leave the jdk/hs > forest around until we see if the experiment works out. We would > also lock it down so that no accidental pushes are made to > it. Once the jdk/hs forest is locked down, any work in flight > based on it would have to be rebased on jdk/jdk. > > We tried this approach during the last few months of JDK 10 > development and it worked out fine there. > > Please let us know if you have any feedback or questions! > > Thanks, > /Jesper > > [1] http://hg.openjdk.java.net/jdk/hs > [2] http://hg.openjdk.java.net/jdk/jdk > [3] http://hg.openjdk.java.net/jdk/submit-hs > [4] http://hg.openjdk.java.net/jdk/submit From per.liden at oracle.com Thu Mar 15 14:25:11 2018 From: per.liden at oracle.com (Per Liden) Date: Thu, 15 Mar 2018 15:25:11 +0100 Subject: RFR: 8199685: Access arraycopy build failure with GCC 7.3.1 In-Reply-To: <5AAA6FE0.3020109@oracle.com> References: <5AAA6FE0.3020109@oracle.com> Message-ID: Looks good! /Per On 03/15/2018 02:06 PM, Erik ?sterlund wrote: > Hi, > > Newer compilers are not happy with arraycopy since recent changes made > RawAccessBarrier accept 5 arguments instead of 3, and there was an > unused path in Access for HeapWord* arraycopy, that was not expanded, > that still tried to call an internal arraycopy function with 3 arguments. > > The problematic unexpanded overload in RawAccessBarrier for HeapWord* > addresses has been removed. Instead, the HeapWord logic for performing > Raw oop arraycopy when it is not known whether compressed oops is used > or not has been moved to an earlier stage (reduce_types) to better > reflect how this logic is consistently handled for the other accessors > with the same HeapWord* logic, to make the code more symmetric. 
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199685 > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199685/webrev.00/ > > Thanks, > /Erik From erik.osterlund at oracle.com Thu Mar 15 14:30:35 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Mar 2018 15:30:35 +0100 Subject: RFR: 8199685: Access arraycopy build failure with GCC 7.3.1 In-Reply-To: References: <5AAA6FE0.3020109@oracle.com> Message-ID: <5AAA838B.70208@oracle.com> Hi Per, Thanks for the review. /Erik On 2018-03-15 15:25, Per Liden wrote: > Looks good! > > /Per > > On 03/15/2018 02:06 PM, Erik ?sterlund wrote: >> Hi, >> >> Newer compilers are not happy with arraycopy since recent changes >> made RawAccessBarrier accept 5 arguments instead of 3, and there was >> an unused path in Access for HeapWord* arraycopy, that was not >> expanded, that still tried to call an internal arraycopy function >> with 3 arguments. >> >> The problematic unexpanded overload in RawAccessBarrier for HeapWord* >> addresses has been removed. Instead, the HeapWord logic for >> performing Raw oop arraycopy when it is not known whether compressed >> oops is used or not has been moved to an earlier stage (reduce_types) >> to better reflect how this logic is consistently handled for the >> other accessors with the same HeapWord* logic, to make the code more >> symmetric. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199685 >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199685/webrev.00/ >> >> Thanks, >> /Erik From stewartd.qdt at qualcommdatacenter.com Thu Mar 15 14:42:04 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 15 Mar 2018 14:42:04 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> Message-ID: I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. Perhaps I am to manually upload the final patch to cr.openjdk.java.net? Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). 
Daniel # HG changeset patch # User dstewart # Date 1521123686 0 # Thu Mar 15 14:21:26 2018 +0000 # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to StringTableVerifyTest.java Reviewed-by: coleenp, kvn diff --git a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java --- a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java @@ -35,7 +35,7 @@ public class StringTableVerifyTest { public static void main(String[] args) throws Exception { - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); + ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions", "-XX:+VerifyStringTableAtExit", "-version"); OutputAnalyzer output = new OutputAnalyzer(pb.start()); output.shouldHaveExitValue(0); } -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Monday, March 12, 2018 12:28 AM To: coleen.phillimore at oracle.com; stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Hi Coleen, Daniel, On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: > > Hi I didn't mean that you should push.? I wanted you to do an hg > commit and I would import the changeset and push for you.?? I don't > see an openjdk author name for you.? Have you signed the contributor agreement? Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). Daniel: as Coleen indicated you can't do the hg push as you are not a Committer, so just create the changeset using "hg commit" and ensure the commit message has the correct format [1] e.g. from a previous change of yours: 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java Summary: Modified test search strings to those guaranteed to exist in the passing cases. Reviewed-by: dholmes, jgeorge Thanks, David [1] http://openjdk.java.net/guide/producingChangeset.html#create > Thanks, > Coleen > > On 3/9/18 11:23 PM, stewartd.qdt wrote: >> I'd love to Coleen, but having never pushed before, I'm running into >> issues. It seems I haven't figured out the magic set of steps yet. I >> get that I am unable to lock jdk/hs as it is Read Only. >> >> I'm off for the next Thursday. So, if it can wait until then, I'm >> happy to keep trying to figure it out. If you'd like, you may go >> ahead and take the webrev. It seems that is what others have done for >> other patches I made. But either way I'll have to figure this out. >> >> Thanks, >> Daniel >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of coleen.phillimore at oracle.com >> Sent: Friday, March 9, 2018 7:55 PM >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Looks good.? Thank you for fixing this. >> Can you hg commit the patch with us as reviewers and I'll push it? >> thanks, >> Coleen >> >> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> runtime/stringtable/StringTableVerifyTest.java. 
This test uses the >>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>> >>> >>> >>> This test currently fails our JTReg testing on an AArch64 machine. >>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>> >>> The bug report is filed at [2]. >>> >>> >>> >>> I am happy to modify the patch as necessary. >>> >>> >>> >>> Regards, >>> >>> >>> >>> Daniel Stewart >>> >>> >>> >>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>> >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>> >>> >>> > From erik.osterlund at oracle.com Thu Mar 15 15:18:24 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 15 Mar 2018 16:18:24 +0100 Subject: RFR: 8199696: Remove Runtime1::arraycopy Message-ID: <5AAA8EC0.8050504@oracle.com> Hi, The Runtime1::arraycopy stub appears to only be used on S390 because there is no StubRoutines::generic_arraycopy() provided. However, C1 could then simply take a slow path and call its arraycopy stub that performs a native call. Then this logic may be removed. I added an assert on each platform that I think should have a generic_arraycopy() stub, and added a branch to the slow path on S390 if there is no such stub. If a stub is eventually added on S390, it should automatically pick that up. Webrev: http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ Bug ID: https://bugs.openjdk.java.net/browse/JDK-8199696 Thanks, /Erik From mehra.ashutosh at gmail.com Thu Mar 15 13:46:09 2018 From: mehra.ashutosh at gmail.com (ashutosh mehra) Date: Thu, 15 Mar 2018 19:16:09 +0530 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: <76752255-B9D9-4E24-BFFD-032DFB356FE3@oracle.com> References: <76752255-B9D9-4E24-BFFD-032DFB356FE3@oracle.com> Message-ID: I am assuming you would be interested in only cgroup related mount points in /proc/self/mountinfo: 25 18 0:20 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:8 - tmpfs tmpfs ro,seclabel,mode=755 26 25 0:21 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:9 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 28 25 0:23 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:10 - cgroup cgroup rw,cpuacct,cpu 29 25 0:24 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,memory 30 25 0:25 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,hugetlb 31 25 0:26 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,cpuset 32 25 0:27 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,perf_event 33 25 0:28 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,net_prio,net_cls 34 25 0:29 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,blkio 35 25 0:30 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,freezer 36 25 0:31 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,devices 37 25 0:32 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,pids Linux distro is RHEL Workstation 7.4 Kernel level - 3.10.0-693.17.1.el7.x86_64 I have cgroup v1. I will check with Java 11 as well. 
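The cpu/cpuacct line in the mountinfo above shows the super options as "rw,cpuacct,cpu", i.e. the two controller names joined in the opposite order from the "cpu,cpuacct" literal that JDK 10 looks for, which matches Matthias' and Bob's diagnosis. An order-insensitive check looks roughly like the sketch below; this is an illustration of the idea only, not the code in osContainer_linux.cpp, and the function names are made up.

#include <cstdio>
#include <cstring>

// Sketch: does a comma-separated cgroup option string (e.g. "rw,cpuacct,cpu"
// taken from a /proc/self/mountinfo line) contain a given controller name,
// regardless of the order the kernel happened to print the controllers in?
static bool has_controller(const char* options, const char* name) {
  char buf[256];
  strncpy(buf, options, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';
  for (char* tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ",")) {
    if (strcmp(tok, name) == 0) {
      return true;
    }
  }
  return false;
}

int main() {
  const char* opts = "rw,cpuacct,cpu"; // ordering seen in the mountinfo above
  // Matching the joined literal "cpu,cpuacct" misses this line;
  // matching the individual tokens does not.
  printf("cpu=%d cpuacct=%d memory=%d\n",
         has_controller(opts, "cpu"),
         has_controller(opts, "cpuacct"),
         has_controller(opts, "memory"));
  return 0;
}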
Regards, Ashutosh On Thu, Mar 15, 2018 at 6:22 PM, Bob Vandette wrote: > Yes, please send us the contents of /proc/self/mountinfo. > > What Linux operating system distro are you running? What kernel version? > > You might want to try the JDK 11 Early Access build. It provides a bit > more logging for container detection. > You will at least see which subsystem was not located. > > http://jdk.java.net/11/ > > Are you running cgroups version 1 or 2? > > > Bob. > > > > On Mar 15, 2018, at 8:27 AM, David Holmes > wrote: > > > > Moving to hotspot-dev > > > > On 15/03/2018 8:51 PM, ashutosh mehra wrote: > >> When I run jdk-10+46 build in a docker container, I don't see > MaxHeapSize > >> being adjusted based on container memory limit. > >> Command to run docker container with 2G memory, 2 CPUs: > >> $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v > >> /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 > >> Once inside the container ran the following command: > >> # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions > >> -XX:+PrintFlagsFinal -version | grep MaxHeapSize > >> Output: > >> size_t MaxHeapSize = 2015363072 > >> {product} {ergonomic} > >> openjdk version "10" 2018-03-20 > >> OpenJDK Runtime Environment 18.3 (build 10+46) > >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > >> When I used -Xlog:os+container=trace option, I get following > information: > >> # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version > >> [0.001s][trace][os,container] OSContainer::init: Initializing Container > >> Support > >> [0.001s][debug][os,container] Required cgroup subsystems not found > >> openjdk version "10" 2018-03-20 > >> OpenJDK Runtime Environment 18.3 (build 10+46) > >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) > >> Following subsystems are present in the docker container: > >> # ls /sys/fs/cgroup/ > >> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb > >> memory net_cls net_prio net_prio,net_cls perf_event pids systemd > >> As far as I understand, JVM is using only memory, cpu and cpuset > subsystems > >> which are present in my system. Not sure why is it reporting "Required > >> cgroup subsystems not found". > >> Any idea what could be wrong here? Are there other debug options to > figure > >> out what is going wrong? > > > > What does /proc/self/mountinfo show? > > > > David > > > >> Regards, > >> Ashutosh Mehra > > From mehra.ashutosh at gmail.com Thu Mar 15 13:57:02 2018 From: mehra.ashutosh at gmail.com (ashutosh mehra) Date: Thu, 15 Mar 2018 19:27:02 +0530 Subject: container support not enabled due to required cgroup subsystems not found In-Reply-To: References: <76752255-B9D9-4E24-BFFD-032DFB356FE3@oracle.com> Message-ID: JDK 11 seems to be working fine. It detects container memory limit and sets the heap size accordingly. 
Regards, Ashutosh On Thu, Mar 15, 2018 at 7:16 PM, ashutosh mehra wrote: > I am assuming you would be interested in only cgroup related mount points > in /proc/self/mountinfo: > > 25 18 0:20 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:8 - tmpfs tmpfs > ro,seclabel,mode=755 > 26 25 0:21 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime > shared:9 - cgroup cgroup rw,xattr,release_agent=/usr/ > lib/systemd/systemd-cgroups-agent,name=systemd > 28 25 0:23 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime > shared:10 - cgroup cgroup rw,cpuacct,cpu > 29 25 0:24 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime > shared:11 - cgroup cgroup rw,memory > 30 25 0:25 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime > shared:12 - cgroup cgroup rw,hugetlb > 31 25 0:26 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime > shared:13 - cgroup cgroup rw,cpuset > 32 25 0:27 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime > shared:14 - cgroup cgroup rw,perf_event > 33 25 0:28 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime > shared:15 - cgroup cgroup rw,net_prio,net_cls > 34 25 0:29 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime > shared:16 - cgroup cgroup rw,blkio > 35 25 0:30 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime > shared:17 - cgroup cgroup rw,freezer > 36 25 0:31 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime > shared:18 - cgroup cgroup rw,devices > 37 25 0:32 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime > shared:19 - cgroup cgroup rw,pids > > Linux distro is RHEL Workstation 7.4 > Kernel level - 3.10.0-693.17.1.el7.x86_64 > > I have cgroup v1. > > I will check with Java 11 as well. > > Regards, > Ashutosh > > On Thu, Mar 15, 2018 at 6:22 PM, Bob Vandette > wrote: > >> Yes, please send us the contents of /proc/self/mountinfo. >> >> What Linux operating system distro are you running? What kernel version? >> >> You might want to try the JDK 11 Early Access build. It provides a bit >> more logging for container detection. >> You will at least see which subsystem was not located. >> >> http://jdk.java.net/11/ >> >> Are you running cgroups version 1 or 2? >> >> >> Bob. >> >> >> > On Mar 15, 2018, at 8:27 AM, David Holmes >> wrote: >> > >> > Moving to hotspot-dev >> > >> > On 15/03/2018 8:51 PM, ashutosh mehra wrote: >> >> When I run jdk-10+46 build in a docker container, I don't see >> MaxHeapSize >> >> being adjusted based on container memory limit. 
>> >> Command to run docker container with 2G memory, 2 CPUs: >> >> $ docker run -m2g --memory-swap=2g --cpus=2 -it --rm -v >> >> /home/ashu/data/builds/openjdk/jdk-10+46:/root/jdk-10 ubuntu:16.04 >> >> Once inside the container ran the following command: >> >> # /root/jdk-10//bin/java -XX:+UnlockDiagnosticVMOptions >> >> -XX:+PrintFlagsFinal -version | grep MaxHeapSize >> >> Output: >> >> size_t MaxHeapSize = 2015363072 >> >> {product} {ergonomic} >> >> openjdk version "10" 2018-03-20 >> >> OpenJDK Runtime Environment 18.3 (build 10+46) >> >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> >> When I used -Xlog:os+container=trace option, I get following >> information: >> >> # /root/jdk-10//bin/java "-Xlog:os+container=trace" -version >> >> [0.001s][trace][os,container] OSContainer::init: Initializing Container >> >> Support >> >> [0.001s][debug][os,container] Required cgroup subsystems not found >> >> openjdk version "10" 2018-03-20 >> >> OpenJDK Runtime Environment 18.3 (build 10+46) >> >> OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode) >> >> Following subsystems are present in the docker container: >> >> # ls /sys/fs/cgroup/ >> >> blkio cpu cpuacct cpuacct,cpu cpuset devices freezer hugetlb >> >> memory net_cls net_prio net_prio,net_cls perf_event pids systemd >> >> As far as I understand, JVM is using only memory, cpu and cpuset >> subsystems >> >> which are present in my system. Not sure why is it reporting "Required >> >> cgroup subsystems not found". >> >> Any idea what could be wrong here? Are there other debug options to >> figure >> >> out what is going wrong? >> > >> > What does /proc/self/mountinfo show? >> > >> > David >> > >> >> Regards, >> >> Ashutosh Mehra >> >> > From martin.doerr at sap.com Thu Mar 15 17:16:38 2018 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 15 Mar 2018 17:16:38 +0000 Subject: 8199696: Remove Runtime1::arraycopy In-Reply-To: <5AAA8EC0.8050504@oracle.com> References: <5AAA8EC0.8050504@oracle.com> Message-ID: <5408c73dc0fc49dcb3e858aefcf59233@sap.com> Hi Erik, PPC64 and s390 parts look good. Arraycopy should get removed from c1_Runtime1.hpp, too. Best regards, Martin -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Erik ?sterlund Sent: Donnerstag, 15. M?rz 2018 16:18 To: hotspot-dev developers Subject: RFR: 8199696: Remove Runtime1::arraycopy Hi, The Runtime1::arraycopy stub appears to only be used on S390 because there is no StubRoutines::generic_arraycopy() provided. However, C1 could then simply take a slow path and call its arraycopy stub that performs a native call. Then this logic may be removed. I added an assert on each platform that I think should have a generic_arraycopy() stub, and added a branch to the slow path on S390 if there is no such stub. If a stub is eventually added on S390, it should automatically pick that up. 
Webrev: http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ Bug ID: https://bugs.openjdk.java.net/browse/JDK-8199696 Thanks, /Erik
From volker.simonis at gmail.com Thu Mar 15 17:20:32 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 15 Mar 2018 18:20:32 +0100 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) Message-ID: Hi, can I please have a review for the following small fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ https://bugs.openjdk.java.net/browse/JDK-8199698 The fix is actually trivial: just defining the corresponding "NOINLINE" macro for xlC. Unfortunately its syntax requirements are a little different for method declarations, compared to the other platforms (it has to be placed AFTER the method declarator instead of BEFORE it). Fortunately, there are no differences for method definitions, so we can safely move the NOINLINE attributes from the method declarations in allocation.hpp to the method definitions in allocation.inline.hpp. Thank you and best regards, Volker PS: for true C++ enthusiasts I've also included the whole story of why this happens and why it happens just now, right after 8199275 :) Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the inclusion of "allocation.inline.hpp" in some .hpp files (e.g. constantPool.hpp) by "allocation.hpp". "allocation.inline.hpp" contains not only the definition of some inline methods (as the name implies) but also the definition of some template methods (notably the various CHeapObj<>::operator new() versions). Template functions are an orthogonal concept with regard to inline functions, but they share one implementation commonality: at their call sites, the compiler absolutely needs the corresponding function definition. Otherwise it can either not inline the corresponding function in the case of inline functions or it won't even be able to create the corresponding instantiation in the case of a template function. For this reason, template functions and methods are defined in their corresponding .inline.hpp files in HotSpot (even if they are not subject to inlining). This is especially true for the aforementioned CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" in allocation.hpp but defined in allocation.inline.hpp. Now every call site of these CHeapObj<>::new() operators which only includes "allocation.hpp" will emit a call to the corresponding instantiation of the CHeapObj<>:: new operator, but won't be able to actually create that instantiation (simply because it doesn't see the corresponding definition in allocation.inline.hpp). On the other hand, call sites of a CHeapObj<>:: new operator which include allocation.inline.hpp will instantiate the required version in the current compilation unit (or even inline that method instance if it is not flagged as "NOINLINE"). If a compiler doesn't honor the "NOINLINE" attribute (or has an empty definition for the NOINLINE macro like xlC), it can potentially inline all the various template instances of CHeapObj<>:: new at all call sites if their implementation is available. This is exactly what has happened on AIX/xlC before change 8199275, with the effect that the resulting object files contained no single instance of the corresponding new operators.
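The underlying language rule can be seen in a stripped-down sketch that has nothing to do with HotSpot (all file and function names below are invented for illustration):

  // maker.hpp -- declaration only; includers never see the template definition.
  template <class T>
  T* make_default();

  // maker.cpp -- the definition lives here, so only instantiations requested
  // inside *this* translation unit are ever emitted into an object file.
  #include "maker.hpp"
  template <class T>
  T* make_default() { return new T(); }
  template int* make_default<int>();    // explicit instantiation: int is covered

  // user.cpp -- compiles fine, because the declaration is visible ...
  #include "maker.hpp"
  double* d = make_default<double>();   // ... but fails to link ("undefined
                                        // reference to make_default<double>()"),
                                        // because no translation unit that can
                                        // see the definition instantiated it.

Putting the definition into a header that every caller includes (an .inline.hpp in HotSpot terms) is what guarantees the required instantiations exist somewhere.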
After change 8199275, the template definitions of the CHeapObj<>:: new operators aren't available any more at all call sites (because the inclusion of allocation.inline.hpp was removed from some other .hpp files which were included transitively before). As a result, the xlC compiler will emit calls to the corresponding instantiations instead of inlining them. But at all other call sites of the corresponding operators, the operator instantiations are still inlined (because xlC does not support "NOINLINE"), so we end up with link errors in libjvm.so because of missing CHeapObj<>::new instances. As a general rule of thumb, we should always make template method definitions available at all call sites, by placing them into corresponding .inline.hpp files and including them appropriately. Otherwise, we might end up without the required instantiations at link time. Unfortunately, there's no compile time check to enforce this requirement. But we can misuse the "inline" keyword here, by attributing template functions/methods as "inline". This way, the compiler will warn us if a template definition isn't available at a specific call site. Of course this trick doesn't work if we specifically want to define template functions/methods which shouldn't be inlined, like in the current case :)
From irogers at google.com Thu Mar 15 17:49:53 2018 From: irogers at google.com (Ian Rogers) Date: Thu, 15 Mar 2018 17:49:53 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <1521102348.2448.25.camel@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> Message-ID: +hotspot-gc-dev On Thu, Mar 15, 2018 at 1:25 AM Thomas Schatzl wrote: > Hi, > > On Thu, 2018-03-15 at 01:00 +0000, Ian Rogers wrote: > > An old data point on how large a critical region should be comes from > > java.nio.Bits. In JDK 9 the code migrated into unsafe, but in JDK 8 > > the copies within a critical region were bound at most copying 1MB: > > http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/ > > native/java/nio/Bits.c#l88 This is inconsistent with Deflater and > > ObjectOutputStream which both allow unlimited arrays and thereby > > critical region sizes. > > > > In JDK 9 the copies starve the garbage collector in nio Bits too as > > there is no 1MB limit to the copy sizes: > > http://hg.openjdk.java.net/jdk/jdk/rev/f70e100d3195 > > which came from: > > https://bugs.openjdk.java.net/browse/JDK-8149596 > > > > Perhaps this is a regression not demonstrated due to the testing > > challenge. > > [...] > > It doesn't seem unreasonable to have the loops for the copies occur > > in 1MB chunks but JDK-8149596 lost this and so I'm confused on what > > the HotSpot stand point is. > > Please file a bug (seems to be a core-libs/java.nio regression?), > preferably with some kind of regression test. Also file enhancements (I > would guess) for the other cases allowing unlimited arrays. > I don't have perms to file bugs; there's some catch-22 scenario in getting the permissions. Happy to have a bug filed or to file were that not an issue. Happy to create a test case but can't see any others for TTSP issues. This feels like a potential use case for jmh, perhaps run the benchmark while having a separate thread run GC bench.
Should there be a bug to add, in debug mode, a TTSP watcher thread whose job it is to bring "random" threads into safepoints and report on tardy ones? Should there be a bug to warn on being in a JNI critical for more than just a short period? Seems like there should be a bug on Unsafe.copyMemory and Unsafe.copySwapMemory having TTSP issues. Seems like there should be a bug on all uses of critical that don't chunk their critical region work based on some bound (like 1MB chunks for nio Bits)? How are these bounds set? A past reference that I've lost is in having the bound be the equivalent of 65535 bytecodes due to the expectation of GC work at least once in a method or on a loop backedge - I thought this was in a spec somewhere but now I can't find it. The bytecode size feels as arbitrary as 1MB, a time period would be better but that can depend on the kind of GC you want as delays with concurrent GC mean more than non-concurrent. Clearly the chunk size shouldn't just be 0, but this appears to currently be the norm in the JDK. The original reason for coming here was a 140x slow down in -Xcheck:jni in Deflater.deflate There are a few options there that its useful to enumerate: 1) rewrite in Java but there are correctness and open source related issues 2) remove underflow/overflow protection from critical arrays (revert JDK-6311046 or perhaps bound protection to arrays of a particular small size) - this removes checking and doesn't deal with TTSP 3) add a critical array slice API to JNI so that copies with -Xcheck:jni aren't unbounded (martinrb@ proposed this) - keeps checking but doesn't deal with TTSP 4) rewrite primitive array criticals with GetArrayRegion as O(n) beats the "silent killer" TTSP (effectively deprecate the critical APIs) In general (ie not just the deflate case) I think (1) is the most preferable. (2) and (3) both have TTSP issues. (4) isn't great performance wise, which motivates more use of approach (1), but I think deprecating criticals may just be the easiest and sanest way forward. I think that discussion is worth having on an e-mail thread rather than a bug. > Long TTSP is a performance bug as any other. > > > In a way criticals are better than unsafe as they may > > pin the memory and not starve GC, which shenandoah does. > > (Region based) Object pinning has its own share of problems: > > - only (relatively) easily implemented in region based collectors > > - may slow down pause a bit in presence of pinned regions/objects (for > non-concurrent copying collectors) > > - excessive use of pinning may cause OOME and VM exit probably earlier > than the gc locker. GC locker seems to provide a more gradual > degradation. E.g. pinning regions typically makes these regions > unavailable for allocation. > I.e. you still should not use it for many, very long living objects. > Of course this somewhat depends on the sophistication of the > implementation. > > I think region based pinning would be a good addition to other > collectors than Shenandoah too. It has been on our minds for a long > time, but there are so many other more important issues :), so of > course we are eager to see contributions in this area. ;) > > If you are interested on working on this, please ping us on hotspot-gc- > dev for implementation ideas to get you jump-started. > > Thanks, > Thomas > I'd rather deprecate criticals than build upon the complexity, but I'm very glad this is a concern. 
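To make the chunking idea concrete: the 1MB bound from the old Bits.c amounts to a loop of the following shape (a sketch only, not JDK code; the function name and the exact bound are illustrative):

  #include <jni.h>
  #include <cstring>

  static const jsize kChunk = 1024 * 1024;   // 1MB per critical section

  // Copy a Java byte[] into a native buffer without holding a single critical
  // region for the whole array: the GC locker / safepoint only ever has to
  // wait for at most one chunk's worth of memcpy.
  static void copy_bytes_chunked(JNIEnv* env, jbyteArray src, char* dst, jsize len) {
    for (jsize off = 0; off < len; ) {
      jsize n = (len - off) > kChunk ? kChunk : (len - off);
      jbyte* p = static_cast<jbyte*>(env->GetPrimitiveArrayCritical(src, NULL));
      if (p == NULL) return;                                  // OOME is pending
      memcpy(dst + off, p + off, static_cast<size_t>(n));
      env->ReleasePrimitiveArrayCritical(src, p, JNI_ABORT);  // read-only: no copy-back
      off += n;
    }
  }

The same shape works with GetByteArrayRegion instead of the critical pair, trading the pinning/GC-locker interaction for an extra copy per chunk.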
Thanks, Ian From vladimir.kozlov at oracle.com Thu Mar 15 17:53:24 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Mar 2018 10:53:24 -0700 Subject: RFR: 8199696: Remove Runtime1::arraycopy In-Reply-To: <5AAA8EC0.8050504@oracle.com> References: <5AAA8EC0.8050504@oracle.com> Message-ID: <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> Hi Erik, I think it is historical from time when we had Client VM with C1 only and not shared runtime. Shared, x86 and Sparc changes looks good to me. What platforms you tested on? Thanks, Vladimir On 3/15/18 8:18 AM, Erik ?sterlund wrote: > Hi, > > The Runtime1::arraycopy stub appears to only be used on S390 because > there is no StubRoutines::generic_arraycopy() provided. However, C1 > could then simply take a slow path and call its arraycopy stub that > performs a native call. Then this logic may be removed. > > I added an assert on each platform that I think should have a > generic_arraycopy() stub, and added a branch to the slow path on S390 if > there is no such stub. If a stub is eventually added on S390, it should > automatically pick that up. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ > > Bug ID: > https://bugs.openjdk.java.net/browse/JDK-8199696 > > Thanks, > /Erik From stewartd.qdt at qualcommdatacenter.com Thu Mar 15 19:43:55 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 15 Mar 2018 19:43:55 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> Message-ID: <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> Coleen, David, Ok ... perhaps it really is as simple as I thought it should be .... http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ I think that is what you wanted. Daniel -----Original Message----- From: stewartd.qdt Sent: Thursday, March 15, 2018 10:42 AM To: 'David Holmes' ; coleen.phillimore at oracle.com; stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: RE: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. Perhaps I am to manually upload the final patch to cr.openjdk.java.net? Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). 
Daniel # HG changeset patch # User dstewart # Date 1521123686 0 # Thu Mar 15 14:21:26 2018 +0000 # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to StringTableVerifyTest.java Reviewed-by: coleenp, kvn diff --git a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java --- a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java @@ -35,7 +35,7 @@ public class StringTableVerifyTest { public static void main(String[] args) throws Exception { - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); + ProcessBuilder pb = + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" + , "-XX:+VerifyStringTableAtExit", "-version"); OutputAnalyzer output = new OutputAnalyzer(pb.start()); output.shouldHaveExitValue(0); } -----Original Message----- From: David Holmes [mailto:david.holmes at oracle.com] Sent: Monday, March 12, 2018 12:28 AM To: coleen.phillimore at oracle.com; stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Hi Coleen, Daniel, On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: > > Hi I didn't mean that you should push.? I wanted you to do an hg > commit and I would import the changeset and push for you.?? I don't > see an openjdk author name for you.? Have you signed the contributor agreement? Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). Daniel: as Coleen indicated you can't do the hg push as you are not a Committer, so just create the changeset using "hg commit" and ensure the commit message has the correct format [1] e.g. from a previous change of yours: 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java Summary: Modified test search strings to those guaranteed to exist in the passing cases. Reviewed-by: dholmes, jgeorge Thanks, David [1] http://openjdk.java.net/guide/producingChangeset.html#create > Thanks, > Coleen > > On 3/9/18 11:23 PM, stewartd.qdt wrote: >> I'd love to Coleen, but having never pushed before, I'm running into >> issues. It seems I haven't figured out the magic set of steps yet. I >> get that I am unable to lock jdk/hs as it is Read Only. >> >> I'm off for the next Thursday. So, if it can wait until then, I'm >> happy to keep trying to figure it out. If you'd like, you may go >> ahead and take the webrev. It seems that is what others have done for >> other patches I made. But either way I'll have to figure this out. >> >> Thanks, >> Daniel >> >> -----Original Message----- >> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >> Behalf Of coleen.phillimore at oracle.com >> Sent: Friday, March 9, 2018 7:55 PM >> To: hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Looks good.? Thank you for fixing this. >> Can you hg commit the patch with us as reviewers and I'll push it? >> thanks, >> Coleen >> >> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>> Please review this webrev [1] which attempts to fix a test error in >>> runtime/stringtable/StringTableVerifyTest.java. 
This test uses the >>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>> >>> >>> >>> This test currently fails our JTReg testing on an AArch64 machine. >>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>> >>> The bug report is filed at [2]. >>> >>> >>> >>> I am happy to modify the patch as necessary. >>> >>> >>> >>> Regards, >>> >>> >>> >>> Daniel Stewart >>> >>> >>> >>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>> >>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>> >>> >>> > From coleen.phillimore at oracle.com Thu Mar 15 19:57:19 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 15 Mar 2018 15:57:19 -0400 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> Message-ID: Yes, it is easy but instead of -u can you do commit -u dstewart (ie your openjdk username). Then generate the webrev again and I'll sponsor it. thanks, Coleen On 3/15/18 3:43 PM, stewartd.qdt wrote: > Coleen, David, > > Ok ... perhaps it really is as simple as I thought it should be .... > > http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ > > I think that is what you wanted. > > Daniel > > -----Original Message----- > From: stewartd.qdt > Sent: Thursday, March 15, 2018 10:42 AM > To: 'David Holmes' ; coleen.phillimore at oracle.com; stewartd.qdt ; hotspot-dev at openjdk.java.net > Subject: RE: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > > I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. > > However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. > > I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. Perhaps I am to manually upload the final patch to cr.openjdk.java.net? > > Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). 
> > Daniel > > # HG changeset patch > # User dstewart > # Date 1521123686 0 > # Thu Mar 15 14:21:26 2018 +0000 > # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 > # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd > 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to StringTableVerifyTest.java > Reviewed-by: coleenp, kvn > > diff --git a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > --- a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > @@ -35,7 +35,7 @@ > > public class StringTableVerifyTest { > public static void main(String[] args) throws Exception { > - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); > + ProcessBuilder pb = > + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" > + , "-XX:+VerifyStringTableAtExit", "-version"); > OutputAnalyzer output = new OutputAnalyzer(pb.start()); > output.shouldHaveExitValue(0); > } > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Monday, March 12, 2018 12:28 AM > To: coleen.phillimore at oracle.com; stewartd.qdt ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > > Hi Coleen, Daniel, > > On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: >> Hi I didn't mean that you should push.? I wanted you to do an hg >> commit and I would import the changeset and push for you.?? I don't >> see an openjdk author name for you.? Have you signed the contributor agreement? > Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). > > Daniel: as Coleen indicated you can't do the hg push as you are not a Committer, so just create the changeset using "hg commit" and ensure the commit message has the correct format [1] e.g. from a previous change of > yours: > > 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java > Summary: Modified test search strings to those guaranteed to exist in the passing cases. > Reviewed-by: dholmes, jgeorge > > Thanks, > David > > [1] http://openjdk.java.net/guide/producingChangeset.html#create > > > >> Thanks, >> Coleen >> >> On 3/9/18 11:23 PM, stewartd.qdt wrote: >>> I'd love to Coleen, but having never pushed before, I'm running into >>> issues. It seems I haven't figured out the magic set of steps yet. I >>> get that I am unable to lock jdk/hs as it is Read Only. >>> >>> I'm off for the next Thursday. So, if it can wait until then, I'm >>> happy to keep trying to figure it out. If you'd like, you may go >>> ahead and take the webrev. It seems that is what others have done for >>> other patches I made. But either way I'll have to figure this out. >>> >>> Thanks, >>> Daniel >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>> Behalf Of coleen.phillimore at oracle.com >>> Sent: Friday, March 9, 2018 7:55 PM >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8199425: JTReg failure: >>> runtime/stringtable/StringTableVerifyTest.java >>> >>> Looks good.? Thank you for fixing this. >>> Can you hg commit the patch with us as reviewers and I'll push it? 
>>> thanks, >>> Coleen >>> >>> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>>> Please review this webrev [1] which attempts to fix a test error in >>>> runtime/stringtable/StringTableVerifyTest.java. This test uses the >>>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>>> >>>> >>>> >>>> This test currently fails our JTReg testing on an AArch64 machine. >>>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>>> >>>> The bug report is filed at [2]. >>>> >>>> >>>> >>>> I am happy to modify the patch as necessary. >>>> >>>> >>>> >>>> Regards, >>>> >>>> >>>> >>>> Daniel Stewart >>>> >>>> >>>> >>>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>>> >>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>>> >>>> >>>> From stewartd.qdt at qualcommdatacenter.com Thu Mar 15 20:34:25 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 15 Mar 2018 20:34:25 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> Message-ID: <4d7216a89f7c489bb0bf904b067b1a4b@NASANEXM01E.na.qualcomm.com> Ah, yes, sorry about that. Here's an upate. http://cr.openjdk.java.net/~dstewart/8199425/webrev.03/ Daniel -----Original Message----- From: coleen.phillimore at oracle.com [mailto:coleen.phillimore at oracle.com] Sent: Thursday, March 15, 2018 3:57 PM To: stewartd.qdt ; David Holmes ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Yes, it is easy but instead of -u can you do commit -u dstewart (ie your openjdk username). Then generate the webrev again and I'll sponsor it. thanks, Coleen On 3/15/18 3:43 PM, stewartd.qdt wrote: > Coleen, David, > > Ok ... perhaps it really is as simple as I thought it should be .... > > > http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ > > I think that is what you wanted. > > Daniel > > -----Original Message----- > From: stewartd.qdt > Sent: Thursday, March 15, 2018 10:42 AM > To: 'David Holmes' ; > coleen.phillimore at oracle.com; stewartd.qdt > ; hotspot-dev at openjdk.java.net > Subject: RE: RFR: 8199425: JTReg failure: > runtime/stringtable/StringTableVerifyTest.java > > I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. > > However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. > > I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. Perhaps I am to manually upload the final patch to cr.openjdk.java.net? > > Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). 
> > Daniel > > # HG changeset patch > # User dstewart > # Date 1521123686 0 > # Thu Mar 15 14:21:26 2018 +0000 > # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 > # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd > 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to > StringTableVerifyTest.java > Reviewed-by: coleenp, kvn > > diff --git > a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > --- > a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java > +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.jav > +++ a > @@ -35,7 +35,7 @@ > > public class StringTableVerifyTest { > public static void main(String[] args) throws Exception { > - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); > + ProcessBuilder pb = > + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" > + , "-XX:+VerifyStringTableAtExit", "-version"); > OutputAnalyzer output = new OutputAnalyzer(pb.start()); > output.shouldHaveExitValue(0); > } > > > > > -----Original Message----- > From: David Holmes [mailto:david.holmes at oracle.com] > Sent: Monday, March 12, 2018 12:28 AM > To: coleen.phillimore at oracle.com; stewartd.qdt > ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8199425: JTReg failure: > runtime/stringtable/StringTableVerifyTest.java > > Hi Coleen, Daniel, > > On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: >> Hi I didn't mean that you should push.? I wanted you to do an hg >> commit and I would import the changeset and push for you.?? I don't >> see an openjdk author name for you.? Have you signed the contributor agreement? > Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). > > Daniel: as Coleen indicated you can't do the hg push as you are not a > Committer, so just create the changeset using "hg commit" and ensure > the commit message has the correct format [1] e.g. from a previous > change of > yours: > > 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java > Summary: Modified test search strings to those guaranteed to exist in the passing cases. > Reviewed-by: dholmes, jgeorge > > Thanks, > David > > [1] http://openjdk.java.net/guide/producingChangeset.html#create > > > >> Thanks, >> Coleen >> >> On 3/9/18 11:23 PM, stewartd.qdt wrote: >>> I'd love to Coleen, but having never pushed before, I'm running into >>> issues. It seems I haven't figured out the magic set of steps yet. I >>> get that I am unable to lock jdk/hs as it is Read Only. >>> >>> I'm off for the next Thursday. So, if it can wait until then, I'm >>> happy to keep trying to figure it out. If you'd like, you may go >>> ahead and take the webrev. It seems that is what others have done >>> for other patches I made. But either way I'll have to figure this out. >>> >>> Thanks, >>> Daniel >>> >>> -----Original Message----- >>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>> Behalf Of coleen.phillimore at oracle.com >>> Sent: Friday, March 9, 2018 7:55 PM >>> To: hotspot-dev at openjdk.java.net >>> Subject: Re: RFR: 8199425: JTReg failure: >>> runtime/stringtable/StringTableVerifyTest.java >>> >>> Looks good.? Thank you for fixing this. >>> Can you hg commit the patch with us as reviewers and I'll push it? 
>>> thanks, >>> Coleen >>> >>> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>>> Please review this webrev [1] which attempts to fix a test error in >>>> runtime/stringtable/StringTableVerifyTest.java. This test uses the >>>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>>> >>>> >>>> >>>> This test currently fails our JTReg testing on an AArch64 machine. >>>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>>> >>>> The bug report is filed at [2]. >>>> >>>> >>>> >>>> I am happy to modify the patch as necessary. >>>> >>>> >>>> >>>> Regards, >>>> >>>> >>>> >>>> Daniel Stewart >>>> >>>> >>>> >>>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>>> >>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>>> >>>> >>>> From coleen.phillimore at oracle.com Thu Mar 15 21:16:08 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 15 Mar 2018 17:16:08 -0400 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <4d7216a89f7c489bb0bf904b067b1a4b@NASANEXM01E.na.qualcomm.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> <4d7216a89f7c489bb0bf904b067b1a4b@NASANEXM01E.na.qualcomm.com> Message-ID: <9164c8ef-70d8-f46e-1aee-27f4e7a8dd53@oracle.com> Thank you for fixing this. It is pushed now. Coleen On 3/15/18 4:34 PM, stewartd.qdt wrote: > Ah, yes, sorry about that. Here's an upate. > > http://cr.openjdk.java.net/~dstewart/8199425/webrev.03/ > > Daniel > > -----Original Message----- > From: coleen.phillimore at oracle.com [mailto:coleen.phillimore at oracle.com] > Sent: Thursday, March 15, 2018 3:57 PM > To: stewartd.qdt ; David Holmes ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java > > > Yes, it is easy but instead of -u can you do commit -u dstewart (ie your openjdk username). > Then generate the webrev again and I'll sponsor it. > thanks, > Coleen > > On 3/15/18 3:43 PM, stewartd.qdt wrote: >> Coleen, David, >> >> Ok ... perhaps it really is as simple as I thought it should be .... >> >> >> http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ >> >> I think that is what you wanted. >> >> Daniel >> >> -----Original Message----- >> From: stewartd.qdt >> Sent: Thursday, March 15, 2018 10:42 AM >> To: 'David Holmes' ; >> coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: RE: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. >> >> However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. >> >> I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. 
Perhaps I am to manually upload the final patch to cr.openjdk.java.net? >> >> Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). >> >> Daniel >> >> # HG changeset patch >> # User dstewart >> # Date 1521123686 0 >> # Thu Mar 15 14:21:26 2018 +0000 >> # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 >> # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd >> 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java >> Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to >> StringTableVerifyTest.java >> Reviewed-by: coleenp, kvn >> >> diff --git >> a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> --- >> a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.jav >> +++ a >> @@ -35,7 +35,7 @@ >> >> public class StringTableVerifyTest { >> public static void main(String[] args) throws Exception { >> - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); >> + ProcessBuilder pb = >> + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" >> + , "-XX:+VerifyStringTableAtExit", "-version"); >> OutputAnalyzer output = new OutputAnalyzer(pb.start()); >> output.shouldHaveExitValue(0); >> } >> >> >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Monday, March 12, 2018 12:28 AM >> To: coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Hi Coleen, Daniel, >> >> On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: >>> Hi I didn't mean that you should push.? I wanted you to do an hg >>> commit and I would import the changeset and push for you.?? I don't >>> see an openjdk author name for you.? Have you signed the contributor agreement? >> Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). >> >> Daniel: as Coleen indicated you can't do the hg push as you are not a >> Committer, so just create the changeset using "hg commit" and ensure >> the commit message has the correct format [1] e.g. from a previous >> change of >> yours: >> >> 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java >> Summary: Modified test search strings to those guaranteed to exist in the passing cases. >> Reviewed-by: dholmes, jgeorge >> >> Thanks, >> David >> >> [1] http://openjdk.java.net/guide/producingChangeset.html#create >> >> >> >>> Thanks, >>> Coleen >>> >>> On 3/9/18 11:23 PM, stewartd.qdt wrote: >>>> I'd love to Coleen, but having never pushed before, I'm running into >>>> issues. It seems I haven't figured out the magic set of steps yet. I >>>> get that I am unable to lock jdk/hs as it is Read Only. >>>> >>>> I'm off for the next Thursday. So, if it can wait until then, I'm >>>> happy to keep trying to figure it out. If you'd like, you may go >>>> ahead and take the webrev. It seems that is what others have done >>>> for other patches I made. But either way I'll have to figure this out. 
>>>> >>>> Thanks, >>>> Daniel >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>>> Behalf Of coleen.phillimore at oracle.com >>>> Sent: Friday, March 9, 2018 7:55 PM >>>> To: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR: 8199425: JTReg failure: >>>> runtime/stringtable/StringTableVerifyTest.java >>>> >>>> Looks good.? Thank you for fixing this. >>>> Can you hg commit the patch with us as reviewers and I'll push it? >>>> thanks, >>>> Coleen >>>> >>>> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>>>> Please review this webrev [1] which attempts to fix a test error in >>>>> runtime/stringtable/StringTableVerifyTest.java. This test uses the >>>>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>>>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> >>>>> >>>>> This test currently fails our JTReg testing on an AArch64 machine. >>>>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> The bug report is filed at [2]. >>>>> >>>>> >>>>> >>>>> I am happy to modify the patch as necessary. >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> >>>>> >>>>> Daniel Stewart >>>>> >>>>> >>>>> >>>>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>>>> >>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>>>> >>>>> >>>>> From edward.nevill at gmail.com Thu Mar 15 21:40:37 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Thu, 15 Mar 2018 21:40:37 +0000 Subject: RFR: aarch32: ARM 32 build broken after 8165929 Message-ID: <1521150037.2955.5.camel@gmail.com> Hi, Please review the following webrev Bugid: https://bugs.openjdk.java.net/browse/JDK-8199243 Webrev: http://cr.openjdk.java.net/~enevill/8199243/webrev.00 The ARM 32 build is broken with multiple errors of the form /work/ed/arm32/jdk/src/hotspot/os_cpu/linux_arm/copy_linux_arm.inline.hpp:33:54: error: invalid conversion from 'const HeapWord*' to 'HeapWord*' [-fpermissive] _Copy_conjoint_words(to, from, count * HeapWordSize); The problem was introduced by change 8165929 # HG changeset patch # User coleenp # Date 1518182622 18000 # Fri Feb 09 08:23:42 2018 -0500 # Node ID 950c35ea6237afd834d02345a2878e5dc30750e0 # Parent f323537c9b75444578c75d348fa2e5be81532d3e 8165929: Constify arguments of Copy methods Reviewed-by: hseigel, kbarrett This change added 'const' to the 'from' arguments in various Copy functions, for example - void _Copy_conjoint_words(HeapWord* from, HeapWord* to, size_t count); - void _Copy_disjoint_words(HeapWord* from, HeapWord* to, size_t count); + void _Copy_conjoint_words(const HeapWord* from, HeapWord* to, size_t count); + void _Copy_disjoint_words(const HeapWord* from, HeapWord* to, size_t count); The problem in the ARM 32 port occurs in code like the following static void pd_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) { #ifdef AARCH64 _Copy_conjoint_words(from, to, count * HeapWordSize); #else // NOTE: _Copy_* functions on 32-bit ARM expect "to" and "from" arguments in reversed order _Copy_conjoint_words(to, from, count * HeapWordSize); #endif } The assembler implementation of the Copy functions in ARM 32 actually copies in the wrong order. Ie it copies from 'to' and to 'from'. Looking at the assembler implementation it says the following # Support for void Copy::conjoint_words(void* from, # void* to, # size_t count) _Copy_conjoint_words: stmdb sp!, {r3 - r9, ip} IE. 
It implies that it copies from 'from' and to 'to' in the comment, or in other words copies from memory pointed to by 'R0' to memory pointed to by 'R1' but in fact the implementation does the copy the other way around! The quick and dirty fix would be to apply a (const *) cast to the 'to' arguments for ARM 32. However, I think this is just too broken and misleading. My proposal is to fix this properly and have the assembler copy the correct way, and then delete the nasty conditionalisation on the calls. I also propose using symbolic names 'from' and 'to' rather than 'r0' and 'r1'. Many thanks, Ed. From coleen.phillimore at oracle.com Thu Mar 15 21:55:22 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 15 Mar 2018 17:55:22 -0400 Subject: RFR: aarch32: ARM 32 build broken after 8165929 In-Reply-To: <1521150037.2955.5.camel@gmail.com> References: <1521150037.2955.5.camel@gmail.com> Message-ID: <2c33dcc4-0521-ab12-a31a-6fdc7a67b9ee@oracle.com> Thank you for fixing this!? I don't know arm32 assembly but I'm happy that you've fixed the inconsistency. Coleen On 3/15/18 5:40 PM, Edward Nevill wrote: > Hi, > > Please review the following webrev > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199243 > Webrev: http://cr.openjdk.java.net/~enevill/8199243/webrev.00 > > The ARM 32 build is broken with multiple errors of the form > > /work/ed/arm32/jdk/src/hotspot/os_cpu/linux_arm/copy_linux_arm.inline.hpp:33:54: error: invalid conversion from 'const HeapWord*' to 'HeapWord*' [-fpermissive] > _Copy_conjoint_words(to, from, count * HeapWordSize); > > The problem was introduced by change 8165929 > > # HG changeset patch > # User coleenp > # Date 1518182622 18000 > # Fri Feb 09 08:23:42 2018 -0500 > # Node ID 950c35ea6237afd834d02345a2878e5dc30750e0 > # Parent f323537c9b75444578c75d348fa2e5be81532d3e > 8165929: Constify arguments of Copy methods > Reviewed-by: hseigel, kbarrett > > This change added 'const' to the 'from' arguments in various Copy functions, for example > > - void _Copy_conjoint_words(HeapWord* from, HeapWord* to, size_t count); > - void _Copy_disjoint_words(HeapWord* from, HeapWord* to, size_t count); > + void _Copy_conjoint_words(const HeapWord* from, HeapWord* to, size_t count); > + void _Copy_disjoint_words(const HeapWord* from, HeapWord* to, size_t count); > > The problem in the ARM 32 port occurs in code like the following > > static void pd_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) { > #ifdef AARCH64 > _Copy_conjoint_words(from, to, count * HeapWordSize); > #else > // NOTE: _Copy_* functions on 32-bit ARM expect "to" and "from" arguments in reversed order > _Copy_conjoint_words(to, from, count * HeapWordSize); > #endif > } > > The assembler implementation of the Copy functions in ARM 32 actually copies in the wrong order. Ie it copies from 'to' and to 'from'. > > Looking at the assembler implementation it says the following > > # Support for void Copy::conjoint_words(void* from, > # void* to, > # size_t count) > _Copy_conjoint_words: > stmdb sp!, {r3 - r9, ip} > > IE. It implies that it copies from 'from' and to 'to' in the comment, or in other words copies from memory pointed to by 'R0' to memory pointed to by 'R1' but in fact the implementation does the copy the other way around! > > The quick and dirty fix would be to apply a (const *) cast to the 'to' arguments for ARM 32. However, I think this is just too broken and misleading. 
> > My proposal is to fix this properly and have the assembler copy the correct way, and then delete the nasty conditionalisation on the calls. I also propose using symbolic names 'from' and 'to' rather than 'r0' and 'r1'. > > Many thanks, > Ed. > From david.holmes at oracle.com Fri Mar 16 00:49:59 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Mar 2018 10:49:59 +1000 Subject: [PING] Re: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: <7116aeef-f2e3-c84f-a509-835b71195d10@linux.vnet.ibm.com> References: <5AA725AA.7010202@linux.vnet.ibm.com> <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> <7116aeef-f2e3-c84f-a509-835b71195d10@linux.vnet.ibm.com> Message-ID: <833ca502-c2b7-76ce-cac7-dcf08caf247a@oracle.com> Looks like I blinked first. :) I'll sponsor this for you Gustavo. In the future if any of your colleagues are OpenJDK committers you could get them to sponsor you, after using submit-hs repo for testing. Cheers, David On 15/03/2018 11:11 PM, Gustavo Romero wrote: > Hi, > > Could somebody please sponsor the following small change? > > bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 > webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ > > It's already reviewed by two Reviewers: dholmes and phh. > > Thank you! > > > Regards, > Gustavo > > On 03/14/2018 12:07 PM, Gustavo Romero wrote: >> Hi David, >> >> On 03/13/2018 09:05 PM, David Holmes wrote: >>>> bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >>>> webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ >>> >>> Seems okay. Couple of grammar nits with the mega comment: >>> >>> // it can exist nodes >>> >>> it -> there >>> >>> // are besides that non-contiguous. >>> >>> "are besides that" -> "may be" >> >> Fixed. >> >> webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ >> >> Thanks a lot for reviewing it. >> >> >> Could somebody sponsor that change please? >> >> >> Regards, >> Gustavo >> > From david.holmes at oracle.com Fri Mar 16 01:40:19 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Mar 2018 11:40:19 +1000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> Message-ID: <60fa47aa-5ba3-82ea-7c1a-d5a8bf281c7e@oracle.com> On 16/03/2018 5:57 AM, coleen.phillimore at oracle.com wrote: > > Yes, it is easy but instead of -u can you do commit -u dstewart > (ie your openjdk username). This indicates you are not running jcheck locally. Make sure you have enabled the jcheck extension in your .hgrc file. You get jcheck from http://hg.openjdk.java.net/code-tools/ David > Then generate the webrev again and I'll sponsor it. > thanks, > Coleen > > On 3/15/18 3:43 PM, stewartd.qdt wrote: >> Coleen, David, >> >> Ok ... perhaps it really is as simple as I thought it should be .... >> >> >> http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ >> >> I think that is what you wanted. 
>> >> Daniel >> >> -----Original Message----- >> From: stewartd.qdt >> Sent: Thursday, March 15, 2018 10:42 AM >> To: 'David Holmes' ; >> coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: RE: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> I'm back from holiday and attempting to get this process figured out. >> Thanks for the help so far David and Coleen. However, after reading >> around openjdk.java.net and trying several things, I will have to fess >> up to my ignorance. I have committed my patch locally and used the "hg >> commit -l message" approach to get the appropriate format. >> >> However, I am at a loss for what the next step would be. So I will >> simply put in the actual output of hg export -g, which if I read >> http://openjdk.java.net/contribute/ correctly is the preferred output >> method. >> >> I was hoping that there would be some sort of webrev-like output that >> could be used to create and upload the patch. Perhaps I am to manually >> upload the final patch to cr.openjdk.java.net? >> >> Sorry for the most basic of issues, but I can't find a nice >> description of the process. (I can't even find where I got my >> webrev.ksh script anymore, though it was somewhere on the >> openjdk.java.net site). >> >> Daniel >> >> # HG changeset patch >> # User dstewart >> # Date 1521123686 0 >> #????? Thu Mar 15 14:21:26 2018 +0000 >> # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 >> # Parent? 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd >> 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java >> Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to >> StringTableVerifyTest.java >> Reviewed-by: coleenp, kvn >> >> diff --git >> a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> --- a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> @@ -35,7 +35,7 @@ >> ? public class StringTableVerifyTest { >> ????? public static void main(String[] args) throws Exception { >> -??????? ProcessBuilder pb = >> ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", >> "-version"); >> +??????? ProcessBuilder pb = >> + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" >> + , "-XX:+VerifyStringTableAtExit", "-version"); >> ????????? OutputAnalyzer output = new OutputAnalyzer(pb.start()); >> ????????? output.shouldHaveExitValue(0); >> ????? } >> >> >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Monday, March 12, 2018 12:28 AM >> To: coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Hi Coleen, Daniel, >> >> On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: >>> Hi I didn't mean that you should push.? I wanted you to do an hg >>> commit and I would import the changeset and push for you.?? I don't >>> see an openjdk author name for you.? Have you signed the contributor >>> agreement? >> Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the >> signatory). >> >> Daniel: as Coleen indicated you can't do the hg push as you are not a >> Committer, so just create the changeset using "hg commit" and ensure >> the commit message has the correct format [1] e.g. 
from a previous >> change of >> yours: >> >> 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java >> Summary: Modified test search strings to those guaranteed to exist in >> the passing cases. >> Reviewed-by: dholmes, jgeorge >> >> Thanks, >> David >> >> [1] http://openjdk.java.net/guide/producingChangeset.html#create >> >> >> >>> Thanks, >>> Coleen >>> >>> On 3/9/18 11:23 PM, stewartd.qdt wrote: >>>> I'd love to Coleen, but having never pushed before, I'm running into >>>> issues. It seems I haven't figured out the magic set of steps yet. I >>>> get that I am unable to lock jdk/hs as it is Read Only. >>>> >>>> I'm off for the next Thursday. So, if it can wait until then, I'm >>>> happy to keep trying to figure it out. If you'd like, you may go >>>> ahead and take the webrev. It seems that is what others have done for >>>> other patches I made. But either way I'll have to figure this out. >>>> >>>> Thanks, >>>> Daniel >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>>> Behalf Of coleen.phillimore at oracle.com >>>> Sent: Friday, March 9, 2018 7:55 PM >>>> To: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR: 8199425: JTReg failure: >>>> runtime/stringtable/StringTableVerifyTest.java >>>> >>>> Looks good.? Thank you for fixing this. >>>> Can you hg commit the patch with us as reviewers and I'll push it? >>>> thanks, >>>> Coleen >>>> >>>> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>>>> Please review this webrev [1] which attempts to fix a test error in >>>>> runtime/stringtable/StringTableVerifyTest.java. This test uses the >>>>> flag -XX:+VerifyStringTableAtExit, which is a diagnostic option and >>>>> requires the flag -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> >>>>> >>>>> This test currently fails our JTReg testing on an AArch64 machine. >>>>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> The bug report is filed at [2]. >>>>> >>>>> >>>>> >>>>> I am happy to modify the patch as necessary. >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> >>>>> >>>>> Daniel Stewart >>>>> >>>>> >>>>> >>>>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>>>> >>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>>>> >>>>> >>>>> > From david.holmes at oracle.com Fri Mar 16 02:18:02 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Mar 2018 12:18:02 +1000 Subject: RFR: 8199696: Remove Runtime1::arraycopy In-Reply-To: <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> References: <5AAA8EC0.8050504@oracle.com> <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> Message-ID: <57e402b8-b49e-ab25-98d9-6f89f2bb6638@oracle.com> On 16/03/2018 3:53 AM, Vladimir Kozlov wrote: > Hi Erik, > > I think it is historical from time when we had Client VM with C1 only > and not shared runtime. So won't 32-bit builds still need it/use it? David > Shared, x86 and Sparc changes looks good to me. > > What platforms you tested on? > > Thanks, > Vladimir > > On 3/15/18 8:18 AM, Erik ?sterlund wrote: >> Hi, >> >> The Runtime1::arraycopy stub appears to only be used on S390 because >> there is no StubRoutines::generic_arraycopy() provided. However, C1 >> could then simply take a slow path and call its arraycopy stub that >> performs a native call. Then this logic may be removed. >> >> I added an assert on each platform that I think should have a >> generic_arraycopy() stub, and added a branch to the slow path on S390 >> if there is no such stub. 
If a stub is eventually added on S390, it >> should automatically pick that up. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ >> >> Bug ID: >> https://bugs.openjdk.java.net/browse/JDK-8199696 >> >> Thanks, >> /Erik From david.holmes at oracle.com Fri Mar 16 02:28:01 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Mar 2018 12:28:01 +1000 Subject: RFR: aarch32: ARM 32 build broken after 8165929 In-Reply-To: <1521150037.2955.5.camel@gmail.com> References: <1521150037.2955.5.camel@gmail.com> Message-ID: <4a209409-165c-8e87-55ee-a896f2b341b1@oracle.com> Hi Ed, Small nit: can you please include the current bug number in the RFR email subject. Thanks. On 16/03/2018 7:40 AM, Edward Nevill wrote: > Hi, > > Please review the following webrev > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199243 > Webrev: http://cr.openjdk.java.net/~enevill/8199243/webrev.00 > > The ARM 32 build is broken with multiple errors of the form > > /work/ed/arm32/jdk/src/hotspot/os_cpu/linux_arm/copy_linux_arm.inline.hpp:33:54: error: invalid conversion from 'const HeapWord*' to 'HeapWord*' [-fpermissive] > _Copy_conjoint_words(to, from, count * HeapWordSize); > > The problem was introduced by change 8165929 > > # HG changeset patch > # User coleenp > # Date 1518182622 18000 > # Fri Feb 09 08:23:42 2018 -0500 > # Node ID 950c35ea6237afd834d02345a2878e5dc30750e0 > # Parent f323537c9b75444578c75d348fa2e5be81532d3e > 8165929: Constify arguments of Copy methods > Reviewed-by: hseigel, kbarrett > > This change added 'const' to the 'from' arguments in various Copy functions, for example > > - void _Copy_conjoint_words(HeapWord* from, HeapWord* to, size_t count); > - void _Copy_disjoint_words(HeapWord* from, HeapWord* to, size_t count); > + void _Copy_conjoint_words(const HeapWord* from, HeapWord* to, size_t count); > + void _Copy_disjoint_words(const HeapWord* from, HeapWord* to, size_t count); > > The problem in the ARM 32 port occurs in code like the following > > static void pd_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) { > #ifdef AARCH64 > _Copy_conjoint_words(from, to, count * HeapWordSize); > #else > // NOTE: _Copy_* functions on 32-bit ARM expect "to" and "from" arguments in reversed order > _Copy_conjoint_words(to, from, count * HeapWordSize); > #endif > } > > The assembler implementation of the Copy functions in ARM 32 actually copies in the wrong order. Ie it copies from 'to' and to 'from'. > > Looking at the assembler implementation it says the following > > # Support for void Copy::conjoint_words(void* from, > # void* to, > # size_t count) > _Copy_conjoint_words: > stmdb sp!, {r3 - r9, ip} > > IE. It implies that it copies from 'from' and to 'to' in the comment, or in other words copies from memory pointed to by 'R0' to memory pointed to by 'R1' but in fact the implementation does the copy the other way around! > > The quick and dirty fix would be to apply a (const *) cast to the 'to' arguments for ARM 32. However, I think this is just too broken and misleading. > > My proposal is to fix this properly and have the assembler copy the correct way, and then delete the nasty conditionalisation on the calls. I also propose using symbolic names 'from' and 'to' rather than 'r0' and 'r1'. Okay the conversion of r1 -> from, and r0 -> to, seems accurate. I'm assuming the only actual bug was in the comment. Thanks, David > Many thanks, > Ed. 
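For reference, this is roughly what the call sites in copy_linux_arm.inline.hpp reduce to once the assembler stubs copy in the documented (from, to) direction; it is a sketch based on Ed's description above ("delete the nasty conditionalisation on the calls"), not code copied from the actual webrev:

    // copy_linux_arm.inline.hpp (sketch): with the stubs fixed to copy from
    // 'from' to 'to', AArch64 and 32-bit ARM can share one straightforward
    // call and the "arguments in reversed order" note disappears.
    static void pd_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) {
      _Copy_conjoint_words(from, to, count * HeapWordSize);
    }
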
> From vladimir.kozlov at oracle.com Fri Mar 16 04:35:20 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 15 Mar 2018 21:35:20 -0700 Subject: RFR: 8199696: Remove Runtime1::arraycopy In-Reply-To: <57e402b8-b49e-ab25-98d9-6f89f2bb6638@oracle.com> References: <5AAA8EC0.8050504@oracle.com> <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> <57e402b8-b49e-ab25-98d9-6f89f2bb6638@oracle.com> Message-ID: <0c65cf93-e510-48ca-ed4b-e62cd7ed73e4@oracle.com> On 3/15/18 7:18 PM, David Holmes wrote: > On 16/03/2018 3:53 AM, Vladimir Kozlov wrote: >> Hi Erik, >> >> I think it is historical from time when we had Client VM with C1 only >> and not shared runtime. > > So won't 32-bit builds still need it/use it? Not anymore, arraycopy stubs are generated in all cases now: http://hg.openjdk.java.net/jdk/hs/file/7a656b77a2d8/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp#l3965 Vladimir > > David > >> Shared, x86 and Sparc changes looks good to me. >> >> What platforms you tested on? >> >> Thanks, >> Vladimir >> >> On 3/15/18 8:18 AM, Erik ?sterlund wrote: >>> Hi, >>> >>> The Runtime1::arraycopy stub appears to only be used on S390 because >>> there is no StubRoutines::generic_arraycopy() provided. However, C1 >>> could then simply take a slow path and call its arraycopy stub that >>> performs a native call. Then this logic may be removed. >>> >>> I added an assert on each platform that I think should have a >>> generic_arraycopy() stub, and added a branch to the slow path on S390 >>> if there is no such stub. If a stub is eventually added on S390, it >>> should automatically pick that up. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ >>> >>> Bug ID: >>> https://bugs.openjdk.java.net/browse/JDK-8199696 >>> >>> Thanks, >>> /Erik From stefan.karlsson at oracle.com Fri Mar 16 08:18:37 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 09:18:37 +0100 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) In-Reply-To: References: Message-ID: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> Hi Volker, This seems fine to be. An alternative fix for the allocation.inline.hpp problem would be to move the AllocateHeap code into allocation.cpp, and get rid of the NOINLINE usage. I've created a prototype for that: http://cr.openjdk.java.net/~stefank/8199698/prototypeAllocateHeapInCpp/ I've visually inspected the output from NMT and it seems to give correct stack traces. For example: [0x00007f4bffa10eff] ObjectSynchronizer::omAlloc(Thread*)+0x3cf [0x00007f4bffa1244c] ObjectSynchronizer::inflate(Thread*, oopDesc*, ObjectSynchronizer::InflateCause)+0x8c [0x00007f4bffa1425a] ObjectSynchronizer::FastHashCode(Thread*, oopDesc*)+0x7a [0x00007f4bff616142] JVM_IHashCode+0x52 (malloc=4144KB type=Internal #129) Thanks, StefanK On 2018-03-15 18:20, Volker Simonis wrote: > Hi, > > can I please have a review for the following small fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ > https://bugs.openjdk.java.net/browse/JDK-8199698 > > The fix is actually trivial: just defining the corresponding > "NOINLINE" macro for xlC. Unfortunately it's syntax requirements are a > little different for method declarations, compared to the other > platforms (it has to be placed AFTER the method declarator instead of > BEFORE it). 
Fortunately, there are no differences for method > definitions, so we can safely move the NOINLINE attributes from the > method declarations in allocation.hpp to the method definitions in > allocation.inline.hpp. > > Thank you and best regards, > Volker > > PS: or true C++ enthusiasts I've also included the whole story of why > this happens and why it happens just now, right after 8199275 :) > > Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the > inclusion of "allocation.inline.hpp" in some .hpp files (e.g. > constantPool.hpp) by "allocation.hpp". > > "allocation.inline.hpp" contains not only the definition of some > inline methods (as the name implies) but also the definition of some > template methods (notably the various CHeapObj<>::operator new() > versions). > > Template functions are on orthogonal concept with regard to inline > functions, but they share on implementation communality: at their call > sites, the compiler absolutely needs the corresponding function > definition. Otherwise it can either not inline the corresponding > function in the case of inline functions or it won't even be able to > create the corresponding instantiation in the case of a template > function. > > Because of this reason, template functions and methods are defined in > their corresponding .inline.hpp files in HotSpot (even if they are not > subject to inlining). This is especially true for the before mentioned > CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" > in allocation.hpp but defined in allocation.inline.hpp. > > Now every call site of these CHeapOb<>::new() operators which only > includes "allocation.hpp" will emit a call to the corresponding > instantiation of the CHeapObj<>:: new operator, but wont be able to > actually create that instantiation (simply because it doesn't see the > corresponding definition in allocation.inline.hpp). On the other side, > call sites of a CHeapObj<>:: new operator which include > allocation.inline.hpp will instantiate the required version in the > current compilation unit (or even inline that method instance if it is > not flagged as "NOINLINE"). > > If a compiler doesn't honor the "NOINLINE" attribute (or has an empty > definition for the NOINLIN macro like xlC), he can potentially inline > all the various template instances of CHeapObj<>:: new at all call > sites, if their implementation is available. This is exactly what has > happened on AIX/xlC before change 8199275 with the effect that the > resulting object files contained no single instance of the > corresponding new operators. > > After change 8199275, the template definition of the CHeapObj<>:: new > operators aren't available any more at all call sites (because the > inclusion of allocation.inline.hpp was removed from some other .hpp > files which where included transitively before). As a result, the xlC > compiler will emit calls to the corresponding instantiations instead > of inlining them. But at all other call sites of the corresponding > operators, the operator instantiations are still inlined (because xlC > does not support "NOINLINE"), so we end up with link errors in > libjvm.so because of missing CHeapObj<>::new instances. > > As a general rule of thumb, we should always make template method > definitions available at all call sites, by placing them into > corresponding .inline.hpp files and including them appropriately. > Otherwise, we might end up without the required instantiations at link > time. 
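To see the failure mode in isolation, here is a minimal stand-alone sketch of the same pattern; Widget, widget.hpp, widget.inline.hpp and caller.cpp are made-up names standing in for CHeapObj / allocation.hpp / allocation.inline.hpp, not real HotSpot files:

    // widget.hpp -- declares the template operators, like allocation.hpp
    #include <cstddef>
    template <class T> class Widget {
     public:
      void* operator new(size_t size);   // declared here, defined elsewhere
      void  operator delete(void* p);
    };

    // widget.inline.hpp -- holds the definitions, like allocation.inline.hpp
    #include "widget.hpp"
    #include <cstdlib>
    template <class T> void* Widget<T>::operator new(size_t size) { return ::malloc(size); }
    template <class T> void  Widget<T>::operator delete(void* p)  { ::free(p); }

    // caller.cpp -- includes only widget.hpp, like the .hpp files after 8199275
    #include "widget.hpp"
    class Payload {};
    Widget<Payload>* make_widget() {
      // The compiler emits a call to Widget<Payload>::operator new(size_t) but
      // cannot instantiate it here, because the definition is not visible.
      return new Widget<Payload>();
    }
    // Unless some other translation unit that does include widget.inline.hpp
    // emits (and does not inline away) that same instantiation, the link fails
    // with an undefined reference -- the xlC symptom described above.
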
> > Unfortunately, there's no compile time check to enforce this > requirement. But we can misuse the "inline" keyword here, by > attributing template functions/methods as "inline". This way, the > compiler will warn us, if a template definition isn't available at a > specific call site. Of course this trick doesn't work if we > specifically want to define template functions/methods which shouldn't > be inlined, like in the current case :) > From erik.osterlund at oracle.com Fri Mar 16 09:36:09 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 16 Mar 2018 10:36:09 +0100 Subject: 8199696: Remove Runtime1::arraycopy In-Reply-To: <5408c73dc0fc49dcb3e858aefcf59233@sap.com> References: <5AAA8EC0.8050504@oracle.com> <5408c73dc0fc49dcb3e858aefcf59233@sap.com> Message-ID: <5AAB9009.2030509@oracle.com> Hi Martin, Thanks for the review. I removed the declaration in the header file as well as you said (with new webrev). Full webrev: http://cr.openjdk.java.net/~eosterlund/8199696/webrev.01/ Incremental webrev: http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00_01/ Thanks, /Erik On 2018-03-15 18:16, Doerr, Martin wrote: > Hi Erik, > > PPC64 and s390 parts look good. > > Arraycopy should get removed from c1_Runtime1.hpp, too. > > Best regards, > Martin > > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Erik ?sterlund > Sent: Donnerstag, 15. M?rz 2018 16:18 > To: hotspot-dev developers > Subject: RFR: 8199696: Remove Runtime1::arraycopy > > Hi, > > The Runtime1::arraycopy stub appears to only be used on S390 because > there is no StubRoutines::generic_arraycopy() provided. However, C1 > could then simply take a slow path and call its arraycopy stub that > performs a native call. Then this logic may be removed. > > I added an assert on each platform that I think should have a > generic_arraycopy() stub, and added a branch to the slow path on S390 if > there is no such stub. If a stub is eventually added on S390, it should > automatically pick that up. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ > > Bug ID: > https://bugs.openjdk.java.net/browse/JDK-8199696 > > Thanks, > /Erik From erik.osterlund at oracle.com Fri Mar 16 09:41:19 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 16 Mar 2018 10:41:19 +0100 Subject: RFR: 8199696: Remove Runtime1::arraycopy In-Reply-To: <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> References: <5AAA8EC0.8050504@oracle.com> <20147623-7fd0-b1be-565a-cc1dfc7b497b@oracle.com> Message-ID: <5AAB913F.9010606@oracle.com> Hi Vladimir, Thank you for the review. I tested this on all Oracle platforms with hs-tier1-3 and jdk-tier1-3, and Martin tested it on SAP platforms. @Roman: Would you mind checking this on RedHat's AArch64 port? Thanks, /Erik On 2018-03-15 18:53, Vladimir Kozlov wrote: > Hi Erik, > > I think it is historical from time when we had Client VM with C1 only > and not shared runtime. > > Shared, x86 and Sparc changes looks good to me. > > What platforms you tested on? > > Thanks, > Vladimir > > On 3/15/18 8:18 AM, Erik ?sterlund wrote: >> Hi, >> >> The Runtime1::arraycopy stub appears to only be used on S390 because >> there is no StubRoutines::generic_arraycopy() provided. However, C1 >> could then simply take a slow path and call its arraycopy stub that >> performs a native call. Then this logic may be removed. 
>> >> I added an assert on each platform that I think should have a >> generic_arraycopy() stub, and added a branch to the slow path on S390 >> if there is no such stub. If a stub is eventually added on S390, it >> should automatically pick that up. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199696/webrev.00/ >> >> Bug ID: >> https://bugs.openjdk.java.net/browse/JDK-8199696 >> >> Thanks, >> /Erik From david.holmes at oracle.com Fri Mar 16 11:05:15 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 16 Mar 2018 21:05:15 +1000 Subject: RFR: build pragma error with gcc 4.4.7 In-Reply-To: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> References: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> Message-ID: <2dd2d37c-e0d7-ad13-6da6-92196ab6749d@oracle.com> Hi Michal, On 16/03/2018 8:48 PM, Michal Vala wrote: > Hi, > > I've been trying to build latest jdk with gcc 4.4.7 and I hit compile > error due to pragma used in function: That's a very old gcc. Our "official" version is 4.9.2 but we're working on getting gcc 7.x working as well. This code causes no problem on 4.9.2+ so to make any change we'd have to know it will continue to work on later versions. Also a google search indicates the "pragma diagnostic push" and pop weren't added until gcc 4.6 ?? David ----- > /mnt/ramdisk/openjdk/src/hotspot/os/linux/os_linux.inline.hpp:103: > error: #pragma GCC diagnostic not allowed inside functions > > > I'm sending little patch that fixes the issue by wrapping whole > function. I've also created a macro for ignoring deprecated declaration > inside compilerWarnings.hpp to line up with others. > > Can someone please review? If it's ok, I would also need a sponsor. > > > diff -r 422615764e12 src/hotspot/os/linux/os_linux.inline.hpp > --- a/src/hotspot/os/linux/os_linux.inline.hpp??? Thu Mar 15 14:54:10 > 2018 -0700 > +++ b/src/hotspot/os/linux/os_linux.inline.hpp??? Fri Mar 16 10:50:24 > 2018 +0100 > @@ -96,13 +96,12 @@ > ?? return ::ftruncate64(fd, length); > ?} > > -inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) > -{ > ?// readdir_r has been deprecated since glibc 2.24. > ?// See https://sourceware.org/bugzilla/show_bug.cgi?id=19056 for more > details. > -#pragma GCC diagnostic push > -#pragma GCC diagnostic ignored "-Wdeprecated-declarations" > - > +PRAGMA_DIAG_PUSH > +PRAGMA_DEPRECATED_IGNORED > +inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) > +{ > ?? dirent* p; > ?? int status; > ?? assert(dirp != NULL, "just checking"); > @@ -114,11 +113,11 @@ > ?? if((status = ::readdir_r(dirp, dbuf, &p)) != 0) { > ???? errno = status; > ???? return NULL; > -? } else > +? } else { > ???? return p; > - > -#pragma GCC diagnostic pop > +? } > ?} > +PRAGMA_DIAG_POP > > ?inline int os::closedir(DIR *dirp) { > ?? assert(dirp != NULL, "argument is NULL"); > diff -r 422615764e12 src/hotspot/share/utilities/compilerWarnings.hpp > --- a/src/hotspot/share/utilities/compilerWarnings.hpp??? Thu Mar 15 > 14:54:10 2018 -0700 > +++ b/src/hotspot/share/utilities/compilerWarnings.hpp??? Fri Mar 16 > 10:50:24 2018 +0100 > @@ -48,6 +48,7 @@ > ?#define PRAGMA_FORMAT_NONLITERAL_IGNORED _Pragma("GCC diagnostic > ignored \"-Wformat-nonliteral\"") \ > ????????????????????????????????????????? _Pragma("GCC diagnostic > ignored \"-Wformat-security\"") > ?#define PRAGMA_FORMAT_IGNORED _Pragma("GCC diagnostic ignored > \"-Wformat\"") > +#define PRAGMA_DEPRECATED_IGNORED _Pragma("GCC diagnostic ignored > \"-Wdeprecated-declarations\"") > > ?#if defined(__clang_major__) && \ > ?????? 
(__clang_major__ >= 4 || \ > > > Thanks! > From magnus.ihse.bursie at oracle.com Fri Mar 16 11:36:23 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 16 Mar 2018 12:36:23 +0100 Subject: RFR: build pragma error with gcc 4.4.7 In-Reply-To: <2dd2d37c-e0d7-ad13-6da6-92196ab6749d@oracle.com> References: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> <2dd2d37c-e0d7-ad13-6da6-92196ab6749d@oracle.com> Message-ID: <0221bde4-1313-31d7-65fa-e4f4ebed4200@oracle.com> On 2018-03-16 12:05, David Holmes wrote: > Hi Michal, > > On 16/03/2018 8:48 PM, Michal Vala wrote: >> Hi, >> >> I've been trying to build latest jdk with gcc 4.4.7 and I hit compile >> error due to pragma used in function: I don't think gcc 4.4.7 is likely to work at all. Configure will complain (but continue) if you use a gcc prior to 4.7 (very recently raised to 4.8). You can try getting past this error, but you are likely to hit more issues down the road. Do you have any specific reasons for using such an old compiler? /Magnus > > That's a very old gcc. Our "official" version is 4.9.2 but we're > working on getting gcc 7.x working as well. This code causes no > problem on 4.9.2+ so to make any change we'd have to know it will > continue to work on later versions. > > Also a google search indicates the "pragma diagnostic push" and pop > weren't added until gcc 4.6 ?? > > David > ----- > >> /mnt/ramdisk/openjdk/src/hotspot/os/linux/os_linux.inline.hpp:103: >> error: #pragma GCC diagnostic not allowed inside functions >> >> >> I'm sending little patch that fixes the issue by wrapping whole >> function. I've also created a macro for ignoring deprecated >> declaration inside compilerWarnings.hpp to line up with others. >> >> Can someone please review? If it's ok, I would also need a sponsor. >> >> >> diff -r 422615764e12 src/hotspot/os/linux/os_linux.inline.hpp >> --- a/src/hotspot/os/linux/os_linux.inline.hpp??? Thu Mar 15 14:54:10 >> 2018 -0700 >> +++ b/src/hotspot/os/linux/os_linux.inline.hpp??? Fri Mar 16 10:50:24 >> 2018 +0100 >> @@ -96,13 +96,12 @@ >> ??? return ::ftruncate64(fd, length); >> ??} >> >> -inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) >> -{ >> ??// readdir_r has been deprecated since glibc 2.24. >> ??// See https://sourceware.org/bugzilla/show_bug.cgi?id=19056 for >> more details. >> -#pragma GCC diagnostic push >> -#pragma GCC diagnostic ignored "-Wdeprecated-declarations" >> - >> +PRAGMA_DIAG_PUSH >> +PRAGMA_DEPRECATED_IGNORED >> +inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) >> +{ >> ??? dirent* p; >> ??? int status; >> ??? assert(dirp != NULL, "just checking"); >> @@ -114,11 +113,11 @@ >> ??? if((status = ::readdir_r(dirp, dbuf, &p)) != 0) { >> ????? errno = status; >> ????? return NULL; >> -? } else >> +? } else { >> ????? return p; >> - >> -#pragma GCC diagnostic pop >> +? } >> ??} >> +PRAGMA_DIAG_POP >> >> ??inline int os::closedir(DIR *dirp) { >> ??? assert(dirp != NULL, "argument is NULL"); >> diff -r 422615764e12 src/hotspot/share/utilities/compilerWarnings.hpp >> --- a/src/hotspot/share/utilities/compilerWarnings.hpp??? Thu Mar 15 >> 14:54:10 2018 -0700 >> +++ b/src/hotspot/share/utilities/compilerWarnings.hpp??? Fri Mar 16 >> 10:50:24 2018 +0100 >> @@ -48,6 +48,7 @@ >> ??#define PRAGMA_FORMAT_NONLITERAL_IGNORED _Pragma("GCC diagnostic >> ignored \"-Wformat-nonliteral\"") \ >> ?????????????????????????????????????????? 
_Pragma("GCC diagnostic >> ignored \"-Wformat-security\"") >> ??#define PRAGMA_FORMAT_IGNORED _Pragma("GCC diagnostic ignored >> \"-Wformat\"") >> +#define PRAGMA_DEPRECATED_IGNORED _Pragma("GCC diagnostic ignored >> \"-Wdeprecated-declarations\"") >> >> ??#if defined(__clang_major__) && \ >> ??????? (__clang_major__ >= 4 || \ >> >> >> Thanks! >> From erik.osterlund at oracle.com Fri Mar 16 12:17:02 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 16 Mar 2018 13:17:02 +0100 Subject: RFR: 8199604: Rename CardTableModRefBS to CardTableBarrierSet Message-ID: <5AABB5BE.50704@oracle.com> Hi, After collapsing barrier sets, it seems like CardTableModRefBS is the only barrier set that encodes "ModRef" in its name and uses the "BS" suffix instead of spelling out "BarrierSet". This is a weird inconsistency that I have solved by renaming CardTableModRefBS to CardTableBarrierSet. Files were renamed like this: parCardTableModRefBS.cpp => cmsCardTable.cpp (this file contains a bunch of implementations for the card table used by CMS and is in the CMS directory) cardTableModRefBS.cpp => cardTableBarrierSet.cpp cardTableModRefBS.hpp => cardTableBarrierSet.hpp cardTableModRefBS.inline.hpp => cardTableBarrierSet.inline.hpp I have performed an automatic search and replace to rename CardTableModRefBS to CardTableBarrierSet, and manually clicked through everything to make sure that space alignment, comments, etc all look as they should. I have also checked that the SA agent is not affected and run hs-tier1 on this changeset. Despite the fix being a bit lengthy, I would still prefer to consider it trivial (unless somebody opposes that) as I am only renaming a class in a rather mechanical way. Hope we can agree about that. Webrev: http://cr.openjdk.java.net/~eosterlund/8199604/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8199604 Testing: mach5 hs-tier1-3 Thanks, /Erik From stewartd.qdt at qualcommdatacenter.com Fri Mar 16 12:57:04 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 16 Mar 2018 12:57:04 +0000 Subject: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java In-Reply-To: <9164c8ef-70d8-f46e-1aee-27f4e7a8dd53@oracle.com> References: <0733725e-5f99-28ea-b4cb-97dcd7a91d4d@oracle.com> <8736910a683f4175ba983a158434d13d@NASANEXM01E.na.qualcomm.com> <135de871-eeaa-0650-94b1-e92e7f4f5c50@oracle.com> <4dc898b1-3831-5319-53d7-98096bc6f151@oracle.com> <95a637bbc20e4d089cbdf23fbbda6fe8@NASANEXM01E.na.qualcomm.com> <4d7216a89f7c489bb0bf904b067b1a4b@NASANEXM01E.na.qualcomm.com> <9164c8ef-70d8-f46e-1aee-27f4e7a8dd53@oracle.com> Message-ID: <95cee3ea79ad47eea464fe59201db642@NASANEXM01E.na.qualcomm.com> Thanks so much Coleen! Sorry for the hassle! Daniel -----Original Message----- From: coleen.phillimore at oracle.com [mailto:coleen.phillimore at oracle.com] Sent: Thursday, March 15, 2018 5:16 PM To: stewartd.qdt ; David Holmes ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8199425: JTReg failure: runtime/stringtable/StringTableVerifyTest.java Thank you for fixing this. It is pushed now. Coleen On 3/15/18 4:34 PM, stewartd.qdt wrote: > Ah, yes, sorry about that. Here's an upate. 
> > http://cr.openjdk.java.net/~dstewart/8199425/webrev.03/ > > Daniel > > -----Original Message----- > From: coleen.phillimore at oracle.com > [mailto:coleen.phillimore at oracle.com] > Sent: Thursday, March 15, 2018 3:57 PM > To: stewartd.qdt ; David Holmes > ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8199425: JTReg failure: > runtime/stringtable/StringTableVerifyTest.java > > > Yes, it is easy but instead of -u can you do commit -u dstewart (ie your openjdk username). > Then generate the webrev again and I'll sponsor it. > thanks, > Coleen > > On 3/15/18 3:43 PM, stewartd.qdt wrote: >> Coleen, David, >> >> Ok ... perhaps it really is as simple as I thought it should be .... >> >> >> http://cr.openjdk.java.net/~dstewart/8199425/webrev.02/ >> >> I think that is what you wanted. >> >> Daniel >> >> -----Original Message----- >> From: stewartd.qdt >> Sent: Thursday, March 15, 2018 10:42 AM >> To: 'David Holmes' ; >> coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: RE: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> I'm back from holiday and attempting to get this process figured out. Thanks for the help so far David and Coleen. However, after reading around openjdk.java.net and trying several things, I will have to fess up to my ignorance. I have committed my patch locally and used the "hg commit -l message" approach to get the appropriate format. >> >> However, I am at a loss for what the next step would be. So I will simply put in the actual output of hg export -g, which if I read http://openjdk.java.net/contribute/ correctly is the preferred output method. >> >> I was hoping that there would be some sort of webrev-like output that could be used to create and upload the patch. Perhaps I am to manually upload the final patch to cr.openjdk.java.net? >> >> Sorry for the most basic of issues, but I can't find a nice description of the process. (I can't even find where I got my webrev.ksh script anymore, though it was somewhere on the openjdk.java.net site). 
>> >> Daniel >> >> # HG changeset patch >> # User dstewart >> # Date 1521123686 0 >> # Thu Mar 15 14:21:26 2018 +0000 >> # Node ID 6803b666b65dc28ea06b8622d36c15898abc7550 >> # Parent 62dd99c3a6f98a943f754a6aa2ea8fcfb9cb55fd >> 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> Summary: Adding required -XX:+UnlockDiagnosticVMOptions flag to >> StringTableVerifyTest.java >> Reviewed-by: coleenp, kvn >> >> diff --git >> a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> --- >> a/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.java >> +++ b/test/hotspot/jtreg/runtime/stringtable/StringTableVerifyTest.ja >> +++ v >> +++ a >> @@ -35,7 +35,7 @@ >> >> public class StringTableVerifyTest { >> public static void main(String[] args) throws Exception { >> - ProcessBuilder pb = ProcessTools.createJavaProcessBuilder("-XX:+VerifyStringTableAtExit", "-version"); >> + ProcessBuilder pb = >> + ProcessTools.createJavaProcessBuilder("-XX:+UnlockDiagnosticVMOptions" >> + , "-XX:+VerifyStringTableAtExit", "-version"); >> OutputAnalyzer output = new OutputAnalyzer(pb.start()); >> output.shouldHaveExitValue(0); >> } >> >> >> >> >> -----Original Message----- >> From: David Holmes [mailto:david.holmes at oracle.com] >> Sent: Monday, March 12, 2018 12:28 AM >> To: coleen.phillimore at oracle.com; stewartd.qdt >> ; hotspot-dev at openjdk.java.net >> Subject: Re: RFR: 8199425: JTReg failure: >> runtime/stringtable/StringTableVerifyTest.java >> >> Hi Coleen, Daniel, >> >> On 11/03/2018 12:35 AM, coleen.phillimore at oracle.com wrote: >>> Hi I didn't mean that you should push.? I wanted you to do an hg >>> commit and I would import the changeset and push for you.?? I don't >>> see an openjdk author name for you.? Have you signed the contributor agreement? >> Coleen: Daniel is dstewart (Qualcomm Datacenter Technologies is the signatory). >> >> Daniel: as Coleen indicated you can't do the hg push as you are not a >> Committer, so just create the changeset using "hg commit" and ensure >> the commit message has the correct format [1] e.g. from a previous >> change of >> yours: >> >> 8196361: JTReg failure: serviceability/sa/ClhsdbInspect.java >> Summary: Modified test search strings to those guaranteed to exist in the passing cases. >> Reviewed-by: dholmes, jgeorge >> >> Thanks, >> David >> >> [1] http://openjdk.java.net/guide/producingChangeset.html#create >> >> >> >>> Thanks, >>> Coleen >>> >>> On 3/9/18 11:23 PM, stewartd.qdt wrote: >>>> I'd love to Coleen, but having never pushed before, I'm running >>>> into issues. It seems I haven't figured out the magic set of steps >>>> yet. I get that I am unable to lock jdk/hs as it is Read Only. >>>> >>>> I'm off for the next Thursday. So, if it can wait until then, I'm >>>> happy to keep trying to figure it out. If you'd like, you may go >>>> ahead and take the webrev. It seems that is what others have done >>>> for other patches I made. But either way I'll have to figure this out. >>>> >>>> Thanks, >>>> Daniel >>>> >>>> -----Original Message----- >>>> From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On >>>> Behalf Of coleen.phillimore at oracle.com >>>> Sent: Friday, March 9, 2018 7:55 PM >>>> To: hotspot-dev at openjdk.java.net >>>> Subject: Re: RFR: 8199425: JTReg failure: >>>> runtime/stringtable/StringTableVerifyTest.java >>>> >>>> Looks good.? Thank you for fixing this. 
>>>> Can you hg commit the patch with us as reviewers and I'll push it? >>>> thanks, >>>> Coleen >>>> >>>> On 3/9/18 5:20 PM, stewartd.qdt wrote: >>>>> Please review this webrev [1] which attempts to fix a test error >>>>> in runtime/stringtable/StringTableVerifyTest.java. This test uses >>>>> the flag -XX:+VerifyStringTableAtExit, which is a diagnostic >>>>> option and requires the flag -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> >>>>> >>>>> This test currently fails our JTReg testing on an AArch64 machine. >>>>> This patch simply adds the -XX:+UnlockDiagnosticVMOptions. >>>>> >>>>> The bug report is filed at [2]. >>>>> >>>>> >>>>> >>>>> I am happy to modify the patch as necessary. >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> >>>>> >>>>> Daniel Stewart >>>>> >>>>> >>>>> >>>>> [1] - http://cr.openjdk.java.net/~dstewart/8199425/webrev.01/ >>>>> >>>>> [2] - https://bugs.openjdk.java.net/browse/JDK-8199425 >>>>> >>>>> >>>>> From coleen.phillimore at oracle.com Fri Mar 16 14:10:00 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Mar 2018 10:10:00 -0400 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) In-Reply-To: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> References: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> Message-ID: <47783ed4-a8de-e773-3605-775d57266723@oracle.com> Hi, I've looked at both fixes and I really like Stefan's version. Having new and delete to be trivially inlined in allocation.hpp solves a lot of problems that I've found when removing more .inline.hpp includes from .hpp files (hashtable.inline.hpp from systemDictionary.hpp, I think). Also, rather than NOINLINE, it's a lot nicer to put this in the .cpp file. Thanks, Coleen On 3/16/18 4:18 AM, Stefan Karlsson wrote: > Hi Volker, > > This seems fine to be. > > An alternative fix for the allocation.inline.hpp problem would be to > move the AllocateHeap code into allocation.cpp, and get rid of the > NOINLINE usage. > > I've created a prototype for that: > http://cr.openjdk.java.net/~stefank/8199698/prototypeAllocateHeapInCpp/ > > I've visually inspected the output from NMT and it seems to give > correct stack traces. For example: > > [0x00007f4bffa10eff] ObjectSynchronizer::omAlloc(Thread*)+0x3cf > [0x00007f4bffa1244c] ObjectSynchronizer::inflate(Thread*, oopDesc*, > ObjectSynchronizer::InflateCause)+0x8c > [0x00007f4bffa1425a] ObjectSynchronizer::FastHashCode(Thread*, > oopDesc*)+0x7a > [0x00007f4bff616142] JVM_IHashCode+0x52 > ???????????????????????????? (malloc=4144KB type=Internal #129) > > Thanks, > StefanK > > On 2018-03-15 18:20, Volker Simonis wrote: >> Hi, >> >> can I please have a review for the following small fix: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ >> https://bugs.openjdk.java.net/browse/JDK-8199698 >> >> The fix is actually trivial: just defining the corresponding >> "NOINLINE" macro for xlC. Unfortunately it's syntax requirements are a >> little different for method declarations, compared to the other >> platforms (it has to be placed AFTER the method declarator instead of >> BEFORE it). Fortunately, there are no differences for method >> definitions, so we can safely move the NOINLINE attributes from the >> method declarations in allocation.hpp to the method definitions in >> allocation.inline.hpp. 
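Purely to illustrate the placement difference being described (alloc_block is a hypothetical function, not one of the real allocation.hpp declarations, and the xlC-specific expansion of NOINLINE is whatever 8199698 ends up defining):

    #include <cstddef>
    #if defined(__GNUC__)
    #define NOINLINE __attribute__((noinline))
    #else
    #define NOINLINE
    #endif

    // Declaration form accepted by gcc/clang/MSVC (attribute before):
    NOINLINE void* alloc_block(size_t size);
    // Declaration form xlC requires (attribute after the declarator):
    //     void* alloc_block(size_t size) NOINLINE;

    // Definitions take the prefix form on all of these compilers, which is why
    // keeping NOINLINE only on the definitions in allocation.inline.hpp works:
    NOINLINE void* alloc_block(size_t size) { return ::operator new(size); }
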
>> >> Thank you and best regards, >> Volker >> >> PS: or true C++ enthusiasts I've also included the whole story of why >> this happens and why it happens just now, right after 8199275 :) >> >> Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the >> inclusion of "allocation.inline.hpp" in some .hpp files (e.g. >> constantPool.hpp) by "allocation.hpp". >> >> "allocation.inline.hpp" contains not only the definition of some >> inline methods (as the name implies) but also the definition of some >> template methods (notably the various CHeapObj<>::operator new() >> versions). >> >> Template functions are on orthogonal concept with regard to inline >> functions, but they share on implementation communality: at their call >> sites, the compiler absolutely needs the corresponding function >> definition. Otherwise it can either not inline the corresponding >> function in the case of inline functions or it won't even be able to >> create the corresponding instantiation in the case of a template >> function. >> >> Because of this reason, template functions and methods are defined in >> their corresponding .inline.hpp files in HotSpot (even if they are not >> subject to inlining). This is especially true for the before mentioned >> CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" >> in allocation.hpp but defined in allocation.inline.hpp. >> >> Now every call site of these CHeapOb<>::new() operators which only >> includes "allocation.hpp" will emit a call to the corresponding >> instantiation of the CHeapObj<>:: new operator, but wont be able to >> actually create that instantiation (simply because it doesn't see the >> corresponding definition in allocation.inline.hpp). On the other side, >> call sites of a CHeapObj<>:: new operator which include >> allocation.inline.hpp will instantiate the required version in the >> current compilation unit (or even inline that method instance if it is >> not flagged as "NOINLINE"). >> >> If a compiler doesn't honor the "NOINLINE" attribute (or has an empty >> definition for the NOINLIN macro like xlC), he can potentially inline >> all the various template instances of CHeapObj<>:: new at all call >> sites, if their implementation is available. This is exactly what has >> happened on AIX/xlC before change 8199275 with the effect that the >> resulting object files contained no single instance of the >> corresponding new operators. >> >> After change 8199275, the template definition of the CHeapObj<>:: new >> operators aren't available any more at all call sites (because the >> inclusion of allocation.inline.hpp was removed from some other .hpp >> files which where included transitively before). As a result, the xlC >> compiler will emit calls to the corresponding instantiations instead >> of inlining them. But at all other call sites of the corresponding >> operators, the operator instantiations are still inlined (because xlC >> does not support "NOINLINE"), so we end up with link errors in >> libjvm.so because of missing CHeapObj<>::new instances. >> >> ? As a general rule of thumb, we should always make template method >> definitions available at all call sites, by placing them into >> corresponding .inline.hpp files and including them appropriately. >> Otherwise, we might end up without the required instantiations at link >> time. >> >> Unfortunately, there's no compile time check to enforce this >> requirement. But we can misuse the "inline" keyword here, by >> attributing template functions/methods as "inline". 
This way, the >> compiler will warn us, if a template definition isn't available at a >> specific call site. Of course this trick doesn't work if we >> specifically want to define template functions/methods which shouldn't >> be inlined, like in the current case :) >> From stefan.karlsson at oracle.com Fri Mar 16 14:29:08 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 15:29:08 +0100 Subject: RFR: 8199728: Remove oopDesc::is_scavengable Message-ID: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> Hi all, Please review this trivial patch to replace oopDesc::is_scavengable() usages with Universe::heap()->is_scavengable(...). http://cr.openjdk.java.net/~stefank/8199728/webrev.01/ This helps break an include dependency between oop.inline.hpp and collectedHeap.inline.hpp. I'll remove the collectedHeap.inline.hpp include in a separate RFE, since doing that requires update to many header files that transitively included collectedHeap.inline.hpp and its include files. Thanks, StefanK From stefan.karlsson at oracle.com Fri Mar 16 14:39:55 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 15:39:55 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp Message-ID: Hi all, Please review this patch to use HeapAccess<>::oop_load instead of oopDesc::load_decode_heap_oop when loading oops from static fields in javaClasses.cpp: http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8199739 It's necessary to use HeapAccess<>::oop_load to inject load barriers for GCs that need them. Thanks, StefanK From erik.osterlund at oracle.com Fri Mar 16 14:43:54 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 16 Mar 2018 15:43:54 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: <5AABD82A.4060002@oracle.com> Hi Stefan, Looks good. Thanks, /Erik On 2018-03-16 15:39, Stefan Karlsson wrote: > Hi all, > > Please review this patch to use HeapAccess<>::oop_load instead of > oopDesc::load_decode_heap_oop when loading oops from static fields in > javaClasses.cpp: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199739 > > It's necessary to use HeapAccess<>::oop_load to inject load barriers > for GCs that need them. > > Thanks, > StefanK From coleen.phillimore at oracle.com Fri Mar 16 15:04:43 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Mar 2018 11:04:43 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: <04b5248a-59bc-4f48-17b7-cdaa19eada36@oracle.com> This looks great. Coleen On 3/16/18 10:39 AM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to use HeapAccess<>::oop_load instead of > oopDesc::load_decode_heap_oop when loading oops from static fields in > javaClasses.cpp: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199739 > > It's necessary to use HeapAccess<>::oop_load to inject load barriers > for GCs that need them. 
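The before/after shape of those call sites, schematically (static_field_addr stands in for the mirror field address computed in javaClasses.cpp and is a placeholder, not the real code):

    // javaClasses.cpp static-field loads, schematically:
    oop value_before = oopDesc::load_decode_heap_oop(static_field_addr); // raw load, no barrier
    oop value_after  = HeapAccess<>::oop_load(static_field_addr);        // Access API, barrier-aware
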
> > Thanks, > StefanK From stefan.karlsson at oracle.com Fri Mar 16 15:06:49 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 16:06:49 +0100 Subject: RFR: 8199604: Rename CardTableModRefBS to CardTableBarrierSet In-Reply-To: <5AABB5BE.50704@oracle.com> References: <5AABB5BE.50704@oracle.com> Message-ID: Looks good. StefanK On 2018-03-16 13:17, Erik ?sterlund wrote: > Hi, > > After collapsing barrier sets, it seems like CardTableModRefBS is the > only barrier set that encodes "ModRef" in its name and uses the "BS" > suffix instead of spelling out "BarrierSet". > > This is a weird inconsistency that I have solved by renaming > CardTableModRefBS to CardTableBarrierSet. > > Files were renamed like this: > parCardTableModRefBS.cpp => cmsCardTable.cpp (this file contains a bunch > of implementations for the card table used by CMS and is in the CMS > directory) > cardTableModRefBS.cpp => cardTableBarrierSet.cpp > cardTableModRefBS.hpp => cardTableBarrierSet.hpp > cardTableModRefBS.inline.hpp => cardTableBarrierSet.inline.hpp > > I have performed an automatic search and replace to rename > CardTableModRefBS to CardTableBarrierSet, and manually clicked through > everything to make sure that space alignment, comments, etc all look as > they should. I have also checked that the SA agent is not affected and > run hs-tier1 on this changeset. > > Despite the fix being a bit lengthy, I would still prefer to consider it > trivial (unless somebody opposes that) as I am only renaming a class in > a rather mechanical way. Hope we can agree about that. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199604/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199604 > > Testing: > mach5 hs-tier1-3 > > Thanks, > /Erik From erik.osterlund at oracle.com Fri Mar 16 15:10:15 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 16 Mar 2018 16:10:15 +0100 Subject: RFR: 8199604: Rename CardTableModRefBS to CardTableBarrierSet In-Reply-To: References: <5AABB5BE.50704@oracle.com> Message-ID: <5AABDE57.6030505@oracle.com> Hi Stefan, Thank you for the review! Thanks, /Erik On 2018-03-16 16:06, Stefan Karlsson wrote: > Looks good. > > StefanK > > On 2018-03-16 13:17, Erik ?sterlund wrote: >> Hi, >> >> After collapsing barrier sets, it seems like CardTableModRefBS is the >> only barrier set that encodes "ModRef" in its name and uses the "BS" >> suffix instead of spelling out "BarrierSet". >> >> This is a weird inconsistency that I have solved by renaming >> CardTableModRefBS to CardTableBarrierSet. >> >> Files were renamed like this: >> parCardTableModRefBS.cpp => cmsCardTable.cpp (this file contains a >> bunch of implementations for the card table used by CMS and is in the >> CMS directory) >> cardTableModRefBS.cpp => cardTableBarrierSet.cpp >> cardTableModRefBS.hpp => cardTableBarrierSet.hpp >> cardTableModRefBS.inline.hpp => cardTableBarrierSet.inline.hpp >> >> I have performed an automatic search and replace to rename >> CardTableModRefBS to CardTableBarrierSet, and manually clicked >> through everything to make sure that space alignment, comments, etc >> all look as they should. I have also checked that the SA agent is not >> affected and run hs-tier1 on this changeset. >> >> Despite the fix being a bit lengthy, I would still prefer to consider >> it trivial (unless somebody opposes that) as I am only renaming a >> class in a rather mechanical way. Hope we can agree about that. 
>> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199604/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199604 >> >> Testing: >> mach5 hs-tier1-3 >> >> Thanks, >> /Erik From stefan.karlsson at oracle.com Fri Mar 16 15:08:28 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 16:08:28 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <04b5248a-59bc-4f48-17b7-cdaa19eada36@oracle.com> References: <04b5248a-59bc-4f48-17b7-cdaa19eada36@oracle.com> Message-ID: <9feb44cc-745d-bb48-d151-5dfb8d5ae3f8@oracle.com> Thanks Coleen and Erik for the reviews! StefanK On 2018-03-16 16:04, coleen.phillimore at oracle.com wrote: > This looks great. > Coleen > > On 3/16/18 10:39 AM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to use HeapAccess<>::oop_load instead of >> oopDesc::load_decode_heap_oop when loading oops from static fields in >> javaClasses.cpp: >> >> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199739 >> >> It's necessary to use HeapAccess<>::oop_load to inject load barriers >> for GCs that need them. >> >> Thanks, >> StefanK > From rkennke at redhat.com Fri Mar 16 15:10:05 2018 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 16 Mar 2018 16:10:05 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: <8e16e2f4-eab7-3699-517d-36226220d82d@redhat.com> Am 16.03.2018 um 15:39 schrieb Stefan Karlsson: > Hi all, > > Please review this patch to use HeapAccess<>::oop_load instead of > oopDesc::load_decode_heap_oop when loading oops from static fields in > javaClasses.cpp: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199739 > > It's necessary to use HeapAccess<>::oop_load to inject load barriers for > GCs that need them. > > Thanks, > StefanK The change looks good. I haven't checked: are there any stores in there that also need to go through HeapAccess? Thanks, Roman From ChrisPhi at LGonQn.Org Fri Mar 16 15:14:11 2018 From: ChrisPhi at LGonQn.Org (Chris Phillips) Date: Fri, 16 Mar 2018 11:14:11 -0400 Subject: RFR: aarch32: ARM 32 build broken after 8165929 In-Reply-To: <2c33dcc4-0521-ab12-a31a-6fdc7a67b9ee@oracle.com> References: <1521150037.2955.5.camel@gmail.com> <2c33dcc4-0521-ab12-a31a-6fdc7a67b9ee@oracle.com> Message-ID: Hi, On 15/03/18 05:55 PM, coleen.phillimore at oracle.com wrote: > > Thank you for fixing this!? I don't know arm32 assembly but I'm happy > that you've fixed the inconsistency. > > Coleen > > On 3/15/18 5:40 PM, Edward Nevill wrote: >> Hi, >> >> Please review the following webrev >> >> Bugid: https://bugs.openjdk.java.net/browse/JDK-8199243 >> Webrev: http://cr.openjdk.java.net/~enevill/8199243/webrev.00 >> >> The ARM 32 build is broken with multiple errors of the form >> >> /work/ed/arm32/jdk/src/hotspot/os_cpu/linux_arm/copy_linux_arm.inline.hpp:33:54: >> error: invalid conversion from 'const HeapWord*' to 'HeapWord*' >> [-fpermissive] >> ??? _Copy_conjoint_words(to, from, count * HeapWordSize); >> >> The problem was introduced by change 8165929 >> >> # HG changeset patch >> # User coleenp >> # Date 1518182622 18000 >> #????? Fri Feb 09 08:23:42 2018 -0500 >> # Node ID 950c35ea6237afd834d02345a2878e5dc30750e0 >> # Parent? 
f323537c9b75444578c75d348fa2e5be81532d3e >> 8165929: Constify arguments of Copy methods >> Reviewed-by: hseigel, kbarrett >> >> This change added 'const' to the 'from' arguments in various Copy >> functions, for example >> >> -? void _Copy_conjoint_words(HeapWord* from, HeapWord* to, size_t count); >> -? void _Copy_disjoint_words(HeapWord* from, HeapWord* to, size_t count); >> +? void _Copy_conjoint_words(const HeapWord* from, HeapWord* to, >> size_t count); >> +? void _Copy_disjoint_words(const HeapWord* from, HeapWord* to, >> size_t count); >> >> The problem in the ARM 32 port occurs in code like the following >> >> static void pd_conjoint_words(const HeapWord* from, HeapWord* to, >> size_t count) { >> #ifdef AARCH64 >> ?? _Copy_conjoint_words(from, to, count * HeapWordSize); >> #else >> ??? // NOTE: _Copy_* functions on 32-bit ARM expect "to" and "from" >> arguments in reversed order >> ?? _Copy_conjoint_words(to, from, count * HeapWordSize); >> #endif >> } >> >> The assembler implementation of the Copy functions in ARM 32 actually >> copies in the wrong order. Ie it copies from 'to' and to 'from'. >> >> Looking at the assembler implementation it says the following >> >> ??????? # Support for void Copy::conjoint_words(void* from, >> ???????? #?????????????????????????????????????? void* to, >> ???????? #?????????????????????????????????????? size_t count) >> _Copy_conjoint_words: >> ???????? stmdb??? sp!, {r3 - r9, ip} >> >> IE. It implies that it copies from 'from' and to 'to' in the comment, >> or in other words copies from memory pointed to by 'R0' to memory >> pointed to by 'R1' but in fact the implementation does the copy the >> other way around! >> >> The quick and dirty fix would be to apply a (const *) cast to the 'to' >> arguments for ARM 32. However, I think this is just too broken and >> misleading. >> >> My proposal is to fix this properly and have the assembler copy the >> correct way, and then delete the nasty conditionalisation on the >> calls. I also propose using symbolic names 'from' and 'to' rather than >> 'r0' and 'r1'. >> >> Many thanks, >> Ed. >> > > > Not an official reviewer, but the Arm32 code looks OK. Cheers, Chris From paul.sandoz at oracle.com Fri Mar 16 16:21:32 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Fri, 16 Mar 2018 09:21:32 -0700 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> Message-ID: <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> Hi Ian, Thomas, Some background on the bulk copying for byte buffers after talking with Mikael who worked on these changes a few years ago. Please correct the following if needed as our memory is hazy :-) IIRC at the time we felt this was a reasonable thing to do because C2 did not strip mine the loops for the bulk copying of large Java arrays i.e. the issue was there anyway for more common cases. However, i believe that may no longer be the so in some cases after Roland implemented loop strip mining in C2 [1]. So we should go back and revisit/check the current support in buffers and Java arrays (System.arraycopy etc). (This is also something we need to consider if we modify buffers to support capacities larger than Integer.MAX_VALUE. 
Also connects with Project Panama.) If Thomas has not done so or does not plan to i can log an issue for you. Paul. [1] https://bugs.openjdk.java.net/browse/JDK-8186027 > On Mar 15, 2018, at 10:49 AM, Ian Rogers wrote: > > +hotspot-gc-dev > > On Thu, Mar 15, 2018 at 1:25 AM Thomas Schatzl > wrote: > >> Hi, >> >> On Thu, 2018-03-15 at 01:00 +0000, Ian Rogers wrote: >>> An old data point on how large a critical region should be comes from >>> java.nio.Bits. In JDK 9 the code migrated into unsafe, but in JDK 8 >>> the copies within a critical region were bound at most copying 1MB: >>> http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/ >>> native/java/nio/Bits.c#l88 This is inconsistent with Deflater and >>> ObjectOutputStream which both allow unlimited arrays and thereby >>> critical region sizes. >>> >>> In JDK 9 the copies starve the garbage collector in nio Bits too as >>> there is no 1MB limit to the copy sizes: >>> http://hg.openjdk.java.net/jdk/jdk/rev/f70e100d3195 >>> which came from: >>> https://bugs.openjdk.java.net/browse/JDK-8149596 >>> >>> Perhaps this is a regression not demonstrated due to the testing >>> challenge. >>> [...] >>> It doesn't seem unreasonable to have the loops for the copies occur >>> in 1MB chunks but JDK-8149596 lost this and so I'm confused on what >>> the HotSpot stand point is. >> >> Please file a bug (seems to be a core-libs/java.nio regression?), >> preferably with some kind of regression test. Also file enhancements (I >> would guess) for the other cases allowing unlimited arrays. >> > > I don't have perms to file bugs there's some catch-22 scenario in getting > the permissions. Happy to have a bug filed or to file were that not an > issue. Happy to create a test case but can't see any others for TTSP > issues. This feels like a potential use case for jmh, perhaps run the > benchmark well having a separate thread run GC bench. > > Should there be a bug to add, in debug mode, a TTSP watcher thread whose > job it is to bring "random" threads into safepoints and report on tardy > ones? > Should there be a bug to warn on being in a JNI critical for more than just > a short period? > Seems like there should be a bug on Unsafe.copyMemory and > Unsafe.copySwapMemory having TTSP issues. > Seems like there should be a bug on all uses of critical that don't chunk > their critical region work based on some bound (like 1MB chunks for nio > Bits)? How are these bounds set? A past reference that I've lost is in > having the bound be the equivalent of 65535 bytecodes due to the > expectation of GC work at least once in a method or on a loop backedge - I > thought this was in a spec somewhere but now I can't find it. The bytecode > size feels as arbitrary as 1MB, a time period would be better but that can > depend on the kind of GC you want as delays with concurrent GC mean more > than non-concurrent. Clearly the chunk size shouldn't just be 0, but this > appears to currently be the norm in the JDK. 
> > The original reason for coming here was a 140x slow down in -Xcheck:jni in > Deflater.deflate There are a few options there that its useful to enumerate: > 1) rewrite in Java but there are correctness and open source related issues > 2) remove underflow/overflow protection from critical arrays (revert > JDK-6311046 > or perhaps bound protection to arrays of a particular small size) - this > removes checking and doesn't deal with TTSP > 3) add a critical array slice API to JNI so that copies with -Xcheck:jni > aren't unbounded (martinrb@ proposed this) - keeps checking but doesn't > deal with TTSP > 4) rewrite primitive array criticals with GetArrayRegion as O(n) beats the > "silent killer" TTSP (effectively deprecate the critical APIs) > > In general (ie not just the deflate case) I think (1) is the most > preferable. (2) and (3) both have TTSP issues. (4) isn't great performance > wise, which motivates more use of approach (1), but I think deprecating > criticals may just be the easiest and sanest way forward. I think that > discussion is worth having on an e-mail thread rather than a bug. > > >> Long TTSP is a performance bug as any other. >> >>> In a way criticals are better than unsafe as they may >>> pin the memory and not starve GC, which shenandoah does. >> >> (Region based) Object pinning has its own share of problems: >> >> - only (relatively) easily implemented in region based collectors >> >> - may slow down pause a bit in presence of pinned regions/objects (for >> non-concurrent copying collectors) >> >> - excessive use of pinning may cause OOME and VM exit probably earlier >> than the gc locker. GC locker seems to provide a more gradual >> degradation. E.g. pinning regions typically makes these regions >> unavailable for allocation. >> I.e. you still should not use it for many, very long living objects. >> Of course this somewhat depends on the sophistication of the >> implementation. >> >> I think region based pinning would be a good addition to other >> collectors than Shenandoah too. It has been on our minds for a long >> time, but there are so many other more important issues :), so of >> course we are eager to see contributions in this area. ;) >> >> If you are interested on working on this, please ping us on hotspot-gc- >> dev for implementation ideas to get you jump-started. >> >> Thanks, >> Thomas >> > > I'd rather deprecate criticals than build upon the complexity, but I'm very > glad this is a concern. > > Thanks, > Ian From kim.barrett at oracle.com Fri Mar 16 16:54:29 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 16 Mar 2018 12:54:29 -0400 Subject: RFR: 8199728: Remove oopDesc::is_scavengable In-Reply-To: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> References: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> Message-ID: <5CD44140-5F9A-446D-BC92-AF8BFF8FE781@oracle.com> > On Mar 16, 2018, at 10:29 AM, Stefan Karlsson wrote: > > Hi all, > > Please review this trivial patch to replace oopDesc::is_scavengable() usages with Universe::heap()->is_scavengable(...). > > http://cr.openjdk.java.net/~stefank/8199728/webrev.01/ > > This helps break an include dependency between oop.inline.hpp and collectedHeap.inline.hpp. I'll remove the collectedHeap.inline.hpp include in a separate RFE, since doing that requires update to many header files that transitively included collectedHeap.inline.hpp and its include files. > > Thanks, > StefanK Looks good. 
From irogers at google.com Fri Mar 16 17:19:58 2018 From: irogers at google.com (Ian Rogers) Date: Fri, 16 Mar 2018 17:19:58 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> Message-ID: Thanks Paul, very interesting. On Fri, Mar 16, 2018 at 9:21 AM Paul Sandoz wrote: > Hi Ian, Thomas, > > Some background on the bulk copying for byte buffers after talking with > Mikael who worked on these changes a few years ago. > > Please correct the following if needed as our memory is hazy :-) > > IIRC at the time we felt this was a reasonable thing to do because C2 did > not strip mine the loops for the bulk copying of large Java arrays i.e. the > issue was there anyway for more common cases. However, i believe that may > no longer be the so in some cases after Roland implemented loop strip > mining in C2 [1]. So we should go back and revisit/check the current > support in buffers and Java arrays (System.arraycopy etc). > The C2 issue is a well known TTSP issue :-) Its great that there is a strip mining optimization, revisiting the bulk copies would be great! > (This is also something we need to consider if we modify buffers to > support capacities larger than Integer.MAX_VALUE. Also connects with > Project Panama.) > > If Thomas has not done so or does not plan to i can log an issue for you. > That'd be great. I wonder if identifying more TTSP issues should also be a bug. Its interesting to observe that overlooking TTSP in C2 motivated the Unsafe.copyMemory change permitting a fresh TTSP issue. If TTSP is a 1st class issue then maybe we can deprecate JNI critical regions to support that effort :-) Thanks, Ian > Paul. > > > [1] https://bugs.openjdk.java.net/browse/JDK-8186027 > > On Mar 15, 2018, at 10:49 AM, Ian Rogers wrote: > > +hotspot-gc-dev > > On Thu, Mar 15, 2018 at 1:25 AM Thomas Schatzl > wrote: > > Hi, > > On Thu, 2018-03-15 at 01:00 +0000, Ian Rogers wrote: > > An old data point on how large a critical region should be comes from > java.nio.Bits. In JDK 9 the code migrated into unsafe, but in JDK 8 > the copies within a critical region were bound at most copying 1MB: > http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/ > native/java/nio/Bits.c#l88 This is inconsistent with Deflater and > ObjectOutputStream which both allow unlimited arrays and thereby > critical region sizes. > > In JDK 9 the copies starve the garbage collector in nio Bits too as > there is no 1MB limit to the copy sizes: > http://hg.openjdk.java.net/jdk/jdk/rev/f70e100d3195 > which came from: > https://bugs.openjdk.java.net/browse/JDK-8149596 > > Perhaps this is a regression not demonstrated due to the testing > challenge. > [...] > It doesn't seem unreasonable to have the loops for the copies occur > in 1MB chunks but JDK-8149596 lost this and so I'm confused on what > the HotSpot stand point is. > > > Please file a bug (seems to be a core-libs/java.nio regression?), > preferably with some kind of regression test. Also file enhancements (I > would guess) for the other cases allowing unlimited arrays. 
> > > I don't have perms to file bugs there's some catch-22 scenario in getting > the permissions. Happy to have a bug filed or to file were that not an > issue. Happy to create a test case but can't see any others for TTSP > issues. This feels like a potential use case for jmh, perhaps run the > benchmark well having a separate thread run GC bench. > > Should there be a bug to add, in debug mode, a TTSP watcher thread whose > job it is to bring "random" threads into safepoints and report on tardy > ones? > Should there be a bug to warn on being in a JNI critical for more than just > a short period? > Seems like there should be a bug on Unsafe.copyMemory and > Unsafe.copySwapMemory having TTSP issues. > Seems like there should be a bug on all uses of critical that don't chunk > their critical region work based on some bound (like 1MB chunks for nio > Bits)? How are these bounds set? A past reference that I've lost is in > having the bound be the equivalent of 65535 bytecodes due to the > expectation of GC work at least once in a method or on a loop backedge - I > thought this was in a spec somewhere but now I can't find it. The bytecode > size feels as arbitrary as 1MB, a time period would be better but that can > depend on the kind of GC you want as delays with concurrent GC mean more > than non-concurrent. Clearly the chunk size shouldn't just be 0, but this > appears to currently be the norm in the JDK. > > The original reason for coming here was a 140x slow down in -Xcheck:jni in > Deflater.deflate There are a few options there that its useful to > enumerate: > 1) rewrite in Java but there are correctness and open source related issues > 2) remove underflow/overflow protection from critical arrays (revert > JDK-6311046 > or perhaps bound protection to arrays of a particular small size) - this > removes checking and doesn't deal with TTSP > 3) add a critical array slice API to JNI so that copies with -Xcheck:jni > aren't unbounded (martinrb@ proposed this) - keeps checking but doesn't > deal with TTSP > 4) rewrite primitive array criticals with GetArrayRegion as O(n) beats the > "silent killer" TTSP (effectively deprecate the critical APIs) > > In general (ie not just the deflate case) I think (1) is the most > preferable. (2) and (3) both have TTSP issues. (4) isn't great performance > wise, which motivates more use of approach (1), but I think deprecating > criticals may just be the easiest and sanest way forward. I think that > discussion is worth having on an e-mail thread rather than a bug. > > > Long TTSP is a performance bug as any other. > > In a way criticals are better than unsafe as they may > pin the memory and not starve GC, which shenandoah does. > > > (Region based) Object pinning has its own share of problems: > > - only (relatively) easily implemented in region based collectors > > - may slow down pause a bit in presence of pinned regions/objects (for > non-concurrent copying collectors) > > - excessive use of pinning may cause OOME and VM exit probably earlier > than the gc locker. GC locker seems to provide a more gradual > degradation. E.g. pinning regions typically makes these regions > unavailable for allocation. > I.e. you still should not use it for many, very long living objects. > Of course this somewhat depends on the sophistication of the > implementation. > > I think region based pinning would be a good addition to other > collectors than Shenandoah too. 
It has been on our minds for a long > time, but there are so many other more important issues :), so of > course we are eager to see contributions in this area. ;) > > If you are interested on working on this, please ping us on hotspot-gc- > dev for implementation ideas to get you jump-started. > > Thanks, > Thomas > > > I'd rather deprecate criticals than build upon the complexity, but I'm very > glad this is a concern. > > Thanks, > Ian > > > From gromero at linux.vnet.ibm.com Fri Mar 16 17:20:30 2018 From: gromero at linux.vnet.ibm.com (Gustavo Romero) Date: Fri, 16 Mar 2018 14:20:30 -0300 Subject: [PING] Re: RFR(S): 8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3 In-Reply-To: <833ca502-c2b7-76ce-cac7-dcf08caf247a@oracle.com> References: <5AA725AA.7010202@linux.vnet.ibm.com> <3C4B8012-284F-4F47-B99F-ACB0056198C1@amazon.com> <282ee7b0-eb29-a4e2-1aff-4d4c369c08c6@oracle.com> <7116aeef-f2e3-c84f-a509-835b71195d10@linux.vnet.ibm.com> <833ca502-c2b7-76ce-cac7-dcf08caf247a@oracle.com> Message-ID: On 03/15/2018 09:49 PM, David Holmes wrote: > Looks like I blinked first. :) > > I'll sponsor this for you Gustavo. haha thank you so much again David :) > In the future if any of your colleagues are OpenJDK committers you could get them to sponsor you, after using submit-hs repo for testing. okay sure! Cheers, Gustavo > Cheers, > David > > On 15/03/2018 11:11 PM, Gustavo Romero wrote: >> Hi, >> >> Could somebody please sponsor the following small change? >> >> bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >> webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ >> >> It's already reviewed by two Reviewers: dholmes and phh. >> >> Thank you! >> >> >> Regards, >> Gustavo >> >> On 03/14/2018 12:07 PM, Gustavo Romero wrote: >>> Hi David, >>> >>> On 03/13/2018 09:05 PM, David Holmes wrote: >>>>> bug?? : https://bugs.openjdk.java.net/browse/JDK-8198794 >>>>> webrev: http://cr.openjdk.java.net/~gromero/8198794/v1/ >>>> >>>> Seems okay. Couple of grammar nits with the mega comment: >>>> >>>> // it can exist nodes >>>> >>>> it -> there >>>> >>>> // are besides that non-contiguous. >>>> >>>> "are besides that" -> "may be" >>> >>> Fixed. >>> >>> webrev: http://cr.openjdk.java.net/~gromero/8198794/v2/ >>> >>> Thanks a lot for reviewing it. >>> >>> >>> Could somebody sponsor that change please? >>> >>> >>> Regards, >>> Gustavo >>> >> > From volker.simonis at gmail.com Fri Mar 16 18:38:36 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 16 Mar 2018 19:38:36 +0100 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) In-Reply-To: <47783ed4-a8de-e773-3605-775d57266723@oracle.com> References: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> <47783ed4-a8de-e773-3605-775d57266723@oracle.com> Message-ID: Hi Coleen, Stefan, I agree that Stefans proposal looks very attractive! I've further simplified it by removing the AllocateHeap versions which take a nothrow_t argument. We can simlpy call AllocateHeap with the corresponding AllocFailStrategy instead. Also we still need to fix the NOINLINE macro for xlC: http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698.v2/ I've build on Linux/x86, Solaris/SPARC and AIX. The NMT tests are still passing. Currently the submit-hs job is underway. What do you think? Regards, Volker On Fri, Mar 16, 2018 at 3:10 PM, wrote: > > Hi, I've looked at both fixes and I really like Stefan's version. 
Having new > and delete to be trivially inlined in allocation.hpp solves a lot of > problems that I've found when removing more .inline.hpp includes from .hpp > files (hashtable.inline.hpp from systemDictionary.hpp, I think). > > Also, rather than NOINLINE, it's a lot nicer to put this in the .cpp file. > > Thanks, > Coleen > > > On 3/16/18 4:18 AM, Stefan Karlsson wrote: >> >> Hi Volker, >> >> This seems fine to be. >> >> An alternative fix for the allocation.inline.hpp problem would be to move >> the AllocateHeap code into allocation.cpp, and get rid of the NOINLINE >> usage. >> >> I've created a prototype for that: >> http://cr.openjdk.java.net/~stefank/8199698/prototypeAllocateHeapInCpp/ >> >> I've visually inspected the output from NMT and it seems to give correct >> stack traces. For example: >> >> [0x00007f4bffa10eff] ObjectSynchronizer::omAlloc(Thread*)+0x3cf >> [0x00007f4bffa1244c] ObjectSynchronizer::inflate(Thread*, oopDesc*, >> ObjectSynchronizer::InflateCause)+0x8c >> [0x00007f4bffa1425a] ObjectSynchronizer::FastHashCode(Thread*, >> oopDesc*)+0x7a >> [0x00007f4bff616142] JVM_IHashCode+0x52 >> (malloc=4144KB type=Internal #129) >> >> Thanks, >> StefanK >> >> On 2018-03-15 18:20, Volker Simonis wrote: >>> >>> Hi, >>> >>> can I please have a review for the following small fix: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ >>> https://bugs.openjdk.java.net/browse/JDK-8199698 >>> >>> The fix is actually trivial: just defining the corresponding >>> "NOINLINE" macro for xlC. Unfortunately it's syntax requirements are a >>> little different for method declarations, compared to the other >>> platforms (it has to be placed AFTER the method declarator instead of >>> BEFORE it). Fortunately, there are no differences for method >>> definitions, so we can safely move the NOINLINE attributes from the >>> method declarations in allocation.hpp to the method definitions in >>> allocation.inline.hpp. >>> >>> Thank you and best regards, >>> Volker >>> >>> PS: or true C++ enthusiasts I've also included the whole story of why >>> this happens and why it happens just now, right after 8199275 :) >>> >>> Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the >>> inclusion of "allocation.inline.hpp" in some .hpp files (e.g. >>> constantPool.hpp) by "allocation.hpp". >>> >>> "allocation.inline.hpp" contains not only the definition of some >>> inline methods (as the name implies) but also the definition of some >>> template methods (notably the various CHeapObj<>::operator new() >>> versions). >>> >>> Template functions are on orthogonal concept with regard to inline >>> functions, but they share on implementation communality: at their call >>> sites, the compiler absolutely needs the corresponding function >>> definition. Otherwise it can either not inline the corresponding >>> function in the case of inline functions or it won't even be able to >>> create the corresponding instantiation in the case of a template >>> function. >>> >>> Because of this reason, template functions and methods are defined in >>> their corresponding .inline.hpp files in HotSpot (even if they are not >>> subject to inlining). This is especially true for the before mentioned >>> CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" >>> in allocation.hpp but defined in allocation.inline.hpp. 
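A stripped-down illustration of the mechanism described in the quoted paragraphs above -- invented names, not the real allocation.hpp/allocation.inline.hpp. The point is only that a template member declared in the .hpp but defined in the .inline.hpp must be instantiated by some translation unit that actually sees that definition.

--- CUT ---
// myobj.hpp (sketch): declaration only.
#include <cstddef>
#include <cstdlib>

template <int F>
class MyObj {
 public:
  void* operator new(size_t size) throw();  // defined in the .inline.hpp
};

// myobj.inline.hpp (sketch): the template definition.
template <int F>
void* MyObj<F>::operator new(size_t size) throw() {
  return ::malloc(size);
}

// caller.cpp (sketch): includes only myobj.hpp, not myobj.inline.hpp.
struct Thing : public MyObj<1> { int x; };

Thing* make_thing() {
  // Emits a call to MyObj<1>::operator new(size_t). If no translation unit
  // that includes the .inline.hpp ends up instantiating that operator out of
  // line, the link fails with an undefined reference -- the xlC situation
  // described above once the transitive .inline.hpp include went away.
  return new Thing();
}
--- CUT ---

(Shown as one listing for brevity; the comments mark where the .hpp/.inline.hpp split would be.)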
>>> >>> Now every call site of these CHeapOb<>::new() operators which only >>> includes "allocation.hpp" will emit a call to the corresponding >>> instantiation of the CHeapObj<>:: new operator, but wont be able to >>> actually create that instantiation (simply because it doesn't see the >>> corresponding definition in allocation.inline.hpp). On the other side, >>> call sites of a CHeapObj<>:: new operator which include >>> allocation.inline.hpp will instantiate the required version in the >>> current compilation unit (or even inline that method instance if it is >>> not flagged as "NOINLINE"). >>> >>> If a compiler doesn't honor the "NOINLINE" attribute (or has an empty >>> definition for the NOINLIN macro like xlC), he can potentially inline >>> all the various template instances of CHeapObj<>:: new at all call >>> sites, if their implementation is available. This is exactly what has >>> happened on AIX/xlC before change 8199275 with the effect that the >>> resulting object files contained no single instance of the >>> corresponding new operators. >>> >>> After change 8199275, the template definition of the CHeapObj<>:: new >>> operators aren't available any more at all call sites (because the >>> inclusion of allocation.inline.hpp was removed from some other .hpp >>> files which where included transitively before). As a result, the xlC >>> compiler will emit calls to the corresponding instantiations instead >>> of inlining them. But at all other call sites of the corresponding >>> operators, the operator instantiations are still inlined (because xlC >>> does not support "NOINLINE"), so we end up with link errors in >>> libjvm.so because of missing CHeapObj<>::new instances. >>> >>> As a general rule of thumb, we should always make template method >>> definitions available at all call sites, by placing them into >>> corresponding .inline.hpp files and including them appropriately. >>> Otherwise, we might end up without the required instantiations at link >>> time. >>> >>> Unfortunately, there's no compile time check to enforce this >>> requirement. But we can misuse the "inline" keyword here, by >>> attributing template functions/methods as "inline". This way, the >>> compiler will warn us, if a template definition isn't available at a >>> specific call site. Of course this trick doesn't work if we >>> specifically want to define template functions/methods which shouldn't >>> be inlined, like in the current case :) >>> > From coleen.phillimore at oracle.com Fri Mar 16 19:03:24 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 16 Mar 2018 15:03:24 -0400 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) In-Reply-To: References: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> <47783ed4-a8de-e773-3605-775d57266723@oracle.com> Message-ID: <9a1b0bb5-854e-a728-21d4-907eda854a21@oracle.com> Yes, this change looks really good. Thanks! Coleen On 3/16/18 2:38 PM, Volker Simonis wrote: > Hi Coleen, Stefan, > > I agree that Stefans proposal looks very attractive! I've further > simplified it by removing the AllocateHeap versions which take a > nothrow_t argument. We can simlpy call AllocateHeap with the > corresponding AllocFailStrategy instead. Also we still need to fix the > NOINLINE macro for xlC: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698.v2/ > > I've build on Linux/x86, Solaris/SPARC and AIX. The NMT tests are > still passing. Currently the submit-hs job is underway. 
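The shape being agreed on here -- operator new/delete kept trivial in the header, with the real allocation routed through an out-of-line function in the .cpp -- looks roughly like the following schematic. These are invented names, not the actual webrev.

--- CUT ---
#include <cstddef>
#include <cstdlib>

// ".cpp side": out of line, so NMT gets a stable frame to attribute the
// allocation to and no NOINLINE attribute is needed anywhere.
void* AllocateHeapSketch(size_t size);
void  FreeHeapSketch(void* p);

// ".hpp side": trivially inlinable forwarders; harmless to inline everywhere.
class CHeapObjSketch {
 public:
  void* operator new(size_t size) throw() { return AllocateHeapSketch(size); }
  void  operator delete(void* p)          { FreeHeapSketch(p); }
};

void* AllocateHeapSketch(size_t size) {
  // In the real code this is where the NMT bookkeeping and the
  // out-of-memory handling live.
  return ::malloc(size);
}

void FreeHeapSketch(void* p) {
  ::free(p);
}
--- CUT ---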
> > What do you think? > > Regards, > Volker > > On Fri, Mar 16, 2018 at 3:10 PM, wrote: >> Hi, I've looked at both fixes and I really like Stefan's version. Having new >> and delete to be trivially inlined in allocation.hpp solves a lot of >> problems that I've found when removing more .inline.hpp includes from .hpp >> files (hashtable.inline.hpp from systemDictionary.hpp, I think). >> >> Also, rather than NOINLINE, it's a lot nicer to put this in the .cpp file. >> >> Thanks, >> Coleen >> >> >> On 3/16/18 4:18 AM, Stefan Karlsson wrote: >>> Hi Volker, >>> >>> This seems fine to be. >>> >>> An alternative fix for the allocation.inline.hpp problem would be to move >>> the AllocateHeap code into allocation.cpp, and get rid of the NOINLINE >>> usage. >>> >>> I've created a prototype for that: >>> http://cr.openjdk.java.net/~stefank/8199698/prototypeAllocateHeapInCpp/ >>> >>> I've visually inspected the output from NMT and it seems to give correct >>> stack traces. For example: >>> >>> [0x00007f4bffa10eff] ObjectSynchronizer::omAlloc(Thread*)+0x3cf >>> [0x00007f4bffa1244c] ObjectSynchronizer::inflate(Thread*, oopDesc*, >>> ObjectSynchronizer::InflateCause)+0x8c >>> [0x00007f4bffa1425a] ObjectSynchronizer::FastHashCode(Thread*, >>> oopDesc*)+0x7a >>> [0x00007f4bff616142] JVM_IHashCode+0x52 >>> (malloc=4144KB type=Internal #129) >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-15 18:20, Volker Simonis wrote: >>>> Hi, >>>> >>>> can I please have a review for the following small fix: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199698 >>>> >>>> The fix is actually trivial: just defining the corresponding >>>> "NOINLINE" macro for xlC. Unfortunately it's syntax requirements are a >>>> little different for method declarations, compared to the other >>>> platforms (it has to be placed AFTER the method declarator instead of >>>> BEFORE it). Fortunately, there are no differences for method >>>> definitions, so we can safely move the NOINLINE attributes from the >>>> method declarations in allocation.hpp to the method definitions in >>>> allocation.inline.hpp. >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> PS: or true C++ enthusiasts I've also included the whole story of why >>>> this happens and why it happens just now, right after 8199275 :) >>>> >>>> Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the >>>> inclusion of "allocation.inline.hpp" in some .hpp files (e.g. >>>> constantPool.hpp) by "allocation.hpp". >>>> >>>> "allocation.inline.hpp" contains not only the definition of some >>>> inline methods (as the name implies) but also the definition of some >>>> template methods (notably the various CHeapObj<>::operator new() >>>> versions). >>>> >>>> Template functions are on orthogonal concept with regard to inline >>>> functions, but they share on implementation communality: at their call >>>> sites, the compiler absolutely needs the corresponding function >>>> definition. Otherwise it can either not inline the corresponding >>>> function in the case of inline functions or it won't even be able to >>>> create the corresponding instantiation in the case of a template >>>> function. >>>> >>>> Because of this reason, template functions and methods are defined in >>>> their corresponding .inline.hpp files in HotSpot (even if they are not >>>> subject to inlining). 
This is especially true for the before mentioned >>>> CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" >>>> in allocation.hpp but defined in allocation.inline.hpp. >>>> >>>> Now every call site of these CHeapOb<>::new() operators which only >>>> includes "allocation.hpp" will emit a call to the corresponding >>>> instantiation of the CHeapObj<>:: new operator, but wont be able to >>>> actually create that instantiation (simply because it doesn't see the >>>> corresponding definition in allocation.inline.hpp). On the other side, >>>> call sites of a CHeapObj<>:: new operator which include >>>> allocation.inline.hpp will instantiate the required version in the >>>> current compilation unit (or even inline that method instance if it is >>>> not flagged as "NOINLINE"). >>>> >>>> If a compiler doesn't honor the "NOINLINE" attribute (or has an empty >>>> definition for the NOINLIN macro like xlC), he can potentially inline >>>> all the various template instances of CHeapObj<>:: new at all call >>>> sites, if their implementation is available. This is exactly what has >>>> happened on AIX/xlC before change 8199275 with the effect that the >>>> resulting object files contained no single instance of the >>>> corresponding new operators. >>>> >>>> After change 8199275, the template definition of the CHeapObj<>:: new >>>> operators aren't available any more at all call sites (because the >>>> inclusion of allocation.inline.hpp was removed from some other .hpp >>>> files which where included transitively before). As a result, the xlC >>>> compiler will emit calls to the corresponding instantiations instead >>>> of inlining them. But at all other call sites of the corresponding >>>> operators, the operator instantiations are still inlined (because xlC >>>> does not support "NOINLINE"), so we end up with link errors in >>>> libjvm.so because of missing CHeapObj<>::new instances. >>>> >>>> As a general rule of thumb, we should always make template method >>>> definitions available at all call sites, by placing them into >>>> corresponding .inline.hpp files and including them appropriately. >>>> Otherwise, we might end up without the required instantiations at link >>>> time. >>>> >>>> Unfortunately, there's no compile time check to enforce this >>>> requirement. But we can misuse the "inline" keyword here, by >>>> attributing template functions/methods as "inline". This way, the >>>> compiler will warn us, if a template definition isn't available at a >>>> specific call site. Of course this trick doesn't work if we >>>> specifically want to define template functions/methods which shouldn't >>>> be inlined, like in the current case :) >>>> From stefan.karlsson at oracle.com Fri Mar 16 19:51:36 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 16 Mar 2018 20:51:36 +0100 Subject: RFR(S): 8199698: Change 8199275 breaks template instantiation for xlC (and potentially other compliers) In-Reply-To: References: <96c76fce-d4f1-7567-2198-f12c85dc78b9@oracle.com> <47783ed4-a8de-e773-3605-775d57266723@oracle.com> Message-ID: On 2018-03-16 19:38, Volker Simonis wrote: > Hi Coleen, Stefan, > > I agree that Stefans proposal looks very attractive! I've further > simplified it by removing the AllocateHeap versions which take a > nothrow_t argument. We can simlpy call AllocateHeap with the > corresponding AllocFailStrategy instead. 
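The simplification just quoted -- dropping the nothrow_t overloads in favour of passing a failure strategy -- boils down to call sites of this form. It assumes the AllocateHeap(size, MEMFLAGS, AllocFailType) overload and the mtInternal flag as found in the HotSpot sources of this period, and is not compilable outside the VM.

--- CUT ---
// Hypothetical caller inside HotSpot:
static char* try_allocate_buffer(size_t size) {
  // RETURN_NULL asks AllocateHeap to hand back NULL on failure instead of
  // calling vm_exit_out_of_memory(), which is what the nothrow_t overloads
  // were providing before.
  char* buf = AllocateHeap(size, mtInternal, AllocFailStrategy::RETURN_NULL);
  return buf;  // may be NULL; the caller must check
}
--- CUT ---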
Also we still need to fix the > NOINLINE macro for xlC: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698.v2/ > > I've build on Linux/x86, Solaris/SPARC and AIX. The NMT tests are > still passing. Currently the submit-hs job is underway. > > What do you think? Looks good to me! Thanks, StefanK > > Regards, > Volker > > On Fri, Mar 16, 2018 at 3:10 PM, wrote: >> Hi, I've looked at both fixes and I really like Stefan's version. Having new >> and delete to be trivially inlined in allocation.hpp solves a lot of >> problems that I've found when removing more .inline.hpp includes from .hpp >> files (hashtable.inline.hpp from systemDictionary.hpp, I think). >> >> Also, rather than NOINLINE, it's a lot nicer to put this in the .cpp file. >> >> Thanks, >> Coleen >> >> >> On 3/16/18 4:18 AM, Stefan Karlsson wrote: >>> Hi Volker, >>> >>> This seems fine to be. >>> >>> An alternative fix for the allocation.inline.hpp problem would be to move >>> the AllocateHeap code into allocation.cpp, and get rid of the NOINLINE >>> usage. >>> >>> I've created a prototype for that: >>> http://cr.openjdk.java.net/~stefank/8199698/prototypeAllocateHeapInCpp/ >>> >>> I've visually inspected the output from NMT and it seems to give correct >>> stack traces. For example: >>> >>> [0x00007f4bffa10eff] ObjectSynchronizer::omAlloc(Thread*)+0x3cf >>> [0x00007f4bffa1244c] ObjectSynchronizer::inflate(Thread*, oopDesc*, >>> ObjectSynchronizer::InflateCause)+0x8c >>> [0x00007f4bffa1425a] ObjectSynchronizer::FastHashCode(Thread*, >>> oopDesc*)+0x7a >>> [0x00007f4bff616142] JVM_IHashCode+0x52 >>> (malloc=4144KB type=Internal #129) >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-15 18:20, Volker Simonis wrote: >>>> Hi, >>>> >>>> can I please have a review for the following small fix: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/2018/8199698/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199698 >>>> >>>> The fix is actually trivial: just defining the corresponding >>>> "NOINLINE" macro for xlC. Unfortunately it's syntax requirements are a >>>> little different for method declarations, compared to the other >>>> platforms (it has to be placed AFTER the method declarator instead of >>>> BEFORE it). Fortunately, there are no differences for method >>>> definitions, so we can safely move the NOINLINE attributes from the >>>> method declarations in allocation.hpp to the method definitions in >>>> allocation.inline.hpp. >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> PS: or true C++ enthusiasts I've also included the whole story of why >>>> this happens and why it happens just now, right after 8199275 :) >>>> >>>> Change "8199275: Fix inclusions of allocation.inline.hpp" replaced the >>>> inclusion of "allocation.inline.hpp" in some .hpp files (e.g. >>>> constantPool.hpp) by "allocation.hpp". >>>> >>>> "allocation.inline.hpp" contains not only the definition of some >>>> inline methods (as the name implies) but also the definition of some >>>> template methods (notably the various CHeapObj<>::operator new() >>>> versions). >>>> >>>> Template functions are on orthogonal concept with regard to inline >>>> functions, but they share on implementation communality: at their call >>>> sites, the compiler absolutely needs the corresponding function >>>> definition. Otherwise it can either not inline the corresponding >>>> function in the case of inline functions or it won't even be able to >>>> create the corresponding instantiation in the case of a template >>>> function. 
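The declarator-position constraint mentioned in the quoted fix -- xlC wanting the annotation after the declarator, while gcc/clang accept it in front -- can be pictured as follows. The attribute spelling is an assumption; only the placement point is taken from the thread.

--- CUT ---
#include <cstddef>

#if defined(__GNUC__)
  #define NOINLINE __attribute__((noinline))
#else
  #define NOINLINE
#endif

// Accepted by gcc/clang: attribute in front of the declaration.
NOINLINE void* allocate_slow(size_t size);

// Per the discussion above, xlC instead wants it after the declarator:
//   void* allocate_slow(size_t size) NOINLINE;
// Definitions accept the same placement on all of these compilers, which is
// why the original webrev moved the attribute from the declarations in the
// .hpp to the definitions in the .inline.hpp.
--- CUT ---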
>>>> >>>> Because of this reason, template functions and methods are defined in >>>> their corresponding .inline.hpp files in HotSpot (even if they are not >>>> subject to inlining). This is especially true for the before mentioned >>>> CHeapObj<>:: new operators, which are explicitly marked as "NOINLINE" >>>> in allocation.hpp but defined in allocation.inline.hpp. >>>> >>>> Now every call site of these CHeapOb<>::new() operators which only >>>> includes "allocation.hpp" will emit a call to the corresponding >>>> instantiation of the CHeapObj<>:: new operator, but wont be able to >>>> actually create that instantiation (simply because it doesn't see the >>>> corresponding definition in allocation.inline.hpp). On the other side, >>>> call sites of a CHeapObj<>:: new operator which include >>>> allocation.inline.hpp will instantiate the required version in the >>>> current compilation unit (or even inline that method instance if it is >>>> not flagged as "NOINLINE"). >>>> >>>> If a compiler doesn't honor the "NOINLINE" attribute (or has an empty >>>> definition for the NOINLIN macro like xlC), he can potentially inline >>>> all the various template instances of CHeapObj<>:: new at all call >>>> sites, if their implementation is available. This is exactly what has >>>> happened on AIX/xlC before change 8199275 with the effect that the >>>> resulting object files contained no single instance of the >>>> corresponding new operators. >>>> >>>> After change 8199275, the template definition of the CHeapObj<>:: new >>>> operators aren't available any more at all call sites (because the >>>> inclusion of allocation.inline.hpp was removed from some other .hpp >>>> files which where included transitively before). As a result, the xlC >>>> compiler will emit calls to the corresponding instantiations instead >>>> of inlining them. But at all other call sites of the corresponding >>>> operators, the operator instantiations are still inlined (because xlC >>>> does not support "NOINLINE"), so we end up with link errors in >>>> libjvm.so because of missing CHeapObj<>::new instances. >>>> >>>> As a general rule of thumb, we should always make template method >>>> definitions available at all call sites, by placing them into >>>> corresponding .inline.hpp files and including them appropriately. >>>> Otherwise, we might end up without the required instantiations at link >>>> time. >>>> >>>> Unfortunately, there's no compile time check to enforce this >>>> requirement. But we can misuse the "inline" keyword here, by >>>> attributing template functions/methods as "inline". This way, the >>>> compiler will warn us, if a template definition isn't available at a >>>> specific call site. Of course this trick doesn't work if we >>>> specifically want to define template functions/methods which shouldn't >>>> be inlined, like in the current case :) >>>> From per.liden at oracle.com Fri Mar 16 23:34:43 2018 From: per.liden at oracle.com (Per Liden) Date: Sat, 17 Mar 2018 00:34:43 +0100 Subject: RFR: 8199604: Rename CardTableModRefBS to CardTableBarrierSet In-Reply-To: <5AABB5BE.50704@oracle.com> References: <5AABB5BE.50704@oracle.com> Message-ID: Looks good! /Per On 03/16/2018 01:17 PM, Erik ?sterlund wrote: > Hi, > > After collapsing barrier sets, it seems like CardTableModRefBS is the > only barrier set that encodes "ModRef" in its name and uses the "BS" > suffix instead of spelling out "BarrierSet". > > This is a weird inconsistency that I have solved by renaming > CardTableModRefBS to CardTableBarrierSet. 
> > Files were renamed like this: > parCardTableModRefBS.cpp => cmsCardTable.cpp (this file contains a bunch > of implementations for the card table used by CMS and is in the CMS > directory) > cardTableModRefBS.cpp => cardTableBarrierSet.cpp > cardTableModRefBS.hpp => cardTableBarrierSet.hpp > cardTableModRefBS.inline.hpp => cardTableBarrierSet.inline.hpp > > I have performed an automatic search and replace to rename > CardTableModRefBS to CardTableBarrierSet, and manually clicked through > everything to make sure that space alignment, comments, etc all look as > they should. I have also checked that the SA agent is not affected and > run hs-tier1 on this changeset. > > Despite the fix being a bit lengthy, I would still prefer to consider it > trivial (unless somebody opposes that) as I am only renaming a class in > a rather mechanical way. Hope we can agree about that. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199604/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199604 > > Testing: > mach5 hs-tier1-3 > > Thanks, > /Erik From per.liden at oracle.com Fri Mar 16 23:38:35 2018 From: per.liden at oracle.com (Per Liden) Date: Sat, 17 Mar 2018 00:38:35 +0100 Subject: RFR: 8199728: Remove oopDesc::is_scavengable In-Reply-To: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> References: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> Message-ID: Looks good. /Per On 03/16/2018 03:29 PM, Stefan Karlsson wrote: > Hi all, > > Please review this trivial patch to replace oopDesc::is_scavengable() > usages with Universe::heap()->is_scavengable(...). > > http://cr.openjdk.java.net/~stefank/8199728/webrev.01/ > > This helps break an include dependency between oop.inline.hpp and > collectedHeap.inline.hpp. I'll remove the collectedHeap.inline.hpp > include in a separate RFE, since doing that requires update to many > header files that transitively included collectedHeap.inline.hpp and its > include files. > > Thanks, > StefanK From kim.barrett at oracle.com Sat Mar 17 04:11:42 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 17 Mar 2018 00:11:42 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: > On Mar 16, 2018, at 10:39 AM, Stefan Karlsson wrote: > > Hi all, > > Please review this patch to use HeapAccess<>::oop_load instead of oopDesc::load_decode_heap_oop when loading oops from static fields in javaClasses.cpp: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199739 > > It's necessary to use HeapAccess<>::oop_load to inject load barriers for GCs that need them. > > Thanks, > StefanK ------------------------------------------------------------------------------ src/hotspot/share/classfile/javaClasses.cpp. 1870 address addr = ik->static_field_addr(static_unassigned_stacktrace_offset); 1871 return HeapAccess<>::oop_load((HeapWord*)addr); I'm not sure this is sufficient. Isn't static_field_addr just fundamentally broken for Shenandoah? Currently: *(mirror + offset) Proposed change: HeapAccess<>::oop_load(mirror + offset) but I think such an access needs to be HeapAccess<>::load_at(mirror, offset) (I have an email thread about this with me, Coleen, and ErikO from mid-December. I don't have anything in that thread from Erik though. I think we discussed it on slack, but that's been deleted.) 
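Schematically, the two shapes being contrasted in this message are the following (HotSpot-internal Access API; the offset-based call is written as the message writes it and its exact name is an assumption here, not a checked signature).

--- CUT ---
// Shape in the webrev under review: compute a raw field address first,
// then load through it (quoted above as lines 1870-1871).
address addr = ik->static_field_addr(static_unassigned_stacktrace_offset);
oop result  = HeapAccess<>::oop_load((HeapWord*)addr);

// Shape argued for here: keep the base object and the offset together, so a
// barrier set that forwards objects (e.g. via a Brooks pointer) can resolve
// the mirror before the field is read.
oop mirror  = ik->java_mirror();
oop result2 = HeapAccess<>::load_at(mirror, static_unassigned_stacktrace_offset);
--- CUT ---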
------------------------------------------------------------------------------ From volker.simonis at gmail.com Sat Mar 17 07:42:09 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Sat, 17 Mar 2018 07:42:09 +0000 Subject: Merging jdk/hs with jdk/jdk In-Reply-To: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> References: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> Message-ID: Hi Jesper, we definitely welcome this change! It will make it easier for us to keep our builds working. What about the client repo? Are there any plans to consolidate that into jdk as well? Regards, Volker schrieb am Mi. 14. März 2018 um 22:00: > All, > > Over the last couple of years we have left behind a graph of > integration forests where each component in the JVM had its own > line of development. Today all HotSpot development is done in the > same repository, jdk/hs [1]. As a result of merging we have seen > several positive effects, ranging from less confusion around > where and how to do things, and reduced time for fixes to > propagate, to significantly better cooperation between the > components, and improved quality of the product. We would like to > improve further and therefore we suggest to merge jdk/hs into > jdk/jdk [2]. > > As before, we expect this change to build a stronger team spirit > between the merged areas, and contribute to less confusion - > especially around ramp down phases and similar. We also expect > further improvements in quality as changes that cause problems in > a different area are found faster and can be dealt with > immediately. > > In the same way as we did in the past, we suggest to try this out > as an experiment for at least two weeks (giving us some time to > adapt in case of issues). Monitoring and evaluation of the new > structure will take place continuously, with an option to revert > back if things do not work out. The experiment would keep going > for at least a few months, after which we will evaluate it and > depending on the results consider making it the new standard. If > so, the jdk/hs forest will eventually be retired. As part of this > merge we can also retire the newly setup submit-hs [3] repository > and do all testing using the submit repo based on jdk/jdk [4]. > > Much like what we have done in the past we would leave the jdk/hs > forest around until we see if the experiment works out. We would > also lock it down so that no accidental pushes are made to > it. Once the jdk/hs forest is locked down, any work in flight > based on it would have to be rebased on jdk/jdk. > > We tried this approach during the last few months of JDK 10 > development and it worked out fine there. > > Please let us know if you have any feedback or questions! > > Thanks, > /Jesper > > [1] http://hg.openjdk.java.net/jdk/hs > [2] http://hg.openjdk.java.net/jdk/jdk > > [3] http://hg.openjdk.java.net/jdk/submit-hs < http://hg.openjdk.java.net/jdk/submit-hs> > [4] http://hg.openjdk.java.net/jdk/submit < http://hg.openjdk.java.net/jdk/submit> From volker.simonis at gmail.com Sat Mar 17 07:55:08 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Sat, 17 Mar 2018 07:55:08 +0000 Subject: Naming restriction for branches in submit-hs Message-ID: Hi Jesper, the Wiki mentions that "only branches starting with 'JDK-' will be built and tested" [1]. I've submitted a branch called "JDK-8199698.v2" [2] (i.e. the second version of a change after review) yesterday, but it doesn't seem to be built. Are there any other naming conventions?
I?m pretty sure names like these worked in the first version of the submit repo last year. Thanks, Volker [1] https://wiki.openjdk.java.net/display/Build/Submit+Repo [2] http://hg.openjdk.java.net/jdk/submit-hs/rev/60bae43fe453 From erik.osterlund at oracle.com Sat Mar 17 08:28:16 2018 From: erik.osterlund at oracle.com (Erik Osterlund) Date: Sat, 17 Mar 2018 09:28:16 +0100 Subject: RFR: 8199604: Rename CardTableModRefBS to CardTableBarrierSet In-Reply-To: References: <5AABB5BE.50704@oracle.com> Message-ID: <86C595C1-AB59-4D11-9532-9AEB8AD859F9@oracle.com> Hi Per, Thanks for the review! /Erik > On 17 Mar 2018, at 00:34, Per Liden wrote: > > Looks good! > > /Per > >> On 03/16/2018 01:17 PM, Erik ?sterlund wrote: >> Hi, >> After collapsing barrier sets, it seems like CardTableModRefBS is the only barrier set that encodes "ModRef" in its name and uses the "BS" suffix instead of spelling out "BarrierSet". >> This is a weird inconsistency that I have solved by renaming CardTableModRefBS to CardTableBarrierSet. >> Files were renamed like this: >> parCardTableModRefBS.cpp => cmsCardTable.cpp (this file contains a bunch of implementations for the card table used by CMS and is in the CMS directory) >> cardTableModRefBS.cpp => cardTableBarrierSet.cpp >> cardTableModRefBS.hpp => cardTableBarrierSet.hpp >> cardTableModRefBS.inline.hpp => cardTableBarrierSet.inline.hpp >> I have performed an automatic search and replace to rename CardTableModRefBS to CardTableBarrierSet, and manually clicked through everything to make sure that space alignment, comments, etc all look as they should. I have also checked that the SA agent is not affected and run hs-tier1 on this changeset. >> Despite the fix being a bit lengthy, I would still prefer to consider it trivial (unless somebody opposes that) as I am only renaming a class in a rather mechanical way. Hope we can agree about that. >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199604/webrev.00/ >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199604 >> Testing: >> mach5 hs-tier1-3 >> Thanks, >> /Erik From rkennke at redhat.com Sat Mar 17 10:39:01 2018 From: rkennke at redhat.com (Roman Kennke) Date: Sat, 17 Mar 2018 11:39:01 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: <59cf4b2f-9387-dc8f-c64e-822f9813b73e@redhat.com> Am 17.03.2018 um 05:11 schrieb Kim Barrett: >> On Mar 16, 2018, at 10:39 AM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this patch to use HeapAccess<>::oop_load instead of oopDesc::load_decode_heap_oop when loading oops from static fields in javaClasses.cpp: >> >> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199739 >> >> It's necessary to use HeapAccess<>::oop_load to inject load barriers for GCs that need them. >> >> Thanks, >> StefanK > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/javaClasses.cpp. > > 1870 address addr = ik->static_field_addr(static_unassigned_stacktrace_offset); > 1871 return HeapAccess<>::oop_load((HeapWord*)addr); > > I'm not sure this is sufficient. Isn't static_field_addr just > fundamentally broken for Shenandoah? 
It is not totally broken (I think _addr() calls resolve() which kinda does what we need), but the following would be perfect: > but I think such an access needs to be > HeapAccess<>::load_at(mirror, offset) Thank you, Roman From kim.barrett at oracle.com Sat Mar 17 18:41:47 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 17 Mar 2018 14:41:47 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <59cf4b2f-9387-dc8f-c64e-822f9813b73e@redhat.com> References: <59cf4b2f-9387-dc8f-c64e-822f9813b73e@redhat.com> Message-ID: <19590DC7-FF43-4E0D-94BD-FF9B70202325@oracle.com> > On Mar 17, 2018, at 6:39 AM, Roman Kennke wrote: > > Am 17.03.2018 um 05:11 schrieb Kim Barrett: >>> On Mar 16, 2018, at 10:39 AM, Stefan Karlsson wrote: >>> >>> Hi all, >>> >>> Please review this patch to use HeapAccess<>::oop_load instead of oopDesc::load_decode_heap_oop when loading oops from static fields in javaClasses.cpp: >>> >>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>> >>> It's necessary to use HeapAccess<>::oop_load to inject load barriers for GCs that need them. >>> >>> Thanks, >>> StefanK >> >> ------------------------------------------------------------------------------ >> src/hotspot/share/classfile/javaClasses.cpp. >> >> 1870 address addr = ik->static_field_addr(static_unassigned_stacktrace_offset); >> 1871 return HeapAccess<>::oop_load((HeapWord*)addr); >> >> I'm not sure this is sufficient. Isn't static_field_addr just >> fundamentally broken for Shenandoah? > > It is not totally broken (I think _addr() calls resolve() which kinda > does what we need), Current implementation of static_field_addr is: address InstanceKlass::static_field_addr(int offset) { assert(offset >= InstanceMirrorKlass::offset_of_static_fields(), "has already been adjusted"); return (address)(offset + cast_from_oop(java_mirror())); } java_mirror() calls OopHandle::resolve(), which presently doesn't do anything wrt Access (has OopHandle not yet been Accessorized? Or is there some reason why it's okay as is that I'm not thinking of right now?) Even if it did use Access, I would expect for Shenandoah the access would just be a dereference of the oop* in the OopHandle, and would not chase the Brooks pointer, since we're not accessing fields in the mirror object at that point. > but the following would be perfect: >> but I think such an access needs to be >> HeapAccess<>::load_at(mirror, offset) > > Thank you, Roman From edward.nevill at gmail.com Sat Mar 17 19:02:40 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sat, 17 Mar 2018 19:02:40 +0000 Subject: RFR: 8199138: Add RISC-V support to Zero Message-ID: <1521313360.26308.4.camel@gmail.com> Hi, Please review the following webrev Bugid: https://bugs.openjdk.java.net/browse/JDK-8199138 Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 This webrev adds Zero support for RISC-V. I propose to set up a project to develop template interpreter, C1 & C2 support for RISC-V and I will file a JEP for that work. This patch just gets RISC-V building with Zero. Many thanks, Ed. From glaubitz at physik.fu-berlin.de Sun Mar 18 12:02:14 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 18 Mar 2018 21:02:14 +0900 Subject: Zero broken again Message-ID: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> Hi!
I just ran my standard debug build on Debian unstable to see whether Zero builds for me again after Edwards fixes and it turns out, it's still or again broken for debug builds (see below). Looking at the blame history, it looks like JDK-8198445 is responsible as it changed the function signature of RawAccessBarrier but did not update the calls within Zero. I'll have a look and try to whip up a patch. I currently have some time as I'm on vacation in Japan :-). Adrian === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_abstractInterpreter.o:\n" * For target hotspot_variant-zero_libjvm_objs_abstractInterpreter.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_abstractInterpreter.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/glaubitz/upstream/hs/src/hotspot/share/memory/metaspaceShared.hpp:32, from /home/glaubitz/upstream/hs/src/hotspot/share/interpreter/abstractInterpreter.cpp:36: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function ?static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)?: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to ?RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)? return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:30, if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_abstractInterpreter.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_accessBackend.o:\n" * For target hotspot_variant-zero_libjvm_objs_accessBackend.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessBackend.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.cpp:26:0: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function ?static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)?: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to ?RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)? 
return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:29:0, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.cpp:26: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: candidate: template template static bool RawAccessBarrier::arraycopy(arrayOop, arrayOop, T*, T*, size_t) static bool arraycopy(arrayOop src_obj, arrayOop dst_obj, T* src, T* dst, size_t length); ^~~~~~~~~ /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.hpp:343:15: note: template argument deduction/substitution failed: In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.cpp:26:0: if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessBackend.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_accessBarrierSupport.o:\n" * For target hotspot_variant-zero_libjvm_objs_accessBarrierSupport.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessBarrierSupport.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/glaubitz/upstream/hs/src/hotspot/share/classfile/javaClasses.inline.hpp:29, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/accessBarrierSupport.cpp:26: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function ?static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)?: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to ?RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)? return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSet.inline.hpp:28, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/modRefBarrierSet.inline.hpp:28, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp:30, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/access.inline.hpp:28, if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessBarrierSupport.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_accessFlags.o:\n" * For target hotspot_variant-zero_libjvm_objs_accessFlags.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessFlags.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/glaubitz/upstream/hs/src/hotspot/share/utilities/accessFlags.cpp:26: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function ?static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)?: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to ?RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)? return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:30, from /home/glaubitz/upstream/hs/src/hotspot/share/utilities/accessFlags.cpp:26: if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_accessFlags.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi ... (rest of output omitted) /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_ageTable.o:\n" * For target hotspot_variant-zero_libjvm_objs_ageTable.o: (/bin/grep -v -e "^Note: including file:" < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_ageTable.o.log || true) | /usr/bin/head -n 12 In file included from /home/glaubitz/upstream/hs/src/hotspot/share/oops/access.inline.hpp:35:0, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:32, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/ageTable.inline.hpp:29, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/ageTable.cpp:27: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp: In static member function ?static bool RawAccessBarrier::oop_arraycopy(arrayOop, arrayOop, HeapWord*, HeapWord*, size_t)?: /home/glaubitz/upstream/hs/src/hotspot/share/oops/accessBackend.inline.hpp:129:98: error: no matching function for call to ?RawAccessBarrier::arraycopy(narrowOop*, narrowOop*, size_t&)? return arraycopy(reinterpret_cast(src), reinterpret_cast(dst), length); ^ In file included from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/barrierSet.hpp:31:0, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/collectorPolicy.hpp:28, from /home/glaubitz/upstream/hs/src/hotspot/share/gc/shared/genCollectedHeap.hpp:29, from /home/glaubitz/upstream/hs/src/hotspot/share/oops/oop.inline.hpp:30, if test `/usr/bin/wc -l < /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_ageTable.o.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi ... 
(rest of output omitted) /usr/bin/printf "\n* All command lines available in /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs.\n" * All command lines available in /home/glaubitz/upstream/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs. /usr/bin/printf "=== End of repeated output ===\n" === End of repeated output === -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From glaubitz at physik.fu-berlin.de Sun Mar 18 12:10:15 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 18 Mar 2018 21:10:15 +0900 Subject: Zero broken again In-Reply-To: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> References: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> Message-ID: On 03/18/2018 09:02 PM, John Paul Adrian Glaubitz wrote: > Looking at the blame history, it looks like JDK-8198445 is responsible > as it changed the function signature of RawAccessBarrier but did not > update the calls within Zero. Oops, I meant RawAccessBarrier::arraycopy(), of course. -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From edward.nevill at gmail.com Sun Mar 18 13:09:18 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sun, 18 Mar 2018 13:09:18 +0000 Subject: Zero broken again In-Reply-To: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> References: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> Message-ID: <1521378558.26787.3.camel@gmail.com> On Sun, 2018-03-18 at 21:02 +0900, John Paul Adrian Glaubitz wrote: > Hi! > > I just ran my standard debug build on Debian unstable to see whether Zero > builds for me again after Edwards fixes and it turns out, it's still or > again broken for debug builds (see below). > > Looking at the blame history, it looks like JDK-8198445 is responsible > as it changed the function signature of RawAccessBarrier but did not > update the calls within Zero. > > It is not just Zero. The non Zero build is broken as well. See this thread http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030743.html All the best, Ed. From glaubitz at physik.fu-berlin.de Sun Mar 18 13:11:35 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sun, 18 Mar 2018 22:11:35 +0900 Subject: Zero broken again In-Reply-To: <1521378558.26787.3.camel@gmail.com> References: <571b0888-573c-b744-2b8a-14d3eece3c90@physik.fu-berlin.de> <1521378558.26787.3.camel@gmail.com> Message-ID: <0392fe36-7622-a5b0-8b0e-bb4eef3cc2e2@physik.fu-berlin.de> On 03/18/2018 10:09 PM, Edward Nevill wrote: > It is not just Zero. The non Zero build is broken as well. See this thread > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030743.html Ah, I missed that one then. I just tested Zero but I had some suspicions that the normal Hotspot builds must be broken as well. Might be an idea to perform the CI tests on a Debian unstable system then so we can always test against the latest toolchains. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From aph at redhat.com Sun Mar 18 14:37:17 2018 From: aph at redhat.com (Andrew Haley) Date: Sun, 18 Mar 2018 14:37:17 +0000 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521313360.26308.4.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> Message-ID: On 03/17/2018 07:02 PM, Edward Nevill wrote: > Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 > > This webrev add Zero support for RISC-V What happens with atomics? Do we fall back to GCC builtins for everything? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From edward.nevill at gmail.com Sun Mar 18 20:19:11 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sun, 18 Mar 2018 20:19:11 +0000 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: References: <1521313360.26308.4.camel@gmail.com> Message-ID: <1521404351.3951.7.camel@gmail.com> On Sun, 2018-03-18 at 14:37 +0000, Andrew Haley wrote: > On 03/17/2018 07:02 PM, Edward Nevill wrote: > > Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 > > > > This webrev add Zero support for RISC-V > > What happens with atomics? Do we fall back to GCC builtins for everything? > Pretty much. The only atomic operation which doesn't used GCC builtins is os::atomic_copy64. For RISC-V this just does the same as all other 64 bit CPUs. *(jlong *) dst = *(const jlong *) src; Interestingly, there is no implementation of atomic_copy64 for ARM32. I guess it just relies on the compiler generating LDRD/STRD correctly and doesn't support earlier ARM32 archs. I'll do a bit of investigation. For reference here is the implementation of atomic_copy64. Regards, Ed. --- CUT --- static void atomic_copy64(const volatile void *src, volatile void *dst) { #if defined(PPC32) && !defined(__SPE__) double tmp; asm volatile ("lfd %0, %2\n" "stfd %0, %1\n" : "=&f"(tmp), "=Q"(*(volatile double*)dst) : "Q"(*(volatile double*)src)); #elif defined(PPC32) && defined(__SPE__) long tmp; asm volatile ("evldd %0, %2\n" "evstdd %0, %1\n" : "=&r"(tmp), "=Q"(*(volatile long*)dst) : "Q"(*(volatile long*)src)); #elif defined(S390) && !defined(_LP64) double tmp; asm volatile ("ld %0, 0(%1)\n" "std %0, 0(%2)\n" : "=r"(tmp) : "a"(src), "a"(dst)); #else *(jlong *) dst = *(const jlong *) src; #endif --- CUT --- From stuart.monteith at linaro.org Mon Mar 19 07:34:43 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 19 Mar 2018 07:34:43 +0000 Subject: [aarch64-port-dev ] RFR: 8193266: AArch64: TestOptionsWithRanges.java SIGSEGV In-Reply-To: References: <7dbf43d1-72b9-5720-3878-ce31f3e8f555@redhat.com> <20e812bc-d132-9863-815b-345283f9517e@redhat.com> <3c83440f-dd4b-f988-1f96-afa88dff36eb@redhat.com> Message-ID: Hello, Would it be possible for this to be reviewed? I'd like to get it in for JDK11. I don't believe there were any outstanding issues. I've updated and reapplied the patch against current jdk/hs http://cr.openjdk.java.net/~smonteith/8193266/webrev-6/ Thanks, Stuart On 11/01/18 08:20, Rahul Raghavan wrote: > < Just resending below review request email from Stuart for 8193266 > including aarch64-port-dev also. Thanks.> > > > -- On Saturday 06 January 2018 12:13 AM, Stuart Monteith wrote: > I've removed the AARCH64 conditionals, added the empty line I removed, > and changed the type of "use_XOR_for_compressed_class_base" to bool. 
> > http://cr.openjdk.java.net/~smonteith/8193266/webrev-5/ > > BR, > ??? Stuart > >> On 4 January 2018 at 14:45, Andrew Haley wrote: >>> Hi, >>> >>> On 04/01/18 14:26, coleen.phillimore at oracle.com wrote: >>>> ? I was going to offer to sponsor this since it touches shared code but >>>> I'm not sure I like that there's AARCH64 specific code in >>>> universe.cpp/hpp.?? And the name is somewhat offputting, suggesting >>>> implementation details of one target leaking into shared code. >>>> >>>> set_use_XOR_for_compressed_class_base >>>> >>>> I think webrev-3 looked more reasonable, and could elide the #ifdef >>>> AARCH64 in the shared code for that version.?? And the indentation is >>>> better. >>> >>> I hate the #ifdef AARCH64 stuff too, but it's always a sign that there >>> is something wrong with the front-end to back-end modularization.? We >>> can handle the use_XOR_for_compressed_class_base later: we really >>> should have a way to communicate with the back ends when the memory >>> layout is initialized.? We can go with webrev-3. >>> >>> -- >>> Andrew Haley >>> Java Platform Lead Engineer >>> Red Hat UK Ltd. >>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From rkennke at redhat.com Mon Mar 19 08:47:06 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 09:47:06 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <19590DC7-FF43-4E0D-94BD-FF9B70202325@oracle.com> References: <59cf4b2f-9387-dc8f-c64e-822f9813b73e@redhat.com> <19590DC7-FF43-4E0D-94BD-FF9B70202325@oracle.com> Message-ID: Am 17.03.2018 um 19:41 schrieb Kim Barrett: >> On Mar 17, 2018, at 6:39 AM, Roman Kennke wrote: >> >> Am 17.03.2018 um 05:11 schrieb Kim Barrett: >>>> On Mar 16, 2018, at 10:39 AM, Stefan Karlsson wrote: >>>> >>>> Hi all, >>>> >>>> Please review this patch to use HeapAccess<>::oop_load instead of oopDesc::load_decode_heap_oop when loading oops from static fields in javaClasses.cpp: >>>> >>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>> >>>> It's necessary to use HeapAccess<>::oop_load to inject load barriers for GCs that need them. >>>> >>>> Thanks, >>>> StefanK >>> >>> ------------------------------------------------------------------------------ >>> src/hotspot/share/classfile/javaClasses.cpp. >>> >>> 1870 address addr = ik->static_field_addr(static_unassigned_stacktrace_offset); >>> 1871 return HeapAccess<>::oop_load((HeapWord*)addr); >>> >>> I'm not sure this is sufficient. Isn't static_field_addr just >>> fundamentally broken for Shenandoah? >> >> It is not totally broken (I think _addr() calls resolve() which kinda >> does what we need), > > Current implementation of static_field_addr is: > > address InstanceKlass::static_field_addr(int offset) { > assert(offset >= InstanceMirrorKlass::offset_of_static_fields(), "has already been adjusted"); > return (address)(offset + cast_from_oop(java_mirror())); > } > > java_mirror() calls OopHandle::resolve(), which presently doesn?t do anything wrto Access > (has OopHandle not yet been Accessorized? Or is there some reason why it?s okay as is > that I?m not thinking of right now?) Even if it did use Access, I would expect for Shenandoah > the access would just be a dereference of the oop* in the OopHandle, and would not chase > the Brooks pointer, since we?re not accessing fields in the mirror object at that point. > Oh oops, I confused this with typeArrayOop::*_addr(). Sorry. 
Yeah, we need appropriate access barriers in there too. I think it would be best if we either had static_field() and static_field_put() accessors in instanceKlass, or simply go vial ik->java_mirror()->*_field() and ik->java_mirror()->*_field_put() ? Roman From thomas.schatzl at oracle.com Mon Mar 19 09:28:07 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 19 Mar 2018 10:28:07 +0100 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> Message-ID: <1521451687.2323.5.camel@oracle.com> Hi, On Fri, 2018-03-16 at 17:19 +0000, Ian Rogers wrote: > Thanks Paul, very interesting. > > On Fri, Mar 16, 2018 at 9:21 AM Paul Sandoz > wrote: > > Hi Ian, Thomas, > > > > [...] > > (This is also something we need to consider if we modify buffers to > > support capacities larger than Integer.MAX_VALUE. Also connects > > with Project Panama.) > > > > If Thomas has not done so or does not plan to i can log an issue > > for you. > > > > That'd be great. I wonder if identifying more TTSP issues should also > be a bug. Its interesting to observe that overlooking TTSP in C2 > motivated the Unsafe.copyMemory change permitting a fresh TTSP issue. > If TTSP is a 1st class issue then maybe we can deprecate JNI critical > regions to support that effort :-) Please log an issue. I am still a bit unsure what and how many issues should be filed. @Ian: at bugreports.oracle.com everyone may file bug reports without the need for an account. It will take some time until they show up in Jira due to vetting, but if you have a good case, and can e.g. link to the mailing list, this should be painless. Thanks, Thomas From aph at redhat.com Mon Mar 19 09:35:14 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 19 Mar 2018 09:35:14 +0000 Subject: [aarch64-port-dev ] RFR: 8193266: AArch64: TestOptionsWithRanges.java SIGSEGV In-Reply-To: References: <7dbf43d1-72b9-5720-3878-ce31f3e8f555@redhat.com> <20e812bc-d132-9863-815b-345283f9517e@redhat.com> <3c83440f-dd4b-f988-1f96-afa88dff36eb@redhat.com> Message-ID: <86a36520-a7d9-54d6-d1ec-25e444104002@redhat.com> On 03/19/2018 07:34 AM, Stuart Monteith wrote: > Would it be possible for this to be reviewed? I'd like to get it in for > JDK11. I don't believe there were any outstanding issues. > > I've updated and reapplied the patch against current jdk/hs > > http://cr.openjdk.java.net/~smonteith/8193266/webrev-6/ Yes, thank you. I'm sorry for the delay: I lost track of the conversation. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Mon Mar 19 09:40:53 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 19 Mar 2018 09:40:53 +0000 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521404351.3951.7.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> <1521404351.3951.7.camel@gmail.com> Message-ID: On 03/18/2018 08:19 PM, Edward Nevill wrote: > Pretty much. The only atomic operation which doesn't used GCC builtins is os::atomic_copy64. For RISC-V this just does the same as all other 64 bit CPUs. 
> > *(jlong *) dst = *(const jlong *) src; That's probably wrong, but it'll do for now. We'll need something better in the future. GCC's __atomic_{load,store} (__ATOMIC_RELAXED) would do it. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From stefan.karlsson at oracle.com Mon Mar 19 09:52:28 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 10:52:28 +0100 Subject: RFR: 8199728: Remove oopDesc::is_scavengable In-Reply-To: References: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> Message-ID: Thanks, Per. StefanK On 2018-03-17 00:38, Per Liden wrote: > Looks good. > > /Per > > On 03/16/2018 03:29 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial patch to replace oopDesc::is_scavengable() >> usages with Universe::heap()->is_scavengable(...). >> >> http://cr.openjdk.java.net/~stefank/8199728/webrev.01/ >> >> This helps break an include dependency between oop.inline.hpp and >> collectedHeap.inline.hpp. I'll remove the collectedHeap.inline.hpp >> include in a separate RFE, since doing that requires update to many >> header files that transitively included collectedHeap.inline.hpp and >> its include files. >> >> Thanks, >> StefanK From stefan.karlsson at oracle.com Mon Mar 19 09:52:13 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 10:52:13 +0100 Subject: RFR: 8199728: Remove oopDesc::is_scavengable In-Reply-To: <5CD44140-5F9A-446D-BC92-AF8BFF8FE781@oracle.com> References: <34f5276b-5f03-c8de-6c3d-c29f99e02fdb@oracle.com> <5CD44140-5F9A-446D-BC92-AF8BFF8FE781@oracle.com> Message-ID: Thanks for the review, Kim StefanK On 2018-03-16 17:54, Kim Barrett wrote: >> On Mar 16, 2018, at 10:29 AM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this trivial patch to replace oopDesc::is_scavengable() usages with Universe::heap()->is_scavengable(...). >> >> http://cr.openjdk.java.net/~stefank/8199728/webrev.01/ >> >> This helps break an include dependency between oop.inline.hpp and collectedHeap.inline.hpp. I'll remove the collectedHeap.inline.hpp include in a separate RFE, since doing that requires update to many header files that transitively included collectedHeap.inline.hpp and its include files. >> >> Thanks, >> StefanK > > Looks good. > From stefan.karlsson at oracle.com Mon Mar 19 10:11:25 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 11:11:25 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: Message-ID: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> Hi all, Kim and Roman commented that my patch doesn't work with Shenandoah. Here's an updated version: http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ Thanks, StefanK On 2018-03-16 15:39, Stefan Karlsson wrote: > Hi all, > > Please review this patch to use HeapAccess<>::oop_load instead of > oopDesc::load_decode_heap_oop when loading oops from static fields in > javaClasses.cpp: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8199739 > > It's necessary to use HeapAccess<>::oop_load to inject load barriers for > GCs that need them. 
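To make the intent of the 8199739 change concrete for anyone skimming the thread: the edit is mechanical, replacing raw loads of static oop fields with Access-mediated loads so that the active GC gets a chance to interpose. A rough sketch with made-up class, field and offset names (nothing here is lifted from either webrev, and the exact final shape of the "after" side is still what is being discussed):

// Before (simplified): raw load through an address, no GC barrier anywhere.
oop java_lang_Example::example_field() {                  // hypothetical class/field
  InstanceKlass* ik = SystemDictionary::Example_klass();  // hypothetical klass accessor
  address addr = ik->static_field_addr(_example_offset);  // hypothetical offset; mirror + offset, no barrier
  return oopDesc::load_decode_heap_oop((oop*)addr);
}

// After (one barrier-friendly shape): load relative to the mirror through the
// Access API, which can resolve the base object and/or apply a load barrier
// to the loaded value, depending on the collector in use.
oop java_lang_Example::example_field() {
  InstanceKlass* ik = SystemDictionary::Example_klass();
  return HeapAccess<>::oop_load_at(ik->java_mirror(), _example_offset);
}
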
> > Thanks, > StefanK From stefan.karlsson at oracle.com Mon Mar 19 10:13:56 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 11:13:56 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <8e16e2f4-eab7-3699-517d-36226220d82d@redhat.com> References: <8e16e2f4-eab7-3699-517d-36226220d82d@redhat.com> Message-ID: <0f29d3b1-bfa4-e547-5b45-fa6df62fc78b@oracle.com> On 2018-03-16 16:10, Roman Kennke wrote: > Am 16.03.2018 um 15:39 schrieb Stefan Karlsson: >> Hi all, >> >> Please review this patch to use HeapAccess<>::oop_load instead of >> oopDesc::load_decode_heap_oop when loading oops from static fields in >> javaClasses.cpp: >> >> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199739 >> >> It's necessary to use HeapAccess<>::oop_load to inject load barriers for >> GCs that need them. >> >> Thanks, >> StefanK > > The change looks good. > > I haven't checked: are there any stores in there that also need to go > through HeapAccess? I couldn't find anything obvious. I'm prototyping a patch to completely remove the oopDesc::load_decode_* and oopDesc::encode_store_* functions, and didn't find anything in javaClasses that used oopDesc::encode_store_*. StefanK > > Thanks, Roman > From rkennke at redhat.com Mon Mar 19 10:17:34 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 11:17:34 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> Message-ID: <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> Hi Stefan, thank you! grepping for static_field_addr() yields some more places that'd need similar treatment: jlong java_lang_ref_SoftReference::clock() void java_lang_ref_SoftReference::set_clock(jlong value) Maybe cover them as well? Or I'll file a separate issue. Your call. Thanks, Roman > Hi all, > > Kim and Roman commented that my patch doesn't work with Shenandoah. > Here's an updated version: > http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ > > Thanks, > StefanK > > On 2018-03-16 15:39, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to use HeapAccess<>::oop_load instead of >> oopDesc::load_decode_heap_oop when loading oops from static fields in >> javaClasses.cpp: >> >> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8199739 >> >> It's necessary to use HeapAccess<>::oop_load to inject load barriers >> for GCs that need them. >> >> Thanks, >> StefanK From jesper.wilhelmsson at oracle.com Mon Mar 19 10:43:00 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Mon, 19 Mar 2018 11:43:00 +0100 Subject: Merging jdk/hs with jdk/jdk In-Reply-To: References: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> Message-ID: Hi Volker, The client repo will remain separate for now. /Jesper > On 17 Mar 2018, at 08:42, Volker Simonis wrote: > > Hi Jesper, > > we definitely welcome this change! It will make it easier for us to keep our builds working. > > What about the client repo. Are there any plans to consolidate that into jdk as well ? > > Regards, > Volker > > > schrieb am Mi. 14. M?rz 2018 um 22:00: > All, > > Over the last couple of years we have left behind a graph of > integration forests where each component in the JVM had its own > line of development. 
Today all HotSpot development is done in the > same repository, jdk/hs [1]. As a result of merging we have seen > several positive effects, ranging from less confusion around > where and how to do things, and reduced time for fixes to > propagate, to significantly better cooperation between the > components, and improved quality of the product. We would like to > improve further and therefore we suggest to merge jdk/hs into > jdk/jdk [2]. > > As before, we expect this change to build a stronger team spirit > between the merged areas, and contribute to less confusion - > especially around ramp down phases and similar. We also expect > further improvements in quality as changes that cause problems in > a different area are found faster and can be dealt with > immediately. > > In the same way as we did in the past, we suggest to try this out > as an experiment for at least two weeks (giving us some time to > adapt in case of issues). Monitoring and evaluation of the new > structure will take place continuously, with an option to revert > back if things do not work out. The experiment would keep going > for at least a few months, after which we will evaluate it and > depending on the results consider making it the new standard. If > so, the jdk/hs forest will eventually be retired. As part of this > merge we can also retire the newly setup submit-hs [3] repository > and do all testing using the submit repo based on jdk/jdk [4]. > > Much like what we have done in the past we would leave the jdk/hs > forest around until we see if the experiment works out. We would > also lock it down so that no accidental pushes are made to > it. Once the jdk/hs forest is locked down, any work in flight > based on it would have to be rebased on jdk/jdk. > > We tried this approach during the last few months of JDK 10 > development and it worked out fine there. > > Please let us know if you have any feedback or questions! > > Thanks, > /Jesper > > [1] http://hg.openjdk.java.net/jdk/hs > > [2] http://hg.openjdk.java.net/jdk/jdk > > [3] http://hg.openjdk.java.net/jdk/submit-hs > > [4] http://hg.openjdk.java.net/jdk/submit > From jesper.wilhelmsson at oracle.com Mon Mar 19 10:45:00 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Mon, 19 Mar 2018 11:45:00 +0100 Subject: Naming restriction for branches in submit-hs In-Reply-To: References: Message-ID: Hi Volker, It seems that a . in the branch name might be an issue. This is being investigated. The last build I see with your name on it is from 2018-03-15 17:48. I can see that this is not the one you are looking for. /Jesper > On 17 Mar 2018, at 08:55, Volker Simonis wrote: > > Hi Jesper, > > the Wiki mentions that ?only branches starting with ?JDK-? will be built and tested? [1]. I?ve submitted a branch called ?JDK-8199698.v2? [2] (i.e. the second version of a change after review) yesterday, but it doesn?t seem to be build. Are there any other naming conventions? I?m pretty sure names like these worked in the first version of the submit repo last year. 
> > Thanks, > Volker > > [1] https://wiki.openjdk.java.net/display/Build/Submit+Repo > [2] http://hg.openjdk.java.net/jdk/submit-hs/rev/60bae43fe453 > > From stefan.karlsson at oracle.com Mon Mar 19 11:29:01 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 12:29:01 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> Message-ID: <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> Hi Roman, On 2018-03-19 11:17, Roman Kennke wrote: > Hi Stefan, > > thank you! > > grepping for static_field_addr() yields some more places that'd need > similar treatment: > > jlong java_lang_ref_SoftReference::clock() > void java_lang_ref_SoftReference::set_clock(jlong value) > > Maybe cover them as well? Or I'll file a separate issue. Your call. This seems like primitive accesses, I'll be happy to leave those to you. ;) Thanks, StefanK > > Thanks, Roman > > >> Hi all, >> >> Kim and Roman commented that my patch doesn't work with Shenandoah. >> Here's an updated version: >> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >> >> Thanks, >> StefanK >> >> On 2018-03-16 15:39, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please review this patch to use HeapAccess<>::oop_load instead of >>> oopDesc::load_decode_heap_oop when loading oops from static fields in >>> javaClasses.cpp: >>> >>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>> >>> It's necessary to use HeapAccess<>::oop_load to inject load barriers >>> for GCs that need them. >>> >>> Thanks, >>> StefanK > > From erik.osterlund at oracle.com Mon Mar 19 11:49:11 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 19 Mar 2018 12:49:11 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> Message-ID: <5AAFA3B7.5040101@oracle.com> Hi, After some internal discussions, it turns out that the name "*BSCodeGen" was not so popular, and has been changed to *BarrierSetAssembler instead. I have rebased this on top of 8199604: Rename CardTableModRefBS to CardTableBarrierSet and went ahead with the performing the required renaming. New full webrev: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ New incremental webrev (from the rebased version): http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ Thanks, /Erik On 2018-03-13 10:47, Erik ?sterlund wrote: > Hi Roman, > > Thanks for the review. > > /Erik > > On 2018-03-13 10:26, Roman Kennke wrote: >> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>> Hi, >>> >>> The GC barriers for arraycopy stub routines are not as modular as they >>> could be. They currently use switch statements to check which GC >>> barrier >>> set is being used, and call one or another barrier based on that, with >>> registers already allocated in such a way that it can only be used for >>> write barriers. >>> >>> My solution to the problem is to introduce a platform-specific GC >>> barrier set code generator. The abstract super class is >>> BarrierSetCodeGen, and you can get it from the active BarrierSet. 
A >>> virtual call to the BarrierSetCodeGen generates the relevant GC >>> barriers >>> for the arraycopy stub routines. >>> >>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>> corresponding BarrierSet inheritance hierarchy. In other words, every >>> BarrierSet class has a corresponding BarrierSetCodeGen class. >>> >>> The various switch statements that generate different GC barriers >>> depending on the enum type of the barrier set have been changed to call >>> a corresponding virtual member function in the BarrierSetCodeGen class >>> instead. >>> >>> Thanks to Martin Doerr and Roman Kennke for providing platform specific >>> code for PPC, S390 and AArch64. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>> >>> CR: >>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>> >>> Thanks, >>> /Erik >> >> I looked over x86, aarch64 and shared code (in webrev.01), and it looks >> good to me! >> >> As I commented earlier in private, I would find it useful if the >> barriers could 'take over' the whole arraycopy, for example to do the >> pre- and post-barrier and arraycopy in one pass, instead of 3. However, >> let's keep that for later. >> >> Awesome work, thank you! >> >> Cheers, >> Roman >> >> > From stefan.karlsson at oracle.com Mon Mar 19 12:59:01 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 13:59:01 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> Message-ID: <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> Hi again, Seems like Coleen wanted me to solve this problem slightly differently. She suggested that I add an Access<>::resolve barrier in static_field_addr: http://cr.openjdk.java.net/~stefank/8199739/webrev.03/ This will probably solve the primitive access for Shenandoah. What do you think? Thanks, StefanK On 2018-03-19 12:29, Stefan Karlsson wrote: > Hi Roman, > > On 2018-03-19 11:17, Roman Kennke wrote: >> Hi Stefan, >> >> thank you! >> >> grepping for static_field_addr() yields some more places that'd need >> similar treatment: >> >> jlong java_lang_ref_SoftReference::clock() >> void java_lang_ref_SoftReference::set_clock(jlong value) >> >> Maybe cover them as well? Or I'll file a separate issue. Your call. > > This seems like primitive accesses, I'll be happy to leave those to you. ;) > > Thanks, > StefanK > >> >> Thanks, Roman >> >> >>> Hi all, >>> >>> Kim and Roman commented that my patch doesn't work with Shenandoah. >>> Here's an updated version: >>> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-16 15:39, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please review this patch to use HeapAccess<>::oop_load instead of >>>> oopDesc::load_decode_heap_oop when loading oops from static fields in >>>> javaClasses.cpp: >>>> >>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>> >>>> It's necessary to use HeapAccess<>::oop_load to inject load barriers >>>> for GCs that need them. 
>>>> >>>> Thanks, >>>> StefanK >> >> From rkennke at redhat.com Mon Mar 19 13:24:27 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 14:24:27 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> Message-ID: <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> Hi Stefan, To be honest, I'd find it better to get rid of static_field_addr() altogether, and I am also not a fan of resolve(): it seems easy to just cover everything with it, but for Shenandoah it means to do a write-barrier, even when a read-barrier would suffice (which is cheaper). I recognize that none of the affected code is very performance sensitive, but if there is a cleaner solution, I'd go for this. What about this approach: http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/ ? Roman > Hi again, > > Seems like Coleen wanted me to solve this problem slightly differently. > She suggested that I add an Access<>::resolve barrier in static_field_addr: > > http://cr.openjdk.java.net/~stefank/8199739/webrev.03/ > > This will probably solve the primitive access for Shenandoah. What do > you think? > > Thanks, > StefanK > > On 2018-03-19 12:29, Stefan Karlsson wrote: >> Hi Roman, >> >> On 2018-03-19 11:17, Roman Kennke wrote: >>> Hi Stefan, >>> >>> thank you! >>> >>> grepping for static_field_addr() yields some more places that'd need >>> similar treatment: >>> >>> jlong java_lang_ref_SoftReference::clock() >>> void java_lang_ref_SoftReference::set_clock(jlong value) >>> >>> Maybe cover them as well? Or I'll file a separate issue. Your call. >> >> This seems like primitive accesses, I'll be happy to leave those to >> you. ;) >> >> Thanks, >> StefanK >> >>> >>> Thanks, Roman >>> >>> >>>> Hi all, >>>> >>>> Kim and Roman commented that my patch doesn't work with Shenandoah. >>>> Here's an updated version: >>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >>>> >>>> Thanks, >>>> StefanK >>>> >>>> On 2018-03-16 15:39, Stefan Karlsson wrote: >>>>> Hi all, >>>>> >>>>> Please review this patch to use HeapAccess<>::oop_load instead of >>>>> oopDesc::load_decode_heap_oop when loading oops from static fields in >>>>> javaClasses.cpp: >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>>> >>>>> It's necessary to use HeapAccess<>::oop_load to inject load barriers >>>>> for GCs that need them. >>>>> >>>>> Thanks, >>>>> StefanK >>> >>> From rkennke at redhat.com Mon Mar 19 13:42:17 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 14:42:17 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <5AAFA3B7.5040101@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> Message-ID: <2f4bf2ae-642a-4778-4587-b4f63dcc6904@redhat.com> I liked BSCodeGen better, but BarrierSetAssembler is ok too. Go for it! Roman > After some internal discussions, it turns out that the name "*BSCodeGen" > was not so popular, and has been changed to *BarrierSetAssembler instead. 
> I have rebased this on top of 8199604: Rename CardTableModRefBS to > CardTableBarrierSet and went ahead with the performing the required > renaming. > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ > > New incremental webrev (from the rebased version): > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ > > Thanks, > /Erik > > On 2018-03-13 10:47, Erik ?sterlund wrote: >> Hi Roman, >> >> Thanks for the review. >> >> /Erik >> >> On 2018-03-13 10:26, Roman Kennke wrote: >>> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>>> Hi, >>>> >>>> The GC barriers for arraycopy stub routines are not as modular as they >>>> could be. They currently use switch statements to check which GC >>>> barrier >>>> set is being used, and call one or another barrier based on that, with >>>> registers already allocated in such a way that it can only be used for >>>> write barriers. >>>> >>>> My solution to the problem is to introduce a platform-specific GC >>>> barrier set code generator. The abstract super class is >>>> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >>>> virtual call to the BarrierSetCodeGen generates the relevant GC >>>> barriers >>>> for the arraycopy stub routines. >>>> >>>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>>> corresponding BarrierSet inheritance hierarchy. In other words, every >>>> BarrierSet class has a corresponding BarrierSetCodeGen class. >>>> >>>> The various switch statements that generate different GC barriers >>>> depending on the enum type of the barrier set have been changed to call >>>> a corresponding virtual member function in the BarrierSetCodeGen class >>>> instead. >>>> >>>> Thanks to Martin Doerr and Roman Kennke for providing platform specific >>>> code for PPC, S390 and AArch64. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>>> >>>> CR: >>>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>>> >>>> Thanks, >>>> /Erik >>> >>> I looked over x86, aarch64 and shared code (in webrev.01), and it looks >>> good to me! >>> >>> As I commented earlier in private, I would find it useful if the >>> barriers could 'take over' the whole arraycopy, for example to do the >>> pre- and post-barrier and arraycopy in one pass, instead of 3. However, >>> let's keep that for later. >>> >>> Awesome work, thank you! >>> >>> Cheers, >>> Roman >>> >>> >> > From dms at samersoff.net Mon Mar 19 13:51:48 2018 From: dms at samersoff.net (Dmitry Samersoff) Date: Mon, 19 Mar 2018 16:51:48 +0300 Subject: [aarch64-port-dev ] RFR: 8193266: AArch64: TestOptionsWithRanges.java SIGSEGV In-Reply-To: References: <7dbf43d1-72b9-5720-3878-ce31f3e8f555@redhat.com> <20e812bc-d132-9863-815b-345283f9517e@redhat.com> <3c83440f-dd4b-f988-1f96-afa88dff36eb@redhat.com> Message-ID: <9f523448-5e21-1f4d-c22b-45977f271fb8@samersoff.net> Stuart, Changes looks good to me. -Dmitry On 19.03.2018 10:34, Stuart Monteith wrote: > Hello, > Would it be possible for this to be reviewed? I'd like to get it in for > JDK11. I don't believe there were any outstanding issues. > > I've updated and reapplied the patch against current jdk/hs > > http://cr.openjdk.java.net/~smonteith/8193266/webrev-6/ > > Thanks, > Stuart > > > On 11/01/18 08:20, Rahul Raghavan wrote: >> < Just resending below review request email from Stuart for 8193266 >> including aarch64-port-dev also. 
Thanks.> >> >> >> -- On Saturday 06 January 2018 12:13 AM, Stuart Monteith wrote: >> I've removed the AARCH64 conditionals, added the empty line I removed, >> and changed the type of "use_XOR_for_compressed_class_base" to bool. >> >> http://cr.openjdk.java.net/~smonteith/8193266/webrev-5/ >> >> BR, >> ??? Stuart >> >>> On 4 January 2018 at 14:45, Andrew Haley wrote: >>>> Hi, >>>> >>>> On 04/01/18 14:26, coleen.phillimore at oracle.com wrote: >>>>> ? I was going to offer to sponsor this since it touches shared code but >>>>> I'm not sure I like that there's AARCH64 specific code in >>>>> universe.cpp/hpp.?? And the name is somewhat offputting, suggesting >>>>> implementation details of one target leaking into shared code. >>>>> >>>>> set_use_XOR_for_compressed_class_base >>>>> >>>>> I think webrev-3 looked more reasonable, and could elide the #ifdef >>>>> AARCH64 in the shared code for that version.?? And the indentation is >>>>> better. >>>> >>>> I hate the #ifdef AARCH64 stuff too, but it's always a sign that there >>>> is something wrong with the front-end to back-end modularization.? We >>>> can handle the use_XOR_for_compressed_class_base later: we really >>>> should have a way to communicate with the back ends when the memory >>>> layout is initialized.? We can go with webrev-3. >>>> >>>> -- >>>> Andrew Haley >>>> Java Platform Lead Engineer >>>> Red Hat UK Ltd. >>>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 -- Dmitry Samersoff http://devnull.samersoff.net * There will come soft rains ... From erik.osterlund at oracle.com Mon Mar 19 14:04:02 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 19 Mar 2018 15:04:02 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <2f4bf2ae-642a-4778-4587-b4f63dcc6904@redhat.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> <2f4bf2ae-642a-4778-4587-b4f63dcc6904@redhat.com> Message-ID: <5AAFC352.8060708@oracle.com> Hi Roman, Thank you for the review. /Erik On 2018-03-19 14:42, Roman Kennke wrote: > I liked BSCodeGen better, but BarrierSetAssembler is ok too. Go for it! > > Roman > > >> After some internal discussions, it turns out that the name "*BSCodeGen" >> was not so popular, and has been changed to *BarrierSetAssembler instead. >> I have rebased this on top of 8199604: Rename CardTableModRefBS to >> CardTableBarrierSet and went ahead with the performing the required >> renaming. >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ >> >> New incremental webrev (from the rebased version): >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ >> >> Thanks, >> /Erik >> >> On 2018-03-13 10:47, Erik ?sterlund wrote: >>> Hi Roman, >>> >>> Thanks for the review. >>> >>> /Erik >>> >>> On 2018-03-13 10:26, Roman Kennke wrote: >>>> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>>>> Hi, >>>>> >>>>> The GC barriers for arraycopy stub routines are not as modular as they >>>>> could be. They currently use switch statements to check which GC >>>>> barrier >>>>> set is being used, and call one or another barrier based on that, with >>>>> registers already allocated in such a way that it can only be used for >>>>> write barriers. >>>>> >>>>> My solution to the problem is to introduce a platform-specific GC >>>>> barrier set code generator. The abstract super class is >>>>> BarrierSetCodeGen, and you can get it from the active BarrierSet. 
A >>>>> virtual call to the BarrierSetCodeGen generates the relevant GC >>>>> barriers >>>>> for the arraycopy stub routines. >>>>> >>>>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>>>> corresponding BarrierSet inheritance hierarchy. In other words, every >>>>> BarrierSet class has a corresponding BarrierSetCodeGen class. >>>>> >>>>> The various switch statements that generate different GC barriers >>>>> depending on the enum type of the barrier set have been changed to call >>>>> a corresponding virtual member function in the BarrierSetCodeGen class >>>>> instead. >>>>> >>>>> Thanks to Martin Doerr and Roman Kennke for providing platform specific >>>>> code for PPC, S390 and AArch64. >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>>>> >>>>> CR: >>>>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>>>> >>>>> Thanks, >>>>> /Erik >>>> I looked over x86, aarch64 and shared code (in webrev.01), and it looks >>>> good to me! >>>> >>>> As I commented earlier in private, I would find it useful if the >>>> barriers could 'take over' the whole arraycopy, for example to do the >>>> pre- and post-barrier and arraycopy in one pass, instead of 3. However, >>>> let's keep that for later. >>>> >>>> Awesome work, thank you! >>>> >>>> Cheers, >>>> Roman >>>> >>>> > From rkennke at redhat.com Mon Mar 19 14:44:33 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 15:44:33 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands Message-ID: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> SetMemory0 and CopyMemory0 in unsafe.cpp read and write from/to objects, and thus need to resolve their operands via Access::resolve() before accessing them. Bug: https://bugs.openjdk.java.net/browse/JDK-8199780 Webrev: http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.00/ I'll say again that I'd prefer resolve_for_read() and resolve_for_write(), but for now the strong resolve() will suffice. ;-) Can I please get a review? Roman From erik.joelsson at oracle.com Mon Mar 19 15:56:29 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 19 Mar 2018 08:56:29 -0700 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521313360.26308.4.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> Message-ID: <8d312e6e-2466-847c-57f7-77eb3fdc1e2e@oracle.com> Build changes look ok to me. /Erik On 2018-03-17 12:02, Edward Nevill wrote: > Hi, > > Please review the following webrev > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199138 > Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 > > This webrev add Zero support for RISC-V > > I propose to set up a project to develop template interpreter, C1 & C2 > support for RISC-V and I will file a JEP for that work. This patch just > gets RISC-V building with Zero. > > Many thanks, > Ed. From mvala at redhat.com Fri Mar 16 10:48:49 2018 From: mvala at redhat.com (Michal Vala) Date: Fri, 16 Mar 2018 11:48:49 +0100 Subject: RFR: build pragma error with gcc 4.4.7 Message-ID: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> Hi, I've been trying to build latest jdk with gcc 4.4.7 and I hit compile error due to pragma used in function: /mnt/ramdisk/openjdk/src/hotspot/os/linux/os_linux.inline.hpp:103: error: #pragma GCC diagnostic not allowed inside functions I'm sending little patch that fixes the issue by wrapping whole function. 
I've also created a macro for ignoring deprecated declaration inside compilerWarnings.hpp to line up with others. Can someone please review? If it's ok, I would also need a sponsor. diff -r 422615764e12 src/hotspot/os/linux/os_linux.inline.hpp --- a/src/hotspot/os/linux/os_linux.inline.hpp Thu Mar 15 14:54:10 2018 -0700 +++ b/src/hotspot/os/linux/os_linux.inline.hpp Fri Mar 16 10:50:24 2018 +0100 @@ -96,13 +96,12 @@ return ::ftruncate64(fd, length); } -inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) -{ // readdir_r has been deprecated since glibc 2.24. // See https://sourceware.org/bugzilla/show_bug.cgi?id=19056 for more details. -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wdeprecated-declarations" - +PRAGMA_DIAG_PUSH +PRAGMA_DEPRECATED_IGNORED +inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) +{ dirent* p; int status; assert(dirp != NULL, "just checking"); @@ -114,11 +113,11 @@ if((status = ::readdir_r(dirp, dbuf, &p)) != 0) { errno = status; return NULL; - } else + } else { return p; - -#pragma GCC diagnostic pop + } } +PRAGMA_DIAG_POP inline int os::closedir(DIR *dirp) { assert(dirp != NULL, "argument is NULL"); diff -r 422615764e12 src/hotspot/share/utilities/compilerWarnings.hpp --- a/src/hotspot/share/utilities/compilerWarnings.hpp Thu Mar 15 14:54:10 2018 -0700 +++ b/src/hotspot/share/utilities/compilerWarnings.hpp Fri Mar 16 10:50:24 2018 +0100 @@ -48,6 +48,7 @@ #define PRAGMA_FORMAT_NONLITERAL_IGNORED _Pragma("GCC diagnostic ignored \"-Wformat-nonliteral\"") \ _Pragma("GCC diagnostic ignored \"-Wformat-security\"") #define PRAGMA_FORMAT_IGNORED _Pragma("GCC diagnostic ignored \"-Wformat\"") +#define PRAGMA_DEPRECATED_IGNORED _Pragma("GCC diagnostic ignored \"-Wdeprecated-declarations\"") #if defined(__clang_major__) && \ (__clang_major__ >= 4 || \ Thanks! -- Michal Vala OpenJDK QE Red Hat Czech From mvala at redhat.com Fri Mar 16 13:48:37 2018 From: mvala at redhat.com (Michal Vala) Date: Fri, 16 Mar 2018 14:48:37 +0100 Subject: RFR: build pragma error with gcc 4.4.7 In-Reply-To: <0221bde4-1313-31d7-65fa-e4f4ebed4200@oracle.com> References: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> <2dd2d37c-e0d7-ad13-6da6-92196ab6749d@oracle.com> <0221bde4-1313-31d7-65fa-e4f4ebed4200@oracle.com> Message-ID: On 03/16/2018 12:36 PM, Magnus Ihse Bursie wrote: > > On 2018-03-16 12:05, David Holmes wrote: >> Hi Michal, >> >> On 16/03/2018 8:48 PM, Michal Vala wrote: >>> Hi, >>> >>> I've been trying to build latest jdk with gcc 4.4.7 and I hit compile error >>> due to pragma used in function: > I don't think gcc 4.4.7 is likely to work at all. Configure will complain (but > continue) if you use a gcc prior to 4.7 (very recently raised to 4.8). > > You can try getting past this error, but you are likely to hit more issues down > the road. > > Do you have any specific reasons for using such an old compiler? Yes, I'm targeting RHEL6 where 4.4.7 is base gcc. With patch I've posted I'm able to compile. Configure is complaining with warning, but otherwise it's ok. Few more warnings during the build but no errors. We'd like to keep it 'compilable' in RHEL6 with code as close as possible to upstream. 
-- Michal Vala OpenJDK QE Red Hat Czech From coleen.phillimore at oracle.com Mon Mar 19 19:00:15 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 19 Mar 2018 15:00:15 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> Message-ID: I like Roman's version with static_field_base() the best.? The reason I wanted to keep static_field_addr and not have static_oop_addr was so there is one function to find static fields and this would work with the jvmci classes and with loading/storing primitives also.? So I like the consistent change that Roman has. There's a subtlety that I haven't quite figured out here. static_field_addr gets an address mirror+offset, so needs a load barrier on this offset, then needs a load barrier on the offset of the additional load (?) http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/src/hotspot/share/oops/instanceKlass.hpp.udiff.html + oop static_field_base() { return java_mirror(); } Currently java_mirror() has no load_barriers because it's an OopHandle in the ClassLoaderData.? Will it with Shenandoah and ZGC (even before concurrent class unloading?) thanks, Coleen On 3/19/18 9:24 AM, Roman Kennke wrote: > Hi Stefan, > > To be honest, I'd find it better to get rid of static_field_addr() > altogether, and I am also not a fan of resolve(): it seems easy to just > cover everything with it, but for Shenandoah it means to do a > write-barrier, even when a read-barrier would suffice (which is > cheaper). I recognize that none of the affected code is very performance > sensitive, but if there is a cleaner solution, I'd go for this. What > about this approach: > > http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/ > > ? > > > Roman > > >> Hi again, >> >> Seems like Coleen wanted me to solve this problem slightly differently. >> She suggested that I add an Access<>::resolve barrier in static_field_addr: >> >> http://cr.openjdk.java.net/~stefank/8199739/webrev.03/ >> >> This will probably solve the primitive access for Shenandoah. What do >> you think? >> >> Thanks, >> StefanK >> >> On 2018-03-19 12:29, Stefan Karlsson wrote: >>> Hi Roman, >>> >>> On 2018-03-19 11:17, Roman Kennke wrote: >>>> Hi Stefan, >>>> >>>> thank you! >>>> >>>> grepping for static_field_addr() yields some more places that'd need >>>> similar treatment: >>>> >>>> jlong java_lang_ref_SoftReference::clock() >>>> void java_lang_ref_SoftReference::set_clock(jlong value) >>>> >>>> Maybe cover them as well? Or I'll file a separate issue. Your call. >>> This seems like primitive accesses, I'll be happy to leave those to >>> you. ;) >>> >>> Thanks, >>> StefanK >>> >>>> Thanks, Roman >>>> >>>> >>>>> Hi all, >>>>> >>>>> Kim and Roman commented that my patch doesn't work with Shenandoah. 
>>>>> Here's an updated version: >>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>> On 2018-03-16 15:39, Stefan Karlsson wrote: >>>>>> Hi all, >>>>>> >>>>>> Please review this patch to use HeapAccess<>::oop_load instead of >>>>>> oopDesc::load_decode_heap_oop when loading oops from static fields in >>>>>> javaClasses.cpp: >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>>>> >>>>>> It's necessary to use HeapAccess<>::oop_load to inject load barriers >>>>>> for GCs that need them. >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>> > From stefan.karlsson at oracle.com Mon Mar 19 19:15:20 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 20:15:20 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> Message-ID: On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: > > I like Roman's version with static_field_base() the best.? The reason > I wanted to keep static_field_addr and not have static_oop_addr was so > there is one function to find static fields and this would work with > the jvmci classes and with loading/storing primitives also.? So I like > the consistent change that Roman has. That's OK with me. This RFE grew in scope of what I first intended, so I'm fine with Roman taking over this. > > There's a subtlety that I haven't quite figured out here. > static_field_addr gets an address mirror+offset, so needs a load > barrier on this offset, then needs a load barrier on the offset of the > additional load (?) There are two barriers in this piece of code: 1) Shenandoah needs a barrier to be able to read fields out of the java mirror 2) ZGC and UseCompressedOops needs a barrier when loading oop fields in the java mirror. Is that what you are referring to? > > http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/src/hotspot/share/oops/instanceKlass.hpp.udiff.html > > > + oop static_field_base() { return java_mirror(); } I wonder if this should be named static_field_base_raw to be consistent with recent changes to arrayOopDesc?: void* arrayOopDesc::base(BasicType type) const { ? oop resolved_obj = Access<>::resolve(as_oop()); ? return arrayOop(resolved_obj)->base_raw(type); } void* arrayOopDesc::base_raw(BasicType type) const { ? return reinterpret_cast(cast_from_oop(as_oop()) + base_offset_in_bytes(type)); } Here base() has the barrier and the base_raw() doesn't have a barrier. Thanks, Stefank > > > Currently java_mirror() has no load_barriers because it's an OopHandle > in the ClassLoaderData.? Will it with Shenandoah and ZGC (even before > concurrent class unloading?) > > thanks, > Coleen > > On 3/19/18 9:24 AM, Roman Kennke wrote: >> Hi Stefan, >> >> To be honest, I'd find it better to get rid of static_field_addr() >> altogether, and I am also not a fan of resolve(): it seems easy to just >> cover everything with it, but for Shenandoah it means to do a >> write-barrier, even when a read-barrier would suffice (which is >> cheaper). I recognize that none of the affected code is very performance >> sensitive, but if there is a cleaner solution, I'd go for this. 
What >> about this approach: >> >> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/ >> >> ? >> >> >> Roman >> >> >>> Hi again, >>> >>> Seems like Coleen wanted me to solve this problem slightly differently. >>> She suggested that I add an Access<>::resolve barrier in >>> static_field_addr: >>> >>> http://cr.openjdk.java.net/~stefank/8199739/webrev.03/ >>> >>> This will probably solve the primitive access for Shenandoah. What do >>> you think? >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-19 12:29, Stefan Karlsson wrote: >>>> Hi Roman, >>>> >>>> On 2018-03-19 11:17, Roman Kennke wrote: >>>>> Hi Stefan, >>>>> >>>>> thank you! >>>>> >>>>> grepping for static_field_addr() yields some more places that'd need >>>>> similar treatment: >>>>> >>>>> jlong java_lang_ref_SoftReference::clock() >>>>> void java_lang_ref_SoftReference::set_clock(jlong value) >>>>> >>>>> Maybe cover them as well? Or I'll file a separate issue. Your call. >>>> This seems like primitive accesses, I'll be happy to leave those to >>>> you. ;) >>>> >>>> Thanks, >>>> StefanK >>>> >>>>> Thanks, Roman >>>>> >>>>> >>>>>> Hi all, >>>>>> >>>>>> Kim and Roman commented that my patch doesn't work with Shenandoah. >>>>>> Here's an updated version: >>>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>>>> >>>>>> On 2018-03-16 15:39, Stefan Karlsson wrote: >>>>>>> Hi all, >>>>>>> >>>>>>> Please review this patch to use HeapAccess<>::oop_load instead of >>>>>>> oopDesc::load_decode_heap_oop when loading oops from static >>>>>>> fields in >>>>>>> javaClasses.cpp: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>>>>> >>>>>>> It's necessary to use HeapAccess<>::oop_load to inject load >>>>>>> barriers >>>>>>> for GCs that need them. >>>>>>> >>>>>>> Thanks, >>>>>>> StefanK >>>>> >> > From rkennke at redhat.com Mon Mar 19 19:35:19 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 20:35:19 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> Message-ID: <7bdeee07-8a7a-216a-642e-f3cb6026af34@redhat.com> Am 19.03.2018 um 20:15 schrieb Stefan Karlsson: > On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >> >> I like Roman's version with static_field_base() the best.? The reason >> I wanted to keep static_field_addr and not have static_oop_addr was so >> there is one function to find static fields and this would work with >> the jvmci classes and with loading/storing primitives also.? So I like >> the consistent change that Roman has. > > That's OK with me. This RFE grew in scope of what I first intended, so > I'm fine with Roman taking over this. > >> >> There's a subtlety that I haven't quite figured out here. >> static_field_addr gets an address mirror+offset, so needs a load >> barrier on this offset, then needs a load barrier on the offset of the >> additional load (?) > There are two barriers in this piece of code: > 1) Shenandoah needs a barrier to be able to read fields out of the java > mirror > 2) ZGC and UseCompressedOops needs a barrier when loading oop fields in > the java mirror. Both should be covered by the Access::load* or store* calls. 
> Is that what you are referring to? > >> >> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/src/hotspot/share/oops/instanceKlass.hpp.udiff.html >> >> >> + oop static_field_base() { return java_mirror(); } > > I wonder if this should be named static_field_base_raw to be consistent > with recent changes to arrayOopDesc?: > > void* arrayOopDesc::base(BasicType type) const { > ? oop resolved_obj = Access<>::resolve(as_oop()); > ? return arrayOop(resolved_obj)->base_raw(type); > } > > void* arrayOopDesc::base_raw(BasicType type) const { > ? return reinterpret_cast(cast_from_oop(as_oop()) + > base_offset_in_bytes(type)); > } > > Here base() has the barrier and the base_raw() doesn't have a barrier. I don't actually want static_field_base() to have a barrier, because the HeapAccess<>::load_(oop_)at() already does the right thing. I would not rename it to static_field_base_raw() and/or add such a method. In-fact, I'd prefer if arrayOopDesc::base() wouldn't have a barrier either, and all users do the right thing but that is another story (and RFE). Roman From coleen.phillimore at oracle.com Mon Mar 19 19:35:41 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 19 Mar 2018 15:35:41 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> Message-ID: <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> On 3/19/18 3:15 PM, Stefan Karlsson wrote: > On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >> >> I like Roman's version with static_field_base() the best.? The reason >> I wanted to keep static_field_addr and not have static_oop_addr was >> so there is one function to find static fields and this would work >> with the jvmci classes and with loading/storing primitives also.? So >> I like the consistent change that Roman has. > > That's OK with me. This RFE grew in scope of what I first intended, so > I'm fine with Roman taking over this. > >> >> There's a subtlety that I haven't quite figured out here. >> static_field_addr gets an address mirror+offset, so needs a load >> barrier on this offset, then needs a load barrier on the offset of >> the additional load (?) > There are two barriers in this piece of code: > 1) Shenandoah needs a barrier to be able to read fields out of the > java mirror > 2) ZGC and UseCompressedOops needs a barrier when loading oop fields > in the java mirror. > > Is that what you are referring to? I had to read this thread over again, and am still foggy, but it was because your original change didn't work for shenandoah, ie Kim's last response. The brooks pointer has to be applied to get the mirror address as well as reading fields out of the mirror, if I understand correctly. OopHandle::resolve() which is what java_mirror() is not accessorized but should be for shenandoah.? I think.? I guess that was my question before. I agree it is best to avoid _addr() functions. 
Coleen > >> >> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/src/hotspot/share/oops/instanceKlass.hpp.udiff.html >> >> >> + oop static_field_base() { return java_mirror(); } > > I wonder if this should be named static_field_base_raw to be > consistent with recent changes to arrayOopDesc?: > > void* arrayOopDesc::base(BasicType type) const { > ? oop resolved_obj = Access<>::resolve(as_oop()); > ? return arrayOop(resolved_obj)->base_raw(type); > } > > void* arrayOopDesc::base_raw(BasicType type) const { > ? return reinterpret_cast(cast_from_oop(as_oop()) + > base_offset_in_bytes(type)); > } > > Here base() has the barrier and the base_raw() doesn't have a barrier. > > Thanks, > Stefank >> >> >> Currently java_mirror() has no load_barriers because it's an >> OopHandle in the ClassLoaderData.? Will it with Shenandoah and ZGC >> (even before concurrent class unloading?) >> >> thanks, >> Coleen >> >> On 3/19/18 9:24 AM, Roman Kennke wrote: >>> Hi Stefan, >>> >>> To be honest, I'd find it better to get rid of static_field_addr() >>> altogether, and I am also not a fan of resolve(): it seems easy to just >>> cover everything with it, but for Shenandoah it means to do a >>> write-barrier, even when a read-barrier would suffice (which is >>> cheaper). I recognize that none of the affected code is very >>> performance >>> sensitive, but if there is a cleaner solution, I'd go for this. What >>> about this approach: >>> >>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/ >>> >>> ? >>> >>> >>> Roman >>> >>> >>>> Hi again, >>>> >>>> Seems like Coleen wanted me to solve this problem slightly >>>> differently. >>>> She suggested that I add an Access<>::resolve barrier in >>>> static_field_addr: >>>> >>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.03/ >>>> >>>> This will probably solve the primitive access for Shenandoah. What do >>>> you think? >>>> >>>> Thanks, >>>> StefanK >>>> >>>> On 2018-03-19 12:29, Stefan Karlsson wrote: >>>>> Hi Roman, >>>>> >>>>> On 2018-03-19 11:17, Roman Kennke wrote: >>>>>> Hi Stefan, >>>>>> >>>>>> thank you! >>>>>> >>>>>> grepping for static_field_addr() yields some more places that'd need >>>>>> similar treatment: >>>>>> >>>>>> jlong java_lang_ref_SoftReference::clock() >>>>>> void java_lang_ref_SoftReference::set_clock(jlong value) >>>>>> >>>>>> Maybe cover them as well? Or I'll file a separate issue. Your call. >>>>> This seems like primitive accesses, I'll be happy to leave those to >>>>> you. ;) >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>>> Thanks, Roman >>>>>> >>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> Kim and Roman commented that my patch doesn't work with Shenandoah. >>>>>>> Here's an updated version: >>>>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.02/ >>>>>>> >>>>>>> Thanks, >>>>>>> StefanK >>>>>>> >>>>>>> On 2018-03-16 15:39, Stefan Karlsson wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Please review this patch to use HeapAccess<>::oop_load instead of >>>>>>>> oopDesc::load_decode_heap_oop when loading oops from static >>>>>>>> fields in >>>>>>>> javaClasses.cpp: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~stefank/8199739/webrev.01/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8199739 >>>>>>>> >>>>>>>> It's necessary to use HeapAccess<>::oop_load to inject load >>>>>>>> barriers >>>>>>>> for GCs that need them. 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> StefanK >>>>>> >>> >> > From stefan.karlsson at oracle.com Mon Mar 19 19:46:00 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 20:46:00 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <7bdeee07-8a7a-216a-642e-f3cb6026af34@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <7bdeee07-8a7a-216a-642e-f3cb6026af34@redhat.com> Message-ID: On 2018-03-19 20:35, Roman Kennke wrote: > Am 19.03.2018 um 20:15 schrieb Stefan Karlsson: >> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>> I like Roman's version with static_field_base() the best.? The reason >>> I wanted to keep static_field_addr and not have static_oop_addr was so >>> there is one function to find static fields and this would work with >>> the jvmci classes and with loading/storing primitives also.? So I like >>> the consistent change that Roman has. >> That's OK with me. This RFE grew in scope of what I first intended, so >> I'm fine with Roman taking over this. >> >>> There's a subtlety that I haven't quite figured out here. >>> static_field_addr gets an address mirror+offset, so needs a load >>> barrier on this offset, then needs a load barrier on the offset of the >>> additional load (?) >> There are two barriers in this piece of code: >> 1) Shenandoah needs a barrier to be able to read fields out of the java >> mirror >> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields in >> the java mirror. > Both should be covered by the Access::load* or store* calls. Yes. > >> Is that what you are referring to? >> >>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/src/hotspot/share/oops/instanceKlass.hpp.udiff.html >>> >>> >>> + oop static_field_base() { return java_mirror(); } >> I wonder if this should be named static_field_base_raw to be consistent >> with recent changes to arrayOopDesc?: >> >> void* arrayOopDesc::base(BasicType type) const { >> ? oop resolved_obj = Access<>::resolve(as_oop()); >> ? return arrayOop(resolved_obj)->base_raw(type); >> } >> >> void* arrayOopDesc::base_raw(BasicType type) const { >> ? return reinterpret_cast(cast_from_oop(as_oop()) + >> base_offset_in_bytes(type)); >> } >> >> Here base() has the barrier and the base_raw() doesn't have a barrier. > I don't actually want static_field_base() to have a barrier, because the > HeapAccess<>::load_(oop_)at() already does the right thing. I agree. > I would not > rename it to static_field_base_raw() and/or add such a method. That makes this code inconsistent with the naming in arrayOopDesc. I don't find that very appealing. > > In-fact, I'd prefer if arrayOopDesc::base() wouldn't have a barrier > either, and all users do the right thing but that is another story (and > RFE). Maybe this contention point needs to be resolved sooner rather than later. StefanK > > Roman > > From nezihyigitbasi at gmail.com Mon Mar 19 18:43:44 2018 From: nezihyigitbasi at gmail.com (nezih yigitbasi) Date: Mon, 19 Mar 2018 11:43:44 -0700 Subject: SIGSEGV with build 9.0.1+11 Message-ID: Hi, Our production app recently crashed with a SIGSEGV at "~BufferBlob::vtable chunks" with Java build 9.0.1+11. This is the first time we see a crash with this particular stack after upgrading to Java 9. 
A quick search leads to JDK-8169938 (which is related to AOT compiled binaries and we don't use AOT) and JDK-8191081, which is closed as incomplete. You can find the hs_err file in the attachment. Any help is greatly appreciated. Thanks, Nezih From rkennke at redhat.com Mon Mar 19 20:11:30 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 21:11:30 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> Message-ID: <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: > > > On 3/19/18 3:15 PM, Stefan Karlsson wrote: >> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>> >>> I like Roman's version with static_field_base() the best.? The reason >>> I wanted to keep static_field_addr and not have static_oop_addr was >>> so there is one function to find static fields and this would work >>> with the jvmci classes and with loading/storing primitives also.? So >>> I like the consistent change that Roman has. >> >> That's OK with me. This RFE grew in scope of what I first intended, so >> I'm fine with Roman taking over this. >> >>> >>> There's a subtlety that I haven't quite figured out here. >>> static_field_addr gets an address mirror+offset, so needs a load >>> barrier on this offset, then needs a load barrier on the offset of >>> the additional load (?) >> There are two barriers in this piece of code: >> 1) Shenandoah needs a barrier to be able to read fields out of the >> java mirror >> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >> in the java mirror. >> >> Is that what you are referring to? > > I had to read this thread over again, and am still foggy, but it was > because your original change didn't work for shenandoah, ie Kim's last > response. > > The brooks pointer has to be applied to get the mirror address as well > as reading fields out of the mirror, if I understand correctly. > > OopHandle::resolve() which is what java_mirror() is not accessorized but > should be for shenandoah.? I think.? I guess that was my question before. The family of _at() functions in Access, those which accept oop+offset, do the chasing of the forwarding pointer in Shenandoah, then they apply the offset, load the memory field and return the value in the right type. They also do the load-barrier in ZGC (haven't checked, but that's just logical). There is also oop Access::resolve(oop) which is a bit of a hack. It has been introduced because of arraycopy and java <-> native bulk copy stuff that uses typeArrayOop::*_at_addr() family of methods. In those situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe evacuate the object (for writes), where #2 is stronger than #1 (i.e. if we do #2, then we don't need to do #1). In order to keep things simple, we decided to make Access::resolve(oop) do #2, and have it cover all those cases, and put it in arrayOopDesc::base(). 
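As an illustration of that bulk-access pattern, a hypothetical native routine
that hands out a raw pointer would first pin down a stable copy of the array
(a sketch, not code from any webrev):

    void copy_chars_out(typeArrayOop src, jchar* dst, int length) {
      // resolve() may evacuate/forward the object so that the raw address
      // handed to memcpy stays valid for the whole bulk copy
      typeArrayOop stable = typeArrayOop(Access<>::resolve(src));
      memcpy(dst, stable->char_at_addr(0), length * sizeof(jchar));
    }

Routing the resolve() call through arrayOopDesc::base() centralizes that step
for the _at_addr()-style users.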
This does the right thing for all cases, but it is a bit broad, for example, it may lead to double-copying a potentially large array (resolve-copy src array from from-space to to-space, then copy it again to the dst array). For those reasons, it is advisable to think twice before using _at_addr() or in-fact Access::resolve() if there's a better/cleaner way to do it. Stefan: Should I assign the bug to me and take it over? Or do you want to take my patch and push it yourself. I don't mind either way? Roman From stefan.karlsson at oracle.com Mon Mar 19 20:23:11 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 19 Mar 2018 21:23:11 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> Message-ID: <4aff9543-f059-572b-08f1-efe82304e8ba@oracle.com> On 2018-03-19 21:11, Roman Kennke wrote: > Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >> >> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>> I like Roman's version with static_field_base() the best.? The reason >>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>> so there is one function to find static fields and this would work >>>> with the jvmci classes and with loading/storing primitives also.? So >>>> I like the consistent change that Roman has. >>> That's OK with me. This RFE grew in scope of what I first intended, so >>> I'm fine with Roman taking over this. >>> >>>> There's a subtlety that I haven't quite figured out here. >>>> static_field_addr gets an address mirror+offset, so needs a load >>>> barrier on this offset, then needs a load barrier on the offset of >>>> the additional load (?) >>> There are two barriers in this piece of code: >>> 1) Shenandoah needs a barrier to be able to read fields out of the >>> java mirror >>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>> in the java mirror. >>> >>> Is that what you are referring to? >> I had to read this thread over again, and am still foggy, but it was >> because your original change didn't work for shenandoah, ie Kim's last >> response. >> >> The brooks pointer has to be applied to get the mirror address as well >> as reading fields out of the mirror, if I understand correctly. >> >> OopHandle::resolve() which is what java_mirror() is not accessorized but >> should be for shenandoah.? I think.? I guess that was my question before. > The family of _at() functions in Access, those which accept oop+offset, > do the chasing of the forwarding pointer in Shenandoah, then they apply > the offset, load the memory field and return the value in the right > type. They also do the load-barrier in ZGC (haven't checked, but that's > just logical). > > There is also oop Access::resolve(oop) which is a bit of a hack. It has > been introduced because of arraycopy and java <-> native bulk copy stuff > that uses typeArrayOop::*_at_addr() family of methods. In those > situations we still need to 1. chase the fwd ptr (for reads) or 2. 
maybe > evacuate the object (for writes), where #2 is stronger than #1 (i.e. if > we do #2, then we don't need to do #1). In order to keep things simple, > we decided to make Access::resolve(oop) do #2, and have it cover all > those cases, and put it in arrayOopDesc::base(). This does the right > thing for all cases, but it is a bit broad, for example, it may lead to > double-copying a potentially large array (resolve-copy src array from > from-space to to-space, then copy it again to the dst array). For those > reasons, it is advisable to think twice before using _at_addr() or > in-fact Access::resolve() if there's a better/cleaner way to do it. > > Stefan: Should I assign the bug to me and take it over? Or do you want > to take my patch and push it yourself. I don't mind either way? I assigned the bug to you. Cheers, StefanK > > Roman > From rkennke at redhat.com Mon Mar 19 20:40:20 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 19 Mar 2018 21:40:20 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <4aff9543-f059-572b-08f1-efe82304e8ba@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <4aff9543-f059-572b-08f1-efe82304e8ba@oracle.com> Message-ID: Am 19.03.2018 um 21:23 schrieb Stefan Karlsson: > On 2018-03-19 21:11, Roman Kennke wrote: >> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>> >>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>> I like Roman's version with static_field_base() the best.? The reason >>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>> so there is one function to find static fields and this would work >>>>> with the jvmci classes and with loading/storing primitives also.? So >>>>> I like the consistent change that Roman has. >>>> That's OK with me. This RFE grew in scope of what I first intended, so >>>> I'm fine with Roman taking over this. >>>> >>>>> There's a subtlety that I haven't quite figured out here. >>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>> barrier on this offset, then needs a load barrier on the offset of >>>>> the additional load (?) >>>> There are two barriers in this piece of code: >>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>> java mirror >>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>> in the java mirror. >>>> >>>> Is that what you are referring to? >>> I had to read this thread over again, and am still foggy, but it was >>> because your original change didn't work for shenandoah, ie Kim's last >>> response. >>> >>> The brooks pointer has to be applied to get the mirror address as well >>> as reading fields out of the mirror, if I understand correctly. >>> >>> OopHandle::resolve() which is what java_mirror() is not accessorized but >>> should be for shenandoah.? I think.? I guess that was my question >>> before. >> The family of _at() functions in Access, those which accept oop+offset, >> do the chasing of the forwarding pointer in Shenandoah, then they apply >> the offset, load the memory field and return the value in the right >> type. 
They also do the load-barrier in ZGC (haven't checked, but that's >> just logical). >> >> There is also oop Access::resolve(oop) which is a bit of a hack. It has >> been introduced because of arraycopy and java <-> native bulk copy stuff >> that uses typeArrayOop::*_at_addr() family of methods. In those >> situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe >> evacuate the object (for writes), where #2 is stronger than #1 (i.e. if >> we do #2, then we don't need to do #1). In order to keep things simple, >> we decided to make Access::resolve(oop) do #2, and have it cover all >> those cases, and put it in arrayOopDesc::base(). This does the right >> thing for all cases, but it is a bit broad, for example, it may lead to >> double-copying a potentially large array (resolve-copy src array from >> from-space to to-space, then copy it again to the dst array). For those >> reasons, it is advisable to think twice before using _at_addr() or >> in-fact Access::resolve() if there's a better/cleaner way to do it. >> >> Stefan: Should I assign the bug to me and take it over? Or do you want >> to take my patch and push it yourself. I don't mind either way? > > I assigned the bug to you. > Ok, thanks :-) I filed: https://bugs.openjdk.java.net/browse/JDK-8199801 and promise to get rid of arrayOopDesc::base_raw() as part of it (if it gets approved). Can I consider your and Coleen's approval as reviews for this patch? Thanks, Roman From nezihyigitbasi at gmail.com Mon Mar 19 20:50:12 2018 From: nezihyigitbasi at gmail.com (nezih yigitbasi) Date: Mon, 19 Mar 2018 13:50:12 -0700 Subject: SIGSEGV with build 9.0.1+11 In-Reply-To: References: Message-ID: In case the attachment doesn't make it, here is the hs_err file: https://gist.github.com/nezihyigitbasi/427919d3c86b4d86281ee1f33f96a23e Thanks, Nezih 2018-03-19 11:43 GMT-07:00 nezih yigitbasi : > Hi, > Our production app recently crashed with a SIGSEGV at > "~BufferBlob::vtable chunks" with Java build 9.0.1+11. This is the > first time we see a crash with this particular stack after upgrading > to Java 9. A quick search leads to JDK-8169938 (which is related to > AOT compiled binaries and we don't use AOT) and JDK-8191081, which is > closed as incomplete. You can find the hs_err file in the attachment. > > Any help is greatly appreciated. > > Thanks, > Nezih From coleen.phillimore at oracle.com Mon Mar 19 20:51:49 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 19 Mar 2018 16:51:49 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <4aff9543-f059-572b-08f1-efe82304e8ba@oracle.com> Message-ID: On 3/19/18 4:40 PM, Roman Kennke wrote: > Am 19.03.2018 um 21:23 schrieb Stefan Karlsson: >> On 2018-03-19 21:11, Roman Kennke wrote: >>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>> I like Roman's version with static_field_base() the best.? 
The reason >>>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>>> so there is one function to find static fields and this would work >>>>>> with the jvmci classes and with loading/storing primitives also.? So >>>>>> I like the consistent change that Roman has. >>>>> That's OK with me. This RFE grew in scope of what I first intended, so >>>>> I'm fine with Roman taking over this. >>>>> >>>>>> There's a subtlety that I haven't quite figured out here. >>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>> the additional load (?) >>>>> There are two barriers in this piece of code: >>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>> java mirror >>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>>> in the java mirror. >>>>> >>>>> Is that what you are referring to? >>>> I had to read this thread over again, and am still foggy, but it was >>>> because your original change didn't work for shenandoah, ie Kim's last >>>> response. >>>> >>>> The brooks pointer has to be applied to get the mirror address as well >>>> as reading fields out of the mirror, if I understand correctly. >>>> >>>> OopHandle::resolve() which is what java_mirror() is not accessorized but >>>> should be for shenandoah.? I think.? I guess that was my question >>>> before. >>> The family of _at() functions in Access, those which accept oop+offset, >>> do the chasing of the forwarding pointer in Shenandoah, then they apply >>> the offset, load the memory field and return the value in the right >>> type. They also do the load-barrier in ZGC (haven't checked, but that's >>> just logical). >>> >>> There is also oop Access::resolve(oop) which is a bit of a hack. It has >>> been introduced because of arraycopy and java <-> native bulk copy stuff >>> that uses typeArrayOop::*_at_addr() family of methods. In those >>> situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe >>> evacuate the object (for writes), where #2 is stronger than #1 (i.e. if >>> we do #2, then we don't need to do #1). In order to keep things simple, >>> we decided to make Access::resolve(oop) do #2, and have it cover all >>> those cases, and put it in arrayOopDesc::base(). This does the right >>> thing for all cases, but it is a bit broad, for example, it may lead to >>> double-copying a potentially large array (resolve-copy src array from >>> from-space to to-space, then copy it again to the dst array). For those >>> reasons, it is advisable to think twice before using _at_addr() or >>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>> >>> Stefan: Should I assign the bug to me and take it over? Or do you want >>> to take my patch and push it yourself. I don't mind either way? >> I assigned the bug to you. >> > Ok, thanks :-) > > I filed: > https://bugs.openjdk.java.net/browse/JDK-8199801 > > and promise to get rid of arrayOopDesc::base_raw() as part of it (if it > gets approved). Can I consider your and Coleen's approval as reviews for > this patch? Yes, it looks really good to me. thanks! 
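For reference, the split proposed in JDK-8199801 could be prototyped on top of
the current API like this (a sketch only; neither helper exists today, and the
strong resolve() stands in for both until the RFE is implemented):

    // read side: only needs a readable address (follow the forwarding pointer)
    inline oop resolve_for_read(oop obj)  { return Access<>::resolve(obj); }
    // write side: the object may have to be evacuated first; resolve() already
    // does the stronger of the two, so it is a correct, if conservative, stand-in
    inline oop resolve_for_write(oop obj) { return Access<>::resolve(obj); }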
Coleen > > Thanks, Roman > > From kim.barrett at oracle.com Mon Mar 19 21:33:02 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 19 Mar 2018 17:33:02 -0400 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <4aff9543-f059-572b-08f1-efe82304e8ba@oracle.com> Message-ID: <182C95D5-84D7-4411-98C5-DFC55D9DCCE5@oracle.com> > On Mar 19, 2018, at 4:40 PM, Roman Kennke wrote: >> > > Ok, thanks :-) > > I filed: > https://bugs.openjdk.java.net/browse/JDK-8199801 > > and promise to get rid of arrayOopDesc::base_raw() as part of it (if it > gets approved). Can I consider your and Coleen's approval as reviews for > this patch? > > Thanks, Roman Assuming you mean this patch: http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.00/ That one looks good to me. From christian.tornqvist at oracle.com Mon Mar 19 21:49:16 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Mon, 19 Mar 2018 17:49:16 -0400 Subject: Naming restriction for branches in submit-hs In-Reply-To: References: Message-ID: Hi Volker, I?ve update the wiki page to reflect what the branch naming restrictions are for the submit repo, you can find the page at: https://wiki.openjdk.java.net/display/Build/Submit+Repo Thanks, Christian > On Mar 19, 2018, at 6:45 00AM, jesper.wilhelmsson at oracle.com wrote: > > Hi Volker, > > It seems that a . in the branch name might be an issue. This is being investigated. > > The last build I see with your name on it is from 2018-03-15 17:48. I can see that this is not the one you are looking for. > /Jesper > >> On 17 Mar 2018, at 08:55, Volker Simonis wrote: >> >> Hi Jesper, >> >> the Wiki mentions that ?only branches starting with ?JDK-? will be built and tested? [1]. I?ve submitted a branch called ?JDK-8199698.v2? [2] (i.e. the second version of a change after review) yesterday, but it doesn?t seem to be build. Are there any other naming conventions? I?m pretty sure names like these worked in the first version of the submit repo last year. >> >> Thanks, >> Volker >> >> [1] https://wiki.openjdk.java.net/display/Build/Submit+Repo >> [2] http://hg.openjdk.java.net/jdk/submit-hs/rev/60bae43fe453 >> >> > From vladimir.kozlov at oracle.com Mon Mar 19 22:20:39 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 19 Mar 2018 15:20:39 -0700 Subject: SIGSEGV with build 9.0.1+11 In-Reply-To: References: Message-ID: <6dfa9554-8a92-34aa-6c36-6e9018a76f6e@oracle.com> Hi Nezih, Such failures happens due to bad oops (java object pointers) due to different reasons. And it is very hard to reproduce or find the cause. I can't guarantee that we will be able to fix this particular case (your program ran for 8 days and had millions of JIT compilations). 
I filed bug to record this failure: https://bugs.openjdk.java.net/browse/JDK-8199804 Regards, Vladimir On 3/19/18 1:50 PM, nezih yigitbasi wrote: > In case the attachment doesn't make it, here is the hs_err file: > https://gist.github.com/nezihyigitbasi/427919d3c86b4d86281ee1f33f96a23e > > Thanks, > Nezih > > 2018-03-19 11:43 GMT-07:00 nezih yigitbasi : >> Hi, >> Our production app recently crashed with a SIGSEGV at >> "~BufferBlob::vtable chunks" with Java build 9.0.1+11. This is the >> first time we see a crash with this particular stack after upgrading >> to Java 9. A quick search leads to JDK-8169938 (which is related to >> AOT compiled binaries and we don't use AOT) and JDK-8191081, which is >> closed as incomplete. You can find the hs_err file in the attachment. >> >> Any help is greatly appreciated. >> >> Thanks, >> Nezih From david.holmes at oracle.com Mon Mar 19 22:54:01 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 20 Mar 2018 08:54:01 +1000 Subject: SIGSEGV with build 9.0.1+11 In-Reply-To: <6dfa9554-8a92-34aa-6c36-6e9018a76f6e@oracle.com> References: <6dfa9554-8a92-34aa-6c36-6e9018a76f6e@oracle.com> Message-ID: <95d08ce0-cd6e-f699-0488-2ea13d93d3e2@oracle.com> On 20/03/2018 8:20 AM, Vladimir Kozlov wrote: > Hi Nezih, > > Such failures happens due to bad oops (java object pointers) due to > different reasons. And it is very hard to reproduce or find the cause. I > can't guarantee that we will be able to fix this particular case (your > program ran for 8 days and had millions of JIT compilations). > > I filed bug to record this failure: > > https://bugs.openjdk.java.net/browse/JDK-8199804 But please note Nezih that the OpenJDK mailing lists are _not_ for reporting bugs. # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp Thanks, David > Regards, > Vladimir > > On 3/19/18 1:50 PM, nezih yigitbasi wrote: >> In case the attachment doesn't make it, here is the hs_err file: >> https://gist.github.com/nezihyigitbasi/427919d3c86b4d86281ee1f33f96a23e >> >> Thanks, >> Nezih >> >> 2018-03-19 11:43 GMT-07:00 nezih yigitbasi : >>> Hi, >>> Our production app recently crashed with a SIGSEGV at >>> "~BufferBlob::vtable chunks" with Java build 9.0.1+11. This is the >>> first time we see a crash with this particular stack after upgrading >>> to Java 9. A quick search leads to JDK-8169938 (which is related to >>> AOT compiled binaries and we don't use AOT) and JDK-8191081, which is >>> closed as incomplete. You can find the hs_err file in the attachment. >>> >>> Any help is greatly appreciated. >>> >>> Thanks, >>> Nezih From kim.barrett at oracle.com Mon Mar 19 23:23:58 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 19 Mar 2018 19:23:58 -0400 Subject: RFR: build pragma error with gcc 4.4.7 In-Reply-To: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> References: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> Message-ID: <01173273-2A13-4FE0-81EE-3C50954D7A90@oracle.com> > On Mar 16, 2018, at 6:48 AM, Michal Vala wrote: > > Hi, > > I've been trying to build latest jdk with gcc 4.4.7 and I hit compile error due to pragma used in function: > > /mnt/ramdisk/openjdk/src/hotspot/os/linux/os_linux.inline.hpp:103: error: #pragma GCC diagnostic not allowed inside functions > > > I'm sending little patch that fixes the issue by wrapping whole function. I've also created a macro for ignoring deprecated declaration inside compilerWarnings.hpp to line up with others. > > Can someone please review? 
If it's ok, I would also need a sponsor. > > > diff -r 422615764e12 src/hotspot/os/linux/os_linux.inline.hpp > --- a/src/hotspot/os/linux/os_linux.inline.hpp Thu Mar 15 14:54:10 2018 -0700 > +++ b/src/hotspot/os/linux/os_linux.inline.hpp Fri Mar 16 10:50:24 2018 +0100 > @@ -96,13 +96,12 @@ > return ::ftruncate64(fd, length); > } > > -inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) > -{ > // readdir_r has been deprecated since glibc 2.24. > // See https://sourceware.org/bugzilla/show_bug.cgi?id=19056 for more details. > -#pragma GCC diagnostic push > -#pragma GCC diagnostic ignored "-Wdeprecated-declarations" > - > +PRAGMA_DIAG_PUSH > +PRAGMA_DEPRECATED_IGNORED > +inline struct dirent* os::readdir(DIR* dirp, dirent *dbuf) > +{ > dirent* p; > int status; > assert(dirp != NULL, "just checking"); > @@ -114,11 +113,11 @@ > if((status = ::readdir_r(dirp, dbuf, &p)) != 0) { > errno = status; > return NULL; > - } else > + } else { > return p; > - > -#pragma GCC diagnostic pop > + } > } > +PRAGMA_DIAG_POP > > inline int os::closedir(DIR *dirp) { > assert(dirp != NULL, "argument is NULL"); > diff -r 422615764e12 src/hotspot/share/utilities/compilerWarnings.hpp > --- a/src/hotspot/share/utilities/compilerWarnings.hpp Thu Mar 15 14:54:10 2018 -0700 > +++ b/src/hotspot/share/utilities/compilerWarnings.hpp Fri Mar 16 10:50:24 2018 +0100 > @@ -48,6 +48,7 @@ > #define PRAGMA_FORMAT_NONLITERAL_IGNORED _Pragma("GCC diagnostic ignored \"-Wformat-nonliteral\"") \ > _Pragma("GCC diagnostic ignored \"-Wformat-security\"") > #define PRAGMA_FORMAT_IGNORED _Pragma("GCC diagnostic ignored \"-Wformat\"") > +#define PRAGMA_DEPRECATED_IGNORED _Pragma("GCC diagnostic ignored \"-Wdeprecated-declarations\"") > > #if defined(__clang_major__) && \ > (__clang_major__ >= 4 || \ > > > Thanks! > > -- > Michal Vala > OpenJDK QE > Red Hat Czech Given that there seem to be no callers of os::readdir that share the DIR* among multiple threads, it would seem easier to just replace the use of ::readdir_r with ::readdir. That seems to be the intent in the deprecation decision; use ::readdir, and either don't share a DIR* among threads, or use external locking when doing so. There are also problems with the patch as provided. (1) Since PRAGMA_DIAG_PUSH/POP do nothing in the version of gcc this change is being made in support of, the warning would be disabled for all following code in any translation unit that includes this file. That doesn't seem good. (2) The default empty definition for PRAGMA_DEPRECATED_IGNORED is missing. That means the macro can't be used in shared code, in which case having defined in (shared) compilerWarnings.hpp is questionable. From tobias.hartmann at oracle.com Tue Mar 20 09:20:45 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 20 Mar 2018 10:20:45 +0100 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts Message-ID: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8199777 http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ The VM option "-XX:+AggressiveOpts" should be deprecated in JDK 11 and removed in a future release. The option was originally supposed to enable some experimental optimizations of the C2 compiler to improve performance of specific benchmarks. Most features have been removed or integrated over time leaving the behavior of the option ill-defined and error-prone. 
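Mechanically, deprecating a VM flag is a one-line entry in the
special_jvm_flags table in arguments.cpp; the entry below is only meant to show
the shape (deprecate in 11, obsolete in a later release), not to quote the
exact patch:

    static SpecialFlag const special_jvm_flags[] = {
      // ...
      // { name, deprecated_in, obsolete_in, expired_in }
      { "AggressiveOpts", JDK_Version::jdk(11), JDK_Version::jdk(12), JDK_Version::undefined() },
    };

With such an entry the VM prints a deprecation warning when the flag is used
but still accepts it until it is obsoleted.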
The only effect that the flag currently has is setting "AutoBoxCacheMax" to 20000 and "BiasedLockingStartupDelay" to 500. The same configuration can be achieved by setting the corresponding flags via the command line. I've deprecated the flag and replaced the usages in tests by -XX:+EliminateAutoBox -XX:AutoBoxCacheMax=20000. Thanks, Tobias From shade at redhat.com Tue Mar 20 09:24:44 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 20 Mar 2018 10:24:44 +0100 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> Message-ID: <00460e5d-291a-17f2-85c3-a8eb72153e77@redhat.com> On 03/20/2018 10:20 AM, Tobias Hartmann wrote: > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8199777 > http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ Yes, please. Looks good. -Aleksey From tobias.hartmann at oracle.com Tue Mar 20 09:25:55 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 20 Mar 2018 10:25:55 +0100 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <00460e5d-291a-17f2-85c3-a8eb72153e77@redhat.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <00460e5d-291a-17f2-85c3-a8eb72153e77@redhat.com> Message-ID: <18907723-a18b-34d2-f5b8-84732c526474@oracle.com> Thanks Aleksey! Best regards, Tobias On 20.03.2018 10:24, Aleksey Shipilev wrote: > On 03/20/2018 10:20 AM, Tobias Hartmann wrote: >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8199777 >> http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ > > Yes, please. Looks good. > > -Aleksey > From erik.osterlund at oracle.com Tue Mar 20 10:07:50 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 11:07:50 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> Message-ID: <5AB0DD76.6020807@oracle.com> Hi Roman, On 2018-03-19 21:11, Roman Kennke wrote: > Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >> >> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>> I like Roman's version with static_field_base() the best. The reason >>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>> so there is one function to find static fields and this would work >>>> with the jvmci classes and with loading/storing primitives also. So >>>> I like the consistent change that Roman has. >>> That's OK with me. This RFE grew in scope of what I first intended, so >>> I'm fine with Roman taking over this. >>> >>>> There's a subtlety that I haven't quite figured out here. >>>> static_field_addr gets an address mirror+offset, so needs a load >>>> barrier on this offset, then needs a load barrier on the offset of >>>> the additional load (?) 
>>> There are two barriers in this piece of code: >>> 1) Shenandoah needs a barrier to be able to read fields out of the >>> java mirror >>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>> in the java mirror. >>> >>> Is that what you are referring to? >> I had to read this thread over again, and am still foggy, but it was >> because your original change didn't work for shenandoah, ie Kim's last >> response. >> >> The brooks pointer has to be applied to get the mirror address as well >> as reading fields out of the mirror, if I understand correctly. >> >> OopHandle::resolve() which is what java_mirror() is not accessorized but >> should be for shenandoah. I think. I guess that was my question before. > The family of _at() functions in Access, those which accept oop+offset, > do the chasing of the forwarding pointer in Shenandoah, then they apply > the offset, load the memory field and return the value in the right > type. They also do the load-barrier in ZGC (haven't checked, but that's > just logical). > > There is also oop Access::resolve(oop) which is a bit of a hack. It has > been introduced because of arraycopy and java <-> native bulk copy stuff > that uses typeArrayOop::*_at_addr() family of methods. In those > situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe > evacuate the object (for writes), where #2 is stronger than #1 (i.e. if > we do #2, then we don't need to do #1). In order to keep things simple, > we decided to make Access::resolve(oop) do #2, and have it cover all > those cases, and put it in arrayOopDesc::base(). This does the right > thing for all cases, but it is a bit broad, for example, it may lead to > double-copying a potentially large array (resolve-copy src array from > from-space to to-space, then copy it again to the dst array). For those > reasons, it is advisable to think twice before using _at_addr() or > in-fact Access::resolve() if there's a better/cleaner way to do it. Are we certain that it is indeed only arraycopy that requires stable accesses until the next thread transition? I seem to recall that last time we discussed this, you thought that there was more than arraycopy code that needed this. For example printing and string encoding/decoding logic. If we are going to make changes based on the assumption that we will be able to get rid of the resolve() barrier, then we should be fairly certain that we can indeed get rid of it. So have the other previously discussed roadblocks other than arraycopy disappeared? Thanks, /Erik > Stefan: Should I assign the bug to me and take it over? Or do you want > to take my patch and push it yourself. I don't mind either way? > > Roman > From erik.osterlund at oracle.com Tue Mar 20 10:22:10 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 11:22:10 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> Message-ID: <5AB0E0D2.7090303@oracle.com> Hi Roman, Is there a good reason why the Access<>::resolve is not performed inside of index_oop_from_field_offset_long instead of its callsites. For example, it looks like barriers are missing in Unsafe_CopySwapMemory0, that you would get for free by putting the resolve barrier in the API used in this file for resolving addresses. 
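A sketch of what that would look like (simplified; the real helper in
unsafe.cpp has extra assertions and 32-bit handling):

    static inline void* index_oop_from_field_offset_long(oop p, jlong field_offset) {
      jlong byte_offset = field_offset_to_byte_offset(field_offset);
      if (p != NULL) {
        // one resolve here covers every Unsafe entry point that goes through
        // this helper, including Unsafe_CopySwapMemory0
        p = Access<>::resolve(p);
      }
      return (void*)(cast_from_oop<address>(p) + byte_offset);
    }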
Thanks, /Erik On 2018-03-19 15:44, Roman Kennke wrote: > SetMemory0 and CopyMemory0 in unsafe.cpp read and write from/to > objects, and thus need to resolve their operands via Access::resolve() > before accessing them. > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199780 > Webrev: > http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.00/ > > I'll say again that I'd prefer resolve_for_read() and > resolve_for_write(), but for now the strong resolve() will suffice. ;-) > > Can I please get a review? > > Roman > From volker.simonis at gmail.com Tue Mar 20 10:21:19 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 20 Mar 2018 11:21:19 +0100 Subject: Naming restriction for branches in submit-hs In-Reply-To: References: Message-ID: On Mon, Mar 19, 2018 at 10:49 PM, Christian Tornqvist wrote: > Hi Volker, > > I?ve update the wiki page to reflect what the branch naming restrictions are > for the submit repo, you can find the page at: > https://wiki.openjdk.java.net/display/Build/Submit+Repo > Hi Christian, thanks for updating the Wiki! I've tried yesterday with the branch name "JDK-8199698-v2" and it worked fine. One trivial suggestion regarding the regexp: I think `\w` already contains the digits so you could actually simplify "^JDK\-\d{7}([\w\d-]+)?" to "^JDK\-\d{7}([\w-]+)?" Regards, Volker > Thanks, > Christian > > On Mar 19, 2018, at 6:45 00AM, jesper.wilhelmsson at oracle.com wrote: > > Hi Volker, > > It seems that a . in the branch name might be an issue. This is being > investigated. > > The last build I see with your name on it is from 2018-03-15 17:48. I can > see that this is not the one you are looking for. > /Jesper > > On 17 Mar 2018, at 08:55, Volker Simonis wrote: > > Hi Jesper, > > the Wiki mentions that ?only branches starting with ?JDK-? will be built and > tested? [1]. I?ve submitted a branch called ?JDK-8199698.v2? [2] (i.e. the > second version of a change after review) yesterday, but it doesn?t seem to > be build. Are there any other naming conventions? I?m pretty sure names like > these worked in the first version of the submit repo last year. > > Thanks, > Volker > > [1] https://wiki.openjdk.java.net/display/Build/Submit+Repo > > [2] http://hg.openjdk.java.net/jdk/submit-hs/rev/60bae43fe453 > > > > > From rkennke at redhat.com Tue Mar 20 10:26:15 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Mar 2018 11:26:15 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <5AB0DD76.6020807@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> Message-ID: <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-03-19 21:11, Roman Kennke wrote: >> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>> >>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>> I like Roman's version with static_field_base() the best.? 
The reason >>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>> so there is one function to find static fields and this would work >>>>> with the jvmci classes and with loading/storing primitives also.? So >>>>> I like the consistent change that Roman has. >>>> That's OK with me. This RFE grew in scope of what I first intended, so >>>> I'm fine with Roman taking over this. >>>> >>>>> There's a subtlety that I haven't quite figured out here. >>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>> barrier on this offset, then needs a load barrier on the offset of >>>>> the additional load (?) >>>> There are two barriers in this piece of code: >>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>> java mirror >>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>> in the java mirror. >>>> >>>> Is that what you are referring to? >>> I had to read this thread over again, and am still foggy, but it was >>> because your original change didn't work for shenandoah, ie Kim's last >>> response. >>> >>> The brooks pointer has to be applied to get the mirror address as well >>> as reading fields out of the mirror, if I understand correctly. >>> >>> OopHandle::resolve() which is what java_mirror() is not accessorized but >>> should be for shenandoah.? I think.? I guess that was my question >>> before. >> The family of _at() functions in Access, those which accept oop+offset, >> do the chasing of the forwarding pointer in Shenandoah, then they apply >> the offset, load the memory field and return the value in the right >> type. They also do the load-barrier in ZGC (haven't checked, but that's >> just logical). >> >> There is also oop Access::resolve(oop) which is a bit of a hack. It has >> been introduced because of arraycopy and java <-> native bulk copy stuff >> that uses typeArrayOop::*_at_addr() family of methods. In those >> situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe >> evacuate the object (for writes), where #2 is stronger than #1 (i.e. if >> we do #2, then we don't need to do #1). In order to keep things simple, >> we decided to make Access::resolve(oop) do #2, and have it cover all >> those cases, and put it in arrayOopDesc::base(). This does the right >> thing for all cases, but it is a bit broad, for example, it may lead to >> double-copying a potentially large array (resolve-copy src array from >> from-space to to-space, then copy it again to the dst array). For those >> reasons, it is advisable to think twice before using _at_addr() or >> in-fact Access::resolve() if there's a better/cleaner way to do it. > > Are we certain that it is indeed only arraycopy that requires stable > accesses until the next thread transition? > I seem to recall that last time we discussed this, you thought that > there was more than arraycopy code that needed this. For example > printing and string encoding/decoding logic. > > If we are going to make changes based on the assumption that we will be > able to get rid of the resolve() barrier, then we should be fairly > certain that we can indeed get rid of it. So have the other previously > discussed roadblocks other than arraycopy disappeared? No, I don't think that resolve() can go away. If you look at: http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html You'll see all kinds of uses of _at_addr() that cannot be covered by some sort of arraycopy, e.g. the string conversions stuff. 
The above patch proposes to split resolve() to resolve_for_read() and resolve_for_write(), and I don't think it is unreasonable to distinguish those. Besides being better for Shenandoah (reduced latency on read-only accesses), there are conceivable GC algorithms that require that distinction too, e.g. transactional memory based GC or copy-on-write based GCs. But let's probably continue this discussion in the thread mentioned above? Thanks, Roman From rkennke at redhat.com Tue Mar 20 10:36:27 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Mar 2018 11:36:27 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <5AB0E0D2.7090303@oracle.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> Message-ID: <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> Same reason as splitting resolve -> resolve_for_read/resolve_for_write in other routines: being able to distinguish read and write access. Also, I'd rather be careful to put this stuff in central places that might over-cover it. I've missed Unsafe_CopySwapMemory0, good find (has this been added recently?). I'll add Access calls there, and meditate a little bit how to put this into a more central place to avoid having to fix this for every code change in unsafe.cpp ;-) Thanks, Roman > Is there a good reason why the Access<>::resolve is not performed inside > of index_oop_from_field_offset_long instead of its callsites. For > example, it looks like barriers are missing in Unsafe_CopySwapMemory0, > that you would get for free by putting the resolve barrier in the API > used in this file for resolving addresses. > > Thanks, > /Erik > > On 2018-03-19 15:44, Roman Kennke wrote: >> ? SetMemory0 and CopyMemory0 in unsafe.cpp read and write from/to >> objects, and thus need to resolve their operands via Access::resolve() >> before accessing them. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199780 >> Webrev: >> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.00/ >> >> I'll say again that I'd prefer resolve_for_read() and >> resolve_for_write(), but for now the strong resolve() will suffice. ;-) >> >> Can I please get a review? >> >> Roman >> > From erik.osterlund at oracle.com Tue Mar 20 10:44:37 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 11:44:37 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> Message-ID: <5AB0E615.9060700@oracle.com> Hi Roman, On 2018-03-20 11:26, Roman Kennke wrote: > Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-03-19 21:11, Roman Kennke wrote: >>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>> I like Roman's version with static_field_base() the best. 
The reason >>>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>>> so there is one function to find static fields and this would work >>>>>> with the jvmci classes and with loading/storing primitives also. So >>>>>> I like the consistent change that Roman has. >>>>> That's OK with me. This RFE grew in scope of what I first intended, so >>>>> I'm fine with Roman taking over this. >>>>> >>>>>> There's a subtlety that I haven't quite figured out here. >>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>> the additional load (?) >>>>> There are two barriers in this piece of code: >>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>> java mirror >>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>>> in the java mirror. >>>>> >>>>> Is that what you are referring to? >>>> I had to read this thread over again, and am still foggy, but it was >>>> because your original change didn't work for shenandoah, ie Kim's last >>>> response. >>>> >>>> The brooks pointer has to be applied to get the mirror address as well >>>> as reading fields out of the mirror, if I understand correctly. >>>> >>>> OopHandle::resolve() which is what java_mirror() is not accessorized but >>>> should be for shenandoah. I think. I guess that was my question >>>> before. >>> The family of _at() functions in Access, those which accept oop+offset, >>> do the chasing of the forwarding pointer in Shenandoah, then they apply >>> the offset, load the memory field and return the value in the right >>> type. They also do the load-barrier in ZGC (haven't checked, but that's >>> just logical). >>> >>> There is also oop Access::resolve(oop) which is a bit of a hack. It has >>> been introduced because of arraycopy and java <-> native bulk copy stuff >>> that uses typeArrayOop::*_at_addr() family of methods. In those >>> situations we still need to 1. chase the fwd ptr (for reads) or 2. maybe >>> evacuate the object (for writes), where #2 is stronger than #1 (i.e. if >>> we do #2, then we don't need to do #1). In order to keep things simple, >>> we decided to make Access::resolve(oop) do #2, and have it cover all >>> those cases, and put it in arrayOopDesc::base(). This does the right >>> thing for all cases, but it is a bit broad, for example, it may lead to >>> double-copying a potentially large array (resolve-copy src array from >>> from-space to to-space, then copy it again to the dst array). For those >>> reasons, it is advisable to think twice before using _at_addr() or >>> in-fact Access::resolve() if there's a better/cleaner way to do it. >> Are we certain that it is indeed only arraycopy that requires stable >> accesses until the next thread transition? >> I seem to recall that last time we discussed this, you thought that >> there was more than arraycopy code that needed this. For example >> printing and string encoding/decoding logic. >> >> If we are going to make changes based on the assumption that we will be >> able to get rid of the resolve() barrier, then we should be fairly >> certain that we can indeed get rid of it. So have the other previously >> discussed roadblocks other than arraycopy disappeared? > No, I don't think that resolve() can go away. If you look at: > > http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html > > You'll see all kinds of uses of _at_addr() that cannot be covered by > some sort of arraycopy, e.g. 
the string conversions stuff. > > The above patch proposes to split resolve() to resolve_for_read() and > resolve_for_write(), and I don't think it is unreasonable to distinguish > those. Besides being better for Shenandoah (reduced latency on read-only > accesses), there are conceivable GC algorithms that require that > distinction too, e.g. transactional memory based GC or copy-on-write > based GCs. But let's probably continue this discussion in the thread > mentioned above? As I thought. The reason I bring it up in this thread is because as I understand it, you are proposing to push this patch without renaming static_field_base() to static_field_base_raw(), which is what we did consistently everywhere else so far, with the motivation that you will remove resolve() from the other ones soon, and get rid of base_raw(). And I feel like we should have that discussion first. Until that is actually changed, static_field_base_raw() should be the name of that method. If we decide to change the other code to do something else, then we can revisit this then, but not yet. Thanks, /Erik > Thanks, Roman > From erik.osterlund at oracle.com Tue Mar 20 11:00:11 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 12:00:11 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> Message-ID: <5AB0E9BA.5000002@oracle.com> Hi Roman, On 2018-03-20 11:36, Roman Kennke wrote: > Same reason as splitting resolve -> resolve_for_read/resolve_for_write > in other routines: being able to distinguish read and write access. > Also, I'd rather be careful to put this stuff in central places that > might over-cover it. It sounds like the motivation for this in my opinion more fragile call site chasing code is optimization. What is the performance difference? Has this showed up in any profiles? Whenever robustness is traded for performance, it would be great to have some understanding about how much performance was lost. > I've missed Unsafe_CopySwapMemory0, good find (has this been added > recently?). I'll add Access calls there, and meditate a little bit how > to put this into a more central place to avoid having to fix this for > every code change in unsafe.cpp ;-) No, this has been around for at least 2 years. In this file, address resolution is consistently done with index_oop_from_field_offset_long, so I think if you want to play it safe (and I know I would), then I would put the Access<>::resolve in there, and leave the rest of the call sites the way they are. Thanks, /Erik > Thanks, Roman > >> Is there a good reason why the Access<>::resolve is not performed inside >> of index_oop_from_field_offset_long instead of its callsites. For >> example, it looks like barriers are missing in Unsafe_CopySwapMemory0, >> that you would get for free by putting the resolve barrier in the API >> used in this file for resolving addresses. >> >> Thanks, >> /Erik >> >> On 2018-03-19 15:44, Roman Kennke wrote: >>> SetMemory0 and CopyMemory0 in unsafe.cpp read and write from/to >>> objects, and thus need to resolve their operands via Access::resolve() >>> before accessing them. 
>>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8199780 >>> Webrev: >>> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.00/ >>> >>> I'll say again that I'd prefer resolve_for_read() and >>> resolve_for_write(), but for now the strong resolve() will suffice. ;-) >>> >>> Can I please get a review? >>> >>> Roman >>> > From stuart.monteith at linaro.org Tue Mar 20 12:00:32 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Tue, 20 Mar 2018 12:00:32 +0000 Subject: [aarch64-port-dev ] RFR: 8193266: AArch64: TestOptionsWithRanges.java SIGSEGV In-Reply-To: <9f523448-5e21-1f4d-c22b-45977f271fb8@samersoff.net> References: <7dbf43d1-72b9-5720-3878-ce31f3e8f555@redhat.com> <20e812bc-d132-9863-815b-345283f9517e@redhat.com> <3c83440f-dd4b-f988-1f96-afa88dff36eb@redhat.com> <9f523448-5e21-1f4d-c22b-45977f271fb8@samersoff.net> Message-ID: Thank you for everyone's attention. I've put the updated patch here: http://cr.openjdk.java.net/~smonteith/8193266/webrev-7/ I've added Andrew Haley and Dmitry Samersoff as the reviewer. BR, Stuart On 19 March 2018 at 13:51, Dmitry Samersoff wrote: > Stuart, > > Changes looks good to me. > > -Dmitry > > On 19.03.2018 10:34, Stuart Monteith wrote: >> Hello, >> Would it be possible for this to be reviewed? I'd like to get it in for >> JDK11. I don't believe there were any outstanding issues. >> >> I've updated and reapplied the patch against current jdk/hs >> >> http://cr.openjdk.java.net/~smonteith/8193266/webrev-6/ >> >> Thanks, >> Stuart >> >> >> On 11/01/18 08:20, Rahul Raghavan wrote: >>> < Just resending below review request email from Stuart for 8193266 >>> including aarch64-port-dev also. Thanks.> >>> >>> >>> -- On Saturday 06 January 2018 12:13 AM, Stuart Monteith wrote: >>> I've removed the AARCH64 conditionals, added the empty line I removed, >>> and changed the type of "use_XOR_for_compressed_class_base" to bool. >>> >>> http://cr.openjdk.java.net/~smonteith/8193266/webrev-5/ >>> >>> BR, >>> Stuart >>> >>>> On 4 January 2018 at 14:45, Andrew Haley wrote: >>>>> Hi, >>>>> >>>>> On 04/01/18 14:26, coleen.phillimore at oracle.com wrote: >>>>>> I was going to offer to sponsor this since it touches shared code but >>>>>> I'm not sure I like that there's AARCH64 specific code in >>>>>> universe.cpp/hpp. And the name is somewhat offputting, suggesting >>>>>> implementation details of one target leaking into shared code. >>>>>> >>>>>> set_use_XOR_for_compressed_class_base >>>>>> >>>>>> I think webrev-3 looked more reasonable, and could elide the #ifdef >>>>>> AARCH64 in the shared code for that version. And the indentation is >>>>>> better. >>>>> >>>>> I hate the #ifdef AARCH64 stuff too, but it's always a sign that there >>>>> is something wrong with the front-end to back-end modularization. We >>>>> can handle the use_XOR_for_compressed_class_base later: we really >>>>> should have a way to communicate with the back ends when the memory >>>>> layout is initialized. We can go with webrev-3. >>>>> >>>>> -- >>>>> Andrew Haley >>>>> Java Platform Lead Engineer >>>>> Red Hat UK Ltd. >>>>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 > > > -- > Dmitry Samersoff > http://devnull.samersoff.net > * There will come soft rains ... 
> From edward.nevill at gmail.com Tue Mar 20 13:54:15 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Tue, 20 Mar 2018 13:54:15 +0000 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> Message-ID: <1521554055.3029.4.camel@gmail.com> On Tue, 2018-03-20 at 08:39 +0100, Erik Helin wrote: > Please review the following webrev > > > > Bugid: https://bugs.openjdk.java.net/browse/JDK-8199138 > > Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 > > 32 # First, filter out everything that doesn't begin with "aarch64-" > 33 if ! echo $* | grep '^aarch64-\|^riscv64-' >/dev/null ; then > > Could you please update the comment on line 32 to say the same thing as > the code? > Hi Erik, Thanks for this. I have updated the webrev with the above comment. http://cr.openjdk.java.net/~enevill/8199138/webrev.01 I have also fixed a problem encountered with the submit-hs repo where the build machine had older headers which did not define EM_RISCV. The solution is to define EM_RISCV if not already defined, as is done for aarch64. I.e. #ifndef EM_AARCH64 #define EM_AARCH64 183 /* ARM AARCH64 */ #endif +#ifndef EM_RISCV + #define EM_RISCV 243 +#endif This now passes the submit-hs tests. Does this look OK to push now? Thanks, Ed. From mvala at redhat.com Tue Mar 20 09:45:40 2018 From: mvala at redhat.com (Michal Vala) Date: Tue, 20 Mar 2018 10:45:40 +0100 Subject: RFR: build pragma error with gcc 4.4.7 In-Reply-To: <01173273-2A13-4FE0-81EE-3C50954D7A90@oracle.com> References: <3196d61c-9b7c-b795-a68d-6e50a3416f41@redhat.com> <01173273-2A13-4FE0-81EE-3C50954D7A90@oracle.com> Message-ID: <8698f94a-43fe-014f-6627-1cd88c4a1963@redhat.com> On 03/20/2018 12:23 AM, Kim Barrett wrote: > > Given that there seem to be no callers of os::readdir that share the > DIR* among multiple threads, it would seem easier to just replace the > use of ::readdir_r with ::readdir. That seems to be the intent in the > deprecation decision; use ::readdir, and either don't share a DIR* > among threads, or use external locking when doing so. > > There are also problems with the patch as provided. > > (1) Since PRAGMA_DIAG_PUSH/POP do nothing in the version of gcc this > change is being made in support of, the warning would be disabled for > all following code in any translation unit that includes this file. > That doesn't seem good. > > (2) The default empty definition for PRAGMA_DEPRECATED_IGNORED is > missing. That means the macro can't be used in shared code, in which > case having defined in (shared) compilerWarnings.hpp is questionable. > Thanks for the review, these are valid comments. I'll prepare a new patch replacing ::readdir_r with ::readdir. -- Michal Vala OpenJDK QE Red Hat Czech From tobias.hartmann at oracle.com Tue Mar 20 14:35:28 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 20 Mar 2018 15:35:28 +0100 Subject: [11] RFR(S): 8199624: [Graal] Blocking jvmci compilations time out Message-ID: Hi, please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8199624 http://cr.openjdk.java.net/~thartmann/8199624/webrev.00/ Multiple tests fail with Graal as JIT and BackgroundCompilation disabled because compilations issued via WhiteBox.enqueueMethodForCompilation do not block. The problem is that the jvmci compilation may time out (see CompileBroker::wait_for_jvmci_completion() added by [1]).
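To make the intended "block until the compile finishes, but give up after a timeout and report that to the caller" behaviour concrete, here is a minimal stand-alone sketch in plain C++11. This is not the actual CompileBroker/WhiteBox code; the names and the signature below are made up for illustration only:

#include <chrono>
#include <condition_variable>
#include <mutex>

// Toy model: returns true if the blocking compilation completed within the
// timeout, false if we gave up waiting. A test driving this through the
// WhiteBox API could then treat a false result as "the enqueue did not block"
// and skip its checks instead of failing.
static bool wait_for_blocking_compile(std::mutex& lock,
                                      std::condition_variable& done_cv,
                                      const bool& task_complete,
                                      std::chrono::milliseconds timeout) {
  std::unique_lock<std::mutex> guard(lock);
  return done_cv.wait_for(guard, timeout, [&] { return task_complete; });
}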
I think we want to avoid checking the PrintCompilation output for all tests that trigger a compilation because that would mean that we need to spawn a separate VM process and verify the output for all these test. Instead, I suggest to increase the timeout value for blocking jvmci compilations and change the WhiteBox API to return false in case a blocking compilation does not block (i.e. times out). We can then catch and ignore the timeout in the affected tests. I've verified that the timeout value is high enough in most cases (100 runs) and that the tests still pass in case the jvmci compilation does still time out. More tests might be affected but I think we should only fix them on demand. Thanks, Tobias [1] https://bugs.openjdk.java.net/browse/JDK-8146705 From tobias.hartmann at oracle.com Tue Mar 20 14:39:01 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 20 Mar 2018 15:39:01 +0100 Subject: [11] RFR(S): 8199624: [Graal] Blocking jvmci compilations time out In-Reply-To: References: Message-ID: <566060b5-7d24-90a4-548f-757d90fa9fdb@oracle.com> Forgot to mention that I've also fixed the AbstractMethodErrorTest by changing the to be compiled method name from "c" to "mc" and making the methods public such that they will be found by Class.getMethod(). Before, the test would never find these methods and always bail out through the NoSuchMethodException catch clause. Best regards, Tobias On 20.03.2018 15:35, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8199624 > http://cr.openjdk.java.net/~thartmann/8199624/webrev.00/ > > Multiple tests fail with Graal as JIT and BackgroundCompilation disabled because compilations issued > via WhiteBox.enqueueMethodForCompilation do not block. The problem is that the jvmci compilation may > time out (see CompileBroker::wait_for_jvmci_completion() added by [1]). > > I think we want to avoid checking the PrintCompilation output for all tests that trigger a > compilation because that would mean that we need to spawn a separate VM process and verify the > output for all these test. Instead, I suggest to increase the timeout value for blocking jvmci > compilations and change the WhiteBox API to return false in case a blocking compilation does not > block (i.e. times out). We can then catch and ignore the timeout in the affected tests. > > I've verified that the timeout value is high enough in most cases (100 runs) and that the tests > still pass in case the jvmci compilation does still time out. More tests might be affected but I > think we should only fix them on demand. 
> > Thanks, > Tobias > > [1] https://bugs.openjdk.java.net/browse/JDK-8146705 > From rkennke at redhat.com Tue Mar 20 15:13:54 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Mar 2018 16:13:54 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <5AB0E615.9060700@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> Message-ID: <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-03-20 11:26, Roman Kennke wrote: >> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-03-19 21:11, Roman Kennke wrote: >>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>> I like Roman's version with static_field_base() the best.? The >>>>>>> reason >>>>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>>>> so there is one function to find static fields and this would work >>>>>>> with the jvmci classes and with loading/storing primitives also.? So >>>>>>> I like the consistent change that Roman has. >>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>> intended, so >>>>>> I'm fine with Roman taking over this. >>>>>> >>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>>> the additional load (?) >>>>>> There are two barriers in this piece of code: >>>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>>> java mirror >>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>>>> in the java mirror. >>>>>> >>>>>> Is that what you are referring to? >>>>> I had to read this thread over again, and am still foggy, but it was >>>>> because your original change didn't work for shenandoah, ie Kim's last >>>>> response. >>>>> >>>>> The brooks pointer has to be applied to get the mirror address as well >>>>> as reading fields out of the mirror, if I understand correctly. >>>>> >>>>> OopHandle::resolve() which is what java_mirror() is not >>>>> accessorized but >>>>> should be for shenandoah.? I think.? I guess that was my question >>>>> before. >>>> The family of _at() functions in Access, those which accept oop+offset, >>>> do the chasing of the forwarding pointer in Shenandoah, then they apply >>>> the offset, load the memory field and return the value in the right >>>> type. They also do the load-barrier in ZGC (haven't checked, but that's >>>> just logical). >>>> >>>> There is also oop Access::resolve(oop) which is a bit of a hack. It has >>>> been introduced because of arraycopy and java <-> native bulk copy >>>> stuff >>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>> maybe >>>> evacuate the object (for writes), where #2 is stronger than #1 (i.e. 
if >>>> we do #2, then we don't need to do #1). In order to keep things simple, >>>> we decided to make Access::resolve(oop) do #2, and have it cover all >>>> those cases, and put it in arrayOopDesc::base(). This does the right >>>> thing for all cases, but it is a bit broad, for example, it may lead to >>>> double-copying a potentially large array (resolve-copy src array from >>>> from-space to to-space, then copy it again to the dst array). For those >>>> reasons, it is advisable to think twice before using _at_addr() or >>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>> Are we certain that it is indeed only arraycopy that requires stable >>> accesses until the next thread transition? >>> I seem to recall that last time we discussed this, you thought that >>> there was more than arraycopy code that needed this. For example >>> printing and string encoding/decoding logic. >>> >>> If we are going to make changes based on the assumption that we will be >>> able to get rid of the resolve() barrier, then we should be fairly >>> certain that we can indeed get rid of it. So have the other previously >>> discussed roadblocks other than arraycopy disappeared? >> No, I don't think that resolve() can go away. If you look at: >> >> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >> >> >> You'll see all kinds of uses of _at_addr() that cannot be covered by >> some sort of arraycopy, e.g. the string conversions stuff. >> >> The above patch proposes to split resolve() to resolve_for_read() and >> resolve_for_write(), and I don't think it is unreasonable to distinguish >> those. Besides being better for Shenandoah (reduced latency on read-only >> accesses), there are conceivable GC algorithms that require that >> distinction too, e.g. transactional memory based GC or copy-on-write >> based GCs. But let's probably continue this discussion in the thread >> mentioned above? > > As I thought. The reason I bring it up in this thread is because as I > understand it, you are proposing to push this patch without renaming > static_field_base() to static_field_base_raw(), which is what we did > consistently everywhere else so far, with the motivation that you will > remove resolve() from the other ones soon, and get rid of base_raw(). > And I feel like we should have that discussion first. Until that is > actually changed, static_field_base_raw() should be the name of that > method. If we decide to change the other code to do something else, then > we can revisit this then, but not yet. Ok, so I changed static_field_base() -> static_field_base_raw(): Diff: http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ Better? Thanks, Roman From rkennke at redhat.com Tue Mar 20 15:40:32 2018 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 20 Mar 2018 16:40:32 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <5AB0E9BA.5000002@oracle.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> <5AB0E9BA.5000002@oracle.com> Message-ID: <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> Am 20.03.2018 um 12:00 schrieb Erik ?sterlund: > Hi Roman, > > On 2018-03-20 11:36, Roman Kennke wrote: >> Same reason as splitting resolve -> resolve_for_read/resolve_for_write >> in other routines: being able to distinguish read and write access. 
>> Also, I'd rather be careful to put this stuff in central places that >> might over-cover it. > > It sounds like the motivation for this in my opinion more fragile call > site chasing code is optimization. > What is the performance difference? Has this showed up in any profiles? > Whenever robustness is traded for performance, it would be great to have > some understanding about how much performance was lost. I don't have numbers. But it is not hard to see that copying potentially large arrays twice has some impact. It may only really matter in interpreter and C1, because C2 would most likely intrinsify anything that would show up in profiles, but this would still amount to startup time penalty I would think. I don't really intend to trade robustness for performance: my goal is to make a robust API that also allows GCs to be efficient. >> I've missed Unsafe_CopySwapMemory0, good find (has this been added >> recently?). I'll add Access calls there, and meditate a little bit how >> to put this into a more central place to avoid having to fix this for >> every code change in unsafe.cpp ;-) > > No, this has been around for at least 2 years. In this file, address > resolution is consistently done with index_oop_from_field_offset_long, > so I think if you want to play it safe (and I know I would), then I > would put the Access<>::resolve in there, and leave the rest of the call > sites the way they are. It seems that index_oop_from_field_offset_long() is otherwise only used for the non-heap paths (i.e. when p == NULL). Well I guess it's ok to put the resolve() there for now. Are those methods also meant to do native memory copies/fills when getting src/dst == NULL ? If so, then this new revision fixes the resolve() for those cases as well. Diff: http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01.diff/ Full: http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01/ Better now? Assuming we can reach an agreement about JDK-8199801: Finer grained primitive arrays bulk access barriers, I'd probably also split index_oop_from_field_offset_long() into versions for read and write. Might that be acceptable? Roman From erik.osterlund at oracle.com Tue Mar 20 15:51:46 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 16:51:46 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> Message-ID: <5AB12E12.7030302@oracle.com> Hi Roman, This looks good to me. The unfortunate include problems in jvmciJavaClasses.hpp are pre-existing and should be cleaned up at some point. 
Thanks, /Erik On 2018-03-20 16:13, Roman Kennke wrote: > Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-03-20 11:26, Roman Kennke wrote: >>> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> On 2018-03-19 21:11, Roman Kennke wrote: >>>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>>> I like Roman's version with static_field_base() the best. The >>>>>>>> reason >>>>>>>> I wanted to keep static_field_addr and not have static_oop_addr was >>>>>>>> so there is one function to find static fields and this would work >>>>>>>> with the jvmci classes and with loading/storing primitives also. So >>>>>>>> I like the consistent change that Roman has. >>>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>>> intended, so >>>>>>> I'm fine with Roman taking over this. >>>>>>> >>>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>>>> the additional load (?) >>>>>>> There are two barriers in this piece of code: >>>>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>>>> java mirror >>>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop fields >>>>>>> in the java mirror. >>>>>>> >>>>>>> Is that what you are referring to? >>>>>> I had to read this thread over again, and am still foggy, but it was >>>>>> because your original change didn't work for shenandoah, ie Kim's last >>>>>> response. >>>>>> >>>>>> The brooks pointer has to be applied to get the mirror address as well >>>>>> as reading fields out of the mirror, if I understand correctly. >>>>>> >>>>>> OopHandle::resolve() which is what java_mirror() is not >>>>>> accessorized but >>>>>> should be for shenandoah. I think. I guess that was my question >>>>>> before. >>>>> The family of _at() functions in Access, those which accept oop+offset, >>>>> do the chasing of the forwarding pointer in Shenandoah, then they apply >>>>> the offset, load the memory field and return the value in the right >>>>> type. They also do the load-barrier in ZGC (haven't checked, but that's >>>>> just logical). >>>>> >>>>> There is also oop Access::resolve(oop) which is a bit of a hack. It has >>>>> been introduced because of arraycopy and java <-> native bulk copy >>>>> stuff >>>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>>> maybe >>>>> evacuate the object (for writes), where #2 is stronger than #1 (i.e. if >>>>> we do #2, then we don't need to do #1). In order to keep things simple, >>>>> we decided to make Access::resolve(oop) do #2, and have it cover all >>>>> those cases, and put it in arrayOopDesc::base(). This does the right >>>>> thing for all cases, but it is a bit broad, for example, it may lead to >>>>> double-copying a potentially large array (resolve-copy src array from >>>>> from-space to to-space, then copy it again to the dst array). For those >>>>> reasons, it is advisable to think twice before using _at_addr() or >>>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>>> Are we certain that it is indeed only arraycopy that requires stable >>>> accesses until the next thread transition? 
>>>> I seem to recall that last time we discussed this, you thought that >>>> there was more than arraycopy code that needed this. For example >>>> printing and string encoding/decoding logic. >>>> >>>> If we are going to make changes based on the assumption that we will be >>>> able to get rid of the resolve() barrier, then we should be fairly >>>> certain that we can indeed get rid of it. So have the other previously >>>> discussed roadblocks other than arraycopy disappeared? >>> No, I don't think that resolve() can go away. If you look at: >>> >>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >>> >>> >>> You'll see all kinds of uses of _at_addr() that cannot be covered by >>> some sort of arraycopy, e.g. the string conversions stuff. >>> >>> The above patch proposes to split resolve() to resolve_for_read() and >>> resolve_for_write(), and I don't think it is unreasonable to distinguish >>> those. Besides being better for Shenandoah (reduced latency on read-only >>> accesses), there are conceivable GC algorithms that require that >>> distinction too, e.g. transactional memory based GC or copy-on-write >>> based GCs. But let's probably continue this discussion in the thread >>> mentioned above? >> As I thought. The reason I bring it up in this thread is because as I >> understand it, you are proposing to push this patch without renaming >> static_field_base() to static_field_base_raw(), which is what we did >> consistently everywhere else so far, with the motivation that you will >> remove resolve() from the other ones soon, and get rid of base_raw(). >> And I feel like we should have that discussion first. Until that is >> actually changed, static_field_base_raw() should be the name of that >> method. If we decide to change the other code to do something else, then >> we can revisit this then, but not yet. > Ok, so I changed static_field_base() -> static_field_base_raw(): > > Diff: > http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ > > Better? > > Thanks, Roman > > From erik.osterlund at oracle.com Tue Mar 20 16:25:26 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 20 Mar 2018 17:25:26 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> <5AB0E9BA.5000002@oracle.com> <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> Message-ID: <5AB135F6.3020508@oracle.com> Hi Roman, On 2018-03-20 16:40, Roman Kennke wrote: > Am 20.03.2018 um 12:00 schrieb Erik ?sterlund: >> Hi Roman, >> >> On 2018-03-20 11:36, Roman Kennke wrote: >>> Same reason as splitting resolve -> resolve_for_read/resolve_for_write >>> in other routines: being able to distinguish read and write access. >>> Also, I'd rather be careful to put this stuff in central places that >>> might over-cover it. >> It sounds like the motivation for this in my opinion more fragile call >> site chasing code is optimization. >> What is the performance difference? Has this showed up in any profiles? >> Whenever robustness is traded for performance, it would be great to have >> some understanding about how much performance was lost. > I don't have numbers. But it is not hard to see that copying potentially > large arrays twice has some impact. 
It may only really matter in > interpreter and C1, because C2 would most likely intrinsify anything > that would show up in profiles, but this would still amount to startup > time penalty I would think. I don't really intend to trade robustness > for performance: my goal is to make a robust API that also allows GCs to > be efficient. Conversely, I would be surprised if there was a considerable difference to startup due to hitting an unnecessary write barrier for an arraycopy during startup, happening precisely while concurrent relocation is going on and the object has been previously unmodified since before relocation started. I think that if you want to change the API to something in my opinion more fragile purely for optimization purposes, I think it would be appropriate to at least measure if it makes a difference or not so that we get a good understanding about why we are doing this. >>> I've missed Unsafe_CopySwapMemory0, good find (has this been added >>> recently?). I'll add Access calls there, and meditate a little bit how >>> to put this into a more central place to avoid having to fix this for >>> every code change in unsafe.cpp ;-) >> No, this has been around for at least 2 years. In this file, address >> resolution is consistently done with index_oop_from_field_offset_long, >> so I think if you want to play it safe (and I know I would), then I >> would put the Access<>::resolve in there, and leave the rest of the call >> sites the way they are. > It seems that index_oop_from_field_offset_long() is otherwise only used > for the non-heap paths (i.e. when p == NULL). Well I guess it's ok to > put the resolve() there for now. Are those methods also meant to do > native memory copies/fills when getting src/dst == NULL ? If so, then > this new revision fixes the resolve() for those cases as well. Indeed. > Diff: > http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01.diff/ > Full: > http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01/ > > Better now? Yes, much better. It looks good now. Thank you. > Assuming we can reach an agreement about JDK-8199801: Finer grained > primitive arrays bulk access barriers, I'd probably also split > index_oop_from_field_offset_long() into versions for read and write. > Might that be acceptable? As I said earlier, the motivator for introducing this seems to be to optimize. I would feel more comfortable to introduce these news concepts, if the performance benefits are better understood. Thanks, /Erik > Roman > From paul.sandoz at oracle.com Tue Mar 20 16:55:23 2018 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 20 Mar 2018 09:55:23 -0700 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <5AB135F6.3020508@oracle.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> <5AB0E9BA.5000002@oracle.com> <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> <5AB135F6.3020508@oracle.com> Message-ID: > On Mar 20, 2018, at 9:25 AM, Erik ?sterlund wrote: > > Hi Roman, > > On 2018-03-20 16:40, Roman Kennke wrote: >> Am 20.03.2018 um 12:00 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-03-20 11:36, Roman Kennke wrote: >>>> Same reason as splitting resolve -> resolve_for_read/resolve_for_write >>>> in other routines: being able to distinguish read and write access. >>>> Also, I'd rather be careful to put this stuff in central places that >>>> might over-cover it. 
>>> It sounds like the motivation for this in my opinion more fragile call >>> site chasing code is optimization. >>> What is the performance difference? Has this showed up in any profiles? >>> Whenever robustness is traded for performance, it would be great to have >>> some understanding about how much performance was lost. >> I don't have numbers. But it is not hard to see that copying potentially >> large arrays twice has some impact. It may only really matter in >> interpreter and C1, because C2 would most likely intrinsify anything >> that would show up in profiles, but this would still amount to startup >> time penalty I would think. I don't really intend to trade robustness >> for performance: my goal is to make a robust API that also allows GCs to >> be efficient. > > Conversely, I would be surprised if there was a considerable difference to startup due to hitting an unnecessary write barrier for an arraycopy during startup, happening precisely while concurrent relocation is going on and the object has been previously unmodified since before relocation started. I think that if you want to change the API to something in my opinion more fragile purely for optimization purposes, I think it would be appropriate to at least measure if it makes a difference or not so that we get a good understanding about why we are doing this. > I concur. In prior updates to unsafe I was involved in, we focused on correctness, and if performance was a concern or priority then an intrinsic would be developed for C2 and possibly for C1. Paul. From christian.tornqvist at oracle.com Tue Mar 20 21:38:53 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 20 Mar 2018 17:38:53 -0400 Subject: Naming restriction for branches in submit-hs In-Reply-To: References: Message-ID: > On Mar 20, 2018, at 6:21:19 AM, Volker Simonis wrote: > > On Mon, Mar 19, 2018 at 10:49 PM, Christian Tornqvist > wrote: >> Hi Volker, >> >> I've update the wiki page to reflect what the branch naming restrictions are >> for the submit repo, you can find the page at: >> https://wiki.openjdk.java.net/display/Build/Submit+Repo >> > > Hi Christian, > > thanks for updating the Wiki! I've tried yesterday with the branch > name "JDK-8199698-v2" and it worked fine. > > One trivial suggestion regarding the regexp: I think `\w` already > contains the digits so you could actually simplify > "^JDK\-\d{7}([\w\d-]+)?" to "^JDK\-\d{7}([\w-]+)?" You're correct, I've fixed it now :) > > Regards, > Volker > >> Thanks, >> Christian >> >> On Mar 19, 2018, at 6:45:00 AM, jesper.wilhelmsson at oracle.com wrote: >> >> Hi Volker, >> >> It seems that a . in the branch name might be an issue. This is being >> investigated. >> >> The last build I see with your name on it is from 2018-03-15 17:48. I can >> see that this is not the one you are looking for. >> /Jesper >> >> On 17 Mar 2018, at 08:55, Volker Simonis wrote: >> >> Hi Jesper, >> >> the Wiki mentions that 'only branches starting with "JDK-" will be built and >> tested' [1]. I've submitted a branch called "JDK-8199698.v2" [2] (i.e. the >> second version of a change after review) yesterday, but it doesn't seem to >> be build. Are there any other naming conventions? I'm pretty sure names like >> these worked in the first version of the submit repo last year.
>> >> Thanks, >> Volker >> >> [1] https://wiki.openjdk.java.net/display/Build/Submit+Repo >> >> [2] http://hg.openjdk.java.net/jdk/submit-hs/rev/60bae43fe453 >> >> >> >> >> From christian.tornqvist at oracle.com Tue Mar 20 21:48:21 2018 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 20 Mar 2018 17:48:21 -0400 Subject: Merging jdk/hs with jdk/jdk In-Reply-To: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> References: <8819CAD3-AF29-463E-8A76-14440CF37D2B@oracle.com> Message-ID: <6B255F88-2AA7-4A44-9365-591D81BF84ED@oracle.com> Hi Jesper, This sounds like a step in the right direction, this should simplify a lot of things, thanks for doing this! Thanks, Christian > On Mar 14, 2018, at 5:00 28PM, jesper.wilhelmsson at oracle.com wrote: > > All, > > Over the last couple of years we have left behind a graph of > integration forests where each component in the JVM had its own > line of development. Today all HotSpot development is done in the > same repository, jdk/hs [1]. As a result of merging we have seen > several positive effects, ranging from less confusion around > where and how to do things, and reduced time for fixes to > propagate, to significantly better cooperation between the > components, and improved quality of the product. We would like to > improve further and therefore we suggest to merge jdk/hs into > jdk/jdk [2]. > > As before, we expect this change to build a stronger team spirit > between the merged areas, and contribute to less confusion - > especially around ramp down phases and similar. We also expect > further improvements in quality as changes that cause problems in > a different area are found faster and can be dealt with > immediately. > > In the same way as we did in the past, we suggest to try this out > as an experiment for at least two weeks (giving us some time to > adapt in case of issues). Monitoring and evaluation of the new > structure will take place continuously, with an option to revert > back if things do not work out. The experiment would keep going > for at least a few months, after which we will evaluate it and > depending on the results consider making it the new standard. If > so, the jdk/hs forest will eventually be retired. As part of this > merge we can also retire the newly setup submit-hs [3] repository > and do all testing using the submit repo based on jdk/jdk [4]. > > Much like what we have done in the past we would leave the jdk/hs > forest around until we see if the experiment works out. We would > also lock it down so that no accidental pushes are made to > it. Once the jdk/hs forest is locked down, any work in flight > based on it would have to be rebased on jdk/jdk. > > We tried this approach during the last few months of JDK 10 > development and it worked out fine there. > > Please let us know if you have any feedback or questions! > > Thanks, > /Jesper > > [1] http://hg.openjdk.java.net/jdk/hs > [2] http://hg.openjdk.java.net/jdk/jdk > [3] http://hg.openjdk.java.net/jdk/submit-hs > [4] http://hg.openjdk.java.net/jdk/submit From vladimir.kozlov at oracle.com Tue Mar 20 22:20:46 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 20 Mar 2018 15:20:46 -0700 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> Message-ID: <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> Hi Tobias, EliminateAutoBox is C2 specific flag. 
Please, add -XX:+IgnoreUnrecognizedVMOptions flag in case someone has VM without C2. Thanks, Vladimir On 3/20/18 2:20 AM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8199777 > http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ > > The VM option "-XX:+AggressiveOpts" should be deprecated in JDK 11 and removed in a future release. > The option was originally supposed to enable some experimental optimizations of the C2 compiler to > improve performance of specific benchmarks. Most features have been removed or integrated over time > leaving the behavior of the option ill-defined and error-prone. The only effect that the flag > currently has is setting "AutoBoxCacheMax" to 20000 and "BiasedLockingStartupDelay" to 500. The same > configuration can be achieved by setting the corresponding flags via the command line. > > I've deprecated the flag and replaced the usages in tests by -XX:+EliminateAutoBox > -XX:AutoBoxCacheMax=20000. > > Thanks, > Tobias > From vladimir.kozlov at oracle.com Tue Mar 20 22:23:00 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 20 Mar 2018 15:23:00 -0700 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> Message-ID: Actually you don't need to specify EliminateAutoBox in tests because it is true by default: http://hg.openjdk.java.net/jdk/hs/file/74db2b7cec75/src/hotspot/share/opto/c2_globals.hpp#l506 Vladimir On 3/20/18 3:20 PM, Vladimir Kozlov wrote: > Hi Tobias, > > EliminateAutoBox is C2 specific flag. Please, add -XX:+IgnoreUnrecognizedVMOptions flag in case > someone has VM without C2. > > Thanks, > Vladimir > > On 3/20/18 2:20 AM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8199777 >> http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ >> >> The VM option "-XX:+AggressiveOpts" should be deprecated in JDK 11 and removed in a future release. >> The option was originally supposed to enable some experimental optimizations of the C2 compiler to >> improve performance of specific benchmarks. Most features have been removed or integrated over time >> leaving the behavior of the option ill-defined and error-prone. The only effect that the flag >> currently has is setting "AutoBoxCacheMax" to 20000 and "BiasedLockingStartupDelay" to 500. The same >> configuration can be achieved by setting the corresponding flags via the command line. >> >> I've deprecated the flag and replaced the usages in tests by -XX:+EliminateAutoBox >> -XX:AutoBoxCacheMax=20000. >> >> Thanks, >> Tobias >> From coleen.phillimore at oracle.com Wed Mar 21 00:08:53 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 20 Mar 2018 20:08:53 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files Message-ID: Summary: Remove frame.inline.hpp,etc from header files and adjust transitive includes. Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, windows-x64.? Built with open-only sources using --disable-precompiled-headers on linux-x64, built with zero (also disable precompiled headers).? Roman built with aarch64, and have request to build ppc, etc.? (Please test this patch!) Semi-interesting details:? 
moved SignatureHandlerGenerator constructor to cpp file, moved interpreter_frame_stack_direction() to target specific hpp files (even though they're all -1), pd_last_frame to thread_.cpp because there isn't a thread_.inline.hpp file, lastly moved InterpreterRuntime::LastFrameAccessor into interpreterRuntime.cpp file, and a few other functions moved in shared code. This is the last of this include file technical debt cleanup that I'm going to do.? See bug for more information. open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8199809 I'll update the copyrights when I commit. Thanks, Coleen From coleen.phillimore at oracle.com Wed Mar 21 00:11:53 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 20 Mar 2018 20:11:53 -0400 Subject: [CLOSED] RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: This is the closed part of this change. local webrev at http://oklahoma.us.oracle.com/~cphillim/webrev/8199809.closed.01/webrev Thanks, Coleen On 3/20/18 8:08 PM, coleen.phillimore at oracle.com wrote: > Summary: Remove frame.inline.hpp,etc from header files and adjust > transitive includes. > > Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, > windows-x64.? Built with open-only sources using > --disable-precompiled-headers on linux-x64, built with zero (also > disable precompiled headers).? Roman built with aarch64, and have > request to build ppc, etc.? (Please test this patch!) > > Semi-interesting details:? moved SignatureHandlerGenerator constructor > to cpp file, moved interpreter_frame_stack_direction() to target > specific hpp files (even though they're all -1), pd_last_frame to > thread_.cpp because there isn't a thread_.inline.hpp > file, lastly moved InterpreterRuntime::LastFrameAccessor into > interpreterRuntime.cpp file, and a few other functions moved in shared > code. > > This is the last of this include file technical debt cleanup that I'm > going to do.? See bug for more information. > > open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8199809 > > I'll update the copyrights when I commit. > > Thanks, > Coleen From vladimir.kozlov at oracle.com Wed Mar 21 00:23:43 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 20 Mar 2018 17:23:43 -0700 Subject: [11] RFR(S): 8199624: [Graal] Blocking jvmci compilations time out In-Reply-To: <566060b5-7d24-90a4-548f-757d90fa9fdb@oracle.com> References: <566060b5-7d24-90a4-548f-757d90fa9fdb@oracle.com> Message-ID: <1dbf3828-ef01-16e6-38c1-9c27a43f046b@oracle.com> Looks good. Thanks, Vladimir On 3/20/18 7:39 AM, Tobias Hartmann wrote: > Forgot to mention that I've also fixed the AbstractMethodErrorTest by changing the to be compiled > method name from "c" to "mc" and making the methods public such that they will be found by > Class.getMethod(). Before, the test would never find these methods and always bail out through the > NoSuchMethodException catch clause. > > Best regards, > Tobias > > On 20.03.2018 15:35, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch: >> https://bugs.openjdk.java.net/browse/JDK-8199624 >> http://cr.openjdk.java.net/~thartmann/8199624/webrev.00/ >> >> Multiple tests fail with Graal as JIT and BackgroundCompilation disabled because compilations issued >> via WhiteBox.enqueueMethodForCompilation do not block. 
The problem is that the jvmci compilation may >> time out (see CompileBroker::wait_for_jvmci_completion() added by [1]). >> >> I think we want to avoid checking the PrintCompilation output for all tests that trigger a >> compilation because that would mean that we need to spawn a separate VM process and verify the >> output for all these test. Instead, I suggest to increase the timeout value for blocking jvmci >> compilations and change the WhiteBox API to return false in case a blocking compilation does not >> block (i.e. times out). We can then catch and ignore the timeout in the affected tests. >> >> I've verified that the timeout value is high enough in most cases (100 runs) and that the tests >> still pass in case the jvmci compilation does still time out. More tests might be affected but I >> think we should only fix them on demand. >> >> Thanks, >> Tobias >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8146705 >> From coleen.phillimore at oracle.com Wed Mar 21 00:21:22 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 20 Mar 2018 20:21:22 -0400 Subject: [CLOSED] RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: <4daf2d27-87be-f2e4-6329-bfd8cd08a24a@oracle.com> Please ignore this. Coleen On 3/20/18 8:11 PM, coleen.phillimore at oracle.com wrote: > > This is the closed part of this change. > > local webrev at > http://oklahoma.us.oracle.com/~cphillim/webrev/8199809.closed.01/webrev > > Thanks, > Coleen > > On 3/20/18 8:08 PM, coleen.phillimore at oracle.com wrote: >> Summary: Remove frame.inline.hpp,etc from header files and adjust >> transitive includes. >> >> Tested with mach5 tier1 on Oracle platforms: linux-x64, >> solaris-sparc, windows-x64.? Built with open-only sources using >> --disable-precompiled-headers on linux-x64, built with zero (also >> disable precompiled headers).? Roman built with aarch64, and have >> request to build ppc, etc.? (Please test this patch!) >> >> Semi-interesting details:? moved SignatureHandlerGenerator >> constructor to cpp file, moved interpreter_frame_stack_direction() to >> target specific hpp files (even though they're all -1), pd_last_frame >> to thread_.cpp because there isn't a >> thread_.inline.hpp file, lastly moved >> InterpreterRuntime::LastFrameAccessor into interpreterRuntime.cpp >> file, and a few other functions moved in shared code. >> >> This is the last of this include file technical debt cleanup that I'm >> going to do.? See bug for more information. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> I'll update the copyrights when I commit. 
>> >> Thanks, >> Coleen > From irogers at google.com Wed Mar 21 02:07:30 2018 From: irogers at google.com (Ian Rogers) Date: Wed, 21 Mar 2018 02:07:30 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: <1521451687.2323.5.camel@oracle.com> References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> <1521451687.2323.5.camel@oracle.com> Message-ID: Thanks, via bugreport.java.com I filed bugs/RFEs with ids 9053047, 9053048, 9053049, 9053050 and 9053051. Ian On Mon, Mar 19, 2018 at 2:28 AM Thomas Schatzl wrote: > Hi, > > On Fri, 2018-03-16 at 17:19 +0000, Ian Rogers wrote: > > Thanks Paul, very interesting. > > > > On Fri, Mar 16, 2018 at 9:21 AM Paul Sandoz > > wrote: > > > Hi Ian, Thomas, > > > > > > [...] > > > (This is also something we need to consider if we modify buffers to > > > support capacities larger than Integer.MAX_VALUE. Also connects > > > with Project Panama.) > > > > > > If Thomas has not done so or does not plan to i can log an issue > > > for you. > > > > > > > That'd be great. I wonder if identifying more TTSP issues should also > > be a bug. Its interesting to observe that overlooking TTSP in C2 > > motivated the Unsafe.copyMemory change permitting a fresh TTSP issue. > > If TTSP is a 1st class issue then maybe we can deprecate JNI critical > > regions to support that effort :-) > > Please log an issue. I am still a bit unsure what and how many issues > should be filed. > > @Ian: at bugreports.oracle.com everyone may file bug reports without > the need for an account. > It will take some time until they show up in Jira due to vetting, but > if you have a good case, and can e.g. link to the mailing list, this > should be painless. > > Thanks, > Thomas > > From david.holmes at oracle.com Wed Mar 21 02:46:55 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 21 Mar 2018 12:46:55 +1000 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> Message-ID: <2d954eca-bcb2-3924-7471-916129bd523f@oracle.com> Hi Tobias, The actual deprecation changes look fine. Regarding the tests, I would not rely on the default for EliminateAutoBox. If the default were to change it would be impossible to know that these tests expect to test it being turned on. Arguably however the flags AggressiveOpts affected when the tests were created may be quite different to what is left now, so it may be the tests already don't really test what they originally intended to test. :( Thanks, David On 20/03/2018 7:20 PM, Tobias Hartmann wrote: > Hi, > > please review the following patch: > https://bugs.openjdk.java.net/browse/JDK-8199777 > http://cr.openjdk.java.net/~thartmann/8199777/webrev.00/ > > The VM option "-XX:+AggressiveOpts" should be deprecated in JDK 11 and removed in a future release. > The option was originally supposed to enable some experimental optimizations of the C2 compiler to > improve performance of specific benchmarks. Most features have been removed or integrated over time > leaving the behavior of the option ill-defined and error-prone. 
The only effect that the flag > currently has is setting "AutoBoxCacheMax" to 20000 and "BiasedLockingStartupDelay" to 500. The same > configuration can be achieved by setting the corresponding flags via the command line. > > I've deprecated the flag and replaced the usages in tests by -XX:+EliminateAutoBox > -XX:AutoBoxCacheMax=20000. > > Thanks, > Tobias > From tobias.hartmann at oracle.com Wed Mar 21 07:38:41 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 21 Mar 2018 08:38:41 +0100 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> Message-ID: <6e56a98f-4ed3-74fc-9f21-1a9c2b247a03@oracle.com> Hi Vladimir and David, thanks for the review! On 20.03.2018 23:23, Vladimir Kozlov wrote: > Actually you don't need to specify EliminateAutoBox in tests because it is true by default: Yes, but I agree with David that it's better to leave the flag in to state what the test is supposed to be run with. Also, AutoBoxCacheMax is a C2 specific flag as well.
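For readers who have not looked at the webrev: deprecating a flag in HotSpot is normally just a new entry in the special_jvm_flags table in arguments.cpp. The snippet below is only a simplified, self-contained model of that mechanism, not the actual sources, and the version numbers in it are assumptions:

// Simplified model of the special_jvm_flags table in arguments.cpp. The real
// table uses JDK_Version values; plain ints keep this sketch self-contained.
struct SpecialFlag {
  const char* name;
  int deprecated_in;  // flag still works, but a deprecation warning is printed
  int obsolete_in;    // flag is accepted and ignored, with a warning
  int expired_in;     // flag is treated as unrecognized (0 = not yet decided)
};

static const SpecialFlag special_jvm_flags[] = {
  { "AggressiveOpts", /* deprecated_in */ 11, /* obsolete_in */ 12, /* expired_in */ 0 },
};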
Here's the new webrev with -XX:+IgnoreUnrecognizedVMOptions added to the tests: http://cr.openjdk.java.net/~thartmann/8199777/webrev.01/ Thanks, Tobias From glaubitz at physik.fu-berlin.de Wed Mar 21 09:17:47 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 21 Mar 2018 18:17:47 +0900 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521404351.3951.7.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> <1521404351.3951.7.camel@gmail.com> Message-ID: On 03/19/2018 05:19 AM, Edward Nevill wrote: > Interestingly, there is no implementation of atomic_copy64 for ARM32. I guess it just relies on the compiler generating LDRD/STRD correctly and doesn't support earlier ARM32 archs. I'll do a bit of investigation. I am planning to add arch-specific implementations for m68k and sh in the near future. From the current build logs in Debian, it seems that the JVM is actually hanging on these architectures from time to time and I think this could probably be related to atomic_copy64 actually not being 100% atomic. I already added the one for PowerPCSPE. It's also interesting that there is no implementation for 32-Bit MIPS either. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Wed Mar 21 10:41:25 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Mar 2018 11:41:25 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: Hi Coleen, linuxs390 needs this: - .../source $ hg diff diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 08:37:04 2018 +0100 +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 11:12:03 2018 +0100 @@ -65,7 +65,7 @@ } // Implementation of SignatureHandlerGenerator -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( const methodHandle& method, CodeBuffer* buffer) : NativeSignatureIterator(method) { _masm = new MacroAssembler(buffer); _fp_arg_nr = 0; (typo). Otherwise it builds fine. I'm getting build errors on AIX which are a bit more complicated, still looking.. Thanks, Thomas On Wed, Mar 21, 2018 at 1:08 AM, wrote: > Summary: Remove frame.inline.hpp,etc from header files and adjust > transitive includes. > > Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, > windows-x64. Built with open-only sources using > --disable-precompiled-headers on linux-x64, built with zero (also disable > precompiled headers). Roman built with aarch64, and have request to build > ppc, etc. (Please test this patch!) > > Semi-interesting details: moved SignatureHandlerGenerator constructor to > cpp file, moved interpreter_frame_stack_direction() to target specific > hpp files (even though they're all -1), pd_last_frame to > thread_.cpp because there isn't a thread_.inline.hpp file, > lastly moved InterpreterRuntime::LastFrameAccessor into > interpreterRuntime.cpp file, and a few other functions moved in shared code. > > This is the last of this include file technical debt cleanup that I'm > going to do. See bug for more information. 
> > open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8199809 > > I'll update the copyrights when I commit. > > Thanks, > Coleen > From thomas.stuefe at gmail.com Wed Mar 21 11:50:15 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Mar 2018 12:50:15 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: Hi Coleen, I think your patch uncovered an issue. I saw this weird compile error on AIX: 471 54 | bool is_sigtrap_ic_miss_check() { 471 55 | assert(UseSIGTRAP, "precondition"); 471 56 | return MacroAssembler::is_trap_ic_miss_check(long_at(0)); ===========================^ "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must not be used as a qualifier. 471 57 | } in a number of places. But the definition of class MacroAssembler was available. So I checked if MacroAssembler was accidentally pulled into a namespace or a class, and sure enough, your patch caused it to be defined *inside* the class InterpreterRuntime. See interpreterRuntime.hpp: class InterpreterRuntime: AllStatic { ... // Platform dependent stuff #include CPU_HEADER(interpreterRT) ... }; which pulls in the content of interpreterRT_ppc.hpp. interpreterRT_ppc.hpp includes #include "asm/macroAssembler.hpp" #include "memory/allocation.hpp" (minus allocation.hpp after your patch) which is certainly an error, yes? We should not pull in any includes into a class definition. I wondered why this did not cause errors earlier, but the include order changed with your patch. Before the patch, the error was covered by a different include order: nothing was really included by interpreterRT_ppc.hpp, the include directives were noops. I think this was caused by src/hotspot/share/prims/methodHandles.hpp pulling frame.inline.hpp and via that path pulling macroAssembler.hpp. With your patch, it pulls only frame.hpp. One could certainly work around that issue but the real fix would be to not include anything in files which are included into other classes. Those are not "real" includes anyway. And maybe add a comment to that file :) Thanks, Thomas On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe wrote: > Hi Coleen, > > linuxs390 needs this: > > - .../source $ hg diff > diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp > --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 > 08:37:04 2018 +0100 > +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 > 11:12:03 2018 +0100 > @@ -65,7 +65,7 @@ > } > > // Implementation of SignatureHandlerGenerator > -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > const methodHandle& method, CodeBuffer* buffer) : > NativeSignatureIterator(method) { > _masm = new MacroAssembler(buffer); > _fp_arg_nr = 0; > > (typo). Otherwise it builds fine. > > I'm getting build errors on AIX which are a bit more complicated, still > looking.. > > Thanks, Thomas > > > On Wed, Mar 21, 2018 at 1:08 AM, wrote: > >> Summary: Remove frame.inline.hpp,etc from header files and adjust >> transitive includes. >> >> Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, >> windows-x64. 
Built with open-only sources using >> --disable-precompiled-headers on linux-x64, built with zero (also disable >> precompiled headers). Roman built with aarch64, and have request to build >> ppc, etc. (Please test this patch!) >> >> Semi-interesting details: moved SignatureHandlerGenerator constructor to >> cpp file, moved interpreter_frame_stack_direction() to target specific >> hpp files (even though they're all -1), pd_last_frame to >> thread_.cpp because there isn't a thread_.inline.hpp file, >> lastly moved InterpreterRuntime::LastFrameAccessor into >> interpreterRuntime.cpp file, and a few other functions moved in shared code. >> >> This is the last of this include file technical debt cleanup that I'm >> going to do. See bug for more information. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> I'll update the copyrights when I commit. >> >> Thanks, >> Coleen >> > > From coleen.phillimore at oracle.com Wed Mar 21 12:35:20 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 08:35:20 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: <54e3f098-f4bb-f3f6-2563-aff521c5ca9a@oracle.com> On 3/21/18 6:41 AM, Thomas St?fe wrote: > Hi Coleen, > > linuxs390 needs this: > > - .../source $ hg diff > diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp > --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ?Wed Mar 21 08:37:04 > 2018 +0100 > +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ?Wed Mar 21 11:12:03 > 2018 +0100 > @@ -65,7 +65,7 @@ > ?} > > ?// Implementation of SignatureHandlerGenerator > -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > ? ? ?const methodHandle& method, CodeBuffer* buffer) : > NativeSignatureIterator(method) { > ? ?_masm = new MacroAssembler(buffer); > ? ?_fp_arg_nr = 0; > > (typo). Otherwise it builds fine. Thanks I had that typo twice, fixed. Coleen > > I'm getting build errors on AIX which are a bit more complicated, > still looking.. > > Thanks, Thomas > > > On Wed, Mar 21, 2018 at 1:08 AM, > wrote: > > Summary: Remove frame.inline.hpp,etc from header files and adjust > transitive includes. > > Tested with mach5 tier1 on Oracle platforms: linux-x64, > solaris-sparc, windows-x64.? Built with open-only sources using > --disable-precompiled-headers on linux-x64, built with zero (also > disable precompiled headers).? Roman built with aarch64, and have > request to build ppc, etc.? (Please test this patch!) > > Semi-interesting details:? moved SignatureHandlerGenerator > constructor to cpp file, moved interpreter_frame_stack_direction() > to target specific hpp files (even though they're all -1), > pd_last_frame to thread_.cpp because there isn't a > thread_.inline.hpp file, lastly moved > InterpreterRuntime::LastFrameAccessor into interpreterRuntime.cpp > file, and a few other functions moved in shared code. > > This is the last of this include file technical debt cleanup that > I'm going to do.? See bug for more information. > > open webrev at > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > > bug link https://bugs.openjdk.java.net/browse/JDK-8199809 > > > I'll update the copyrights when I commit. 
> > Thanks, > Coleen > > From coleen.phillimore at oracle.com Wed Mar 21 12:39:01 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 08:39:01 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> Thomas, Thank you for building this. On 3/21/18 7:50 AM, Thomas St?fe wrote: > Hi Coleen, > > I think your patch uncovered an issue. > > I saw this weird compile error on AIX: > > ? ?471? ? ?54 |? ?bool is_sigtrap_ic_miss_check() { > ? ?471? ? ?55 |? ? ?assert(UseSIGTRAP, "precondition"); > ? ?471? ? ?56 |? ? ?return > MacroAssembler::is_trap_ic_miss_check(long_at(0)); > ===========================^ > "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", > line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must > not be used as a qualifier. > ? ?471? ? ?57 |? ?} > > in a number of places. But the definition of class MacroAssembler was > available. So I checked if MacroAssembler was accidentally pulled into > a namespace or a class, and sure enough, your patch caused it to be > defined *inside* the class InterpreterRuntime. See interpreterRuntime.hpp: > > class InterpreterRuntime: AllStatic { > ... > ? // Platform dependent stuff > #include CPU_HEADER(interpreterRT) > ... > }; > > which pulls in the content of interpreterRT_ppc.hpp. > > interpreterRT_ppc.hpp includes > > #include "asm/macroAssembler.hpp" > #include "memory/allocation.hpp" > > (minus allocation.hpp after your patch) > > which is certainly an error, yes? We should not pull in any includes > into a class definition. Yes, I had this problem with x86 which was very befuddling.? I hate that we include files in the middle of class definitions! > > I wondered why this did not cause errors earlier, but the include > order changed with your patch. Before the patch, the error was covered > by a different include order: nothing was really included by > interpreterRT_ppc.hpp, the include directives were noops. I think this > was caused by src/hotspot/share/prims/methodHandles.hpp pulling > frame.inline.hpp and via that path pulling macroAssembler.hpp. With > your patch, it pulls only frame.hpp. > > One could certainly work around that issue but the real fix would be > to not include anything in files which are included into other > classes. Those are not "real" includes anyway. And maybe add a comment > to that file :) I will add a comment to all of these like: // This is included in the middle of class Interpreter. // Do not include files here. Hm so I need to add the #include for macroAssembler.hpp somewhere new like nativeInst_ppc.hpp or does just removing it from interpreterRT_ppc.hpp fix the problem? thanks, Coleen > > Thanks, Thomas > > > > > > On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe > > wrote: > > Hi Coleen, > > linuxs390 needs this: > > - .../source $ hg diff > diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp > --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 > 08:37:04 2018 +0100 > +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 > 11:12:03 2018 +0100 > @@ -65,7 +65,7 @@ > ?} > > ?// Implementation of SignatureHandlerGenerator > -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > ? ? 
?const methodHandle& method, CodeBuffer* buffer) : > NativeSignatureIterator(method) { > ? ?_masm = new MacroAssembler(buffer); > ? ?_fp_arg_nr = 0; > > (typo). Otherwise it builds fine. > > I'm getting build errors on AIX which are a bit more complicated, > still looking.. > > Thanks, Thomas > > > On Wed, Mar 21, 2018 at 1:08 AM, > wrote: > > Summary: Remove frame.inline.hpp,etc from header files and > adjust transitive includes. > > Tested with mach5 tier1 on Oracle platforms: linux-x64, > solaris-sparc, windows-x64.? Built with open-only sources > using --disable-precompiled-headers on linux-x64, built with > zero (also disable precompiled headers). Roman built with > aarch64, and have request to build ppc, etc.? (Please test > this patch!) > > Semi-interesting details:? moved SignatureHandlerGenerator > constructor to cpp file, moved > interpreter_frame_stack_direction() to target specific hpp > files (even though they're all -1), pd_last_frame to > thread_.cpp because there isn't a > thread_.inline.hpp file, lastly moved > InterpreterRuntime::LastFrameAccessor into > interpreterRuntime.cpp file, and a few other functions moved > in shared code. > > This is the last of this include file technical debt cleanup > that I'm going to do.? See bug for more information. > > open webrev at > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > > bug link https://bugs.openjdk.java.net/browse/JDK-8199809 > > > I'll update the copyrights when I commit. > > Thanks, > Coleen > > > From david.holmes at oracle.com Wed Mar 21 13:02:42 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 21 Mar 2018 23:02:42 +1000 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> Message-ID: <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> On 21/03/2018 10:39 PM, coleen.phillimore at oracle.com wrote: > > Thomas, > > Thank you for building this. > > On 3/21/18 7:50 AM, Thomas St?fe wrote: >> Hi Coleen, >> >> I think your patch uncovered an issue. >> >> I saw this weird compile error on AIX: >> >> ? ?471? ? ?54 |? ?bool is_sigtrap_ic_miss_check() { >> ? ?471? ? ?55 |? ? ?assert(UseSIGTRAP, "precondition"); >> ? ?471? ? ?56 |? ? ?return >> MacroAssembler::is_trap_ic_miss_check(long_at(0)); >> ===========================^ >> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >> line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must >> not be used as a qualifier. >> ? ?471? ? ?57 |? ?} >> >> in a number of places. But the definition of class MacroAssembler was >> available. So I checked if MacroAssembler was accidentally pulled into >> a namespace or a class, and sure enough, your patch caused it to be >> defined *inside* the class InterpreterRuntime. See >> interpreterRuntime.hpp: >> >> class InterpreterRuntime: AllStatic { >> ... >> ? // Platform dependent stuff >> #include CPU_HEADER(interpreterRT) >> ... >> }; >> >> which pulls in the content of interpreterRT_ppc.hpp. >> >> interpreterRT_ppc.hpp includes >> >> #include "asm/macroAssembler.hpp" >> #include "memory/allocation.hpp" >> >> (minus allocation.hpp after your patch) >> >> which is certainly an error, yes? We should not pull in any includes >> into a class definition. > > Yes, I had this problem with x86 which was very befuddling.? I hate that > we include files in the middle of class definitions! 
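Schematically, the failure mode discussed above is plain textual inclusion (a stripped-down sketch, not the real files):

// interpreterRuntime.hpp (shared)
class InterpreterRuntime: AllStatic {
  // the platform file is pasted here verbatim by the preprocessor
#include CPU_HEADER(interpreterRT)
};

// interpreterRT_<cpu>.hpp
// Any #include written here expands *inside* class InterpreterRuntime.
// If this is the first time macroAssembler.hpp is seen, its class ends up
// as a nested InterpreterRuntime::MacroAssembler and the include guard
// suppresses the later global include, so other headers such as
// nativeInst_ppc.hpp only ever see an incomplete ::MacroAssembler -
// which is exactly the AIX error quoted above.
#include "asm/macroAssembler.hpp"
class SignatureHandlerGenerator: public NativeSignatureIterator { /* ... */ };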
It's a crude but effective way to "extend" a class with platform specific code at build time. But it does have constraints. >> >> I wondered why this did not cause errors earlier, but the include >> order changed with your patch. Before the patch, the error was covered >> by a different include order: nothing was really included by >> interpreterRT_ppc.hpp, the include directives were noops. I think this >> was caused by src/hotspot/share/prims/methodHandles.hpp pulling >> frame.inline.hpp and via that path pulling macroAssembler.hpp. With >> your patch, it pulls only frame.hpp. >> >> One could certainly work around that issue but the real fix would be >> to not include anything in files which are included into other >> classes. Those are not "real" includes anyway. And maybe add a comment >> to that file :) > > I will add a comment to all of these like: > > // This is included in the middle of class Interpreter. > // Do not include files here. > > Hm so I need to add the #include for macroAssembler.hpp somewhere new > like nativeInst_ppc.hpp or does just removing it from > interpreterRT_ppc.hpp fix the problem? Whatever code is in the included platform specific header still needs to ensure the definitions that it needs have been included. If those are shared files then you may just be able to move them into the shared cpp file, but any platform specific headers must still be included in the platform specific headers. David ----- > thanks, > Coleen > > >> >> Thanks, Thomas >> >> >> >> >> >> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> > wrote: >> >> ??? Hi Coleen, >> >> ??? linuxs390 needs this: >> >> ??? - .../source $ hg diff >> ??? diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >> ??? --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 >> ??? 08:37:04 2018 +0100 >> ??? +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 >> ??? 11:12:03 2018 +0100 >> ??? @@ -65,7 +65,7 @@ >> ??? ?} >> >> ??? ?// Implementation of SignatureHandlerGenerator >> >> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >> >> >> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >> >> ??? ? ? ?const methodHandle& method, CodeBuffer* buffer) : >> ??? NativeSignatureIterator(method) { >> ??? ? ?_masm = new MacroAssembler(buffer); >> ??? ? ?_fp_arg_nr = 0; >> >> ??? (typo). Otherwise it builds fine. >> >> ??? I'm getting build errors on AIX which are a bit more complicated, >> ??? still looking.. >> >> ??? Thanks, Thomas >> >> >> ??? On Wed, Mar 21, 2018 at 1:08 AM, > ??? > wrote: >> >> ??????? Summary: Remove frame.inline.hpp,etc from header files and >> ??????? adjust transitive includes. >> >> ??????? Tested with mach5 tier1 on Oracle platforms: linux-x64, >> ??????? solaris-sparc, windows-x64.? Built with open-only sources >> ??????? using --disable-precompiled-headers on linux-x64, built with >> ??????? zero (also disable precompiled headers). Roman built with >> ??????? aarch64, and have request to build ppc, etc.? (Please test >> ??????? this patch!) >> >> ??????? Semi-interesting details:? moved SignatureHandlerGenerator >> ??????? constructor to cpp file, moved >> ??????? interpreter_frame_stack_direction() to target specific hpp >> ??????? files (even though they're all -1), pd_last_frame to >> ??????? thread_.cpp because there isn't a >> ??????? thread_.inline.hpp file, lastly moved >> ??????? InterpreterRuntime::LastFrameAccessor into >> ??????? 
interpreterRuntime.cpp file, and a few other functions moved >> ??????? in shared code. >> >> ??????? This is the last of this include file technical debt cleanup >> ??????? that I'm going to do.? See bug for more information. >> >> ??????? open webrev at >> ??????? http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> ??????? >> ??????? bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >> ??????? >> >> ??????? I'll update the copyrights when I commit. >> >> ??????? Thanks, >> ??????? Coleen >> >> >> > From erik.helin at oracle.com Wed Mar 21 13:21:11 2018 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 21 Mar 2018 14:21:11 +0100 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521554055.3029.4.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> Message-ID: <4fcbfbe2-4a05-6968-6ba2-0f8900970336@oracle.com> On 03/20/2018 02:54 PM, Edward Nevill wrote: > On Tue, 2018-03-20 at 08:39 +0100, Erik Helin wrote: >> Please review the following webrev >>> >>> Bugid: https://bugs.openjdk.java.net/browse/JDK-8199138 >>> Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 >> >> 32 # First, filter out everything that doesn't begin with "aarch64-" >> 33 if ! echo $* | grep '^aarch64-\|^riscv64-' >/dev/null ; then >> >> Could you please update the comment on line 32 to say the same thing as >> the code? >> > > Hi Eirk, > > Thanks for this. I have updated the webrev with the above comment. > > http://cr.openjdk.java.net/~enevill/8199138/webrev.01 Please also update the error message at line 1802 - 1804: 1802 #error Method os::dll_load requires that one of following is defined:\ 1803 AARCH64, ALPHA, ARM, AMD64, IA32, IA64, M68K, MIPS, MIPSEL, PARISC, __powerpc__, __powerpc64__, S390, SH, __sparc 1804 #endif > I have also fixed a problem encountered with the submit-hs repo where the build machine had older headers which did not define EM_RISCV. > > The solution is to define EM_RISCV if not already defined as is done for aarch64. > > IE. > > #ifndef EM_AARCH64 > #define EM_AARCH64 183 /* ARM AARCH64 */ > #endif > +#ifndef EM_RISCV > + #define EM_RISCV 243 > +#endif Maybe add a corresponding /* RISC-V */ comment to use the same style as the other defines? > This now passes the submit-hs tests. Ok, good. > Does this look OK to push now? Please send out a final webrev first. Thanks, Erik > Thanks, > Ed. > From coleen.phillimore at oracle.com Wed Mar 21 13:28:53 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 09:28:53 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> Message-ID: <8dd7c3a4-c9d7-340b-9599-93c47eeab3ed@oracle.com> On 3/21/18 9:02 AM, David Holmes wrote: > On 21/03/2018 10:39 PM, coleen.phillimore at oracle.com wrote: >> >> Thomas, >> >> Thank you for building this. >> >> On 3/21/18 7:50 AM, Thomas St?fe wrote: >>> Hi Coleen, >>> >>> I think your patch uncovered an issue. >>> >>> I saw this weird compile error on AIX: >>> >>> ? ?471? ? ?54 |? ?bool is_sigtrap_ic_miss_check() { >>> ? ?471? ? ?55 |? ? ?assert(UseSIGTRAP, "precondition"); >>> ? ?471? ? ?56 |? ? 
?return >>> MacroAssembler::is_trap_ic_miss_check(long_at(0)); >>> ===========================^ >>> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >>> line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must >>> not be used as a qualifier. >>> ? ?471? ? ?57 |? ?} >>> >>> in a number of places. But the definition of class MacroAssembler >>> was available. So I checked if MacroAssembler was accidentally >>> pulled into a namespace or a class, and sure enough, your patch >>> caused it to be defined *inside* the class InterpreterRuntime. See >>> interpreterRuntime.hpp: >>> >>> class InterpreterRuntime: AllStatic { >>> ... >>> ? // Platform dependent stuff >>> #include CPU_HEADER(interpreterRT) >>> ... >>> }; >>> >>> which pulls in the content of interpreterRT_ppc.hpp. >>> >>> interpreterRT_ppc.hpp includes >>> >>> #include "asm/macroAssembler.hpp" >>> #include "memory/allocation.hpp" >>> >>> (minus allocation.hpp after your patch) >>> >>> which is certainly an error, yes? We should not pull in any includes >>> into a class definition. >> >> Yes, I had this problem with x86 which was very befuddling.? I hate >> that we include files in the middle of class definitions! > > It's a crude but effective way to "extend" a class with platform > specific code at build time. But it does have constraints. Crude is the key adjective. > >>> >>> I wondered why this did not cause errors earlier, but the include >>> order changed with your patch. Before the patch, the error was >>> covered by a different include order: nothing was really included by >>> interpreterRT_ppc.hpp, the include directives were noops. I think >>> this was caused by src/hotspot/share/prims/methodHandles.hpp pulling >>> frame.inline.hpp and via that path pulling macroAssembler.hpp. With >>> your patch, it pulls only frame.hpp. >>> >>> One could certainly work around that issue but the real fix would be >>> to not include anything in files which are included into other >>> classes. Those are not "real" includes anyway. And maybe add a >>> comment to that file :) >> >> I will add a comment to all of these like: >> >> // This is included in the middle of class Interpreter. >> // Do not include files here. >> >> Hm so I need to add the #include for macroAssembler.hpp somewhere new >> like nativeInst_ppc.hpp or does just removing it from >> interpreterRT_ppc.hpp fix the problem? > > Whatever code is in the included platform specific header still needs > to ensure the definitions that it needs have been included. If those > are shared files then you may just be able to move them into the > shared cpp file, but any platform specific headers must still be > included in the platform specific headers. Yes, I don't know which files need this inclusion on ppc though. They are not needed or already transitively included on the platforms we have. thanks, Coleen > > > David > ----- > >> thanks, >> Coleen >> >> >>> >>> Thanks, Thomas >>> >>> >>> >>> >>> >>> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >>> > wrote: >>> >>> ??? Hi Coleen, >>> >>> ??? linuxs390 needs this: >>> >>> ??? - .../source $ hg diff >>> ??? diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >>> ??? --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 >>> ??? 08:37:04 2018 +0100 >>> ??? +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed Mar 21 >>> ??? 11:12:03 2018 +0100 >>> ??? @@ -65,7 +65,7 @@ >>> ??? ?} >>> >>> ??? 
?// Implementation of SignatureHandlerGenerator >>> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> >>> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> >>> ??? ? ? ?const methodHandle& method, CodeBuffer* buffer) : >>> ??? NativeSignatureIterator(method) { >>> ??? ? ?_masm = new MacroAssembler(buffer); >>> ??? ? ?_fp_arg_nr = 0; >>> >>> ??? (typo). Otherwise it builds fine. >>> >>> ??? I'm getting build errors on AIX which are a bit more complicated, >>> ??? still looking.. >>> >>> ??? Thanks, Thomas >>> >>> >>> ??? On Wed, Mar 21, 2018 at 1:08 AM, >> ??? > wrote: >>> >>> ??????? Summary: Remove frame.inline.hpp,etc from header files and >>> ??????? adjust transitive includes. >>> >>> ??????? Tested with mach5 tier1 on Oracle platforms: linux-x64, >>> ??????? solaris-sparc, windows-x64.? Built with open-only sources >>> ??????? using --disable-precompiled-headers on linux-x64, built with >>> ??????? zero (also disable precompiled headers). Roman built with >>> ??????? aarch64, and have request to build ppc, etc.? (Please test >>> ??????? this patch!) >>> >>> ??????? Semi-interesting details:? moved SignatureHandlerGenerator >>> ??????? constructor to cpp file, moved >>> ??????? interpreter_frame_stack_direction() to target specific hpp >>> ??????? files (even though they're all -1), pd_last_frame to >>> ??????? thread_.cpp because there isn't a >>> ??????? thread_.inline.hpp file, lastly moved >>> ??????? InterpreterRuntime::LastFrameAccessor into >>> ??????? interpreterRuntime.cpp file, and a few other functions moved >>> ??????? in shared code. >>> >>> ??????? This is the last of this include file technical debt cleanup >>> ??????? that I'm going to do.? See bug for more information. >>> >>> ??????? open webrev at >>> ??????? http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >>> >>> ??????? bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >>> >>> >>> ??????? I'll update the copyrights when I commit. >>> >>> ??????? Thanks, >>> ??????? Coleen >>> >>> >>> >> From per.liden at oracle.com Wed Mar 21 13:36:43 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 21 Mar 2018 14:36:43 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <5AAFA3B7.5040101@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> Message-ID: Looks good! I have one small-ish suggestion/question. The some of the headers in hotspot/share/gc/shared/ does: 30 #ifndef ZERO 31 #include CPU_HEADER(gc/shared/barrierSetAssembler) 32 #endif 33 34 class BarrierSetAssembler; Shouldn't we just have CPU-specific files for zero too, which would just do the forward declaration? That would remove platform specific #ifndef's in the shared code, which is always nice. /Per On 03/19/2018 12:49 PM, Erik ?sterlund wrote: > Hi, > > After some internal discussions, it turns out that the name "*BSCodeGen" > was not so popular, and has been changed to *BarrierSetAssembler instead. > I have rebased this on top of 8199604: Rename CardTableModRefBS to > CardTableBarrierSet and went ahead with the performing the required > renaming. > > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ > > New incremental webrev (from the rebased version): > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ > > Thanks, > /Erik > > On 2018-03-13 10:47, Erik ?sterlund wrote: >> Hi Roman, >> >> Thanks for the review. 
>> >> /Erik >> >> On 2018-03-13 10:26, Roman Kennke wrote: >>> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>>> Hi, >>>> >>>> The GC barriers for arraycopy stub routines are not as modular as they >>>> could be. They currently use switch statements to check which GC >>>> barrier >>>> set is being used, and call one or another barrier based on that, with >>>> registers already allocated in such a way that it can only be used for >>>> write barriers. >>>> >>>> My solution to the problem is to introduce a platform-specific GC >>>> barrier set code generator. The abstract super class is >>>> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >>>> virtual call to the BarrierSetCodeGen generates the relevant GC >>>> barriers >>>> for the arraycopy stub routines. >>>> >>>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>>> corresponding BarrierSet inheritance hierarchy. In other words, every >>>> BarrierSet class has a corresponding BarrierSetCodeGen class. >>>> >>>> The various switch statements that generate different GC barriers >>>> depending on the enum type of the barrier set have been changed to call >>>> a corresponding virtual member function in the BarrierSetCodeGen class >>>> instead. >>>> >>>> Thanks to Martin Doerr and Roman Kennke for providing platform specific >>>> code for PPC, S390 and AArch64. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>>> >>>> CR: >>>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>>> >>>> Thanks, >>>> /Erik >>> >>> I looked over x86, aarch64 and shared code (in webrev.01), and it looks >>> good to me! >>> >>> As I commented earlier in private, I would find it useful if the >>> barriers could 'take over' the whole arraycopy, for example to do the >>> pre- and post-barrier and arraycopy in one pass, instead of 3. However, >>> let's keep that for later. >>> >>> Awesome work, thank you! >>> >>> Cheers, >>> Roman >>> >>> >> > From erik.osterlund at oracle.com Wed Mar 21 13:58:16 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Mar 2018 14:58:16 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> Message-ID: <5AB264F8.7010007@oracle.com> Hi Per, Thank you for the review. On 2018-03-21 14:36, Per Liden wrote: > Looks good! > > I have one small-ish suggestion/question. The some of the headers in > hotspot/share/gc/shared/ does: > > 30 #ifndef ZERO > 31 #include CPU_HEADER(gc/shared/barrierSetAssembler) > 32 #endif > 33 > 34 class BarrierSetAssembler; > > Shouldn't we just have CPU-specific files for zero too, which would > just do the forward declaration? That would remove platform specific > #ifndef's in the shared code, which is always nice. Sure. New full webrev: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.04/ Incremental webrev: http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03_04/ Thanks, /Erik > > /Per > > On 03/19/2018 12:49 PM, Erik ?sterlund wrote: >> Hi, >> >> After some internal discussions, it turns out that the name >> "*BSCodeGen" was not so popular, and has been changed to >> *BarrierSetAssembler instead. >> I have rebased this on top of 8199604: Rename CardTableModRefBS to >> CardTableBarrierSet and went ahead with the performing the required >> renaming. 
>> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ >> >> New incremental webrev (from the rebased version): >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ >> >> Thanks, >> /Erik >> >> On 2018-03-13 10:47, Erik ?sterlund wrote: >>> Hi Roman, >>> >>> Thanks for the review. >>> >>> /Erik >>> >>> On 2018-03-13 10:26, Roman Kennke wrote: >>>> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>>>> Hi, >>>>> >>>>> The GC barriers for arraycopy stub routines are not as modular as >>>>> they >>>>> could be. They currently use switch statements to check which GC >>>>> barrier >>>>> set is being used, and call one or another barrier based on that, >>>>> with >>>>> registers already allocated in such a way that it can only be used >>>>> for >>>>> write barriers. >>>>> >>>>> My solution to the problem is to introduce a platform-specific GC >>>>> barrier set code generator. The abstract super class is >>>>> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >>>>> virtual call to the BarrierSetCodeGen generates the relevant GC >>>>> barriers >>>>> for the arraycopy stub routines. >>>>> >>>>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>>>> corresponding BarrierSet inheritance hierarchy. In other words, every >>>>> BarrierSet class has a corresponding BarrierSetCodeGen class. >>>>> >>>>> The various switch statements that generate different GC barriers >>>>> depending on the enum type of the barrier set have been changed to >>>>> call >>>>> a corresponding virtual member function in the BarrierSetCodeGen >>>>> class >>>>> instead. >>>>> >>>>> Thanks to Martin Doerr and Roman Kennke for providing platform >>>>> specific >>>>> code for PPC, S390 and AArch64. >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>>>> >>>>> CR: >>>>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>>>> >>>>> Thanks, >>>>> /Erik >>>> >>>> I looked over x86, aarch64 and shared code (in webrev.01), and it >>>> looks >>>> good to me! >>>> >>>> As I commented earlier in private, I would find it useful if the >>>> barriers could 'take over' the whole arraycopy, for example to do the >>>> pre- and post-barrier and arraycopy in one pass, instead of 3. >>>> However, >>>> let's keep that for later. >>>> >>>> Awesome work, thank you! >>>> >>>> Cheers, >>>> Roman >>>> >>>> >>> >> From per.liden at oracle.com Wed Mar 21 14:05:09 2018 From: per.liden at oracle.com (Per Liden) Date: Wed, 21 Mar 2018 15:05:09 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <5AB264F8.7010007@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> <5AB264F8.7010007@oracle.com> Message-ID: <8beb7f96-2288-d683-c393-feb16dce2b11@oracle.com> Awesome! Looks good! /Per On 03/21/2018 02:58 PM, Erik ?sterlund wrote: > Hi Per, > > Thank you for the review. > > On 2018-03-21 14:36, Per Liden wrote: >> Looks good! >> >> I have one small-ish suggestion/question. The some of the headers in >> hotspot/share/gc/shared/ does: >> >> 30 #ifndef ZERO >> 31 #include CPU_HEADER(gc/shared/barrierSetAssembler) >> 32 #endif >> 33 >> 34 class BarrierSetAssembler; >> >> Shouldn't we just have CPU-specific files for zero too, which would >> just do the forward declaration? That would remove platform specific >> #ifndef's in the shared code, which is always nice. > > Sure. 
> > New full webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.04/ > > Incremental webrev: > http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03_04/ > > Thanks, > /Erik > >> >> /Per >> >> On 03/19/2018 12:49 PM, Erik Österlund wrote: >>> Hi, >>> >>> After some internal discussions, it turns out that the name >>> "*BSCodeGen" was not so popular, and has been changed to >>> *BarrierSetAssembler instead. >>> I have rebased this on top of 8199604: Rename CardTableModRefBS to >>> CardTableBarrierSet and went ahead with the performing the required >>> renaming.
>>>>> >>>>> Cheers, >>>>> Roman >>>>> >>>>> >>>> >>> > From erik.osterlund at oracle.com Wed Mar 21 14:14:54 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Mar 2018 15:14:54 +0100 Subject: RFR: 8198949: Modularize arraycopy stub routine GC barriers In-Reply-To: <8beb7f96-2288-d683-c393-feb16dce2b11@oracle.com> References: <5AA2BD2B.2060100@oracle.com> <62a2c346-4260-8b79-d8a9-a4037a00d1bc@oracle.com> <5AAFA3B7.5040101@oracle.com> <5AB264F8.7010007@oracle.com> <8beb7f96-2288-d683-c393-feb16dce2b11@oracle.com> Message-ID: <5AB268DE.2060906@oracle.com> Hi Per, Thanks for the review. /Erik On 2018-03-21 15:05, Per Liden wrote: > Awesome! Looks good! > > /Per > > On 03/21/2018 02:58 PM, Erik ?sterlund wrote: >> Hi Per, >> >> Thank you for the review. >> >> On 2018-03-21 14:36, Per Liden wrote: >>> Looks good! >>> >>> I have one small-ish suggestion/question. The some of the headers in >>> hotspot/share/gc/shared/ does: >>> >>> 30 #ifndef ZERO >>> 31 #include CPU_HEADER(gc/shared/barrierSetAssembler) >>> 32 #endif >>> 33 >>> 34 class BarrierSetAssembler; >>> >>> Shouldn't we just have CPU-specific files for zero too, which would >>> just do the forward declaration? That would remove platform specific >>> #ifndef's in the shared code, which is always nice. >> >> Sure. >> >> New full webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.04/ >> >> Incremental webrev: >> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03_04/ >> >> Thanks, >> /Erik >> >>> >>> /Per >>> >>> On 03/19/2018 12:49 PM, Erik ?sterlund wrote: >>>> Hi, >>>> >>>> After some internal discussions, it turns out that the name >>>> "*BSCodeGen" was not so popular, and has been changed to >>>> *BarrierSetAssembler instead. >>>> I have rebased this on top of 8199604: Rename CardTableModRefBS to >>>> CardTableBarrierSet and went ahead with the performing the required >>>> renaming. >>>> >>>> New full webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.03/ >>>> >>>> New incremental webrev (from the rebased version): >>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.02_03/ >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-03-13 10:47, Erik ?sterlund wrote: >>>>> Hi Roman, >>>>> >>>>> Thanks for the review. >>>>> >>>>> /Erik >>>>> >>>>> On 2018-03-13 10:26, Roman Kennke wrote: >>>>>> Am 09.03.2018 um 17:58 schrieb Erik ?sterlund: >>>>>>> Hi, >>>>>>> >>>>>>> The GC barriers for arraycopy stub routines are not as modular >>>>>>> as they >>>>>>> could be. They currently use switch statements to check which GC >>>>>>> barrier >>>>>>> set is being used, and call one or another barrier based on >>>>>>> that, with >>>>>>> registers already allocated in such a way that it can only be >>>>>>> used for >>>>>>> write barriers. >>>>>>> >>>>>>> My solution to the problem is to introduce a platform-specific GC >>>>>>> barrier set code generator. The abstract super class is >>>>>>> BarrierSetCodeGen, and you can get it from the active BarrierSet. A >>>>>>> virtual call to the BarrierSetCodeGen generates the relevant GC >>>>>>> barriers >>>>>>> for the arraycopy stub routines. >>>>>>> >>>>>>> The BarrierSetCodeGen inheritance hierarchy exactly matches the >>>>>>> corresponding BarrierSet inheritance hierarchy. In other words, >>>>>>> every >>>>>>> BarrierSet class has a corresponding BarrierSetCodeGen class. 
>>>>>>> >>>>>>> The various switch statements that generate different GC barriers >>>>>>> depending on the enum type of the barrier set have been changed >>>>>>> to call >>>>>>> a corresponding virtual member function in the BarrierSetCodeGen >>>>>>> class >>>>>>> instead. >>>>>>> >>>>>>> Thanks to Martin Doerr and Roman Kennke for providing platform >>>>>>> specific >>>>>>> code for PPC, S390 and AArch64. >>>>>>> >>>>>>> Webrev: >>>>>>> http://cr.openjdk.java.net/~eosterlund/8198949/webrev.00/ >>>>>>> >>>>>>> CR: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8198949 >>>>>>> >>>>>>> Thanks, >>>>>>> /Erik >>>>>> >>>>>> I looked over x86, aarch64 and shared code (in webrev.01), and it >>>>>> looks >>>>>> good to me! >>>>>> >>>>>> As I commented earlier in private, I would find it useful if the >>>>>> barriers could 'take over' the whole arraycopy, for example to do >>>>>> the >>>>>> pre- and post-barrier and arraycopy in one pass, instead of 3. >>>>>> However, >>>>>> let's keep that for later. >>>>>> >>>>>> Awesome work, thank you! >>>>>> >>>>>> Cheers, >>>>>> Roman >>>>>> >>>>>> >>>>> >>>> >> From rkennke at redhat.com Wed Mar 21 14:28:18 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 21 Mar 2018 15:28:18 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <5AB12E12.7030302@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> <5AB12E12.7030302@oracle.com> Message-ID: I got a failure back from submit repo: Build Details: 2018-03-21-1213342.roman.source 1 Failed Test Test Tier Platform Keywords Description Task compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java tier1 macosx-x64-debug bug8042235 othervm Exception: java.lang.reflect.InvocationTargetException task Mach5 Tasks Results Summary PASSED: 74 EXECUTED_WITH_FAILURE: 1 UNABLE_TO_RUN: 0 FAILED: 0 KILLED: 0 NA: 0 Test 1 Executed with failure tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-macosx-x64-debug-28 Results: total: 165, passed: 164; failed: 1 Can you tell if that is related to the change, or something other already known issue? Thanks, Roman > Hi Roman, > > This looks good to me. The unfortunate include problems in > jvmciJavaClasses.hpp are pre-existing and should be cleaned up at some > point. > > Thanks, > /Erik > > On 2018-03-20 16:13, Roman Kennke wrote: >> Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: >>> Hi Roman, >>> >>> On 2018-03-20 11:26, Roman Kennke wrote: >>>> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>>>> Hi Roman, >>>>> >>>>> On 2018-03-19 21:11, Roman Kennke wrote: >>>>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>>>> I like Roman's version with static_field_base() the best.? 
The >>>>>>>>> reason >>>>>>>>> I wanted to keep static_field_addr and not have static_oop_addr >>>>>>>>> was >>>>>>>>> so there is one function to find static fields and this would work >>>>>>>>> with the jvmci classes and with loading/storing primitives >>>>>>>>> also.? So >>>>>>>>> I like the consistent change that Roman has. >>>>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>>>> intended, so >>>>>>>> I'm fine with Roman taking over this. >>>>>>>> >>>>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>>>>> the additional load (?) >>>>>>>> There are two barriers in this piece of code: >>>>>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>>>>> java mirror >>>>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop >>>>>>>> fields >>>>>>>> in the java mirror. >>>>>>>> >>>>>>>> Is that what you are referring to? >>>>>>> I had to read this thread over again, and am still foggy, but it was >>>>>>> because your original change didn't work for shenandoah, ie Kim's >>>>>>> last >>>>>>> response. >>>>>>> >>>>>>> The brooks pointer has to be applied to get the mirror address as >>>>>>> well >>>>>>> as reading fields out of the mirror, if I understand correctly. >>>>>>> >>>>>>> OopHandle::resolve() which is what java_mirror() is not >>>>>>> accessorized but >>>>>>> should be for shenandoah.? I think.? I guess that was my question >>>>>>> before. >>>>>> The family of _at() functions in Access, those which accept >>>>>> oop+offset, >>>>>> do the chasing of the forwarding pointer in Shenandoah, then they >>>>>> apply >>>>>> the offset, load the memory field and return the value in the right >>>>>> type. They also do the load-barrier in ZGC (haven't checked, but >>>>>> that's >>>>>> just logical). >>>>>> >>>>>> There is also oop Access::resolve(oop) which is a bit of a hack. >>>>>> It has >>>>>> been introduced because of arraycopy and java <-> native bulk copy >>>>>> stuff >>>>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>>>> maybe >>>>>> evacuate the object (for writes), where #2 is stronger than #1 >>>>>> (i.e. if >>>>>> we do #2, then we don't need to do #1). In order to keep things >>>>>> simple, >>>>>> we decided to make Access::resolve(oop) do #2, and have it cover all >>>>>> those cases, and put it in arrayOopDesc::base(). This does the right >>>>>> thing for all cases, but it is a bit broad, for example, it may >>>>>> lead to >>>>>> double-copying a potentially large array (resolve-copy src array from >>>>>> from-space to to-space, then copy it again to the dst array). For >>>>>> those >>>>>> reasons, it is advisable to think twice before using _at_addr() or >>>>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>>>> Are we certain that it is indeed only arraycopy that requires stable >>>>> accesses until the next thread transition? >>>>> I seem to recall that last time we discussed this, you thought that >>>>> there was more than arraycopy code that needed this. For example >>>>> printing and string encoding/decoding logic. >>>>> >>>>> If we are going to make changes based on the assumption that we >>>>> will be >>>>> able to get rid of the resolve() barrier, then we should be fairly >>>>> certain that we can indeed get rid of it. 
So have the other previously >>>>> discussed roadblocks other than arraycopy disappeared? >>>> No, I don't think that resolve() can go away. If you look at: >>>> >>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >>>> >>>> >>>> >>>> You'll see all kinds of uses of _at_addr() that cannot be covered by >>>> some sort of arraycopy, e.g. the string conversions stuff. >>>> >>>> The above patch proposes to split resolve() to resolve_for_read() and >>>> resolve_for_write(), and I don't think it is unreasonable to >>>> distinguish >>>> those. Besides being better for Shenandoah (reduced latency on >>>> read-only >>>> accesses), there are conceivable GC algorithms that require that >>>> distinction too, e.g. transactional memory based GC or copy-on-write >>>> based GCs. But let's probably continue this discussion in the thread >>>> mentioned above? >>> As I thought. The reason I bring it up in this thread is because as I >>> understand it, you are proposing to push this patch without renaming >>> static_field_base() to static_field_base_raw(), which is what we did >>> consistently everywhere else so far, with the motivation that you will >>> remove resolve() from the other ones soon, and get rid of base_raw(). >>> And I feel like we should have that discussion first. Until that is >>> actually changed, static_field_base_raw() should be the name of that >>> method. If we decide to change the other code to do something else, then >>> we can revisit this then, but not yet. >> Ok, so I changed static_field_base() -> static_field_base_raw(): >> >> Diff: >> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ >> >> Better? >> >> Thanks, Roman >> >> > From erik.osterlund at oracle.com Wed Mar 21 14:39:57 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 21 Mar 2018 15:39:57 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> <5AB12E12.7030302@oracle.com> Message-ID: <5AB26EBD.50805@oracle.com> Hi Roman, I got the same problem when pushing the remove Runtime1::arraycopy changes, so I can confirm this is unrelated to your changes. Thanks, /Erik On 2018-03-21 15:28, Roman Kennke wrote: > I got a failure back from submit repo: > > Build Details: 2018-03-21-1213342.roman.source > 1 Failed Test > Test Tier Platform Keywords Description Task > compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java tier1 > macosx-x64-debug bug8042235 othervm Exception: > java.lang.reflect.InvocationTargetException task > Mach5 Tasks Results Summary > > PASSED: 74 > EXECUTED_WITH_FAILURE: 1 > UNABLE_TO_RUN: 0 > FAILED: 0 > KILLED: 0 > NA: 0 > Test > > 1 Executed with failure > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-macosx-x64-debug-28 > Results: total: 165, passed: 164; failed: 1 > > > Can you tell if that is related to the change, or something other > already known issue? 
> > Thanks, Roman > > >> Hi Roman, >> >> This looks good to me. The unfortunate include problems in >> jvmciJavaClasses.hpp are pre-existing and should be cleaned up at some >> point. >> >> Thanks, >> /Erik >> >> On 2018-03-20 16:13, Roman Kennke wrote: >>> Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: >>>> Hi Roman, >>>> >>>> On 2018-03-20 11:26, Roman Kennke wrote: >>>>> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>>>>> Hi Roman, >>>>>> >>>>>> On 2018-03-19 21:11, Roman Kennke wrote: >>>>>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>>>>> I like Roman's version with static_field_base() the best. The >>>>>>>>>> reason >>>>>>>>>> I wanted to keep static_field_addr and not have static_oop_addr >>>>>>>>>> was >>>>>>>>>> so there is one function to find static fields and this would work >>>>>>>>>> with the jvmci classes and with loading/storing primitives >>>>>>>>>> also. So >>>>>>>>>> I like the consistent change that Roman has. >>>>>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>>>>> intended, so >>>>>>>>> I'm fine with Roman taking over this. >>>>>>>>> >>>>>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>>>>> barrier on this offset, then needs a load barrier on the offset of >>>>>>>>>> the additional load (?) >>>>>>>>> There are two barriers in this piece of code: >>>>>>>>> 1) Shenandoah needs a barrier to be able to read fields out of the >>>>>>>>> java mirror >>>>>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop >>>>>>>>> fields >>>>>>>>> in the java mirror. >>>>>>>>> >>>>>>>>> Is that what you are referring to? >>>>>>>> I had to read this thread over again, and am still foggy, but it was >>>>>>>> because your original change didn't work for shenandoah, ie Kim's >>>>>>>> last >>>>>>>> response. >>>>>>>> >>>>>>>> The brooks pointer has to be applied to get the mirror address as >>>>>>>> well >>>>>>>> as reading fields out of the mirror, if I understand correctly. >>>>>>>> >>>>>>>> OopHandle::resolve() which is what java_mirror() is not >>>>>>>> accessorized but >>>>>>>> should be for shenandoah. I think. I guess that was my question >>>>>>>> before. >>>>>>> The family of _at() functions in Access, those which accept >>>>>>> oop+offset, >>>>>>> do the chasing of the forwarding pointer in Shenandoah, then they >>>>>>> apply >>>>>>> the offset, load the memory field and return the value in the right >>>>>>> type. They also do the load-barrier in ZGC (haven't checked, but >>>>>>> that's >>>>>>> just logical). >>>>>>> >>>>>>> There is also oop Access::resolve(oop) which is a bit of a hack. >>>>>>> It has >>>>>>> been introduced because of arraycopy and java <-> native bulk copy >>>>>>> stuff >>>>>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>>>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>>>>> maybe >>>>>>> evacuate the object (for writes), where #2 is stronger than #1 >>>>>>> (i.e. if >>>>>>> we do #2, then we don't need to do #1). In order to keep things >>>>>>> simple, >>>>>>> we decided to make Access::resolve(oop) do #2, and have it cover all >>>>>>> those cases, and put it in arrayOopDesc::base(). 
This does the right >>>>>>> thing for all cases, but it is a bit broad, for example, it may >>>>>>> lead to >>>>>>> double-copying a potentially large array (resolve-copy src array from >>>>>>> from-space to to-space, then copy it again to the dst array). For >>>>>>> those >>>>>>> reasons, it is advisable to think twice before using _at_addr() or >>>>>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>>>>> Are we certain that it is indeed only arraycopy that requires stable >>>>>> accesses until the next thread transition? >>>>>> I seem to recall that last time we discussed this, you thought that >>>>>> there was more than arraycopy code that needed this. For example >>>>>> printing and string encoding/decoding logic. >>>>>> >>>>>> If we are going to make changes based on the assumption that we >>>>>> will be >>>>>> able to get rid of the resolve() barrier, then we should be fairly >>>>>> certain that we can indeed get rid of it. So have the other previously >>>>>> discussed roadblocks other than arraycopy disappeared? >>>>> No, I don't think that resolve() can go away. If you look at: >>>>> >>>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >>>>> >>>>> >>>>> >>>>> You'll see all kinds of uses of _at_addr() that cannot be covered by >>>>> some sort of arraycopy, e.g. the string conversions stuff. >>>>> >>>>> The above patch proposes to split resolve() to resolve_for_read() and >>>>> resolve_for_write(), and I don't think it is unreasonable to >>>>> distinguish >>>>> those. Besides being better for Shenandoah (reduced latency on >>>>> read-only >>>>> accesses), there are conceivable GC algorithms that require that >>>>> distinction too, e.g. transactional memory based GC or copy-on-write >>>>> based GCs. But let's probably continue this discussion in the thread >>>>> mentioned above? >>>> As I thought. The reason I bring it up in this thread is because as I >>>> understand it, you are proposing to push this patch without renaming >>>> static_field_base() to static_field_base_raw(), which is what we did >>>> consistently everywhere else so far, with the motivation that you will >>>> remove resolve() from the other ones soon, and get rid of base_raw(). >>>> And I feel like we should have that discussion first. Until that is >>>> actually changed, static_field_base_raw() should be the name of that >>>> method. If we decide to change the other code to do something else, then >>>> we can revisit this then, but not yet. >>> Ok, so I changed static_field_base() -> static_field_base_raw(): >>> >>> Diff: >>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ >>> >>> Better? >>> >>> Thanks, Roman >>> >>> > From thomas.stuefe at gmail.com Wed Mar 21 15:33:23 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Mar 2018 16:33:23 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> Message-ID: Hi Coleen, On Wed, Mar 21, 2018 at 1:39 PM, wrote: > > Thomas, > > Thank you for building this. > > On 3/21/18 7:50 AM, Thomas St?fe wrote: > > Hi Coleen, > > I think your patch uncovered an issue. 
> > I saw this weird compile error on AIX: > > 471 54 | bool is_sigtrap_ic_miss_check() { > 471 55 | assert(UseSIGTRAP, "precondition"); > 471 56 | return MacroAssembler::is_trap_ic_ > miss_check(long_at(0)); > ===========================^ > "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", > line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must not be > used as a qualifier. > 471 57 | } > > in a number of places. But the definition of class MacroAssembler was > available. So I checked if MacroAssembler was accidentally pulled into a > namespace or a class, and sure enough, your patch caused it to be defined > *inside* the class InterpreterRuntime. See interpreterRuntime.hpp: > > class InterpreterRuntime: AllStatic { > ... > // Platform dependent stuff > #include CPU_HEADER(interpreterRT) > ... > }; > > which pulls in the content of interpreterRT_ppc.hpp. > > interpreterRT_ppc.hpp includes > > #include "asm/macroAssembler.hpp" > #include "memory/allocation.hpp" > > (minus allocation.hpp after your patch) > > which is certainly an error, yes? We should not pull in any includes into > a class definition. > > > Yes, I had this problem with x86 which was very befuddling. I hate that > we include files in the middle of class definitions! > It is annoying. I had errors like this several times over the last years already, especially in the os_xxx_xxx "headers". All these AllStatic functions would be a perfect fit for C++ namespaces. > > > I wondered why this did not cause errors earlier, but the include order > changed with your patch. Before the patch, the error was covered by a > different include order: nothing was really included by > interpreterRT_ppc.hpp, the include directives were noops. I think this was > caused by src/hotspot/share/prims/methodHandles.hpp pulling > frame.inline.hpp and via that path pulling macroAssembler.hpp. With your > patch, it pulls only frame.hpp. > > One could certainly work around that issue but the real fix would be to > not include anything in files which are included into other classes. Those > are not "real" includes anyway. And maybe add a comment to that file :) > > > I will add a comment to all of these like: > > // This is included in the middle of class Interpreter. > // Do not include files here. > Thank you. This would be also valuable for all the os_.hpp files. > Hm so I need to add the #include for macroAssembler.hpp somewhere new like > nativeInst_ppc.hpp or does just removing it from interpreterRT_ppc.hpp fix > the problem? > > In this case it seems to be sufficient to remove the include from interpreterRT_ppc.hpp: - .../hotspot $ hg diff diff -r d3daa45c2c8d src/hotspot/cpu/ppc/interpreterRT_ppc.hpp --- a/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 12:25:06 2018 +0100 +++ b/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 16:27:02 2018 +0100 @@ -26,7 +26,6 @@ #ifndef CPU_PPC_VM_INTERPRETERRT_PPC_HPP #define CPU_PPC_VM_INTERPRETERRT_PPC_HPP -#include "asm/macroAssembler.hpp" // native method calls The build went thru afterwards. I assume MacroAssembler was not really needed for linkResolver.cpp. 
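Roughly, the namespace shape would be (a sketch with hypothetical names, not a patch against the real sources):

// shared part - an ordinary namespace instead of an AllStatic class
namespace SomeRuntime {               // hypothetical name
  void prepare_call();                // hypothetical shared entry point
}

// CPU part - a normal header included alongside the shared one
#include "asm/macroAssembler.hpp"     // a regular include, no surprises
namespace SomeRuntime {
  class SignatureHandlerGenerator;    // platform additions reopen the namespace
}

Namespaces can be reopened across headers, so the platform-specific piece would no longer have to be pasted into the middle of a class body.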
Thanks, Thomas > thanks, > Coleen > > > > > Thanks, Thomas > > > > > > > On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe > wrote: > >> Hi Coleen, >> >> linuxs390 needs this: >> >> - .../source $ hg diff >> diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >> 08:37:04 2018 +0100 >> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >> 11:12:03 2018 +0100 >> @@ -65,7 +65,7 @@ >> } >> >> // Implementation of SignatureHandlerGenerator >> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHan >> dlerGenerator( >> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHan >> dlerGenerator( >> const methodHandle& method, CodeBuffer* buffer) : >> NativeSignatureIterator(method) { >> _masm = new MacroAssembler(buffer); >> _fp_arg_nr = 0; >> >> (typo). Otherwise it builds fine. >> >> I'm getting build errors on AIX which are a bit more complicated, still >> looking.. >> >> Thanks, Thomas >> >> >> On Wed, Mar 21, 2018 at 1:08 AM, wrote: >> >>> Summary: Remove frame.inline.hpp,etc from header files and adjust >>> transitive includes. >>> >>> Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, >>> windows-x64. Built with open-only sources using >>> --disable-precompiled-headers on linux-x64, built with zero (also disable >>> precompiled headers). Roman built with aarch64, and have request to build >>> ppc, etc. (Please test this patch!) >>> >>> Semi-interesting details: moved SignatureHandlerGenerator constructor >>> to cpp file, moved interpreter_frame_stack_direction() to target >>> specific hpp files (even though they're all -1), pd_last_frame to >>> thread_.cpp because there isn't a thread_.inline.hpp file, >>> lastly moved InterpreterRuntime::LastFrameAccessor into >>> interpreterRuntime.cpp file, and a few other functions moved in shared code. >>> >>> This is the last of this include file technical debt cleanup that I'm >>> going to do. See bug for more information. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >>> >>> I'll update the copyrights when I commit. >>> >>> Thanks, >>> Coleen >>> >> >> > > From thomas.stuefe at gmail.com Wed Mar 21 15:35:07 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Mar 2018 16:35:07 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> Message-ID: Hi David, On Wed, Mar 21, 2018 at 2:02 PM, David Holmes wrote: > On 21/03/2018 10:39 PM, coleen.phillimore at oracle.com wrote: > >> >> Thomas, >> >> Thank you for building this. >> >> On 3/21/18 7:50 AM, Thomas St?fe wrote: >> >>> Hi Coleen, >>> >>> I think your patch uncovered an issue. >>> >>> I saw this weird compile error on AIX: >>> >>> 471 54 | bool is_sigtrap_ic_miss_check() { >>> 471 55 | assert(UseSIGTRAP, "precondition"); >>> 471 56 | return MacroAssembler::is_trap_ic_mis >>> s_check(long_at(0)); >>> ===========================^ >>> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >>> line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must not be >>> used as a qualifier. >>> 471 57 | } >>> >>> in a number of places. But the definition of class MacroAssembler was >>> available. 
So I checked if MacroAssembler was accidentally pulled into a >>> namespace or a class, and sure enough, your patch caused it to be defined >>> *inside* the class InterpreterRuntime. See interpreterRuntime.hpp: >>> >>> class InterpreterRuntime: AllStatic { >>> ... >>> // Platform dependent stuff >>> #include CPU_HEADER(interpreterRT) >>> ... >>> }; >>> >>> which pulls in the content of interpreterRT_ppc.hpp. >>> >>> interpreterRT_ppc.hpp includes >>> >>> #include "asm/macroAssembler.hpp" >>> #include "memory/allocation.hpp" >>> >>> (minus allocation.hpp after your patch) >>> >>> which is certainly an error, yes? We should not pull in any includes >>> into a class definition. >>> >> >> Yes, I had this problem with x86 which was very befuddling. I hate that >> we include files in the middle of class definitions! >> > > It's a crude but effective way to "extend" a class with platform specific > code at build time. But it does have constraints. > > >>> I wondered why this did not cause errors earlier, but the include order >>> changed with your patch. Before the patch, the error was covered by a >>> different include order: nothing was really included by >>> interpreterRT_ppc.hpp, the include directives were noops. I think this was >>> caused by src/hotspot/share/prims/methodHandles.hpp pulling >>> frame.inline.hpp and via that path pulling macroAssembler.hpp. With your >>> patch, it pulls only frame.hpp. >>> >>> One could certainly work around that issue but the real fix would be to >>> not include anything in files which are included into other classes. Those >>> are not "real" includes anyway. And maybe add a comment to that file :) >>> >> >> I will add a comment to all of these like: >> >> // This is included in the middle of class Interpreter. >> // Do not include files here. >> >> Hm so I need to add the #include for macroAssembler.hpp somewhere new >> like nativeInst_ppc.hpp or does just removing it from interpreterRT_ppc.hpp >> fix the problem? >> > > Whatever code is in the included platform specific header still needs to > ensure the definitions that it needs have been included. If those are > shared files then you may just be able to move them into the shared cpp > file, but any platform specific headers must still be included in the > platform specific headers. > > I disagree in this particular case. In my opinion, headers whose purpose is to be included into class declarations should not include other headers. Thanks, Thomas > David > ----- > > thanks, >> Coleen >> >> >> >>> Thanks, Thomas >>> >>> >>> >>> >>> >>> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> > wrote: >>> >>> Hi Coleen, >>> >>> linuxs390 needs this: >>> >>> - .../source $ hg diff >>> diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >>> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >>> 08:37:04 2018 +0100 >>> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >>> 11:12:03 2018 +0100 >>> @@ -65,7 +65,7 @@ >>> } >>> >>> // Implementation of SignatureHandlerGenerator >>> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> >>> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> >>> const methodHandle& method, CodeBuffer* buffer) : >>> NativeSignatureIterator(method) { >>> _masm = new MacroAssembler(buffer); >>> _fp_arg_nr = 0; >>> >>> (typo). Otherwise it builds fine. >>> >>> I'm getting build errors on AIX which are a bit more complicated, >>> still looking.. 
>>> >>> Thanks, Thomas >>> >>> >>> On Wed, Mar 21, 2018 at 1:08 AM, >> > wrote: >>> >>> Summary: Remove frame.inline.hpp,etc from header files and >>> adjust transitive includes. >>> >>> Tested with mach5 tier1 on Oracle platforms: linux-x64, >>> solaris-sparc, windows-x64. Built with open-only sources >>> using --disable-precompiled-headers on linux-x64, built with >>> zero (also disable precompiled headers). Roman built with >>> aarch64, and have request to build ppc, etc. (Please test >>> this patch!) >>> >>> Semi-interesting details: moved SignatureHandlerGenerator >>> constructor to cpp file, moved >>> interpreter_frame_stack_direction() to target specific hpp >>> files (even though they're all -1), pd_last_frame to >>> thread_.cpp because there isn't a >>> thread_.inline.hpp file, lastly moved >>> InterpreterRuntime::LastFrameAccessor into >>> interpreterRuntime.cpp file, and a few other functions moved >>> in shared code. >>> >>> This is the last of this include file technical debt cleanup >>> that I'm going to do. See bug for more information. >>> >>> open webrev at >>> http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >>> >>> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >>> >>> >>> I'll update the copyrights when I commit. >>> >>> Thanks, >>> Coleen >>> >>> >>> >>> >> From coleen.phillimore at oracle.com Wed Mar 21 16:02:15 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 12:02:15 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> Message-ID: <3cb116ce-b9d3-5edf-a9fa-ebab3f4bc899@oracle.com> On 3/21/18 11:33 AM, Thomas St?fe wrote: > Hi Coleen, > > On Wed, Mar 21, 2018 at 1:39 PM, > wrote: > > > Thomas, > > Thank you for building this. > > On 3/21/18 7:50 AM, Thomas St?fe wrote: >> Hi Coleen, >> >> I think your patch uncovered an issue. >> >> I saw this weird compile error on AIX: >> >> ? ?471? ? ?54 |? ?bool is_sigtrap_ic_miss_check() { >> ? ?471? ? ?55 |? ? ?assert(UseSIGTRAP, "precondition"); >> ? ?471? ? ?56 |? ? ?return >> MacroAssembler::is_trap_ic_miss_check(long_at(0)); >> ===========================^ >> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >> line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" >> must not be used as a qualifier. >> ? ?471? ? ?57 |? ?} >> >> in a number of places. But the definition of class MacroAssembler >> was available. So I checked if MacroAssembler was accidentally >> pulled into a namespace or a class, and sure enough, your patch >> caused it to be defined *inside* the class InterpreterRuntime. >> See interpreterRuntime.hpp: >> >> class InterpreterRuntime: AllStatic { >> ... >> ? // Platform dependent stuff >> #include CPU_HEADER(interpreterRT) >> ... >> }; >> >> which pulls in the content of interpreterRT_ppc.hpp. >> >> interpreterRT_ppc.hpp includes >> >> #include "asm/macroAssembler.hpp" >> #include "memory/allocation.hpp" >> >> (minus allocation.hpp after your patch) >> >> which is certainly an error, yes? We should not pull in any >> includes into a class definition. > > Yes, I had this problem with x86 which was very befuddling.? I > hate that we include files in the middle of class definitions! > > > It is annoying. I had errors like this several times over the last > years already, especially in the os_xxx_xxx "headers". 
> > All these AllStatic functions would be a perfect fit for C++ namespaces. Yes, we were going to have namespace os at one point.? namespace metaspace would also be nice. > > >> >> I wondered why this did not cause errors earlier, but the include >> order changed with your patch. Before the patch, the error was >> covered by a different include order: nothing was really included >> by interpreterRT_ppc.hpp, the include directives were noops. I >> think this was caused by >> src/hotspot/share/prims/methodHandles.hpp pulling >> frame.inline.hpp and via that path pulling macroAssembler.hpp. >> With your patch, it pulls only frame.hpp. >> >> One could certainly work around that issue but the real fix would >> be to not include anything in files which are included into other >> classes. Those are not "real" includes anyway. And maybe add a >> comment to that file :) > > I will add a comment to all of these like: > > // This is included in the middle of class Interpreter. > // Do not include files here. > > > Thank you. This would be also valuable for all the os_.hpp files. I didn't change these files so I don't want to update them.? This should be another RFE. Oddly frame_.hpp files include synchronizer.hpp but I don't want to fix this right now. > > > Hm so I need to add the #include for macroAssembler.hpp somewhere > new like nativeInst_ppc.hpp or does just removing it from > interpreterRT_ppc.hpp fix the problem? > > > In this case it seems to be sufficient to remove the include from > interpreterRT_ppc.hpp: > > - .../hotspot $ hg diff > diff -r d3daa45c2c8d src/hotspot/cpu/ppc/interpreterRT_ppc.hpp > --- a/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 12:25:06 > 2018 +0100 > +++ b/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 16:27:02 > 2018 +0100 > @@ -26,7 +26,6 @@ > ?#ifndef CPU_PPC_VM_INTERPRETERRT_PPC_HPP > ?#define CPU_PPC_VM_INTERPRETERRT_PPC_HPP > > -#include "asm/macroAssembler.hpp" > > ?// native method calls > > > The build went thru afterwards. I assume MacroAssembler was not really > needed for linkResolver.cpp. > > Thanks, Thomas This is great.? Can you click on the files and Review it? Thanks, Coleen > > thanks, > Coleen > > > >> >> Thanks, Thomas >> >> >> >> >> >> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> > wrote: >> >> Hi Coleen, >> >> linuxs390 needs this: >> >> - .../source $ hg diff >> diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed >> Mar 21 08:37:04 2018 +0100 >> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? ?Wed >> Mar 21 11:12:03 2018 +0100 >> @@ -65,7 +65,7 @@ >> ?} >> >> ?// Implementation of SignatureHandlerGenerator >> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >> ? ? ?const methodHandle& method, CodeBuffer* buffer) : >> NativeSignatureIterator(method) { >> ? ?_masm = new MacroAssembler(buffer); >> ? ?_fp_arg_nr = 0; >> >> (typo). Otherwise it builds fine. >> >> I'm getting build errors on AIX which are a bit more >> complicated, still looking.. >> >> Thanks, Thomas >> >> >> On Wed, Mar 21, 2018 at 1:08 AM, >> > > wrote: >> >> Summary: Remove frame.inline.hpp,etc from header files >> and adjust transitive includes. >> >> Tested with mach5 tier1 on Oracle platforms: linux-x64, >> solaris-sparc, windows-x64.? Built with open-only sources >> using --disable-precompiled-headers on linux-x64, built >> with zero (also disable precompiled headers). 
Roman built >> with aarch64, and have request to build ppc, etc. (Please >> test this patch!) >> >> Semi-interesting details:? moved >> SignatureHandlerGenerator constructor to cpp file, moved >> interpreter_frame_stack_direction() to target specific >> hpp files (even though they're all -1), pd_last_frame to >> thread_.cpp because there isn't a >> thread_.inline.hpp file, lastly moved >> InterpreterRuntime::LastFrameAccessor into >> interpreterRuntime.cpp file, and a few other functions >> moved in shared code. >> >> This is the last of this include file technical debt >> cleanup that I'm going to do.? See bug for more information. >> >> open webrev at >> http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> >> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> >> I'll update the copyrights when I commit. >> >> Thanks, >> Coleen >> >> >> > > From thomas.stuefe at gmail.com Wed Mar 21 16:24:20 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 21 Mar 2018 17:24:20 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <3cb116ce-b9d3-5edf-a9fa-ebab3f4bc899@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <3cb116ce-b9d3-5edf-a9fa-ebab3f4bc899@oracle.com> Message-ID: On Wed, Mar 21, 2018 at 5:02 PM, wrote: > > > On 3/21/18 11:33 AM, Thomas St?fe wrote: > > Hi Coleen, > > On Wed, Mar 21, 2018 at 1:39 PM, wrote: > >> >> Thomas, >> >> Thank you for building this. >> >> On 3/21/18 7:50 AM, Thomas St?fe wrote: >> >> Hi Coleen, >> >> I think your patch uncovered an issue. >> >> I saw this weird compile error on AIX: >> >> 471 54 | bool is_sigtrap_ic_miss_check() { >> 471 55 | assert(UseSIGTRAP, "precondition"); >> 471 56 | return MacroAssembler::is_trap_ic_mis >> s_check(long_at(0)); >> ===========================^ >> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >> line 56.12: 1540-0062 (S) The incomplete class "MacroAssembler" must not be >> used as a qualifier. >> 471 57 | } >> >> in a number of places. But the definition of class MacroAssembler was >> available. So I checked if MacroAssembler was accidentally pulled into a >> namespace or a class, and sure enough, your patch caused it to be defined >> *inside* the class InterpreterRuntime. See interpreterRuntime.hpp: >> >> class InterpreterRuntime: AllStatic { >> ... >> // Platform dependent stuff >> #include CPU_HEADER(interpreterRT) >> ... >> }; >> >> which pulls in the content of interpreterRT_ppc.hpp. >> >> interpreterRT_ppc.hpp includes >> >> #include "asm/macroAssembler.hpp" >> #include "memory/allocation.hpp" >> >> (minus allocation.hpp after your patch) >> >> which is certainly an error, yes? We should not pull in any includes into >> a class definition. >> >> >> Yes, I had this problem with x86 which was very befuddling. I hate that >> we include files in the middle of class definitions! >> > > It is annoying. I had errors like this several times over the last years > already, especially in the os_xxx_xxx "headers". > > All these AllStatic functions would be a perfect fit for C++ namespaces. > > > Yes, we were going to have namespace os at one point. namespace metaspace > would also be nice. > > > >> >> >> I wondered why this did not cause errors earlier, but the include order >> changed with your patch. Before the patch, the error was covered by a >> different include order: nothing was really included by >> interpreterRT_ppc.hpp, the include directives were noops. 
I think this was >> caused by src/hotspot/share/prims/methodHandles.hpp pulling >> frame.inline.hpp and via that path pulling macroAssembler.hpp. With your >> patch, it pulls only frame.hpp. >> >> One could certainly work around that issue but the real fix would be to >> not include anything in files which are included into other classes. Those >> are not "real" includes anyway. And maybe add a comment to that file :) >> >> >> I will add a comment to all of these like: >> >> // This is included in the middle of class Interpreter. >> // Do not include files here. >> > > Thank you. This would be also valuable for all the os_.hpp files. > > > I didn't change these files so I don't want to update them. This should > be another RFE. > > Oddly frame_.hpp files include synchronizer.hpp but I don't want to > fix this right now. > > > >> Hm so I need to add the #include for macroAssembler.hpp somewhere new >> like nativeInst_ppc.hpp or does just removing it from interpreterRT_ppc.hpp >> fix the problem? >> >> > In this case it seems to be sufficient to remove the include from > interpreterRT_ppc.hpp: > > - .../hotspot $ hg diff > diff -r d3daa45c2c8d src/hotspot/cpu/ppc/interpreterRT_ppc.hpp > --- a/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 12:25:06 2018 > +0100 > +++ b/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 16:27:02 2018 > +0100 > @@ -26,7 +26,6 @@ > #ifndef CPU_PPC_VM_INTERPRETERRT_PPC_HPP > #define CPU_PPC_VM_INTERPRETERRT_PPC_HPP > > -#include "asm/macroAssembler.hpp" > > // native method calls > > > The build went thru afterwards. I assume MacroAssembler was not really > needed for linkResolver.cpp. > > Thanks, Thomas > > > > This is great. Can you click on the files and Review it? > you were not lying, this was tedious :-) This is a good cleanup! Love that so much unnecessary inline functions are moved to cpp files. Remarks: src/hotspot/cpu/aarch64/interpreterRT_aarch64.hpp src/hotspot/cpu/sparc/interpreterRT_sparc.hpp also include headers and should not. -- http://cr.openjdk.java.net/~coleenp/8199809.01/webrev/src/hotspot/share/runtime/vframe.hpp.udiff.html The prototypes of those methods moved to vframe.inline.hpp should be marked as inline. Thanks, Thomas > Thanks, > Coleen > > thanks, >> Coleen >> >> >> >> >> Thanks, Thomas >> >> >> >> >> >> >> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> wrote: >> >>> Hi Coleen, >>> >>> linuxs390 needs this: >>> >>> - .../source $ hg diff >>> diff -r daf3abb9031f src/hotspot/cpu/s390/interpreterRT_s390.cpp >>> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >>> 08:37:04 2018 +0100 >>> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp Wed Mar 21 >>> 11:12:03 2018 +0100 >>> @@ -65,7 +65,7 @@ >>> } >>> >>> // Implementation of SignatureHandlerGenerator >>> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHan >>> dlerGenerator( >>> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHan >>> dlerGenerator( >>> const methodHandle& method, CodeBuffer* buffer) : >>> NativeSignatureIterator(method) { >>> _masm = new MacroAssembler(buffer); >>> _fp_arg_nr = 0; >>> >>> (typo). Otherwise it builds fine. >>> >>> I'm getting build errors on AIX which are a bit more complicated, still >>> looking.. >>> >>> Thanks, Thomas >>> >>> >>> On Wed, Mar 21, 2018 at 1:08 AM, wrote: >>> >>>> Summary: Remove frame.inline.hpp,etc from header files and adjust >>>> transitive includes. >>>> >>>> Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, >>>> windows-x64. 
Built with open-only sources using >>>> --disable-precompiled-headers on linux-x64, built with zero (also disable >>>> precompiled headers). Roman built with aarch64, and have request to build >>>> ppc, etc. (Please test this patch!) >>>> >>>> Semi-interesting details: moved SignatureHandlerGenerator constructor >>>> to cpp file, moved interpreter_frame_stack_direction() to target >>>> specific hpp files (even though they're all -1), pd_last_frame to >>>> thread_.cpp because there isn't a thread_.inline.hpp file, >>>> lastly moved InterpreterRuntime::LastFrameAccessor into >>>> interpreterRuntime.cpp file, and a few other functions moved in shared code. >>>> >>>> This is the last of this include file technical debt cleanup that I'm >>>> going to do. See bug for more information. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >>>> >>>> I'll update the copyrights when I commit. >>>> >>>> Thanks, >>>> Coleen >>>> >>> >>> >> >> > > From irogers at google.com Wed Mar 21 17:05:01 2018 From: irogers at google.com (Ian Rogers) Date: Wed, 21 Mar 2018 17:05:01 +0000 Subject: GetPrimitiveArrayCritical vs GetByteArrayRegion: 140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream In-Reply-To: References: <22bbd632-47c1-7702-bd65-6a51f0bbad02@oracle.com> <9b5ff318-af41-03c4-efdc-b41b97cf6e61@oracle.com> <5A9D8C60.3050505@oracle.com> <5A9D8F33.5060500@oracle.com> <5A9DAC84.906@oracle.com> <5A9DAE87.8020801@oracle.com> <1521102348.2448.25.camel@oracle.com> <23639144-5217-4A0F-930C-EF24B4976544@oracle.com> <1521451687.2323.5.camel@oracle.com> Message-ID: Thanks for accepting these as bugs/RFEs: "140x slow-down using -Xcheck:jni and java.util.zip.DeflaterOutputStream" https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8199920 "Deprecate JNI critical APIs" https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8199919 "Report tardy threads as potential TTSP issues" https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8199921 "With -Xcheck:jni warn when a JNI critical region is "too long"" https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8199922 "Reduce size of JNI critical regions in JDK (for example via loop strip mining)" https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8199916 Ian On Tue, Mar 20, 2018 at 7:07 PM Ian Rogers wrote: > Thanks, via bugreport.java.com I filed bugs/RFEs with ids 9053047, > 9053048, 9053049, 9053050 and 9053051. > Ian > > On Mon, Mar 19, 2018 at 2:28 AM Thomas Schatzl > wrote: > >> Hi, >> >> On Fri, 2018-03-16 at 17:19 +0000, Ian Rogers wrote: >> > Thanks Paul, very interesting. >> > >> > On Fri, Mar 16, 2018 at 9:21 AM Paul Sandoz >> > wrote: >> > > Hi Ian, Thomas, >> > > >> > > [...] >> > > (This is also something we need to consider if we modify buffers to >> > > support capacities larger than Integer.MAX_VALUE. Also connects >> > > with Project Panama.) >> > > >> > > If Thomas has not done so or does not plan to i can log an issue >> > > for you. >> > > >> > >> > That'd be great. I wonder if identifying more TTSP issues should also >> > be a bug. Its interesting to observe that overlooking TTSP in C2 >> > motivated the Unsafe.copyMemory change permitting a fresh TTSP issue. >> > If TTSP is a 1st class issue then maybe we can deprecate JNI critical >> > regions to support that effort :-) >> >> Please log an issue. I am still a bit unsure what and how many issues >> should be filed. 
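(A minimal sketch of the two JNI patterns the subject line contrasts -- illustrative only, the function name is made up and this is not the actual java.util.zip native code:

#include <jni.h>

extern "C" JNIEXPORT void JNICALL
Java_Example_process(JNIEnv* env, jclass, jbyteArray buf, jint len) {
  // Variant A: critical access. Usually no copy, but between Get and Release
  // the VM may have to hold off GC, which is where the time-to-safepoint
  // (TTSP) concern in this thread comes from.
  jbyte* p = (jbyte*) env->GetPrimitiveArrayCritical(buf, NULL);
  if (p != NULL) {
    // ... work on p[0..len) ...
    env->ReleasePrimitiveArrayCritical(buf, p, JNI_ABORT);
  }

  // Variant B: region copy. Pays for a copy into a local buffer, but holds
  // no critical region while the work is done.
  jbyte local[512];
  jint n = len < 512 ? len : 512;
  env->GetByteArrayRegion(buf, 0, n, local);
  // ... work on local[0..n) ...
}

Which of the two is cheaper depends on array size and on how long the critical region would otherwise be held -- the trade-off behind the RFEs listed above.)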
>> >> @Ian: at bugreports.oracle.com everyone may file bug reports without >> the need for an account. >> It will take some time until they show up in Jira due to vetting, but >> if you have a good case, and can e.g. link to the mailing list, this >> should be painless. >> >> Thanks, >> Thomas >> >> From stefan.karlsson at oracle.com Wed Mar 21 17:27:52 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 21 Mar 2018 18:27:52 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc Message-ID: Hi all, Please review this patch to get rid of the oopDesc::load/store functions and to move the oopDesc::encode/decode functions to a new CompressedOops subsystem. http://cr.openjdk.java.net/~stefank/8199946/webrev.01 https://bugs.openjdk.java.net/browse/JDK-8199946 When the Access API was introduced many of the usages of oopDesc::load_decode_heap_oop, and friends, were replaced by calls to the Access API. However, there are still some usages of these functions, most notably in the GC code. This patch is two-fold: 1) It replaces the oopDesc load and store calls with RawAccess equivalents. 2) It moves the oopDesc encode and decode functions to a new, separate, subsystem called CompressedOops. A future patch could even move all the Universe::_narrow_oop variables over to CompressedOops. The second part has the nice property that it breaks up a circular dependency between oop.inline.hpp and access.inline.hpp. After the change we have: oop.inline.hpp includes access.inline.hpp and compressedOops.inline.hpp; access.inline.hpp includes compressedOops.inline.hpp. Thanks, StefanK From stefan.karlsson at oracle.com Wed Mar 21 17:36:10 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 21 Mar 2018 18:36:10 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: Message-ID: <9dd4bc75-e6e3-0409-2cb2-ea2b7f43ae47@oracle.com> Hi Coleen, This looks good to me (minus the comments from others in this thread). I wonder about the functions in the new vframe.inline.hpp file: http://cr.openjdk.java.net/~coleenp/8199809.01/webrev/src/hotspot/share/runtime/vframe.inline.hpp.html I thought vframeStreamCommon::fill_from_frame and friends were going to move to a .cpp, and then we could get rid of even more .inline.hpp includes. Did you change your mind about that? Thanks, StefanK On 2018-03-21 01:08, coleen.phillimore at oracle.com wrote: > Summary: Remove frame.inline.hpp,etc from header files and adjust > transitive includes. > > Tested with mach5 tier1 on Oracle platforms: linux-x64, solaris-sparc, > windows-x64. Built with open-only sources using > --disable-precompiled-headers on linux-x64, built with zero (also > disable precompiled headers). Roman built with aarch64, and have > request to build ppc, etc. (Please test this patch!) > > Semi-interesting details: moved SignatureHandlerGenerator constructor > to cpp file, moved interpreter_frame_stack_direction() to target > specific hpp files (even though they're all -1), pd_last_frame to > thread_.cpp because there isn't a thread_.inline.hpp > file, lastly moved InterpreterRuntime::LastFrameAccessor into > interpreterRuntime.cpp file, and a few other functions moved in shared > code. > > This is the last of this include file technical debt cleanup that I'm > going to do. See bug for more information.
> > open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8199809 > > I'll update the copyrights when I commit. > > Thanks, > Coleen From vladimir.kozlov at oracle.com Wed Mar 21 17:59:08 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 21 Mar 2018 10:59:08 -0700 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <6e56a98f-4ed3-74fc-9f21-1a9c2b247a03@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> <6e56a98f-4ed3-74fc-9f21-1a9c2b247a03@oracle.com> Message-ID: <1C8F5F0E-9E5B-4EC0-818D-DCFBE9A80346@oracle.com> Okay. Looks good. Thanks, Vladimir > On Mar 21, 2018, at 12:38 AM, Tobias Hartmann wrote: > > Hi Vladimir and David, > > thanks for the review! > > On 20.03.2018 23:23, Vladimir Kozlov wrote: >> Actually you don't need to specify EliminateAutoBox in tests because it is true by default: > > Yes, but I agree with David that it's better to leave the flag in to state what the test is supposed > to be run with. Also, AutoBoxCacheMax is a C2 specific flag as well. > > Here's the new webrev with -XX:+IgnoreUnrecognizedVMOptions added to the tests: > http://cr.openjdk.java.net/~thartmann/8199777/webrev.01/ > > Thanks, > Tobias From coleen.phillimore at oracle.com Wed Mar 21 18:21:07 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 14:21:07 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <3cb116ce-b9d3-5edf-a9fa-ebab3f4bc899@oracle.com> Message-ID: <6c5cd7f5-586b-ab96-38a6-638e41fdae5c@oracle.com> Thank you for reviewing this. On 3/21/18 12:24 PM, Thomas St?fe wrote: > > > On Wed, Mar 21, 2018 at 5:02 PM, > wrote: > > > > On 3/21/18 11:33 AM, Thomas St?fe wrote: >> Hi Coleen, >> >> On Wed, Mar 21, 2018 at 1:39 PM, > > wrote: >> >> >> Thomas, >> >> Thank you for building this. >> >> On 3/21/18 7:50 AM, Thomas St?fe wrote: >>> Hi Coleen, >>> >>> I think your patch uncovered an issue. >>> >>> I saw this weird compile error on AIX: >>> >>> ? ?471? ? ?54 |? ?bool is_sigtrap_ic_miss_check() { >>> ? ?471? ? ?55 | ?assert(UseSIGTRAP, "precondition"); >>> ? ?471? ? ?56 |? ? ?return >>> MacroAssembler::is_trap_ic_miss_check(long_at(0)); >>> ===========================^ >>> "/priv/d031900/openjdk/jdk-hs/source/src/hotspot/cpu/ppc/nativeInst_ppc.hpp", >>> line 56.12: 1540-0062 (S) The incomplete class >>> "MacroAssembler" must not be used as a qualifier. >>> ? ?471? ? ?57 |? ?} >>> >>> in a number of places. But the definition of class >>> MacroAssembler was available. So I checked if MacroAssembler >>> was accidentally pulled into a namespace or a class, and >>> sure enough, your patch caused it to be defined *inside* the >>> class InterpreterRuntime. See interpreterRuntime.hpp: >>> >>> class InterpreterRuntime: AllStatic { >>> ... >>> ? // Platform dependent stuff >>> #include CPU_HEADER(interpreterRT) >>> ... >>> }; >>> >>> which pulls in the content of interpreterRT_ppc.hpp. >>> >>> interpreterRT_ppc.hpp includes >>> >>> #include "asm/macroAssembler.hpp" >>> #include "memory/allocation.hpp" >>> >>> (minus allocation.hpp after your patch) >>> >>> which is certainly an error, yes? We should not pull in any >>> includes into a class definition. >> >> Yes, I had this problem with x86 which was very befuddling.? 
>> I hate that we include files in the middle of class definitions! >> >> >> It is annoying. I had errors like this several times over the >> last years already, especially in the os_xxx_xxx "headers". >> >> All these AllStatic functions would be a perfect fit for C++ >> namespaces. > > Yes, we were going to have namespace os at one point. namespace > metaspace would also be nice. >> >> >>> >>> I wondered why this did not cause errors earlier, but the >>> include order changed with your patch. Before the patch, the >>> error was covered by a different include order: nothing was >>> really included by interpreterRT_ppc.hpp, the include >>> directives were noops. I think this was caused by >>> src/hotspot/share/prims/methodHandles.hpp pulling >>> frame.inline.hpp and via that path pulling >>> macroAssembler.hpp. With your patch, it pulls only frame.hpp. >>> >>> One could certainly work around that issue but the real fix >>> would be to not include anything in files which are included >>> into other classes. Those are not "real" includes anyway. >>> And maybe add a comment to that file :) >> >> I will add a comment to all of these like: >> >> // This is included in the middle of class Interpreter. >> // Do not include files here. >> >> >> Thank you. This would be also valuable for all the >> os_.hpp files. > > I didn't change these files so I don't want to update them.? This > should be another RFE. > > Oddly frame_.hpp files include synchronizer.hpp but I don't > want to fix this right now. >> >> >> Hm so I need to add the #include for macroAssembler.hpp >> somewhere new like nativeInst_ppc.hpp or does just removing >> it from interpreterRT_ppc.hpp fix the problem? >> >> >> In this case it seems to be sufficient to remove the include from >> interpreterRT_ppc.hpp: >> >> - .../hotspot $ hg diff >> diff -r d3daa45c2c8d src/hotspot/cpu/ppc/interpreterRT_ppc.hpp >> --- a/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 >> 12:25:06 2018 +0100 >> +++ b/src/hotspot/cpu/ppc/interpreterRT_ppc.hpp Wed Mar 21 >> 16:27:02 2018 +0100 >> @@ -26,7 +26,6 @@ >> ?#ifndef CPU_PPC_VM_INTERPRETERRT_PPC_HPP >> ?#define CPU_PPC_VM_INTERPRETERRT_PPC_HPP >> >> -#include "asm/macroAssembler.hpp" >> >> ?// native method calls >> >> >> The build went thru afterwards. I assume MacroAssembler was not >> really needed for linkResolver.cpp. >> >> Thanks, Thomas > > This is great.? Can you click on the files and Review it? > > > you were not lying, this was tedious :-) > > This is a good cleanup! Love that so much unnecessary inline functions > are moved to cpp files. > > Remarks: > > src/hotspot/cpu/aarch64/interpreterRT_aarch64.hpp > src/hotspot/cpu/sparc/interpreterRT_sparc.hpp > > also include headers and should not. Yes, I removed these headers when I added the comment above and rebuilt the sparc version. > > -- > > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev/src/hotspot/share/runtime/vframe.hpp.udiff.html > > > The prototypes of those methods moved to vframe.inline.hpp should be > marked as inline. Okay, I'll change it and retest. Thanks, Coleen > > Thanks, Thomas > > Thanks, > Coleen > >> thanks, >> Coleen >> >> >> >>> >>> Thanks, Thomas >>> >>> >>> >>> >>> >>> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >>> > >>> wrote: >>> >>> Hi Coleen, >>> >>> linuxs390 needs this: >>> >>> - .../source $ hg diff >>> diff -r daf3abb9031f >>> src/hotspot/cpu/s390/interpreterRT_s390.cpp >>> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? 
>>> ?Wed Mar 21 08:37:04 2018 +0100 >>> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp ? ? >>> ?Wed Mar 21 11:12:03 2018 +0100 >>> @@ -65,7 +65,7 @@ >>> ?} >>> >>> ?// Implementation of SignatureHandlerGenerator >>> -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( >>> ? ? ?const methodHandle& method, CodeBuffer* buffer) : >>> NativeSignatureIterator(method) { >>> ? ?_masm = new MacroAssembler(buffer); >>> ? ?_fp_arg_nr = 0; >>> >>> (typo). Otherwise it builds fine. >>> >>> I'm getting build errors on AIX which are a bit more >>> complicated, still looking.. >>> >>> Thanks, Thomas >>> >>> >>> On Wed, Mar 21, 2018 at 1:08 AM, >>> >> > wrote: >>> >>> Summary: Remove frame.inline.hpp,etc from header >>> files and adjust transitive includes. >>> >>> Tested with mach5 tier1 on Oracle platforms: >>> linux-x64, solaris-sparc, windows-x64. Built with >>> open-only sources using >>> --disable-precompiled-headers on linux-x64, built >>> with zero (also disable precompiled headers).? Roman >>> built with aarch64, and have request to build ppc, >>> etc.? (Please test this patch!) >>> >>> Semi-interesting details:? moved >>> SignatureHandlerGenerator constructor to cpp file, >>> moved interpreter_frame_stack_direction() to target >>> specific hpp files (even though they're all -1), >>> pd_last_frame to thread_.cpp because there >>> isn't a thread_.inline.hpp file, lastly >>> moved InterpreterRuntime::LastFrameAccessor into >>> interpreterRuntime.cpp file, and a few other >>> functions moved in shared code. >>> >>> This is the last of this include file technical debt >>> cleanup that I'm going to do. See bug for more >>> information. >>> >>> open webrev at >>> http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >>> >>> bug link >>> https://bugs.openjdk.java.net/browse/JDK-8199809 >>> >>> >>> I'll update the copyrights when I commit. >>> >>> Thanks, >>> Coleen >>> >>> >>> >> >> > > From coleen.phillimore at oracle.com Wed Mar 21 18:24:29 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 21 Mar 2018 14:24:29 -0400 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <9dd4bc75-e6e3-0409-2cb2-ea2b7f43ae47@oracle.com> References: <9dd4bc75-e6e3-0409-2cb2-ea2b7f43ae47@oracle.com> Message-ID: <88d9ab65-6c0f-f4ad-67ae-d385ee0ff2d0@oracle.com> On 3/21/18 1:36 PM, Stefan Karlsson wrote: > Hi Coleen, > > This looks good to me (minus the comments from others in this thread). > > I wonder about the functions in the new vframe.inline.hpp file: > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev/src/hotspot/share/runtime/vframe.inline.hpp.html > > > I thought vframeStreamCommon::fill_from_frame and friends were going > to move a .cpp, and then we could get rid of even more .inline.hpp > includes. Did you change your mind about that? Yes.? There is code that cares about stack walking performance, and I didn't want to risk changing this. Coleen > > Thanks, > StefanK > > > On 2018-03-21 01:08, coleen.phillimore at oracle.com wrote: >> Summary: Remove frame.inline.hpp,etc from header files and adjust >> transitive includes. >> >> Tested with mach5 tier1 on Oracle platforms: linux-x64, >> solaris-sparc, windows-x64.? Built with open-only sources using >> --disable-precompiled-headers on linux-x64, built with zero (also >> disable precompiled headers).? 
Roman built with aarch64, and have >> request to build ppc, etc.? (Please test this patch!) >> >> Semi-interesting details:? moved SignatureHandlerGenerator >> constructor to cpp file, moved interpreter_frame_stack_direction() to >> target specific hpp files (even though they're all -1), pd_last_frame >> to thread_.cpp because there isn't a >> thread_.inline.hpp file, lastly moved >> InterpreterRuntime::LastFrameAccessor into interpreterRuntime.cpp >> file, and a few other functions moved in shared code. >> >> This is the last of this include file technical debt cleanup that I'm >> going to do.? See bug for more information. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> I'll update the copyrights when I commit. >> >> Thanks, >> Coleen From rkennke at redhat.com Wed Mar 21 19:18:37 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 21 Mar 2018 20:18:37 +0100 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <5AB135F6.3020508@oracle.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> <5AB0E9BA.5000002@oracle.com> <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> <5AB135F6.3020508@oracle.com> Message-ID: <3b8ce496-c053-4dc0-ff54-ccd9a6e74198@redhat.com> > >> Diff: >> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01.diff/ >> Full: >> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01/ >> >> Better now? > > Yes, much better. It looks good now. Thank you. Erik: Thank you for reviewing. I believe this needs one more review, doesn't it? Thanks, Roman From erik.joelsson at oracle.com Wed Mar 21 22:05:48 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 21 Mar 2018 15:05:48 -0700 Subject: RFR: JDK-8198652: Stop linking with -base:0x8000000 on Windows Message-ID: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> On Windows, we have been linking libjvm.so with -base:0x8000000 since forever. This may have been a good idea on earlier versions of windows, but with VS2017 it generates a warning and with ASLR, the address a given binary is loaded at will vary between runs anyway, so there is little point setting this linker option anymore (since Windows Vista). So I propose we just remove it. Bug: https://bugs.openjdk.java.net/browse/JDK-8198652 Webrev: http://cr.openjdk.java.net/~erikj/8198652/webrev.01/ /Erik From david.holmes at oracle.com Wed Mar 21 22:11:51 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 22 Mar 2018 08:11:51 +1000 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: <1C8F5F0E-9E5B-4EC0-818D-DCFBE9A80346@oracle.com> References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> <6e56a98f-4ed3-74fc-9f21-1a9c2b247a03@oracle.com> <1C8F5F0E-9E5B-4EC0-818D-DCFBE9A80346@oracle.com> Message-ID: +1 (in case there was any doubt) David On 22/03/2018 3:59 AM, Vladimir Kozlov wrote: > Okay. Looks good. > > Thanks, > Vladimir > >> On Mar 21, 2018, at 12:38 AM, Tobias Hartmann wrote: >> >> Hi Vladimir and David, >> >> thanks for the review! >> >> On 20.03.2018 23:23, Vladimir Kozlov wrote: >>> Actually you don't need to specify EliminateAutoBox in tests because it is true by default: >> >> Yes, but I agree with David that it's better to leave the flag in to state what the test is supposed >> to be run with. 
Also, AutoBoxCacheMax is a C2 specific flag as well. >> >> Here's the new webrev with -XX:+IgnoreUnrecognizedVMOptions added to the tests: >> http://cr.openjdk.java.net/~thartmann/8199777/webrev.01/ >> >> Thanks, >> Tobias > From david.holmes at oracle.com Wed Mar 21 22:34:06 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 22 Mar 2018 08:34:06 +1000 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> Message-ID: <8eedc970-6f09-268c-2a33-97b1460a32cf@oracle.com> On 22/03/2018 1:35 AM, Thomas St?fe wrote: > On Wed, Mar 21, 2018 at 2:02 PM, David Holmes On 21/03/2018 10:39 PM, coleen.phillimore at oracle.com > Hm so I need to add the #include for macroAssembler.hpp > somewhere new like nativeInst_ppc.hpp or does just removing it > from interpreterRT_ppc.hpp fix the problem? > > > Whatever code is in the included platform specific header still > needs to ensure the definitions that it needs have been included. If > those are shared files then you may just be able to move them into > the shared cpp file, but any platform specific headers must still be > included in the platform specific headers. > > > I disagree in this particular case. In my opinion, headers whose purpose > is to be included into class declarations should not include other headers. ??? If the code you are including relies on things from other header files then you have no choice else it won't compile! David > Thanks, Thomas > > David > ----- > > thanks, > Coleen > > > > Thanks, Thomas > > > > > > On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe > > >> wrote: > > ??? Hi Coleen, > > ??? linuxs390 needs this: > > ??? - .../source $ hg diff > ??? diff -r daf3abb9031f > src/hotspot/cpu/s390/interpreterRT_s390.cpp > ??? --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp > ?Wed Mar 21 > ??? 08:37:04 2018 +0100 > ??? +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp > ?Wed Mar 21 > ??? 11:12:03 2018 +0100 > ??? @@ -65,7 +65,7 @@ > ??? ?} > > ??? ?// Implementation of SignatureHandlerGenerator > > -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > > > +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > > ??? ? ? ?const methodHandle& method, CodeBuffer* buffer) : > ??? NativeSignatureIterator(method) { > ??? ? ?_masm = new MacroAssembler(buffer); > ??? ? ?_fp_arg_nr = 0; > > ??? (typo). Otherwise it builds fine. > > ??? I'm getting build errors on AIX which are a bit more > complicated, > ??? still looking.. > > ??? Thanks, Thomas > > > ??? On Wed, Mar 21, 2018 at 1:08 AM, > > ??? >> wrote: > > ??????? Summary: Remove frame.inline.hpp,etc from header > files and > ??????? adjust transitive includes. > > ??????? Tested with mach5 tier1 on Oracle platforms: linux-x64, > ??????? solaris-sparc, windows-x64.? Built with open-only > sources > ??????? using --disable-precompiled-headers on linux-x64, > built with > ??????? zero (also disable precompiled headers). Roman > built with > ??????? aarch64, and have request to build ppc, etc. > (Please test > ??????? this patch!) > > ??????? Semi-interesting details:? moved > SignatureHandlerGenerator > ??????? constructor to cpp file, moved > ??????? interpreter_frame_stack_direction() to target > specific hpp > ??????? files (even though they're all -1), pd_last_frame to > ??????? thread_.cpp because there isn't a > ??????? 
thread_.inline.hpp file, lastly moved > ??????? InterpreterRuntime::LastFrameAccessor into > ??????? interpreterRuntime.cpp file, and a few other > functions moved > ??????? in shared code. > > ??????? This is the last of this include file technical > debt cleanup > ??????? that I'm going to do.? See bug for more information. > > ??????? open webrev at > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > > > > > ??????? bug link > https://bugs.openjdk.java.net/browse/JDK-8199809 > > ??????? > > > ??????? I'll update the copyrights when I commit. > > ??????? Thanks, > ??????? Coleen > > > > > From tim.bell at oracle.com Wed Mar 21 23:36:36 2018 From: tim.bell at oracle.com (Tim Bell) Date: Wed, 21 Mar 2018 16:36:36 -0700 Subject: RFR: JDK-8198652: Stop linking with -base:0x8000000 on Windows In-Reply-To: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> References: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> Message-ID: <5AB2EC84.4030006@oracle.com> Erik: > On Windows, we have been linking libjvm.so with -base:0x8000000 since > forever. This may have been a good idea on earlier versions of windows, > but with VS2017 it generates a warning and with ASLR, the address a > given binary is loaded at will vary between runs anyway, so there is > little point setting this linker option anymore (since Windows Vista). > So I propose we just remove it. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8198652 > > Webrev: http://cr.openjdk.java.net/~erikj/8198652/webrev.01/ Looks good. /Tim From felix.yang at huawei.com Thu Mar 22 01:42:50 2018 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Thu, 22 Mar 2018 01:42:50 +0000 Subject: [aarch64-port-dev ] RFR: 8193266: AArch64: TestOptionsWithRanges.java SIGSEGV In-Reply-To: References: <7dbf43d1-72b9-5720-3878-ce31f3e8f555@redhat.com> <20e812bc-d132-9863-815b-345283f9517e@redhat.com> <3c83440f-dd4b-f988-1f96-afa88dff36eb@redhat.com> <9f523448-5e21-1f4d-c22b-45977f271fb8@samersoff.net> Message-ID: Hi, Looks like the " Summary:" entry is missing in your changeset. I have created a new changeset adding the this entry and pushed. Thanks, Felix > > Thank you for everyone's attention. I've put the updated patch here: > > http://cr.openjdk.java.net/~smonteith/8193266/webrev-7/ > > I've added Andrew Haley and Dmitry Samersoff as the reviewer. > From thomas.stuefe at gmail.com Thu Mar 22 05:52:55 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Mar 2018 06:52:55 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <8eedc970-6f09-268c-2a33-97b1460a32cf@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> <8eedc970-6f09-268c-2a33-97b1460a32cf@oracle.com> Message-ID: On Wed, Mar 21, 2018 at 11:34 PM, David Holmes wrote: > > > On 22/03/2018 1:35 AM, Thomas St?fe wrote: > >> On Wed, Mar 21, 2018 at 2:02 PM, David Holmes > On 21/03/2018 10:39 PM, coleen.phillimore at oracle.com >> Hm so I need to add the #include for macroAssembler.hpp >> somewhere new like nativeInst_ppc.hpp or does just removing it >> from interpreterRT_ppc.hpp fix the problem? >> >> >> Whatever code is in the included platform specific header still >> needs to ensure the definitions that it needs have been included. If >> those are shared files then you may just be able to move them into >> the shared cpp file, but any platform specific headers must still be >> included in the platform specific headers. 
>> >> >> I disagree in this particular case. In my opinion, headers whose purpose >> is to be included into class declarations should not include other headers. >> > > ??? If the code you are including relies on things from other header files > then you have no choice else it won't compile! > > Is this a misunderstanding? I am not saying not to provide the dependencies. But they cannot be provided from within this header, if this header gets dropped in right in the middle of a class definition (like e.g. os_aix.hpp), right? If, for an easy example, my header uses pthread_t, I cannot just simply include pthread.h, because then all declarations of pthread.h appear in class scope in the surrounding class. So I have to make sure the platform header is included elsewhere, preferably at the start of the surrounding header, or of the cpp file. Thomas > David > > Thanks, Thomas >> >> David >> ----- >> >> thanks, >> Coleen >> >> >> >> Thanks, Thomas >> >> >> >> >> >> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> >> > >> >> wrote: >> >> Hi Coleen, >> >> linuxs390 needs this: >> >> - .../source $ hg diff >> diff -r daf3abb9031f >> src/hotspot/cpu/s390/interpreterRT_s390.cpp >> --- a/src/hotspot/cpu/s390/interpreterRT_s390.cpp >> Wed Mar 21 >> 08:37:04 2018 +0100 >> +++ b/src/hotspot/cpu/s390/interpreterRT_s390.cpp >> Wed Mar 21 >> 11:12:03 2018 +0100 >> @@ -65,7 +65,7 @@ >> } >> >> // Implementation of SignatureHandlerGenerator >> -InteprerterRuntime::Signature >> HandlerGenerator::SignatureHandlerGenerator( >> >> +InterpreterRuntime::Signature >> HandlerGenerator::SignatureHandlerGenerator( >> >> const methodHandle& method, CodeBuffer* buffer) : >> NativeSignatureIterator(method) { >> _masm = new MacroAssembler(buffer); >> _fp_arg_nr = 0; >> >> (typo). Otherwise it builds fine. >> >> I'm getting build errors on AIX which are a bit more >> complicated, >> still looking.. >> >> Thanks, Thomas >> >> >> On Wed, Mar 21, 2018 at 1:08 AM, >> > >> > >> >> wrote: >> >> Summary: Remove frame.inline.hpp,etc from header >> files and >> adjust transitive includes. >> >> Tested with mach5 tier1 on Oracle platforms: >> linux-x64, >> solaris-sparc, windows-x64. Built with open-only >> sources >> using --disable-precompiled-headers on linux-x64, >> built with >> zero (also disable precompiled headers). Roman >> built with >> aarch64, and have request to build ppc, etc. >> (Please test >> this patch!) >> >> Semi-interesting details: moved >> SignatureHandlerGenerator >> constructor to cpp file, moved >> interpreter_frame_stack_direction() to target >> specific hpp >> files (even though they're all -1), pd_last_frame to >> thread_.cpp because there isn't a >> thread_.inline.hpp file, lastly moved >> InterpreterRuntime::LastFrameAccessor into >> interpreterRuntime.cpp file, and a few other >> functions moved >> in shared code. >> >> This is the last of this include file technical >> debt cleanup >> that I'm going to do. See bug for more information. >> >> open webrev at >> http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> >> > Ecoleenp/8199809.01/webrev >> > >> bug link >> https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> > > >> >> I'll update the copyrights when I commit. 
>> >> Thanks, >> Coleen >> >> >> >> >> >> From david.holmes at oracle.com Thu Mar 22 06:06:14 2018 From: david.holmes at oracle.com (David Holmes) Date: Thu, 22 Mar 2018 16:06:14 +1000 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> <8eedc970-6f09-268c-2a33-97b1460a32cf@oracle.com> Message-ID: <9fea0f0a-315b-6157-864b-d0aaf4b0c2b2@oracle.com> On 22/03/2018 3:52 PM, Thomas St?fe wrote: > On Wed, Mar 21, 2018 at 11:34 PM, David Holmes > wrote: > > > > On 22/03/2018 1:35 AM, Thomas St?fe wrote: > > On Wed, Mar 21, 2018 at 2:02 PM, David Holmes > ? ? ?On > 21/03/2018 10:39 PM, coleen.phillimore at oracle.com > > ? ? ? ? Hm so I need to add the #include for macroAssembler.hpp > ? ? ? ? somewhere new like nativeInst_ppc.hpp or does just > removing it > ? ? ? ? from interpreterRT_ppc.hpp fix the problem? > > > ? ? Whatever code is in the included platform specific header still > ? ? needs to ensure the definitions that it needs have been > included. If > ? ? those are shared files then you may just be able to move > them into > ? ? the shared cpp file, but any platform specific headers must > still be > ? ? included in the platform specific headers. > > > I disagree in this particular case. In my opinion, headers whose > purpose is to be included into class declarations should not > include other headers. > > > ??? If the code you are including relies on things from other header > files then you have no choice else it won't compile! > > > Is this a misunderstanding? I am not saying not to provide the > dependencies. But they cannot be provided from within this header, if > this header gets dropped in right in the middle of a class definition > (like e.g. os_aix.hpp), right? > > If, for an easy example, my header uses pthread_t, I cannot just simply > include pthread.h, because then all declarations of pthread.h appear in > class scope in the surrounding class. So I have to make sure > the platform header is included elsewhere, preferably at the start of > the surrounding header, or of the cpp file. I would find it undesirable to have to put in a platform specific include, like pthread.h, if the surrounding header or cpp file were shared files. But you're saying that including it in place may not actually work anyway. David > Thomas > > > David > > Thanks, Thomas > > ? ? David > ? ? ----- > > ? ? ? ? thanks, > ? ? ? ? Coleen > > > > ? ? ? ? ? ? Thanks, Thomas > > > > > > ? ? ? ? ? ? On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe > ? ? ? ? ? ? > > ? ? ? ? ? ? > > ? ? ? ? ? ? >>> wrote: > > ? ? ? ? ? ? ???? Hi Coleen, > > ? ? ? ? ? ? ???? linuxs390 needs this: > > ? ? ? ? ? ? ???? - .../source $ hg diff > ? ? ? ? ? ? ???? diff -r daf3abb9031f > ? ? ? ? ? ? src/hotspot/cpu/s390/interpreterRT_s390.cpp > ? ? ? ? ? ? ???? --- > a/src/hotspot/cpu/s390/interpreterRT_s390.cpp > ??Wed Mar 21 > ? ? ? ? ? ? ???? 08:37:04 2018 +0100 > ? ? ? ? ? ? ???? +++ > b/src/hotspot/cpu/s390/interpreterRT_s390.cpp > ??Wed Mar 21 > ? ? ? ? ? ? ???? 11:12:03 2018 +0100 > ? ? ? ? ? ? ???? @@ -65,7 +65,7 @@ > ? ? ? ? ? ? ???? ?} > > ? ? ? ? ? ? ???? ?// Implementation of SignatureHandlerGenerator > > -InteprerterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > > > +InterpreterRuntime::SignatureHandlerGenerator::SignatureHandlerGenerator( > > ? ? ? ? ? ? ???? ? ? ?const methodHandle& method, CodeBuffer* > buffer) : > ? ? ? ? ? ? ???? 
NativeSignatureIterator(method) { > ? ? ? ? ? ? ???? ? ?_masm = new MacroAssembler(buffer); > ? ? ? ? ? ? ???? ? ?_fp_arg_nr = 0; > > ? ? ? ? ? ? ???? (typo). Otherwise it builds fine. > > ? ? ? ? ? ? ???? I'm getting build errors on AIX which are a > bit more > ? ? ? ? ? ? complicated, > ? ? ? ? ? ? ???? still looking.. > > ? ? ? ? ? ? ???? Thanks, Thomas > > > ? ? ? ? ? ? ???? On Wed, Mar 21, 2018 at 1:08 AM, > ? ? ? ? ? ? > ? ? ? ? ? ? > > ? ? ? ? ? ? ???? > > ? ? ? ? ? ? >>> wrote: > > ? ? ? ? ? ? ???????? Summary: Remove frame.inline.hpp,etc from > header > ? ? ? ? ? ? files and > ? ? ? ? ? ? ???????? adjust transitive includes. > > ? ? ? ? ? ? ???????? Tested with mach5 tier1 on Oracle > platforms: linux-x64, > ? ? ? ? ? ? ???????? solaris-sparc, windows-x64.? Built with > open-only > ? ? ? ? ? ? sources > ? ? ? ? ? ? ???????? using --disable-precompiled-headers on > linux-x64, > ? ? ? ? ? ? built with > ? ? ? ? ? ? ???????? zero (also disable precompiled headers). Roman > ? ? ? ? ? ? built with > ? ? ? ? ? ? ???????? aarch64, and have request to build ppc, > etc.? ? ? ? ? ? ?(Please test > ? ? ? ? ? ? ???????? this patch!) > > ? ? ? ? ? ? ???????? Semi-interesting details:? moved > ? ? ? ? ? ? SignatureHandlerGenerator > ? ? ? ? ? ? ???????? constructor to cpp file, moved > ? ? ? ? ? ? ???????? interpreter_frame_stack_direction() to target > ? ? ? ? ? ? specific hpp > ? ? ? ? ? ? ???????? files (even though they're all -1), > pd_last_frame to > ? ? ? ? ? ? ???????? thread_.cpp because there isn't a > ? ? ? ? ? ? ???????? thread_.inline.hpp file, lastly moved > ? ? ? ? ? ? ???????? InterpreterRuntime::LastFrameAccessor into > ? ? ? ? ? ? ???????? interpreterRuntime.cpp file, and a few other > ? ? ? ? ? ? functions moved > ? ? ? ? ? ? ???????? in shared code. > > ? ? ? ? ? ? ???????? This is the last of this include file > technical > ? ? ? ? ? ? debt cleanup > ? ? ? ? ? ? ???????? that I'm going to do.? See bug for more > information. > > ? ? ? ? ? ? ???????? open webrev at > http://cr.openjdk.java.net/~coleenp/8199809.01/webrev > > > > > > > > >> > ? ? ? ? ? ? ???????? bug link > https://bugs.openjdk.java.net/browse/JDK-8199809 > > ? ? ? ? ? ? > > > > ? ? ? ? ? ? >> > > ? ? ? ? ? ? ???????? I'll update the copyrights when I commit. > > ? ? ? ? ? ? ???????? Thanks, > ? ? ? ? ? ? ???????? Coleen > > > > > > From thomas.stuefe at gmail.com Thu Mar 22 07:16:26 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Mar 2018 08:16:26 +0100 Subject: RFR (L, tedious) 8199809: Don't include frame.inline.hpp and other.inline.hpp from .hpp files In-Reply-To: <9fea0f0a-315b-6157-864b-d0aaf4b0c2b2@oracle.com> References: <115cb618-5fe3-f59e-9b4b-fa56060ad766@oracle.com> <319623dc-512a-ba73-b8b5-af0956d0007b@oracle.com> <8eedc970-6f09-268c-2a33-97b1460a32cf@oracle.com> <9fea0f0a-315b-6157-864b-d0aaf4b0c2b2@oracle.com> Message-ID: On Thu, Mar 22, 2018 at 7:06 AM, David Holmes wrote: > On 22/03/2018 3:52 PM, Thomas St?fe wrote: > >> On Wed, Mar 21, 2018 at 11:34 PM, David Holmes > > wrote: >> >> >> >> On 22/03/2018 1:35 AM, Thomas St?fe wrote: >> >> On Wed, Mar 21, 2018 at 2:02 PM, David Holmes >> On >> 21/03/2018 10:39 PM, coleen.phillimore at oracle.com >> >> Hm so I need to add the #include for macroAssembler.hpp >> somewhere new like nativeInst_ppc.hpp or does just >> removing it >> from interpreterRT_ppc.hpp fix the problem? 
>> >> >> Whatever code is in the included platform specific header >> still >> needs to ensure the definitions that it needs have been >> included. If >> those are shared files then you may just be able to move >> them into >> the shared cpp file, but any platform specific headers must >> still be >> included in the platform specific headers. >> >> >> I disagree in this particular case. In my opinion, headers whose >> purpose is to be included into class declarations should not >> include other headers. >> >> >> ??? If the code you are including relies on things from other header >> files then you have no choice else it won't compile! >> >> >> Is this a misunderstanding? I am not saying not to provide the >> dependencies. But they cannot be provided from within this header, if this >> header gets dropped in right in the middle of a class definition (like e.g. >> os_aix.hpp), right? >> >> If, for an easy example, my header uses pthread_t, I cannot just simply >> include pthread.h, because then all declarations of pthread.h appear in >> class scope in the surrounding class. So I have to make sure >> the platform header is included elsewhere, preferably at the start of the >> surrounding header, or of the cpp file. >> > > I would find it undesirable to have to put in a platform specific include, > like pthread.h, if the surrounding header or cpp file were shared files. > But you're saying that including it in place may not actually work anyway. Yes unfortunately. The correct solution is to use C++ namespaces. Since namespaces can have multiple bodies - as opposed to class definitions - this whole include-into-a-class-definition pattern can go away. Thomas > > David > > > Thomas >> >> >> David >> >> Thanks, Thomas >> >> David >> ----- >> >> thanks, >> Coleen >> >> >> >> Thanks, Thomas >> >> >> >> >> >> On Wed, Mar 21, 2018 at 11:41 AM, Thomas St?fe >> > > > >> > >> >> > >>> wrote: >> >> Hi Coleen, >> >> linuxs390 needs this: >> >> - .../source $ hg diff >> diff -r daf3abb9031f >> src/hotspot/cpu/s390/interpreterRT_s390.cpp >> --- >> a/src/hotspot/cpu/s390/interpreterRT_s390.cpp >> Wed Mar 21 >> 08:37:04 2018 +0100 >> +++ >> b/src/hotspot/cpu/s390/interpreterRT_s390.cpp >> Wed Mar 21 >> 11:12:03 2018 +0100 >> @@ -65,7 +65,7 @@ >> } >> >> // Implementation of SignatureHandlerGenerator >> -InteprerterRuntime::Signature >> HandlerGenerator::SignatureHandlerGenerator( >> >> +InterpreterRuntime::Signature >> HandlerGenerator::SignatureHandlerGenerator( >> >> const methodHandle& method, CodeBuffer* >> buffer) : >> NativeSignatureIterator(method) { >> _masm = new MacroAssembler(buffer); >> _fp_arg_nr = 0; >> >> (typo). Otherwise it builds fine. >> >> I'm getting build errors on AIX which are a >> bit more >> complicated, >> still looking.. >> >> Thanks, Thomas >> >> >> On Wed, Mar 21, 2018 at 1:08 AM, >> > >> > > >> > >> >> > >>> wrote: >> >> Summary: Remove frame.inline.hpp,etc from >> header >> files and >> adjust transitive includes. >> >> Tested with mach5 tier1 on Oracle >> platforms: linux-x64, >> solaris-sparc, windows-x64. Built with >> open-only >> sources >> using --disable-precompiled-headers on >> linux-x64, >> built with >> zero (also disable precompiled headers). >> Roman >> built with >> aarch64, and have request to build ppc, >> etc. (Please test >> this patch!) 
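To make the include-inside-a-class-definition problem discussed above concrete, here is a stripped-down, self-contained sketch (hypothetical names, not HotSpot code) of why a header that gets pasted into a class body cannot pull in its own dependencies, and why a namespace would not have that restriction:

#include <pthread.h>          // must happen out here, at namespace scope

class os_like {
  // #include <pthread.h>     // NOT here: everything pthread.h declares
  //                          // would be parsed at class scope
  static pthread_t _watcher;  // fine, pthread_t is already visible
};

// Namespaces may be reopened, so a platform-specific part could be a
// separate body that includes whatever it needs before reopening it:
namespace os_ns {
  extern pthread_t _watcher;
}
namespace os_ns {             // a second body is legal; a class body is closed
  inline void create_watcher() {}
}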
>> >> Semi-interesting details: moved >> SignatureHandlerGenerator >> constructor to cpp file, moved >> interpreter_frame_stack_direction() to >> target >> specific hpp >> files (even though they're all -1), >> pd_last_frame to >> thread_.cpp because there isn't a >> thread_.inline.hpp file, lastly >> moved >> InterpreterRuntime::LastFrameAccessor into >> interpreterRuntime.cpp file, and a few other >> functions moved >> in shared code. >> >> This is the last of this include file >> technical >> debt cleanup >> that I'm going to do. See bug for more >> information. >> >> open webrev at >> http://cr.openjdk.java.net/~coleenp/8199809.01/webrev >> >> > oleenp/8199809.01/webrev >> > >> < >> http://cr.openjdk.java.net/%7Ecoleenp/8199809.01/webrev >> >> > Ecoleenp/8199809.01/webrev >> >> >> bug link >> https://bugs.openjdk.java.net/browse/JDK-8199809 >> >> > > >> > t/browse/JDK-8199809 >> >> > >> >> >> I'll update the copyrights when I commit. >> >> Thanks, >> Coleen >> >> >> >> >> >> >> From tobias.hartmann at oracle.com Thu Mar 22 07:23:12 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 22 Mar 2018 08:23:12 +0100 Subject: [11] RFR(S): 8199777: Deprecate -XX:+AggressiveOpts In-Reply-To: References: <022c4dc3-5750-c151-e08e-bd3a2826aa16@oracle.com> <96444d90-f28e-398a-095a-feb2c6e27b3a@oracle.com> <6e56a98f-4ed3-74fc-9f21-1a9c2b247a03@oracle.com> <1C8F5F0E-9E5B-4EC0-818D-DCFBE9A80346@oracle.com> Message-ID: Thanks Vladimir and David! Best regards, Tobias On 21.03.2018 23:11, David Holmes wrote: > +1 (in case there was any doubt) > > David > > On 22/03/2018 3:59 AM, Vladimir Kozlov wrote: >> Okay. Looks good. >> >> Thanks, >> Vladimir >> >>> On Mar 21, 2018, at 12:38 AM, Tobias Hartmann wrote: >>> >>> Hi Vladimir and David, >>> >>> thanks for the review! >>> >>> On 20.03.2018 23:23, Vladimir Kozlov wrote: >>>> Actually you don't need to specify EliminateAutoBox in tests because it is true by default: >>> >>> Yes, but I agree with David that it's better to leave the flag in to state what the test is supposed >>> to be run with. Also, AutoBoxCacheMax is a C2 specific flag as well. >>> >>> Here's the new webrev with -XX:+IgnoreUnrecognizedVMOptions added to the tests: >>> http://cr.openjdk.java.net/~thartmann/8199777/webrev.01/ >>> >>> Thanks, >>> Tobias >> From thomas.stuefe at gmail.com Thu Mar 22 07:53:52 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 22 Mar 2018 08:53:52 +0100 Subject: RFR: JDK-8198652: Stop linking with -base:0x8000000 on Windows In-Reply-To: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> References: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> Message-ID: Hi Erik, makes sense and looks fine. ..Thomas On Wed, Mar 21, 2018 at 11:05 PM, Erik Joelsson wrote: > On Windows, we have been linking libjvm.so with -base:0x8000000 since > forever. This may have been a good idea on earlier versions of windows, but > with VS2017 it generates a warning and with ASLR, the address a given > binary is loaded at will vary between runs anyway, so there is little point > setting this linker option anymore (since Windows Vista). So I propose we > just remove it. 
> > Bug: https://bugs.openjdk.java.net/browse/JDK-8198652 > > Webrev: http://cr.openjdk.java.net/~erikj/8198652/webrev.01/ > > /Erik > > From magnus.ihse.bursie at oracle.com Thu Mar 22 10:06:41 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Thu, 22 Mar 2018 11:06:41 +0100 Subject: RFR: JDK-8198652: Stop linking with -base:0x8000000 on Windows In-Reply-To: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> References: <7adcdebe-5586-e1d4-8813-d4327de1db94@oracle.com> Message-ID: <6606336e-7cef-6ab0-ffe1-60973b5a8108@oracle.com> On 2018-03-21 23:05, Erik Joelsson wrote: > On Windows, we have been linking libjvm.so with -base:0x8000000 since > forever. This may have been a good idea on earlier versions of > windows, but with VS2017 it generates a warning and with ASLR, the > address a given binary is loaded at will vary between runs anyway, so > there is little point setting this linker option anymore (since > Windows Vista). So I propose we just remove it. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8198652 > > Webrev: http://cr.openjdk.java.net/~erikj/8198652/webrev.01/ Looks good to me. Thanks for fixing this! /Magnus > > /Erik > From rkennke at redhat.com Thu Mar 22 10:40:27 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Mar 2018 11:40:27 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <5AB26EBD.50805@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> <5AB12E12.7030302@oracle.com> <5AB26EBD.50805@oracle.com> Message-ID: I keep getting: Mach5 mach5-one-rkennke-JDK-8199739-20180322-0911-15544: Builds UNSTABLE. Testing UNSTABLE. Do I need to be worried? 
Full output: Build Details: 2018-03-22-0901391.roman.source 0 Failed Tests Mach5 Tasks Results Summary PASSED: 3 EXECUTED_WITH_FAILURE: 8 UNABLE_TO_RUN: 64 FAILED: 0 KILLED: 0 NA: 0 Build 8 Not run build_jdk_linux-linux-x64-linux-x64-OEL-7-0 error while building, return value: 2 build_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 error while building, return value: 2 build_jdk_macosx-macosx-x64-macosx-x64-anyof_10_9_10_10-2 error while building, return value: 2 build_jdk_macosx-macosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 error while building, return value: 2 build_jdk_solaris-solaris-sparcv9-solaris-sparcv9-11_2-4 error while building, return value: 2 build_jdk_solaris-solaris-sparcv9-debug-solaris-sparcv9-11_2-5 error while building, return value: 2 build_jdk_windows-windows-x64-windows-x64-2012R2-6 error while building, return value: 2 build_jdk_windows-windows-x64-debug-windows-x64-2012R2-7 error while building, return value: 2 Test 64 Not run tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-linux-x64-15 Dependency task failed: mach5...4-build_jdk_linux-linux-x64-linux-x64-OEL-7-0 tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-linux-x64-debug-48 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-macosx-x64-16 Dependency task failed: mach5...cosx-macosx-x64-macosx-x64-anyof_10_9_10_10-2 tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-macosx-x64-debug-49 Dependency task failed: mach5...acosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 tier1-solaris-sparc-jdk_closed_test_hotspot_jtreg_tier1_common-solaris-sparcv9-62 Dependency task failed: mach5...olaris-solaris-sparcv9-solaris-sparcv9-11_2-4 tier1-solaris-sparc-jdk_closed_test_hotspot_jtreg_tier1_common-solaris-sparcv9-debug-63 Dependency task failed: mach5...-solaris-sparcv9-debug-solaris-sparcv9-11_2-5 tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-windows-x64-17 Dependency task failed: viI3aXAomf tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-windows-x64-debug-50 Dependency task failed: viI3aXApmf tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_compiler_closed-linux-x64-debug-51 Dependency task failed: mach5...d_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_compiler_closed-macosx-x64-debug-52 Dependency task failed: mach5...acosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 See all 64... > Hi Roman, > > I got the same problem when pushing the remove Runtime1::arraycopy > changes, so I can confirm this is unrelated to your changes. > > Thanks, > /Erik > > On 2018-03-21 15:28, Roman Kennke wrote: >> I got a failure back from submit repo: >> >> Build Details: 2018-03-21-1213342.roman.source >> 1 Failed Test >> Test??? Tier??? Platform??? Keywords??? Description??? Task >> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java???? tier1 >> macosx-x64-debug???? bug8042235 othervm???? Exception: >> java.lang.reflect.InvocationTargetException???? task >> Mach5 Tasks Results Summary >> >> ???? PASSED: 74 >> ???? EXECUTED_WITH_FAILURE: 1 >> ???? UNABLE_TO_RUN: 0 >> ???? FAILED: 0 >> ???? KILLED: 0 >> ???? NA: 0 >> ???? Test >> >> ???????? 1 Executed with failure >> >> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-macosx-x64-debug-28 >> >> Results: total: 165, passed: 164; failed: 1 >> >> >> Can you tell if that is related to the change, or something other >> already known issue? >> >> Thanks, Roman >> >> >>> Hi Roman, >>> >>> This looks good to me. 
The unfortunate include problems in >>> jvmciJavaClasses.hpp are pre-existing and should be cleaned up at some >>> point. >>> >>> Thanks, >>> /Erik >>> >>> On 2018-03-20 16:13, Roman Kennke wrote: >>>> Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: >>>>> Hi Roman, >>>>> >>>>> On 2018-03-20 11:26, Roman Kennke wrote: >>>>>> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>>>>>> Hi Roman, >>>>>>> >>>>>>> On 2018-03-19 21:11, Roman Kennke wrote: >>>>>>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>>>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>>>>>> I like Roman's version with static_field_base() the best.? The >>>>>>>>>>> reason >>>>>>>>>>> I wanted to keep static_field_addr and not have static_oop_addr >>>>>>>>>>> was >>>>>>>>>>> so there is one function to find static fields and this would >>>>>>>>>>> work >>>>>>>>>>> with the jvmci classes and with loading/storing primitives >>>>>>>>>>> also.? So >>>>>>>>>>> I like the consistent change that Roman has. >>>>>>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>>>>>> intended, so >>>>>>>>>> I'm fine with Roman taking over this. >>>>>>>>>> >>>>>>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>>>>>> barrier on this offset, then needs a load barrier on the >>>>>>>>>>> offset of >>>>>>>>>>> the additional load (?) >>>>>>>>>> There are two barriers in this piece of code: >>>>>>>>>> 1) Shenandoah needs a barrier to be able to read fields out of >>>>>>>>>> the >>>>>>>>>> java mirror >>>>>>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop >>>>>>>>>> fields >>>>>>>>>> in the java mirror. >>>>>>>>>> >>>>>>>>>> Is that what you are referring to? >>>>>>>>> I had to read this thread over again, and am still foggy, but >>>>>>>>> it was >>>>>>>>> because your original change didn't work for shenandoah, ie Kim's >>>>>>>>> last >>>>>>>>> response. >>>>>>>>> >>>>>>>>> The brooks pointer has to be applied to get the mirror address as >>>>>>>>> well >>>>>>>>> as reading fields out of the mirror, if I understand correctly. >>>>>>>>> >>>>>>>>> OopHandle::resolve() which is what java_mirror() is not >>>>>>>>> accessorized but >>>>>>>>> should be for shenandoah.? I think.? I guess that was my question >>>>>>>>> before. >>>>>>>> The family of _at() functions in Access, those which accept >>>>>>>> oop+offset, >>>>>>>> do the chasing of the forwarding pointer in Shenandoah, then they >>>>>>>> apply >>>>>>>> the offset, load the memory field and return the value in the right >>>>>>>> type. They also do the load-barrier in ZGC (haven't checked, but >>>>>>>> that's >>>>>>>> just logical). >>>>>>>> >>>>>>>> There is also oop Access::resolve(oop) which is a bit of a hack. >>>>>>>> It has >>>>>>>> been introduced because of arraycopy and java <-> native bulk copy >>>>>>>> stuff >>>>>>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>>>>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>>>>>> maybe >>>>>>>> evacuate the object (for writes), where #2 is stronger than #1 >>>>>>>> (i.e. if >>>>>>>> we do #2, then we don't need to do #1). In order to keep things >>>>>>>> simple, >>>>>>>> we decided to make Access::resolve(oop) do #2, and have it cover >>>>>>>> all >>>>>>>> those cases, and put it in arrayOopDesc::base(). 
This does the >>>>>>>> right >>>>>>>> thing for all cases, but it is a bit broad, for example, it may >>>>>>>> lead to >>>>>>>> double-copying a potentially large array (resolve-copy src array >>>>>>>> from >>>>>>>> from-space to to-space, then copy it again to the dst array). For >>>>>>>> those >>>>>>>> reasons, it is advisable to think twice before using _at_addr() or >>>>>>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>>>>>> Are we certain that it is indeed only arraycopy that requires stable >>>>>>> accesses until the next thread transition? >>>>>>> I seem to recall that last time we discussed this, you thought that >>>>>>> there was more than arraycopy code that needed this. For example >>>>>>> printing and string encoding/decoding logic. >>>>>>> >>>>>>> If we are going to make changes based on the assumption that we >>>>>>> will be >>>>>>> able to get rid of the resolve() barrier, then we should be fairly >>>>>>> certain that we can indeed get rid of it. So have the other >>>>>>> previously >>>>>>> discussed roadblocks other than arraycopy disappeared? >>>>>> No, I don't think that resolve() can go away. If you look at: >>>>>> >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> You'll see all kinds of uses of _at_addr() that cannot be covered by >>>>>> some sort of arraycopy, e.g. the string conversions stuff. >>>>>> >>>>>> The above patch proposes to split resolve() to resolve_for_read() and >>>>>> resolve_for_write(), and I don't think it is unreasonable to >>>>>> distinguish >>>>>> those. Besides being better for Shenandoah (reduced latency on >>>>>> read-only >>>>>> accesses), there are conceivable GC algorithms that require that >>>>>> distinction too, e.g. transactional memory based GC or copy-on-write >>>>>> based GCs. But let's probably continue this discussion in the thread >>>>>> mentioned above? >>>>> As I thought. The reason I bring it up in this thread is because as I >>>>> understand it, you are proposing to push this patch without renaming >>>>> static_field_base() to static_field_base_raw(), which is what we did >>>>> consistently everywhere else so far, with the motivation that you will >>>>> remove resolve() from the other ones soon, and get rid of base_raw(). >>>>> And I feel like we should have that discussion first. Until that is >>>>> actually changed, static_field_base_raw() should be the name of that >>>>> method. If we decide to change the other code to do something else, >>>>> then >>>>> we can revisit this then, but not yet. >>>> Ok, so I changed static_field_base() -> static_field_base_raw(): >>>> >>>> Diff: >>>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ >>>> Full: >>>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ >>>> >>>> Better? 
>>>> >>>> Thanks, Roman >>>> >>>> >> > From stefan.karlsson at oracle.com Thu Mar 22 10:46:29 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 11:46:29 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> <5AB12E12.7030302@oracle.com> <5AB26EBD.50805@oracle.com> Message-ID: <705cc625-b66d-fd0b-06aa-8e58eb2422a1@oracle.com> On 2018-03-22 11:40, Roman Kennke wrote: > I keep getting: > > Mach5 mach5-one-rkennke-JDK-8199739-20180322-0911-15544: Builds > UNSTABLE. Testing UNSTABLE. > > Do I need to be worried? Yes. In a closed file you get: fatal error: runtime/vframe.inline.hpp: No such file or directory I think that this will be resolved if you rebase against the latest open changes. StefanK > > Full output: > > Build Details: 2018-03-22-0901391.roman.source > 0 Failed Tests > Mach5 Tasks Results Summary > > PASSED: 3 > EXECUTED_WITH_FAILURE: 8 > UNABLE_TO_RUN: 64 > FAILED: 0 > KILLED: 0 > NA: 0 > Build > > 8 Not run > build_jdk_linux-linux-x64-linux-x64-OEL-7-0 error while > building, return value: 2 > build_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 error > while building, return value: 2 > build_jdk_macosx-macosx-x64-macosx-x64-anyof_10_9_10_10-2 > error while building, return value: 2 > > build_jdk_macosx-macosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 error > while building, return value: 2 > build_jdk_solaris-solaris-sparcv9-solaris-sparcv9-11_2-4 > error while building, return value: 2 > > build_jdk_solaris-solaris-sparcv9-debug-solaris-sparcv9-11_2-5 error > while building, return value: 2 > build_jdk_windows-windows-x64-windows-x64-2012R2-6 error > while building, return value: 2 > build_jdk_windows-windows-x64-debug-windows-x64-2012R2-7 > error while building, return value: 2 > > Test > > 64 Not run > > tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-linux-x64-15 > Dependency task failed: > mach5...4-build_jdk_linux-linux-x64-linux-x64-OEL-7-0 > > tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-linux-x64-debug-48 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 > > tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-macosx-x64-16 > Dependency task failed: > mach5...cosx-macosx-x64-macosx-x64-anyof_10_9_10_10-2 > > tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-macosx-x64-debug-49 > Dependency task failed: > mach5...acosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 > > tier1-solaris-sparc-jdk_closed_test_hotspot_jtreg_tier1_common-solaris-sparcv9-62 > Dependency task failed: > mach5...olaris-solaris-sparcv9-solaris-sparcv9-11_2-4 > > tier1-solaris-sparc-jdk_closed_test_hotspot_jtreg_tier1_common-solaris-sparcv9-debug-63 > Dependency task failed: > mach5...-solaris-sparcv9-debug-solaris-sparcv9-11_2-5 > > tier1-product-jdk_closed_test_hotspot_jtreg_tier1_common-windows-x64-17 > Dependency task failed: viI3aXAomf > > tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_common-windows-x64-debug-50 > Dependency task failed: 
viI3aXApmf > > tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_compiler_closed-linux-x64-debug-51 > Dependency task failed: > mach5...d_jdk_linux-linux-x64-debug-linux-x64-OEL-7-1 > > tier1-debug-jdk_closed_test_hotspot_jtreg_tier1_compiler_closed-macosx-x64-debug-52 > Dependency task failed: > mach5...acosx-x64-debug-macosx-x64-anyof_10_9_10_10-3 > See all 64... > > > > >> Hi Roman, >> >> I got the same problem when pushing the remove Runtime1::arraycopy >> changes, so I can confirm this is unrelated to your changes. >> >> Thanks, >> /Erik >> >> On 2018-03-21 15:28, Roman Kennke wrote: >>> I got a failure back from submit repo: >>> >>> Build Details: 2018-03-21-1213342.roman.source >>> 1 Failed Test >>> Test??? Tier??? Platform??? Keywords??? Description??? Task >>> compiler/jsr292/RedefineMethodUsedByMultipleMethodHandles.java???? tier1 >>> macosx-x64-debug???? bug8042235 othervm???? Exception: >>> java.lang.reflect.InvocationTargetException???? task >>> Mach5 Tasks Results Summary >>> >>> ???? PASSED: 74 >>> ???? EXECUTED_WITH_FAILURE: 1 >>> ???? UNABLE_TO_RUN: 0 >>> ???? FAILED: 0 >>> ???? KILLED: 0 >>> ???? NA: 0 >>> ???? Test >>> >>> ???????? 1 Executed with failure >>> >>> tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_3-macosx-x64-debug-28 >>> >>> Results: total: 165, passed: 164; failed: 1 >>> >>> >>> Can you tell if that is related to the change, or something other >>> already known issue? >>> >>> Thanks, Roman >>> >>> >>>> Hi Roman, >>>> >>>> This looks good to me. The unfortunate include problems in >>>> jvmciJavaClasses.hpp are pre-existing and should be cleaned up at some >>>> point. >>>> >>>> Thanks, >>>> /Erik >>>> >>>> On 2018-03-20 16:13, Roman Kennke wrote: >>>>> Am 20.03.2018 um 11:44 schrieb Erik ?sterlund: >>>>>> Hi Roman, >>>>>> >>>>>> On 2018-03-20 11:26, Roman Kennke wrote: >>>>>>> Am 20.03.2018 um 11:07 schrieb Erik ?sterlund: >>>>>>>> Hi Roman, >>>>>>>> >>>>>>>> On 2018-03-19 21:11, Roman Kennke wrote: >>>>>>>>> Am 19.03.2018 um 20:35 schrieb coleen.phillimore at oracle.com: >>>>>>>>>> On 3/19/18 3:15 PM, Stefan Karlsson wrote: >>>>>>>>>>> On 2018-03-19 20:00, coleen.phillimore at oracle.com wrote: >>>>>>>>>>>> I like Roman's version with static_field_base() the best.? The >>>>>>>>>>>> reason >>>>>>>>>>>> I wanted to keep static_field_addr and not have static_oop_addr >>>>>>>>>>>> was >>>>>>>>>>>> so there is one function to find static fields and this would >>>>>>>>>>>> work >>>>>>>>>>>> with the jvmci classes and with loading/storing primitives >>>>>>>>>>>> also.? So >>>>>>>>>>>> I like the consistent change that Roman has. >>>>>>>>>>> That's OK with me. This RFE grew in scope of what I first >>>>>>>>>>> intended, so >>>>>>>>>>> I'm fine with Roman taking over this. >>>>>>>>>>> >>>>>>>>>>>> There's a subtlety that I haven't quite figured out here. >>>>>>>>>>>> static_field_addr gets an address mirror+offset, so needs a load >>>>>>>>>>>> barrier on this offset, then needs a load barrier on the >>>>>>>>>>>> offset of >>>>>>>>>>>> the additional load (?) >>>>>>>>>>> There are two barriers in this piece of code: >>>>>>>>>>> 1) Shenandoah needs a barrier to be able to read fields out of >>>>>>>>>>> the >>>>>>>>>>> java mirror >>>>>>>>>>> 2) ZGC and UseCompressedOops needs a barrier when loading oop >>>>>>>>>>> fields >>>>>>>>>>> in the java mirror. >>>>>>>>>>> >>>>>>>>>>> Is that what you are referring to? 
>>>>>>>>>> I had to read this thread over again, and am still foggy, but >>>>>>>>>> it was >>>>>>>>>> because your original change didn't work for shenandoah, ie Kim's >>>>>>>>>> last >>>>>>>>>> response. >>>>>>>>>> >>>>>>>>>> The brooks pointer has to be applied to get the mirror address as >>>>>>>>>> well >>>>>>>>>> as reading fields out of the mirror, if I understand correctly. >>>>>>>>>> >>>>>>>>>> OopHandle::resolve() which is what java_mirror() is not >>>>>>>>>> accessorized but >>>>>>>>>> should be for shenandoah.? I think.? I guess that was my question >>>>>>>>>> before. >>>>>>>>> The family of _at() functions in Access, those which accept >>>>>>>>> oop+offset, >>>>>>>>> do the chasing of the forwarding pointer in Shenandoah, then they >>>>>>>>> apply >>>>>>>>> the offset, load the memory field and return the value in the right >>>>>>>>> type. They also do the load-barrier in ZGC (haven't checked, but >>>>>>>>> that's >>>>>>>>> just logical). >>>>>>>>> >>>>>>>>> There is also oop Access::resolve(oop) which is a bit of a hack. >>>>>>>>> It has >>>>>>>>> been introduced because of arraycopy and java <-> native bulk copy >>>>>>>>> stuff >>>>>>>>> that uses typeArrayOop::*_at_addr() family of methods. In those >>>>>>>>> situations we still need to 1. chase the fwd ptr (for reads) or 2. >>>>>>>>> maybe >>>>>>>>> evacuate the object (for writes), where #2 is stronger than #1 >>>>>>>>> (i.e. if >>>>>>>>> we do #2, then we don't need to do #1). In order to keep things >>>>>>>>> simple, >>>>>>>>> we decided to make Access::resolve(oop) do #2, and have it cover >>>>>>>>> all >>>>>>>>> those cases, and put it in arrayOopDesc::base(). This does the >>>>>>>>> right >>>>>>>>> thing for all cases, but it is a bit broad, for example, it may >>>>>>>>> lead to >>>>>>>>> double-copying a potentially large array (resolve-copy src array >>>>>>>>> from >>>>>>>>> from-space to to-space, then copy it again to the dst array). For >>>>>>>>> those >>>>>>>>> reasons, it is advisable to think twice before using _at_addr() or >>>>>>>>> in-fact Access::resolve() if there's a better/cleaner way to do it. >>>>>>>> Are we certain that it is indeed only arraycopy that requires stable >>>>>>>> accesses until the next thread transition? >>>>>>>> I seem to recall that last time we discussed this, you thought that >>>>>>>> there was more than arraycopy code that needed this. For example >>>>>>>> printing and string encoding/decoding logic. >>>>>>>> >>>>>>>> If we are going to make changes based on the assumption that we >>>>>>>> will be >>>>>>>> able to get rid of the resolve() barrier, then we should be fairly >>>>>>>> certain that we can indeed get rid of it. So have the other >>>>>>>> previously >>>>>>>> discussed roadblocks other than arraycopy disappeared? >>>>>>> No, I don't think that resolve() can go away. If you look at: >>>>>>> >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021464.html >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> You'll see all kinds of uses of _at_addr() that cannot be covered by >>>>>>> some sort of arraycopy, e.g. the string conversions stuff. >>>>>>> >>>>>>> The above patch proposes to split resolve() to resolve_for_read() and >>>>>>> resolve_for_write(), and I don't think it is unreasonable to >>>>>>> distinguish >>>>>>> those. Besides being better for Shenandoah (reduced latency on >>>>>>> read-only >>>>>>> accesses), there are conceivable GC algorithms that require that >>>>>>> distinction too, e.g. transactional memory based GC or copy-on-write >>>>>>> based GCs. 
But let's probably continue this discussion in the thread >>>>>>> mentioned above? >>>>>> As I thought. The reason I bring it up in this thread is because as I >>>>>> understand it, you are proposing to push this patch without renaming >>>>>> static_field_base() to static_field_base_raw(), which is what we did >>>>>> consistently everywhere else so far, with the motivation that you will >>>>>> remove resolve() from the other ones soon, and get rid of base_raw(). >>>>>> And I feel like we should have that discussion first. Until that is >>>>>> actually changed, static_field_base_raw() should be the name of that >>>>>> method. If we decide to change the other code to do something else, >>>>>> then >>>>>> we can revisit this then, but not yet. >>>>> Ok, so I changed static_field_base() -> static_field_base_raw(): >>>>> >>>>> Diff: >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01.diff/ >>>>> Full: >>>>> http://cr.openjdk.java.net/~rkennke/JDK-8199739/webrev.01/ >>>>> >>>>> Better? >>>>> >>>>> Thanks, Roman >>>>> >>>>> >>> >> > > From rkennke at redhat.com Thu Mar 22 11:11:10 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 22 Mar 2018 12:11:10 +0100 Subject: RFR: 8199739: Use HeapAccess when loading oops from static fields in javaClasses.cpp In-Reply-To: <705cc625-b66d-fd0b-06aa-8e58eb2422a1@oracle.com> References: <76e1ccf9-4a5e-805d-af3d-c8613f834d93@oracle.com> <3b14e247-a719-6eb0-8388-09fe194b3904@redhat.com> <71f10856-3142-ba6d-5981-d78d4cdbc6e0@oracle.com> <2edfc8ee-2d25-20cf-12f4-11dd86b44ebb@oracle.com> <2dfe493a-bf15-4ebd-05ce-0f3a141d475c@redhat.com> <3b1de555-54f4-6b30-b2b2-b8ddddf2fe29@oracle.com> <3cf32a5b-ca9e-d9d2-abda-943ced0d782d@redhat.com> <5AB0DD76.6020807@oracle.com> <0e7560a9-ab2a-133a-6c16-87ac811729bb@redhat.com> <5AB0E615.9060700@oracle.com> <0f080611-5085-74e4-9339-da38fe8c96ac@redhat.com> <5AB12E12.7030302@oracle.com> <5AB26EBD.50805@oracle.com> <705cc625-b66d-fd0b-06aa-8e58eb2422a1@oracle.com> Message-ID: <1d93d3b8-5f4b-63f5-57ac-1bd22458c3eb@redhat.com> Am 22.03.2018 um 11:46 schrieb Stefan Karlsson: > On 2018-03-22 11:40, Roman Kennke wrote: >> I keep getting: >> >> Mach5 mach5-one-rkennke-JDK-8199739-20180322-0911-15544: Builds >> UNSTABLE. Testing UNSTABLE. >> >> Do I need to be worried? > > Yes. > > In a closed file you get: > fatal error: runtime/vframe.inline.hpp: No such file or directory > Ok, so this is not actually caused by my change for JDK-8199739, I'll go ahead and close this branch in the submit repo then. Thanks, Roman From robin.westberg at oracle.com Thu Mar 22 14:34:58 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Thu, 22 Mar 2018 15:34:58 +0100 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h Message-ID: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Hi all, Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. 
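For reference, the pattern is just to define the macro before the windows.h include; windows.h then skips a number of rarely used headers, among them the old winsock.h, which is what normally clashes with winsock2.h. A minimal sketch (illustration only, not lifted from the webrev):

#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <winsock2.h>   // now possible, since winsock.h is no longer dragged in

int main() {
  SOCKET s = INVALID_SOCKET;            // winsock2.h types resolve fine
  return s == INVALID_SOCKET ? 0 : 1;
}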
Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ Testing: hs-tier1 Best regards, Robin [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files From erik.joelsson at oracle.com Thu Mar 22 15:08:55 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 22 Mar 2018 08:08:55 -0700 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: <3725f362-93e2-d8a4-7dd2-fef1f8367498@oracle.com> We always set this define on the command line for jdk libraries. Not clear to me if this belongs better in the source or as a command line option. /Erik On 2018-03-22 07:34, Robin Westberg wrote: > Hi all, > > Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. > > Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 > Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ > Testing: hs-tier1 > > Best regards, > Robin > > [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files > From stefan.karlsson at oracle.com Thu Mar 22 15:31:52 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 16:31:52 +0100 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter Message-ID: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Hi all, Please review this trivial patch to remove the length parameter from MallocArrayAllocator::free: http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8200111 This makes the MallocArrayAllocator as easy to use as the NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. Thanks, StefanK From george.triantafillou at oracle.com Thu Mar 22 15:49:23 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Thu, 22 Mar 2018 11:49:23 -0400 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: Stefan, Looks good! -George On 3/22/2018 11:31 AM, Stefan Karlsson wrote: > Hi all, > > Please review this trivial patch to remove the length parameter from > MallocArrayAllocator::free: > > http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200111 > > This makes the MallocArrayAllocator as easy to use as the > NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. > > Thanks, > StefanK From kim.barrett at oracle.com Thu Mar 22 15:52:52 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 22 Mar 2018 11:52:52 -0400 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: > On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: > > Hi all, > > Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. 
> > Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 > Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ > Testing: hs-tier1 > > Best regards, > Robin > > [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build system, so that it applies everywhere. From stefan.karlsson at oracle.com Thu Mar 22 15:56:49 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 16:56:49 +0100 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: <6f3969b3-ae4f-c223-f007-99991372cd98@oracle.com> Thanks George! StefanK On 2018-03-22 16:49, George Triantafillou wrote: > Stefan, > > Looks good! > > -George > > On 3/22/2018 11:31 AM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial patch to remove the length parameter from >> MallocArrayAllocator::free: >> >> http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8200111 >> >> This makes the MallocArrayAllocator as easy to use as the >> NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. >> >> Thanks, >> StefanK > From erik.osterlund at oracle.com Thu Mar 22 16:03:58 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 22 Mar 2018 17:03:58 +0100 Subject: RFR: 8200113: Make Access load proxys smarter In-Reply-To: <5AB3BB8B.8090309@oracle.com> References: <5AB3BB8B.8090309@oracle.com> Message-ID: <5AB3D3EE.1070209@oracle.com> Looping in hotspot-dev. /Erik On 2018-03-22 15:19, Erik ?sterlund wrote: > Hi, > > Access returns the result of loads through load proxy objects that > implicity convert themselves to a template inferred type. This is a > metaprogramming technique used to infer return types in C++. > > However, I have heard requests that it would be great if it could be a > bit smarter and do more than just be assigned to a type. > > Example use cases that do not work today without workarounds: > > oop val = ...; > narrowOop narrow = 0u; > oop *oop_val = &val; > narrowOop *narrow_val = &narrow; > HeapWord *heap_word_val = reinterpret_cast(oop_val); > > if (val == HeapAccess<>::oop_load_at(val, 16)) {} > if (HeapAccess<>::oop_load_at(val, 16) == val) {} > if (val != HeapAccess<>::oop_load_at(val, 16)) {} > if (HeapAccess<>::oop_load_at(val, 16) != val) {} > > if (HeapAccess<>::oop_load(oop_val) != val) {} > if (HeapAccess<>::oop_load(heap_word_val) != val) {} > if (RawAccess<>::oop_load(narrow_val) != narrow) {} > > if (HeapAccess<>::oop_load(oop_val) == val) {} > if (HeapAccess<>::oop_load(heap_word_val) == val) {} > if (RawAccess<>::oop_load(narrow_val) == narrow) {} > > if (val != HeapAccess<>::oop_load(oop_val)) {} > if (val != HeapAccess<>::oop_load(heap_word_val)) {} > if (narrow != RawAccess<>::oop_load(narrow_val)) {} > > if (val == HeapAccess<>::oop_load(oop_val)) {} > if (val == HeapAccess<>::oop_load(heap_word_val)) {} > if (narrow == RawAccess<>::oop_load(narrow_val)) {} > > if ((oop)HeapAccess<>::oop_load(oop_val) == NULL) {} > > oop tmp = true ? 
HeapAccess<>::oop_load(oop_val) : val; > > Here is a patch that solves this: > http://cr.openjdk.java.net/~eosterlund/8200113/webrev.00/ > > ...and here is the bug ID: > https://bugs.openjdk.java.net/browse/JDK-8200113 > > Thanks, > /Erik > From stefan.karlsson at oracle.com Thu Mar 22 16:01:49 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:01:49 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: Message-ID: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Hi, This patch needs Erik's change to the LoadProxies: http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html to build on fastdebug. Here's a rebased patch: http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ Thanks, StefanK On 2018-03-21 18:27, Stefan Karlsson wrote: > Hi all, > > Please review this patch to get rid of the oopDesc::load/store functions > and to move the oopDesc::encode/decode functions to a new CompressedOops > subsystem. > > http://cr.openjdk.java.net/~stefank/8199946/webrev.01 > https://bugs.openjdk.java.net/browse/JDK-8199946 > > When the Access API was introduced many of the usages of > oopDesc::load_decode_heap_oop, and friends, were replaced by calls to > the Access API. However, there are still some usages of these functions, > most notably in the GC code. > > This patch is two-fold: > > 1) It replaces the oopDesc load and store calls with RawAccess equivalents. > > 2) It moves the oopDesc encode and decode functions to a new, separate, > subsystem called CompressedOops. A future patch could even move all the > Universe::_narrow_oop variables over to CompressedOops. > > The second part has the nice property that it breaks up a circular > dependency between oop.inline.hpp and access.inline.hpp. After the > change we have: > > oop.inline.hpp includes: > ? access.inline.hpp > ? compressedOops.inline.hpp > > access.inline.hpp includes: > ? compressedOops.inline.hpp > > Thanks, > StefanK From coleen.phillimore at oracle.com Thu Mar 22 16:03:11 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 12:03:11 -0400 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: This looks good. Coleen On 3/22/18 11:31 AM, Stefan Karlsson wrote: > Hi all, > > Please review this trivial patch to remove the length parameter from > MallocArrayAllocator::free: > > http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200111 > > This makes the MallocArrayAllocator as easy to use as the > NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. 
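Roughly, call sites go from free(addr, length) to just free(addr); a sketch of the intended shape (illustrative, not the exact patch):

size_t len = 100;
int* buf = MallocArrayAllocator<int>::allocate(len, mtInternal);
// ... use buf ...
MallocArrayAllocator<int>::free(buf);   // no length argument; malloc/free track the size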
> > Thanks, > StefanK From stefan.karlsson at oracle.com Thu Mar 22 16:10:36 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:10:36 +0100 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp Message-ID: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> Hi all, Please review this patch to remove a cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp: http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8200105 The ground work for this cleanup had already been done with: 8199728: Remove oopDesc::is_scavengable What's done in this trivial patch is: 1) Move some debug functions out from the collectedHeap.inline.hpp and into collectedHeap.hpp. 2) Removed collectedHeap.inline.hpp and fixed missing includes to make it compile again. Thanks, StefanK From stefan.karlsson at oracle.com Thu Mar 22 16:24:02 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:24:02 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp Message-ID: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> Hi all, Please review this patch to separate out the NoSafepointVerifier class (and friends) from gcLocker.hpp into its own file. http://cr.openjdk.java.net/~stefank/8200106/webrev.01 https://bugs.openjdk.java.net/browse/JDK-8200106 After this patch gcLocker.hpp only contains code for the GCLocker. I've gone through all usages of the GCCLocker and NoSafepointVerifier classes and changed the code to include the correct headers. The new files are names safepointVerifiers.hpp/cpp and the main class is NoSafepointVerifier. However, I also moved the NoGCVerifier, which is the parent class of NoSafepointVerifier, and NoAllocVerfier. I think all of these are used to verify that we don't do things that will interact badly with safepoints, hence the name of the new file. Are others OK with the name? Thanks, StefanK From stefan.karlsson at oracle.com Thu Mar 22 16:25:06 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:25:06 +0100 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: <8cb6146b-9ccb-807f-7121-c19ce6b0d70e@oracle.com> Thanks Coleen. StefanK On 2018-03-22 17:03, coleen.phillimore at oracle.com wrote: > > This looks good. > Coleen > > On 3/22/18 11:31 AM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial patch to remove the length parameter from >> MallocArrayAllocator::free: >> >> http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8200111 >> >> This makes the MallocArrayAllocator as easy to use as the >> NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. 
>> >> Thanks, >> StefanK > From stefan.karlsson at oracle.com Thu Mar 22 16:26:13 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:26:13 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> Message-ID: This patch builds upon the changes in: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html StefanK On 2018-03-22 17:24, Stefan Karlsson wrote: > Hi all, > > Please review this patch to separate out the NoSafepointVerifier class > (and friends) from gcLocker.hpp into its own file. > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01 > https://bugs.openjdk.java.net/browse/JDK-8200106 > > After this patch gcLocker.hpp only contains code for the GCLocker. I've > gone through all usages of the GCCLocker and NoSafepointVerifier classes > and changed the code to include the correct headers. > > The new files are names safepointVerifiers.hpp/cpp and the main class is > NoSafepointVerifier. However, I also moved the NoGCVerifier, which is > the parent class of NoSafepointVerifier, and NoAllocVerfier. I think all > of these are used to verify that we don't do things that will interact > badly with safepoints, hence the name of the new file. Are others OK > with the name? > > Thanks, > StefanK From thomas.schatzl at oracle.com Thu Mar 22 16:27:41 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 22 Mar 2018 17:27:41 +0100 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: <1521736061.2264.6.camel@oracle.com> Hi, On Thu, 2018-03-22 at 16:31 +0100, Stefan Karlsson wrote: > Hi all, > > Please review this trivial patch to remove the length parameter from > MallocArrayAllocator::free: > > http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200111 > > This makes the MallocArrayAllocator as easy to use as the > NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. looks good :) Thomas From stefan.karlsson at oracle.com Thu Mar 22 16:27:04 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:27:04 +0100 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp In-Reply-To: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> References: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> Message-ID: This patch builds upon the changes in: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030926.html StefanK On 2018-03-22 17:10, Stefan Karlsson wrote: > Hi all, > > Please review this patch to remove a cyclic dependency between > oop.inline.hpp and collectedHeap.inline.hpp: > > http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200105 > > The ground work for this cleanup had already been done with: > 8199728: Remove oopDesc::is_scavengable > > What's done in this trivial patch is: > 1) Move some debug functions out from the collectedHeap.inline.hpp and > into collectedHeap.hpp. > > 2) Removed collectedHeap.inline.hpp and fixed missing includes to make > it compile again. 
> > Thanks, > StefanK From stefan.karlsson at oracle.com Thu Mar 22 16:27:38 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 17:27:38 +0100 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: <1521736061.2264.6.camel@oracle.com> References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> <1521736061.2264.6.camel@oracle.com> Message-ID: <86e0cadc-1a6b-1611-3f7b-aebc12e0de32@oracle.com> Thanks Thomas. StefanK On 2018-03-22 17:27, Thomas Schatzl wrote: > Hi, > > On Thu, 2018-03-22 at 16:31 +0100, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial patch to remove the length parameter from >> MallocArrayAllocator::free: >> >> http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8200111 >> >> This makes the MallocArrayAllocator as easy to use as the >> NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. > > looks good :) > > Thomas > From coleen.phillimore at oracle.com Thu Mar 22 18:05:17 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 14:05:17 -0400 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp In-Reply-To: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> References: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> Message-ID: <85bfa1bd-4d55-1ba0-26b3-480dd10c6929@oracle.com> http://cr.openjdk.java.net/~stefank/8200105/webrev.01/src/hotspot/share/oops/oop.inline.hpp.udiff.html This is cool how every include line is 2 characters less than the previous. On 3/22/18 12:10 PM, Stefan Karlsson wrote: > Hi all, > > Please review this patch to remove a cyclic dependency between > oop.inline.hpp and collectedHeap.inline.hpp: > > http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200105 > > The ground work for this cleanup had already been done with: > 8199728: Remove oopDesc::is_scavengable > > What's done in this trivial patch is: > 1) Move some debug functions out from the collectedHeap.inline.hpp and > into collectedHeap.hpp. You mean into collectedHeap.cpp right? > > 2) Removed collectedHeap.inline.hpp and fixed missing includes to make > it compile again. Looks good.?? Less transitive includes.?? I assume you ran open-only with --disable-precompiled-headers as well as the oracle platforms in mach5? Zero build is probably unnecessary here (it builds now). I concur that this is trivial. Thanks, Coleen > > Thanks, > StefanK From coleen.phillimore at oracle.com Thu Mar 22 18:06:39 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 14:06:39 -0400 Subject: RFR: 8200111: MallocArrayAllocator::free should not take a length parameter In-Reply-To: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> References: <20dcb572-f16c-25fc-47f4-f41af21367f9@oracle.com> Message-ID: <54337f16-4d5c-c125-b45e-755126e43a54@oracle.com> I also confirm that this is trivial and can be pushed under the trivial rules. Coleen On 3/22/18 11:31 AM, Stefan Karlsson wrote: > Hi all, > > Please review this trivial patch to remove the length parameter from > MallocArrayAllocator::free: > > http://cr.openjdk.java.net/~stefank/8200111/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200111 > > This makes the MallocArrayAllocator as easy to use as the > NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY. 
> > Thanks, > StefanK From stefan.karlsson at oracle.com Thu Mar 22 18:23:53 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 19:23:53 +0100 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp In-Reply-To: <85bfa1bd-4d55-1ba0-26b3-480dd10c6929@oracle.com> References: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> <85bfa1bd-4d55-1ba0-26b3-480dd10c6929@oracle.com> Message-ID: <9277abb6-71c8-3e80-8489-2314ab8ab45e@oracle.com> On 2018-03-22 19:05, coleen.phillimore at oracle.com wrote: > > http://cr.openjdk.java.net/~stefank/8200105/webrev.01/src/hotspot/share/oops/oop.inline.hpp.udiff.html > > This is cool how every include line is 2 characters less than the > previous. :) > > On 3/22/18 12:10 PM, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to remove a cyclic dependency between >> oop.inline.hpp and collectedHeap.inline.hpp: >> >> http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8200105 >> >> The ground work for this cleanup had already been done with: >> 8199728: Remove oopDesc::is_scavengable >> >> What's done in this trivial patch is: >> 1) Move some debug functions out from the collectedHeap.inline.hpp >> and into collectedHeap.hpp. > > You mean into collectedHeap.cpp right? Yes, of course. :) >> >> 2) Removed collectedHeap.inline.hpp and fixed missing includes to >> make it compile again. > > Looks good.?? Less transitive includes.?? I assume you ran open-only > with --disable-precompiled-headers as well as the oracle platforms in > mach5? Yes. But everything is a moving target, so I'll rebuild again before this is pushed. > > Zero build is probably unnecessary here (it builds now). > > I concur that this is trivial. Thanks for the review! StefanK > > Thanks, > Coleen >> >> Thanks, >> StefanK > From coleen.phillimore at oracle.com Thu Mar 22 18:33:04 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 14:33:04 -0400 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> Message-ID: <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html This is really interesting. I never noticed this buried in gcLocker.? I think this should probably go in interfaceSupport.inline.hpp like all the classes that are only used there instead, unless you think it should be used on its own.? I'm not sure about that. http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html This seems strange to have another is_at_safepoint function.?? I don't see why you changed this as is_at_safepoint is inlined in safepoint.hpp. http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html I thought the inline keyword should be on both declaration and definition (?)? Do we need these functions to be inlined anyway? Can we put them in gcLocker.cpp and remove the .inline file.? It looks like the inline file is not included in most places anyway with this change. http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html nit, can you add // ASSERT to the #endif ? 
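(i.e. the usual convention of labelling the closing guard, so it is obvious at a glance which conditional is being closed:

#ifdef ASSERT
// ... debug-only verifier code ...
#endif // ASSERT
)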
http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html I like this and the name seems Ok.? This will be a lot easier to find than in GCLocker.? Thank you for this change. Coleen On 3/22/18 12:26 PM, Stefan Karlsson wrote: > This patch builds upon the changes in: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html > > StefanK > > On 2018-03-22 17:24, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to separate out the NoSafepointVerifier >> class (and friends) from gcLocker.hpp into its own file. >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >> https://bugs.openjdk.java.net/browse/JDK-8200106 >> >> After this patch gcLocker.hpp only contains code for the GCLocker. >> I've gone through all usages of the GCCLocker and NoSafepointVerifier >> classes and changed the code to include the correct headers. >> >> The new files are names safepointVerifiers.hpp/cpp and the main class >> is NoSafepointVerifier. However, I also moved the NoGCVerifier, which >> is the parent class of NoSafepointVerifier, and NoAllocVerfier. I >> think all of these are used to verify that we don't do things that >> will interact badly with safepoints, hence the name of the new file. >> Are others OK with the name? >> >> Thanks, >> StefanK From kim.barrett at oracle.com Thu Mar 22 18:44:16 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 22 Mar 2018 14:44:16 -0400 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp In-Reply-To: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> References: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> Message-ID: <089BDC24-D933-4696-94FB-7E75896E4E22@oracle.com> > On Mar 22, 2018, at 12:10 PM, Stefan Karlsson wrote: > > Hi all, > > Please review this patch to remove a cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp: > > http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8200105 > > The ground work for this cleanup had already been done with: > 8199728: Remove oopDesc::is_scavengable > > What's done in this trivial patch is: > 1) Move some debug functions out from the collectedHeap.inline.hpp and into collectedHeap.hpp. > > 2) Removed collectedHeap.inline.hpp and fixed missing includes to make it compile again. > > Thanks, > StefanK Very nice cleanup. I assume you will update copyrights. Looks good. ------------------------------------------------------------------------------ src/hotspot/share/oops/oop.inline.hpp I think this file nearly doesn't need to include collectedHeap.hpp at all; not having that dependency would be nice. The only use I see is in an assert in oopDesc::size_given_klass that uses Universe::heap(). Doing anything about this is likely pulling on a lot more string though, so I wouldn't want it added to this change. So file a new RFE if you think eliminating the dependency is worthwhile. ------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmciCompilerToVMInit.cpp There seems to be a stray blank line removal after line 186. 
------------------------------------------------------------------------------ From stefan.karlsson at oracle.com Thu Mar 22 18:47:12 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 19:47:12 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> Message-ID: <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html > > > This is really interesting. I never noticed this buried in gcLocker.? > I think this should probably go in interfaceSupport.inline.hpp like > all the classes that are only used there instead, unless you think it > should be used on its own.? I'm not sure about that. I can move that. > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html > > > This seems strange to have another is_at_safepoint function.?? I don't > see why you changed this as is_at_safepoint is inlined in safepoint.hpp. I did that so that we wouldn't have to include safepoint.hpp in gcLocker.hpp, just to be able to do an assert. > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html > > > I thought the inline keyword should be on both declaration and > definition (?)? Do we need these functions to be inlined anyway? Can > we put them in gcLocker.cpp and remove the .inline file.? It looks > like the inline file is not included in most places anyway with this > change. You can have it either at the declaration or the definition. The benefit of having it on the declaration is that the compiler will complain instead of the linker. This might be performance sensitive, so I didn't want to risk moving it to the .cpp file. > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html > > > nit, can you add // ASSERT to the #endif ? Sure. > > http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html > > > I like this and the name seems Ok.? This will be a lot easier to find > than in GCLocker.? Thank you for this change. Thanks! I'll make the changes and will send out a new webrev. StefanK > > Coleen > > > On 3/22/18 12:26 PM, Stefan Karlsson wrote: >> This patch builds upon the changes in: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >> >> >> StefanK >> >> On 2018-03-22 17:24, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please review this patch to separate out the NoSafepointVerifier >>> class (and friends) from gcLocker.hpp into its own file. >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>> >>> After this patch gcLocker.hpp only contains code for the GCLocker. >>> I've gone through all usages of the GCCLocker and >>> NoSafepointVerifier classes and changed the code to include the >>> correct headers. >>> >>> The new files are names safepointVerifiers.hpp/cpp and the main >>> class is NoSafepointVerifier. However, I also moved the >>> NoGCVerifier, which is the parent class of NoSafepointVerifier, and >>> NoAllocVerfier. 
I think all of these are used to verify that we >>> don't do things that will interact badly with safepoints, hence the >>> name of the new file. Are others OK with the name? >>> >>> Thanks, >>> StefanK > From stefan.karlsson at oracle.com Thu Mar 22 18:53:34 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 19:53:34 +0100 Subject: RFR: 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp In-Reply-To: <089BDC24-D933-4696-94FB-7E75896E4E22@oracle.com> References: <3308e0f4-62fd-7329-4c0a-4605c1c246d1@oracle.com> <089BDC24-D933-4696-94FB-7E75896E4E22@oracle.com> Message-ID: On 2018-03-22 19:44, Kim Barrett wrote: >> On Mar 22, 2018, at 12:10 PM, Stefan Karlsson wrote: >> >> Hi all, >> >> Please review this patch to remove a cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp: >> >> http://cr.openjdk.java.net/~stefank/8200105/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8200105 >> >> The ground work for this cleanup had already been done with: >> 8199728: Remove oopDesc::is_scavengable >> >> What's done in this trivial patch is: >> 1) Move some debug functions out from the collectedHeap.inline.hpp and into collectedHeap.hpp. >> >> 2) Removed collectedHeap.inline.hpp and fixed missing includes to make it compile again. >> >> Thanks, >> StefanK > Very nice cleanup. I assume you will update copyrights. > > Looks good. Thanks, I'll update the copyrights. > > ------------------------------------------------------------------------------ > src/hotspot/share/oops/oop.inline.hpp > > I think this file nearly doesn't need to include collectedHeap.hpp at > all; not having that dependency would be nice. The only use I see is > in an assert in oopDesc::size_given_klass that uses Universe::heap(). > > Doing anything about this is likely pulling on a lot more string > though, so I wouldn't want it added to this change. So file a new RFE > if you think eliminating the dependency is worthwhile. I thought about that, but didn't do it. Yes, it would probably be good to do what you suggest. > > ------------------------------------------------------------------------------ > src/hotspot/share/jvmci/jvmciCompilerToVMInit.cpp > > There seems to be a stray blank line removal after line 186. Thanks for catching that. StefanK > > ------------------------------------------------------------------------------ > From coleen.phillimore at oracle.com Thu Mar 22 18:55:34 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 14:55:34 -0400 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> Message-ID: <8705e9a7-e10b-de4e-0ddd-660333087aac@oracle.com> On 3/22/18 2:47 PM, Stefan Karlsson wrote: > On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >> >> >> This is really interesting. I never noticed this buried in gcLocker.? >> I think this should probably go in interfaceSupport.inline.hpp like >> all the classes that are only used there instead, unless you think it >> should be used on its own.? I'm not sure about that. > > I can move that. Thanks. 
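For readers who have not opened the webrev: the verifiers being moved into safepointVerifiers.hpp are debug-only RAII helpers placed on the stack to assert that nothing in the enclosing scope can reach a safepoint (or, for the related classes, trigger a GC or an allocation). A minimal usage sketch, illustrative only and not taken from the patch; after this change the header to include for it is runtime/safepointVerifiers.hpp rather than gc/shared/gcLocker.hpp:

   {
     NoSafepointVerifier nsv;   // in debug builds, asserts if a safepoint is reached in this scope
     // ... code that must not safepoint, e.g. while holding a raw oop across the operation ...
   }                            // verification ends when nsv goes out of scope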
>> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >> >> >> This seems strange to have another is_at_safepoint function.?? I >> don't see why you changed this as is_at_safepoint is inlined in >> safepoint.hpp. > > I did that so that we wouldn't have to include safepoint.hpp in > gcLocker.hpp, just to be able to do an assert. Ok, it's a private method so that's ok. > >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >> >> >> I thought the inline keyword should be on both declaration and >> definition (?)? Do we need these functions to be inlined anyway? Can >> we put them in gcLocker.cpp and remove the .inline file.? It looks >> like the inline file is not included in most places anyway with this >> change. > > You can have it either at the declaration or the definition. The > benefit of having it on the declaration is that the compiler will > complain instead of the linker. This might be performance sensitive, > so I didn't want to risk moving it to the .cpp file. Agree.? Thought you needed both, but the declaration is better. > >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >> >> >> nit, can you add // ASSERT to the #endif ? > > Sure. > >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >> >> >> I like this and the name seems Ok.? This will be a lot easier to find >> than in GCLocker.? Thank you for this change. > > Thanks! I'll make the changes and will send out a new webrev. Ok, I don't need to see another webrev.? Changes are reviewed as long as testing goes well. thanks, Coleen > > StefanK >> >> Coleen >> >> >> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>> This patch builds upon the changes in: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>> >>> >>> StefanK >>> >>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please review this patch to separate out the NoSafepointVerifier >>>> class (and friends) from gcLocker.hpp into its own file. >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>> >>>> After this patch gcLocker.hpp only contains code for the GCLocker. >>>> I've gone through all usages of the GCCLocker and >>>> NoSafepointVerifier classes and changed the code to include the >>>> correct headers. >>>> >>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>> class is NoSafepointVerifier. However, I also moved the >>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, and >>>> NoAllocVerfier. I think all of these are used to verify that we >>>> don't do things that will interact badly with safepoints, hence the >>>> name of the new file. Are others OK with the name? >>>> >>>> Thanks, >>>> StefanK >> > From stefan.karlsson at oracle.com Thu Mar 22 19:28:36 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 20:28:36 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: Erik found two places where I didn't use OOP_NOT_NULL, where we previously used _not_null functions. 
diff --git a/src/hotspot/share/gc/cms/parNewGeneration.cpp b/src/hotspot/share/gc/cms/parNewGeneration.cpp
--- a/src/hotspot/share/gc/cms/parNewGeneration.cpp
+++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp
@@ -734,7 +734,7 @@
       oop new_obj = obj->is_forwarded()
                       ? obj->forwardee()
                       : _g->DefNewGeneration::copy_to_survivor_space(obj);
-      RawAccess<>::oop_store(p, new_obj);
+      RawAccess<OOP_NOT_NULL>::oop_store(p, new_obj);
     }
     if (_gc_barrier) {
       // If p points to a younger generation, mark the card.
diff --git a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp
--- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp
+++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp
@@ -52,7 +52,7 @@
       new_obj = ((ParNewGeneration*)_g)->copy_to_survivor_space(_par_scan_state, obj, obj_sz, m);
     }
-    RawAccess<>::oop_store(p, new_obj);
+    RawAccess<OOP_NOT_NULL>::oop_store(p, new_obj);
   }
 }

StefanK

On 2018-03-22 17:01, Stefan Karlsson wrote:
> Hi,
>
> This patch needs Erik's change to the LoadProxies:
> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html
>
> to build on fastdebug.
>
> Here's a rebased patch:
> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/
>
> Thanks,
> StefanK
>
> On 2018-03-21 18:27, Stefan Karlsson wrote:
>> Hi all,
>>
>> Please review this patch to get rid of the oopDesc::load/store
>> functions and to move the oopDesc::encode/decode functions to a new
>> CompressedOops subsystem.
>>
>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01
>> https://bugs.openjdk.java.net/browse/JDK-8199946
>>
>> When the Access API was introduced many of the usages of
>> oopDesc::load_decode_heap_oop, and friends, were replaced by calls to
>> the Access API. However, there are still some usages of these
>> functions, most notably in the GC code.
>>
>> This patch is two-fold:
>>
>> 1) It replaces the oopDesc load and store calls with RawAccess
>> equivalents.
>>
>> 2) It moves the oopDesc encode and decode functions to a new,
>> separate, subsystem called CompressedOops. A future patch could even
>> move all the Universe::_narrow_oop variables over to CompressedOops.
>>
>> The second part has the nice property that it breaks up a circular
>> dependency between oop.inline.hpp and access.inline.hpp. After the
>> change we have:
>>
>> oop.inline.hpp includes:
>>    access.inline.hpp
>>    compressedOops.inline.hpp
>>
>> access.inline.hpp includes:
>>    compressedOops.inline.hpp
>>
>> Thanks,
>> StefanK

From jan.lahoda at oracle.com Thu Mar 22 21:23:25 2018
From: jan.lahoda at oracle.com (Jan Lahoda)
Date: Thu, 22 Mar 2018 22:23:25 +0100
Subject: RFR: JDK-8200136: Problem list test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java
Message-ID: <5AB41ECD.7000709@oracle.com>

Hi,

The fix for JDK-8194978 unfortunately broke
test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java.
I'd like to propose problem listing that test, until it is fixed:
http://cr.openjdk.java.net/~jlahoda/8200136/webrev.00/

I apologize for any inconvenience this may cause.
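For anyone unfamiliar with problem listing: it only adds one line to the relevant ProblemList.txt so that jtreg skips the test until the bug is fixed. The exact entry is in the webrev above; schematically a ProblemList line has the form "test path, bug id, platforms", e.g.:

   compiler/jvmci/compilerToVM/GetExceptionTableTest.java 8200136 generic-all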
Thanks, Jan From joe.darcy at oracle.com Thu Mar 22 21:27:06 2018 From: joe.darcy at oracle.com (joe darcy) Date: Thu, 22 Mar 2018 14:27:06 -0700 Subject: RFR: JDK-8200136: Problem list test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java In-Reply-To: <5AB41ECD.7000709@oracle.com> References: <5AB41ECD.7000709@oracle.com> Message-ID: Looks good Jan; thanks, -Joe On 3/22/2018 2:23 PM, Jan Lahoda wrote: > Hi, > > The fix for JDK-8194978 unfortunately broke > test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java. > I'd like to propose problem listing that test, until it is fixed: > http://cr.openjdk.java.net/~jlahoda/8200136/webrev.00/ > > I apologize for any inconvenience this may cause. > > Thanks, > ??? Jan From vladimir.kozlov at oracle.com Thu Mar 22 21:32:26 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 22 Mar 2018 14:32:26 -0700 Subject: RFR: JDK-8200136: Problem list test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java In-Reply-To: <5AB41ECD.7000709@oracle.com> References: <5AB41ECD.7000709@oracle.com> Message-ID: <0bd22d6b-a659-7ac0-2c61-52a568a1b0c7@oracle.com> Looks good. Thanks, Vladimir On 3/22/18 2:23 PM, Jan Lahoda wrote: > Hi, > > The fix for JDK-8194978 unfortunately broke > test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java. I'd like to propose > problem listing that test, until it is fixed: > http://cr.openjdk.java.net/~jlahoda/8200136/webrev.00/ > > I apologize for any inconvenience this may cause. > > Thanks, > ??? Jan From david.holmes at oracle.com Thu Mar 22 21:47:13 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Mar 2018 07:47:13 +1000 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> Message-ID: <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> Hi Stefan, Jumping in on one issue ... On 23/03/2018 4:47 AM, Stefan Karlsson wrote: > On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >> >> >> This is really interesting. I never noticed this buried in gcLocker. I >> think this should probably go in interfaceSupport.inline.hpp like all >> the classes that are only used there instead, unless you think it >> should be used on its own.? I'm not sure about that. > > I can move that. >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >> >> >> This seems strange to have another is_at_safepoint function.?? I don't >> see why you changed this as is_at_safepoint is inlined in safepoint.hpp. > > I did that so that we wouldn't have to include safepoint.hpp in > gcLocker.hpp, just to be able to do an assert. Why is that an issue? And shouldn't gcLocker.cpp be including safepoint.hpp now? (I guess it gets it indirectly from "runtime/thread.inline.hpp") Thanks, David ----- >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >> >> >> I thought the inline keyword should be on both declaration and >> definition (?)? Do we need these functions to be inlined anyway? Can >> we put them in gcLocker.cpp and remove the .inline file.? 
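To make the inline-keyword question quoted just above concrete: putting 'inline' on the declaration means that a caller which includes only the .hpp and forgets the .inline.hpp is diagnosed by the compiler (gcc and clang report "inline function used but never defined"), instead of the mistake typically only showing up as a missing or duplicated symbol at link time. A sketch with made-up names, not code from the webrev:

   // foo.hpp
   #include "memory/allocation.hpp"

   class Foo : public AllStatic {
    public:
     inline static bool is_enabled();   // 'inline' on the declaration

   };

   // foo.inline.hpp
   #include "foo.hpp"

   inline bool Foo::is_enabled() {
     return true;                       // trivial body, just for the sketch
   }

A file that calls Foo::is_enabled() with only #include "foo.hpp" then gets a compile-time diagnostic; without the keyword on the declaration the same mistake usually surfaces only when linking.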
It looks >> like the inline file is not included in most places anyway with this >> change. > > You can have it either at the declaration or the definition. The benefit > of having it on the declaration is that the compiler will complain > instead of the linker. This might be performance sensitive, so I didn't > want to risk moving it to the .cpp file. > >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >> >> >> nit, can you add // ASSERT to the #endif ? > > Sure. > >> >> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >> >> >> I like this and the name seems Ok.? This will be a lot easier to find >> than in GCLocker.? Thank you for this change. > > Thanks! I'll make the changes and will send out a new webrev. > > StefanK >> >> Coleen >> >> >> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>> This patch builds upon the changes in: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>> >>> >>> StefanK >>> >>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please review this patch to separate out the NoSafepointVerifier >>>> class (and friends) from gcLocker.hpp into its own file. >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>> >>>> After this patch gcLocker.hpp only contains code for the GCLocker. >>>> I've gone through all usages of the GCCLocker and >>>> NoSafepointVerifier classes and changed the code to include the >>>> correct headers. >>>> >>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>> class is NoSafepointVerifier. However, I also moved the >>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, and >>>> NoAllocVerfier. I think all of these are used to verify that we >>>> don't do things that will interact badly with safepoints, hence the >>>> name of the new file. Are others OK with the name? >>>> >>>> Thanks, >>>> StefanK >> > From stefan.karlsson at oracle.com Thu Mar 22 21:59:56 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 22:59:56 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> Message-ID: <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> On 2018-03-22 22:47, David Holmes wrote: > Hi Stefan, > > Jumping in on one issue ... > > On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>> >>> >>> This is really interesting. I never noticed this buried in gcLocker. >>> I think this should probably go in interfaceSupport.inline.hpp like >>> all the classes that are only used there instead, unless you think >>> it should be used on its own.? I'm not sure about that. >> >> I can move that. >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>> >>> >>> This seems strange to have another is_at_safepoint function. I don't >>> see why you changed this as is_at_safepoint is inlined in >>> safepoint.hpp. 
>> >> I did that so that we wouldn't have to include safepoint.hpp in >> gcLocker.hpp, just to be able to do an assert. > > Why is that an issue? I simply try to minimize includes in our our .hpp files. > > And shouldn't gcLocker.cpp be including safepoint.hpp now? Thanks, I missed that because the line below used SafepointSynchronize. StefanK > (I guess it gets it indirectly from "runtime/thread.inline.hpp") > > Thanks, > David > ----- > >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>> >>> >>> I thought the inline keyword should be on both declaration and >>> definition (?)? Do we need these functions to be inlined anyway? Can >>> we put them in gcLocker.cpp and remove the .inline file.? It looks >>> like the inline file is not included in most places anyway with this >>> change. >> >> You can have it either at the declaration or the definition. The >> benefit of having it on the declaration is that the compiler will >> complain instead of the linker. This might be performance sensitive, >> so I didn't want to risk moving it to the .cpp file. >> >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>> >>> >>> nit, can you add // ASSERT to the #endif ? >> >> Sure. >> >>> >>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>> >>> >>> I like this and the name seems Ok.? This will be a lot easier to >>> find than in GCLocker.? Thank you for this change. >> >> Thanks! I'll make the changes and will send out a new webrev. >> >> StefanK >>> >>> Coleen >>> >>> >>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>> This patch builds upon the changes in: >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>> >>>> >>>> StefanK >>>> >>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>> Hi all, >>>>> >>>>> Please review this patch to separate out the NoSafepointVerifier >>>>> class (and friends) from gcLocker.hpp into its own file. >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>> >>>>> After this patch gcLocker.hpp only contains code for the GCLocker. >>>>> I've gone through all usages of the GCCLocker and >>>>> NoSafepointVerifier classes and changed the code to include the >>>>> correct headers. >>>>> >>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>> class is NoSafepointVerifier. However, I also moved the >>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>> and NoAllocVerfier. I think all of these are used to verify that >>>>> we don't do things that will interact badly with safepoints, hence >>>>> the name of the new file. Are others OK with the name? 
>>>>> >>>>> Thanks, >>>>> StefanK >>> >> From david.holmes at oracle.com Thu Mar 22 22:05:57 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Mar 2018 08:05:57 +1000 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> Message-ID: On 23/03/2018 7:59 AM, Stefan Karlsson wrote: > On 2018-03-22 22:47, David Holmes wrote: >> Hi Stefan, >> >> Jumping in on one issue ... >> >> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>> >>>> >>>> This is really interesting. I never noticed this buried in gcLocker. >>>> I think this should probably go in interfaceSupport.inline.hpp like >>>> all the classes that are only used there instead, unless you think >>>> it should be used on its own.? I'm not sure about that. >>> >>> I can move that. >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>> >>>> >>>> This seems strange to have another is_at_safepoint function. I don't >>>> see why you changed this as is_at_safepoint is inlined in >>>> safepoint.hpp. >>> >>> I did that so that we wouldn't have to include safepoint.hpp in >>> gcLocker.hpp, just to be able to do an assert. >> >> Why is that an issue? > > I simply try to minimize includes in our our .hpp files. If it was to avoid some kind of problem I could understand, but just to "minimize" this seems like false economy. You shouldn't need to add forwarding wrapper functions in the local API just because you don't want to include the real API's header file. David >> >> And shouldn't gcLocker.cpp be including safepoint.hpp now? > > Thanks, I missed that because the line below used SafepointSynchronize. > > StefanK > >> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >> >> Thanks, >> David >> ----- >> >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>> >>>> >>>> I thought the inline keyword should be on both declaration and >>>> definition (?)? Do we need these functions to be inlined anyway? Can >>>> we put them in gcLocker.cpp and remove the .inline file.? It looks >>>> like the inline file is not included in most places anyway with this >>>> change. >>> >>> You can have it either at the declaration or the definition. The >>> benefit of having it on the declaration is that the compiler will >>> complain instead of the linker. This might be performance sensitive, >>> so I didn't want to risk moving it to the .cpp file. >>> >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>> >>>> >>>> nit, can you add // ASSERT to the #endif ? >>> >>> Sure. >>> >>>> >>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>> >>>> >>>> I like this and the name seems Ok.? This will be a lot easier to >>>> find than in GCLocker.? Thank you for this change. >>> >>> Thanks! I'll make the changes and will send out a new webrev. 
>>> >>> StefanK >>>> >>>> Coleen >>>> >>>> >>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>> This patch builds upon the changes in: >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>> >>>>> >>>>> StefanK >>>>> >>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>> Hi all, >>>>>> >>>>>> Please review this patch to separate out the NoSafepointVerifier >>>>>> class (and friends) from gcLocker.hpp into its own file. >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>> >>>>>> After this patch gcLocker.hpp only contains code for the GCLocker. >>>>>> I've gone through all usages of the GCCLocker and >>>>>> NoSafepointVerifier classes and changed the code to include the >>>>>> correct headers. >>>>>> >>>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>>> class is NoSafepointVerifier. However, I also moved the >>>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>>> and NoAllocVerfier. I think all of these are used to verify that >>>>>> we don't do things that will interact badly with safepoints, hence >>>>>> the name of the new file. Are others OK with the name? >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>> >>> > From stefan.karlsson at oracle.com Thu Mar 22 22:18:34 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 22 Mar 2018 23:18:34 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> Message-ID: <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> On 2018-03-22 23:05, David Holmes wrote: > On 23/03/2018 7:59 AM, Stefan Karlsson wrote: >> On 2018-03-22 22:47, David Holmes wrote: >>> Hi Stefan, >>> >>> Jumping in on one issue ... >>> >>> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>>> >>>>> >>>>> This is really interesting. I never noticed this buried in >>>>> gcLocker. I think this should probably go in >>>>> interfaceSupport.inline.hpp like all the classes that are only >>>>> used there instead, unless you think it should be used on its >>>>> own.? I'm not sure about that. >>>> >>>> I can move that. >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>>> >>>>> >>>>> This seems strange to have another is_at_safepoint function. I >>>>> don't see why you changed this as is_at_safepoint is inlined in >>>>> safepoint.hpp. >>>> >>>> I did that so that we wouldn't have to include safepoint.hpp in >>>> gcLocker.hpp, just to be able to do an assert. >>> >>> Why is that an issue? >> >> I simply try to minimize includes in our our .hpp files. > > If it was to avoid some kind of problem I could understand, but just > to "minimize" this seems like false economy. You shouldn't need to add > forwarding wrapper functions in the local API just because you don't > want to include the real API's header file. 
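For concreteness, the forwarding-wrapper pattern being debated here looks roughly like this (simplified; the exact declarations are in the webrev):

   // gcLocker.hpp (sketch; no #include "runtime/safepoint.hpp" needed here)
   #include "memory/allocation.hpp"

   class GCLocker : public AllStatic {
    private:
     static bool is_at_safepoint();   // thin private wrapper, defined in gcLocker.cpp
    public:
     // ... rest of the GCLocker interface; its asserts can call the wrapper above ...
   };

   // gcLocker.cpp
   #include "gc/shared/gcLocker.hpp"
   #include "runtime/safepoint.hpp"

   bool GCLocker::is_at_safepoint() {
     return SafepointSynchronize::is_at_safepoint();
   }

The trade-off under discussion is exactly this: one extra trivial function versus pulling safepoint.hpp (and everything it transitively includes) into every file that includes gcLocker.hpp.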
Of course I don't "need" to, but safepoint.hpp includes 158 other HotSpot headers, and whenever I touch any of those, I'll get mostly unnecessary recompiles of files that included gcLocker.hpp. By being more restrictive with our includes, we could minimize our compile times, and that's worth the tiny overhead of doing this change. Minimizing our includes in our header files also help reduce the risk of getting cyclic dependencies, which do cause problems. If you don't like this, I could move is_active to gcLocker.inline.hpp, and then I'd add #include "gc/shared/gcLocker.inline.hpp" to the ~50 files that use is_active. StefanK > David > >>> >>> And shouldn't gcLocker.cpp be including safepoint.hpp now? >> >> Thanks, I missed that because the line below used SafepointSynchronize. >> >> StefanK >> >>> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >>> >>> Thanks, >>> David >>> ----- >>> >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>>> >>>>> >>>>> I thought the inline keyword should be on both declaration and >>>>> definition (?)? Do we need these functions to be inlined anyway? >>>>> Can we put them in gcLocker.cpp and remove the .inline file.? It >>>>> looks like the inline file is not included in most places anyway >>>>> with this change. >>>> >>>> You can have it either at the declaration or the definition. The >>>> benefit of having it on the declaration is that the compiler will >>>> complain instead of the linker. This might be performance >>>> sensitive, so I didn't want to risk moving it to the .cpp file. >>>> >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>>> >>>>> >>>>> nit, can you add // ASSERT to the #endif ? >>>> >>>> Sure. >>>> >>>>> >>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>>> >>>>> >>>>> I like this and the name seems Ok.? This will be a lot easier to >>>>> find than in GCLocker.? Thank you for this change. >>>> >>>> Thanks! I'll make the changes and will send out a new webrev. >>>> >>>> StefanK >>>>> >>>>> Coleen >>>>> >>>>> >>>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>>> This patch builds upon the changes in: >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>>> >>>>>> >>>>>> StefanK >>>>>> >>>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>>> Hi all, >>>>>>> >>>>>>> Please review this patch to separate out the NoSafepointVerifier >>>>>>> class (and friends) from gcLocker.hpp into its own file. >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>>> >>>>>>> After this patch gcLocker.hpp only contains code for the >>>>>>> GCLocker. I've gone through all usages of the GCCLocker and >>>>>>> NoSafepointVerifier classes and changed the code to include the >>>>>>> correct headers. >>>>>>> >>>>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>>>> class is NoSafepointVerifier. However, I also moved the >>>>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>>>> and NoAllocVerfier. I think all of these are used to verify that >>>>>>> we don't do things that will interact badly with safepoints, >>>>>>> hence the name of the new file. Are others OK with the name? 
>>>>>>> >>>>>>> Thanks, >>>>>>> StefanK >>>>> >>>> >> From coleen.phillimore at oracle.com Thu Mar 22 22:41:50 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 22 Mar 2018 18:41:50 -0400 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> Message-ID: <2caab8c0-3c1d-a25b-b78e-3ebed41b0c23@oracle.com> On 3/22/18 6:18 PM, Stefan Karlsson wrote: > On 2018-03-22 23:05, David Holmes wrote: >> On 23/03/2018 7:59 AM, Stefan Karlsson wrote: >>> On 2018-03-22 22:47, David Holmes wrote: >>>> Hi Stefan, >>>> >>>> Jumping in on one issue ... >>>> >>>> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>>>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>>>> >>>>>> >>>>>> This is really interesting. I never noticed this buried in >>>>>> gcLocker. I think this should probably go in >>>>>> interfaceSupport.inline.hpp like all the classes that are only >>>>>> used there instead, unless you think it should be used on its >>>>>> own.? I'm not sure about that. >>>>> >>>>> I can move that. >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>>>> >>>>>> >>>>>> This seems strange to have another is_at_safepoint function. I >>>>>> don't see why you changed this as is_at_safepoint is inlined in >>>>>> safepoint.hpp. >>>>> >>>>> I did that so that we wouldn't have to include safepoint.hpp in >>>>> gcLocker.hpp, just to be able to do an assert. >>>> >>>> Why is that an issue? >>> >>> I simply try to minimize includes in our our .hpp files. >> >> If it was to avoid some kind of problem I could understand, but just >> to "minimize" this seems like false economy. You shouldn't need to >> add forwarding wrapper functions in the local API just because you >> don't want to include the real API's header file. > > Of course I don't "need" to, but safepoint.hpp includes 158 other > HotSpot headers, and whenever I touch any of those, I'll get mostly > unnecessary recompiles of files that included gcLocker.hpp. By being > more restrictive with our includes, we could minimize our compile > times, and that's worth the tiny overhead of doing this change. > > Minimizing our includes in our header files also help reduce the risk > of getting cyclic dependencies, which do cause problems. > > If you don't like this, I could move is_active to gcLocker.inline.hpp, > and then I'd add #include "gc/shared/gcLocker.inline.hpp" to the ~50 > files that use is_active. No!? I thought your solution was best given that gcLocker::is_at_safepoint() was private.? Maybe is_active() could be in the cpp file instead? Coleen > > StefanK > >> David >> >>>> >>>> And shouldn't gcLocker.cpp be including safepoint.hpp now? >>> >>> Thanks, I missed that because the line below used SafepointSynchronize. 
>>> >>> StefanK >>> >>>> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>>>> >>>>>> >>>>>> I thought the inline keyword should be on both declaration and >>>>>> definition (?)? Do we need these functions to be inlined anyway? >>>>>> Can we put them in gcLocker.cpp and remove the .inline file.? It >>>>>> looks like the inline file is not included in most places anyway >>>>>> with this change. >>>>> >>>>> You can have it either at the declaration or the definition. The >>>>> benefit of having it on the declaration is that the compiler will >>>>> complain instead of the linker. This might be performance >>>>> sensitive, so I didn't want to risk moving it to the .cpp file. >>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>>>> >>>>>> >>>>>> nit, can you add // ASSERT to the #endif ? >>>>> >>>>> Sure. >>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>>>> >>>>>> >>>>>> I like this and the name seems Ok.? This will be a lot easier to >>>>>> find than in GCLocker.? Thank you for this change. >>>>> >>>>> Thanks! I'll make the changes and will send out a new webrev. >>>>> >>>>> StefanK >>>>>> >>>>>> Coleen >>>>>> >>>>>> >>>>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>>>> This patch builds upon the changes in: >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>>>> >>>>>>> >>>>>>> StefanK >>>>>>> >>>>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Please review this patch to separate out the >>>>>>>> NoSafepointVerifier class (and friends) from gcLocker.hpp into >>>>>>>> its own file. >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>>>> >>>>>>>> After this patch gcLocker.hpp only contains code for the >>>>>>>> GCLocker. I've gone through all usages of the GCCLocker and >>>>>>>> NoSafepointVerifier classes and changed the code to include the >>>>>>>> correct headers. >>>>>>>> >>>>>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>>>>> class is NoSafepointVerifier. However, I also moved the >>>>>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>>>>> and NoAllocVerfier. I think all of these are used to verify >>>>>>>> that we don't do things that will interact badly with >>>>>>>> safepoints, hence the name of the new file. Are others OK with >>>>>>>> the name? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> StefanK >>>>>> >>>>> >>> > From david.holmes at oracle.com Thu Mar 22 23:45:26 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 23 Mar 2018 09:45:26 +1000 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> Message-ID: <8119a43c-ad5f-01e8-9853-bdf5d499c661@oracle.com> Hi Stefan, I've looked through everything now and it all seems okay. 
One additional comment, and a follow up below ... On 23/03/2018 8:18 AM, Stefan Karlsson wrote: > On 2018-03-22 23:05, David Holmes wrote: >> On 23/03/2018 7:59 AM, Stefan Karlsson wrote: >>> On 2018-03-22 22:47, David Holmes wrote: >>>> Hi Stefan, >>>> >>>> Jumping in on one issue ... >>>> >>>> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>>>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>>>> >>>>>> >>>>>> This is really interesting. I never noticed this buried in >>>>>> gcLocker. I think this should probably go in >>>>>> interfaceSupport.inline.hpp like all the classes that are only >>>>>> used there instead, unless you think it should be used on its >>>>>> own.? I'm not sure about that. >>>>> >>>>> I can move that. Now I see what this part was about, I'll just comment that which ever way this ends up, interfaceSupport.* needs some future tidy up. If we are going to have the plain .hpp file then a lot of what is in the .inline.hpp can possibly move across (ie basic class definitions). >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>>>> >>>>>> >>>>>> This seems strange to have another is_at_safepoint function. I >>>>>> don't see why you changed this as is_at_safepoint is inlined in >>>>>> safepoint.hpp. >>>>> >>>>> I did that so that we wouldn't have to include safepoint.hpp in >>>>> gcLocker.hpp, just to be able to do an assert. >>>> >>>> Why is that an issue? >>> >>> I simply try to minimize includes in our our .hpp files. >> >> If it was to avoid some kind of problem I could understand, but just >> to "minimize" this seems like false economy. You shouldn't need to add >> forwarding wrapper functions in the local API just because you don't >> want to include the real API's header file. > > Of course I don't "need" to, but safepoint.hpp includes 158 other Wow! Is that the transitive closure? I see 7 direct includes and 2 of them seem unnecessary to start with. I think os.hpp is likely the main problem and that is easily fixed by moving somethings to the .cpp file. This header file should not be something that code is reluctant to include because of the issues you cite. It's primary role should be to provide the is_at_safepoint function and the related assertions. Another RFE I guess. > HotSpot headers, and whenever I touch any of those, I'll get mostly > unnecessary recompiles of files that included gcLocker.hpp. By being > more restrictive with our includes, we could minimize our compile times, > and that's worth the tiny overhead of doing this change. > > Minimizing our includes in our header files also help reduce the risk of > getting cyclic dependencies, which do cause problems. > > If you don't like this, I could move is_active to gcLocker.inline.hpp, > and then I'd add #include "gc/shared/gcLocker.inline.hpp" to the ~50 > files that use is_active. That would be a step in the wrong direction. :) Thanks, David > StefanK > >> David >> >>>> >>>> And shouldn't gcLocker.cpp be including safepoint.hpp now? >>> >>> Thanks, I missed that because the line below used SafepointSynchronize. 
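As general background to the include discussion above, the two standard ways to slim down a header such as safepoint.hpp are to forward-declare types that are only used by pointer or reference, and to move member bodies that need heavy headers into the .cpp. A generic sketch with made-up names, not taken from any of the webrevs:

   // widget.hpp
   #include "memory/allocation.hpp"

   class Thread;                        // forward declaration instead of #include "runtime/thread.hpp"

   class Widget : public CHeapObj<mtInternal> {
    public:
     void record(Thread* t);            // pointer use only needs the forward declaration
    private:
     Thread* _last;
   };

   // widget.cpp
   #include "widget.hpp"
   #include "runtime/thread.hpp"        // the full Thread definition is only needed here

   void Widget::record(Thread* t) {
     if (t->is_Java_thread()) {         // member call needs the full type, hence the include above
       _last = t;
     }
   }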
>>> >>> StefanK >>> >>>> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >>>> >>>> Thanks, >>>> David >>>> ----- >>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>>>> >>>>>> >>>>>> I thought the inline keyword should be on both declaration and >>>>>> definition (?)? Do we need these functions to be inlined anyway? >>>>>> Can we put them in gcLocker.cpp and remove the .inline file.? It >>>>>> looks like the inline file is not included in most places anyway >>>>>> with this change. >>>>> >>>>> You can have it either at the declaration or the definition. The >>>>> benefit of having it on the declaration is that the compiler will >>>>> complain instead of the linker. This might be performance >>>>> sensitive, so I didn't want to risk moving it to the .cpp file. >>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>>>> >>>>>> >>>>>> nit, can you add // ASSERT to the #endif ? >>>>> >>>>> Sure. >>>>> >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>>>> >>>>>> >>>>>> I like this and the name seems Ok.? This will be a lot easier to >>>>>> find than in GCLocker.? Thank you for this change. >>>>> >>>>> Thanks! I'll make the changes and will send out a new webrev. >>>>> >>>>> StefanK >>>>>> >>>>>> Coleen >>>>>> >>>>>> >>>>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>>>> This patch builds upon the changes in: >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>>>> >>>>>>> >>>>>>> StefanK >>>>>>> >>>>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Please review this patch to separate out the NoSafepointVerifier >>>>>>>> class (and friends) from gcLocker.hpp into its own file. >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>>>> >>>>>>>> After this patch gcLocker.hpp only contains code for the >>>>>>>> GCLocker. I've gone through all usages of the GCCLocker and >>>>>>>> NoSafepointVerifier classes and changed the code to include the >>>>>>>> correct headers. >>>>>>>> >>>>>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>>>>> class is NoSafepointVerifier. However, I also moved the >>>>>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>>>>> and NoAllocVerfier. I think all of these are used to verify that >>>>>>>> we don't do things that will interact badly with safepoints, >>>>>>>> hence the name of the new file. Are others OK with the name? 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> StefanK >>>>>> >>>>> >>> > From stefan.karlsson at oracle.com Fri Mar 23 07:49:51 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 23 Mar 2018 08:49:51 +0100 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <8119a43c-ad5f-01e8-9853-bdf5d499c661@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> <8119a43c-ad5f-01e8-9853-bdf5d499c661@oracle.com> Message-ID: <86bd733e-ce2c-cfed-7762-29340e29d46d@oracle.com> On 2018-03-23 00:45, David Holmes wrote: > Hi Stefan, > > I've looked through everything now and it all seems okay. One additional > comment, and a follow up below ... > > On 23/03/2018 8:18 AM, Stefan Karlsson wrote: >> On 2018-03-22 23:05, David Holmes wrote: >>> On 23/03/2018 7:59 AM, Stefan Karlsson wrote: >>>> On 2018-03-22 22:47, David Holmes wrote: >>>>> Hi Stefan, >>>>> >>>>> Jumping in on one issue ... >>>>> >>>>> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>>>>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>>>>> >>>>>>> >>>>>>> This is really interesting. I never noticed this buried in >>>>>>> gcLocker. I think this should probably go in >>>>>>> interfaceSupport.inline.hpp like all the classes that are only >>>>>>> used there instead, unless you think it should be used on its >>>>>>> own.? I'm not sure about that. >>>>>> >>>>>> I can move that. > > Now I see what this part was about, I'll just comment that which ever > way this ends up, interfaceSupport.* needs some future tidy up. If we > are going to have the plain .hpp file then a lot of what is in the > .inline.hpp can possibly move across (ie basic class definitions). I moved the class to the .inline.hpp, and removed the interfaceSupport.hpp file I added. > >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>>>>> >>>>>>> >>>>>>> This seems strange to have another is_at_safepoint function. I >>>>>>> don't see why you changed this as is_at_safepoint is inlined in >>>>>>> safepoint.hpp. >>>>>> >>>>>> I did that so that we wouldn't have to include safepoint.hpp in >>>>>> gcLocker.hpp, just to be able to do an assert. >>>>> >>>>> Why is that an issue? >>>> >>>> I simply try to minimize includes in our our .hpp files. >>> >>> If it was to avoid some kind of problem I could understand, but just >>> to "minimize" this seems like false economy. You shouldn't need to >>> add forwarding wrapper functions in the local API just because you >>> don't want to include the real API's header file. >> >> Of course I don't "need" to, but safepoint.hpp includes 158 other > > Wow! Is that the transitive closure? I see 7 direct includes and 2 of > them seem unnecessary to start with. I think os.hpp is likely the main > problem and that is easily fixed by moving somethings to the .cpp file. > This header file should not be something that code is reluctant to > include because of the issues you cite. It's primary role should be to > provide the is_at_safepoint function and the related assertions. Another > RFE I guess. 
Yes, I think it would make sense to start cleaning os.hpp at some point. > >> HotSpot headers, and whenever I touch any of those, I'll get mostly >> unnecessary recompiles of files that included gcLocker.hpp. By being >> more restrictive with our includes, we could minimize our compile >> times, and that's worth the tiny overhead of doing this change. >> >> Minimizing our includes in our header files also help reduce the risk >> of getting cyclic dependencies, which do cause problems. >> >> If you don't like this, I could move is_active to gcLocker.inline.hpp, >> and then I'd add #include "gc/shared/gcLocker.inline.hpp" to the ~50 >> files that use is_active. > > That would be a step in the wrong direction. :) Yes. :) StefanK > > Thanks, > David > >> StefanK >> >>> David >>> >>>>> >>>>> And shouldn't gcLocker.cpp be including safepoint.hpp now? >>>> >>>> Thanks, I missed that because the line below used SafepointSynchronize. >>>> >>>> StefanK >>>> >>>>> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>>>>> >>>>>>> >>>>>>> I thought the inline keyword should be on both declaration and >>>>>>> definition (?)? Do we need these functions to be inlined anyway? >>>>>>> Can we put them in gcLocker.cpp and remove the .inline file.? It >>>>>>> looks like the inline file is not included in most places anyway >>>>>>> with this change. >>>>>> >>>>>> You can have it either at the declaration or the definition. The >>>>>> benefit of having it on the declaration is that the compiler will >>>>>> complain instead of the linker. This might be performance >>>>>> sensitive, so I didn't want to risk moving it to the .cpp file. >>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>>>>> >>>>>>> >>>>>>> nit, can you add // ASSERT to the #endif ? >>>>>> >>>>>> Sure. >>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>>>>> >>>>>>> >>>>>>> I like this and the name seems Ok.? This will be a lot easier to >>>>>>> find than in GCLocker.? Thank you for this change. >>>>>> >>>>>> Thanks! I'll make the changes and will send out a new webrev. >>>>>> >>>>>> StefanK >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>> >>>>>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>>>>> This patch builds upon the changes in: >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>>>>> >>>>>>>> >>>>>>>> StefanK >>>>>>>> >>>>>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>>>>> Hi all, >>>>>>>>> >>>>>>>>> Please review this patch to separate out the >>>>>>>>> NoSafepointVerifier class (and friends) from gcLocker.hpp into >>>>>>>>> its own file. >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>>>>> >>>>>>>>> After this patch gcLocker.hpp only contains code for the >>>>>>>>> GCLocker. I've gone through all usages of the GCCLocker and >>>>>>>>> NoSafepointVerifier classes and changed the code to include the >>>>>>>>> correct headers. >>>>>>>>> >>>>>>>>> The new files are names safepointVerifiers.hpp/cpp and the main >>>>>>>>> class is NoSafepointVerifier. 
However, I also moved the >>>>>>>>> NoGCVerifier, which is the parent class of NoSafepointVerifier, >>>>>>>>> and NoAllocVerfier. I think all of these are used to verify >>>>>>>>> that we don't do things that will interact badly with >>>>>>>>> safepoints, hence the name of the new file. Are others OK with >>>>>>>>> the name? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> StefanK >>>>>>> >>>>>> >>>> >> From per.liden at oracle.com Fri Mar 23 08:05:41 2018 From: per.liden at oracle.com (Per Liden) Date: Fri, 23 Mar 2018 09:05:41 +0100 Subject: RFR: 8200113: Make Access load proxys smarter In-Reply-To: <5AB3D3EE.1070209@oracle.com> References: <5AB3BB8B.8090309@oracle.com> <5AB3D3EE.1070209@oracle.com> Message-ID: <159c0462-ad12-c2be-ee7a-8fb255d20861@oracle.com> Looks good! /Per On 03/22/2018 05:03 PM, Erik ?sterlund wrote: > Looping in hotspot-dev. > > /Erik > > On 2018-03-22 15:19, Erik ?sterlund wrote: >> Hi, >> >> Access returns the result of loads through load proxy objects that >> implicity convert themselves to a template inferred type. This is a >> metaprogramming technique used to infer return types in C++. >> >> However, I have heard requests that it would be great if it could be a >> bit smarter and do more than just be assigned to a type. >> >> Example use cases that do not work today without workarounds: >> >> ? oop val = ...; >> ? narrowOop narrow = 0u; >> ? oop *oop_val = &val; >> ? narrowOop *narrow_val = &narrow; >> ? HeapWord *heap_word_val = reinterpret_cast(oop_val); >> >> ? if (val == HeapAccess<>::oop_load_at(val, 16)) {} >> ? if (HeapAccess<>::oop_load_at(val, 16) == val) {} >> ? if (val != HeapAccess<>::oop_load_at(val, 16)) {} >> ? if (HeapAccess<>::oop_load_at(val, 16) != val) {} >> >> ? if (HeapAccess<>::oop_load(oop_val) != val) {} >> ? if (HeapAccess<>::oop_load(heap_word_val) != val) {} >> ? if (RawAccess<>::oop_load(narrow_val) != narrow) {} >> >> ? if (HeapAccess<>::oop_load(oop_val) == val) {} >> ? if (HeapAccess<>::oop_load(heap_word_val) == val) {} >> ? if (RawAccess<>::oop_load(narrow_val) == narrow) {} >> >> ? if (val != HeapAccess<>::oop_load(oop_val)) {} >> ? if (val != HeapAccess<>::oop_load(heap_word_val)) {} >> ? if (narrow != RawAccess<>::oop_load(narrow_val)) {} >> >> ? if (val == HeapAccess<>::oop_load(oop_val)) {} >> ? if (val == HeapAccess<>::oop_load(heap_word_val)) {} >> ? if (narrow == RawAccess<>::oop_load(narrow_val)) {} >> >> ? if ((oop)HeapAccess<>::oop_load(oop_val) == NULL) {} >> >> ? oop tmp = true ? HeapAccess<>::oop_load(oop_val) : val; >> >> Here is a patch that solves this: >> http://cr.openjdk.java.net/~eosterlund/8200113/webrev.00/ >> >> ...and here is the bug ID: >> https://bugs.openjdk.java.net/browse/JDK-8200113 >> >> Thanks, >> /Erik >> > From erik.osterlund at oracle.com Fri Mar 23 08:29:17 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 23 Mar 2018 09:29:17 +0100 Subject: RFR: 8200113: Make Access load proxys smarter In-Reply-To: <159c0462-ad12-c2be-ee7a-8fb255d20861@oracle.com> References: <5AB3BB8B.8090309@oracle.com> <5AB3D3EE.1070209@oracle.com> <159c0462-ad12-c2be-ee7a-8fb255d20861@oracle.com> Message-ID: <326c449f-a513-5961-945b-79cb337f520c@oracle.com> Hi Per, Thanks for the review. /Erik On 2018-03-23 09:05, Per Liden wrote: > Looks good! > > /Per > > On 03/22/2018 05:03 PM, Erik ?sterlund wrote: >> Looping in hotspot-dev. 
>> >> /Erik >> >> On 2018-03-22 15:19, Erik ?sterlund wrote: >>> Hi, >>> >>> Access returns the result of loads through load proxy objects that >>> implicity convert themselves to a template inferred type. This is a >>> metaprogramming technique used to infer return types in C++. >>> >>> However, I have heard requests that it would be great if it could be >>> a bit smarter and do more than just be assigned to a type. >>> >>> Example use cases that do not work today without workarounds: >>> >>> ? oop val = ...; >>> ? narrowOop narrow = 0u; >>> ? oop *oop_val = &val; >>> ? narrowOop *narrow_val = &narrow; >>> ? HeapWord *heap_word_val = reinterpret_cast(oop_val); >>> >>> ? if (val == HeapAccess<>::oop_load_at(val, 16)) {} >>> ? if (HeapAccess<>::oop_load_at(val, 16) == val) {} >>> ? if (val != HeapAccess<>::oop_load_at(val, 16)) {} >>> ? if (HeapAccess<>::oop_load_at(val, 16) != val) {} >>> >>> ? if (HeapAccess<>::oop_load(oop_val) != val) {} >>> ? if (HeapAccess<>::oop_load(heap_word_val) != val) {} >>> ? if (RawAccess<>::oop_load(narrow_val) != narrow) {} >>> >>> ? if (HeapAccess<>::oop_load(oop_val) == val) {} >>> ? if (HeapAccess<>::oop_load(heap_word_val) == val) {} >>> ? if (RawAccess<>::oop_load(narrow_val) == narrow) {} >>> >>> ? if (val != HeapAccess<>::oop_load(oop_val)) {} >>> ? if (val != HeapAccess<>::oop_load(heap_word_val)) {} >>> ? if (narrow != RawAccess<>::oop_load(narrow_val)) {} >>> >>> ? if (val == HeapAccess<>::oop_load(oop_val)) {} >>> ? if (val == HeapAccess<>::oop_load(heap_word_val)) {} >>> ? if (narrow == RawAccess<>::oop_load(narrow_val)) {} >>> >>> ? if ((oop)HeapAccess<>::oop_load(oop_val) == NULL) {} >>> >>> ? oop tmp = true ? HeapAccess<>::oop_load(oop_val) : val; >>> >>> Here is a patch that solves this: >>> http://cr.openjdk.java.net/~eosterlund/8200113/webrev.00/ >>> >>> ...and here is the bug ID: >>> https://bugs.openjdk.java.net/browse/JDK-8200113 >>> >>> Thanks, >>> /Erik >>> >> From coleen.phillimore at oracle.com Fri Mar 23 11:27:46 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 07:27:46 -0400 Subject: RFR: 8200106: Move NoSafepointVerifier out from gcLocker.hpp In-Reply-To: <8119a43c-ad5f-01e8-9853-bdf5d499c661@oracle.com> References: <35fd51fd-57d9-5ac1-bce6-409a57b161e5@oracle.com> <77ef8be5-a5a2-169d-72a2-2561bd0b6702@oracle.com> <0a5a565c-4072-b00a-81cf-efcbe5189e26@oracle.com> <074c7cb7-7c42-ffbf-18c4-b4a9f2255b8b@oracle.com> <1f5f7ebc-cb81-fef7-444e-33c9e2d0f213@oracle.com> <647cc897-b7e6-52ba-cfaf-ea8ba19a39b7@oracle.com> <8119a43c-ad5f-01e8-9853-bdf5d499c661@oracle.com> Message-ID: On 3/22/18 7:45 PM, David Holmes wrote: > Hi Stefan, > > I've looked through everything now and it all seems okay. One > additional comment, and a follow up below ... > > On 23/03/2018 8:18 AM, Stefan Karlsson wrote: >> On 2018-03-22 23:05, David Holmes wrote: >>> On 23/03/2018 7:59 AM, Stefan Karlsson wrote: >>>> On 2018-03-22 22:47, David Holmes wrote: >>>>> Hi Stefan, >>>>> >>>>> Jumping in on one issue ... >>>>> >>>>> On 23/03/2018 4:47 AM, Stefan Karlsson wrote: >>>>>> On 2018-03-22 19:33, coleen.phillimore at oracle.com wrote: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.hpp.html >>>>>>> >>>>>>> >>>>>>> This is really interesting. I never noticed this buried in >>>>>>> gcLocker. 
I think this should probably go in >>>>>>> interfaceSupport.inline.hpp like all the classes that are only >>>>>>> used there instead, unless you think it should be used on its >>>>>>> own.? I'm not sure about that. >>>>>> >>>>>> I can move that. > > Now I see what this part was about, I'll just comment that which ever > way this ends up, interfaceSupport.* needs some future tidy up. If we > are going to have the plain .hpp file then a lot of what is in the > .inline.hpp can possibly move across (ie basic class definitions). The reason that I moved all of interfaceSupport.hpp classes to interfaceSupport.inline.hpp was because all of these classes are needed to be inlined into the JRT macros, and also are needed to be inlined at their call sites.? There were no classes left that didn't have this characteristic that could be included without their inline definitions and were needed as class declarations in header files. Moving the inlined functions (which turned out to be all of them) to leave class declaration shells seems like a waste of time and a not meaningful division.?? If you think differently and have a webrev, I would be interested to see if it turns out better than I expect it would. Thanks, Coleen > >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.cpp.udiff.html >>>>>>> >>>>>>> >>>>>>> This seems strange to have another is_at_safepoint function. I >>>>>>> don't see why you changed this as is_at_safepoint is inlined in >>>>>>> safepoint.hpp. >>>>>> >>>>>> I did that so that we wouldn't have to include safepoint.hpp in >>>>>> gcLocker.hpp, just to be able to do an assert. >>>>> >>>>> Why is that an issue? >>>> >>>> I simply try to minimize includes in our our .hpp files. >>> >>> If it was to avoid some kind of problem I could understand, but just >>> to "minimize" this seems like false economy. You shouldn't need to >>> add forwarding wrapper functions in the local API just because you >>> don't want to include the real API's header file. >> >> Of course I don't "need" to, but safepoint.hpp includes 158 other > > Wow! Is that the transitive closure? I see 7 direct includes and 2 of > them seem unnecessary to start with. I think os.hpp is likely the main > problem and that is easily fixed by moving somethings to the .cpp > file. This header file should not be something that code is reluctant > to include because of the issues you cite. It's primary role should be > to provide the is_at_safepoint function and the related assertions. > Another RFE I guess. > >> HotSpot headers, and whenever I touch any of those, I'll get mostly >> unnecessary recompiles of files that included gcLocker.hpp. By being >> more restrictive with our includes, we could minimize our compile >> times, and that's worth the tiny overhead of doing this change. >> >> Minimizing our includes in our header files also help reduce the risk >> of getting cyclic dependencies, which do cause problems. >> >> If you don't like this, I could move is_active to >> gcLocker.inline.hpp, and then I'd add #include >> "gc/shared/gcLocker.inline.hpp" to the ~50 files that use is_active. > > That would be a step in the wrong direction. :) > > Thanks, > David > >> StefanK >> >>> David >>> >>>>> >>>>> And shouldn't gcLocker.cpp be including safepoint.hpp now? >>>> >>>> Thanks, I missed that because the line below used >>>> SafepointSynchronize. 
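The shape of the trade-off being discussed, as a HotSpot-style sketch (Foo and foo.* are placeholder names, and the two-argument assert is HotSpot's, so this is not a standalone program): the widely included header keeps only a declaration, and the definition that needs the heavyweight header lives in the .cpp file.

// foo.hpp -- no #include "runtime/safepoint.hpp" needed here
class Foo {
  static bool _active;
  static bool is_at_safepoint();     // thin forwarder, defined in foo.cpp
 public:
  static bool is_active() {
    assert(is_at_safepoint(), "only queried at a safepoint");
    return _active;
  }
};

// foo.cpp -- the only file that pays for the safepoint.hpp include
#include "foo.hpp"
#include "runtime/safepoint.hpp"

bool Foo::_active = false;

bool Foo::is_at_safepoint() {
  return SafepointSynchronize::is_at_safepoint();
}

The cost is one out-of-line call per assert in debug builds; the benefit is that editing safepoint.hpp no longer recompiles every file that includes foo.hpp.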
>>>> >>>> StefanK >>>> >>>>> (I guess it gets it indirectly from "runtime/thread.inline.hpp") >>>>> >>>>> Thanks, >>>>> David >>>>> ----- >>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/gc/shared/gcLocker.inline.hpp.udiff.html >>>>>>> >>>>>>> >>>>>>> I thought the inline keyword should be on both declaration and >>>>>>> definition (?)? Do we need these functions to be inlined anyway? >>>>>>> Can we put them in gcLocker.cpp and remove the .inline file.? It >>>>>>> looks like the inline file is not included in most places anyway >>>>>>> with this change. >>>>>> >>>>>> You can have it either at the declaration or the definition. The >>>>>> benefit of having it on the declaration is that the compiler will >>>>>> complain instead of the linker. This might be performance >>>>>> sensitive, so I didn't want to risk moving it to the .cpp file. >>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/interfaceSupport.cpp.udiff.html >>>>>>> >>>>>>> >>>>>>> nit, can you add // ASSERT to the #endif ? >>>>>> >>>>>> Sure. >>>>>> >>>>>>> >>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01/src/hotspot/share/runtime/safepointVerifiers.hpp.html >>>>>>> >>>>>>> >>>>>>> I like this and the name seems Ok.? This will be a lot easier to >>>>>>> find than in GCLocker.? Thank you for this change. >>>>>> >>>>>> Thanks! I'll make the changes and will send out a new webrev. >>>>>> >>>>>> StefanK >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>> >>>>>>> On 3/22/18 12:26 PM, Stefan Karlsson wrote: >>>>>>>> This patch builds upon the changes in: >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030928.html >>>>>>>> >>>>>>>> >>>>>>>> StefanK >>>>>>>> >>>>>>>> On 2018-03-22 17:24, Stefan Karlsson wrote: >>>>>>>>> Hi all, >>>>>>>>> >>>>>>>>> Please review this patch to separate out the >>>>>>>>> NoSafepointVerifier class (and friends) from gcLocker.hpp into >>>>>>>>> its own file. >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~stefank/8200106/webrev.01 >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8200106 >>>>>>>>> >>>>>>>>> After this patch gcLocker.hpp only contains code for the >>>>>>>>> GCLocker. I've gone through all usages of the GCCLocker and >>>>>>>>> NoSafepointVerifier classes and changed the code to include >>>>>>>>> the correct headers. >>>>>>>>> >>>>>>>>> The new files are names safepointVerifiers.hpp/cpp and the >>>>>>>>> main class is NoSafepointVerifier. However, I also moved the >>>>>>>>> NoGCVerifier, which is the parent class of >>>>>>>>> NoSafepointVerifier, and NoAllocVerfier. I think all of these >>>>>>>>> are used to verify that we don't do things that will interact >>>>>>>>> badly with safepoints, hence the name of the new file. Are >>>>>>>>> others OK with the name? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> StefanK >>>>>>> >>>>>> >>>> >> From coleen.phillimore at oracle.com Fri Mar 23 11:55:02 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 07:55:02 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp Message-ID: Summary: We should avoid having global locks buried in cpp files open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8198760 Tested with mach5 tier1 and 2. 
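As a sketch of the pattern the summary line describes (the lock name below is illustrative, not necessarily the identifier used in the webrev), the lock moves from a static member initialized inside one .cpp file to the shared declarations next to every other global lock:

// mutexLocker.hpp
extern Mutex* MetaspaceExpand_lock;   // protects metaspace expansion

// mutexLocker.cpp
Mutex* MetaspaceExpand_lock = NULL;   // created in mutex_init() together with
                                      // the other global locks, so its rank and
                                      // safepoint-check mode are visible in one place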
Thanks, Coleen From thomas.schatzl at oracle.com Fri Mar 23 12:00:36 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 23 Mar 2018 13:00:36 +0100 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: <1521806436.14740.4.camel@oracle.com> Hi Coleen, On Fri, 2018-03-23 at 07:55 -0400, coleen.phillimore at oracle.com wrote: > Summary: We should avoid having global locks buried in cpp files > > open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198760 > > Tested with mach5 tier1 and 2. looks good to me. Thanks, Thomas From coleen.phillimore at oracle.com Fri Mar 23 12:15:40 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 08:15:40 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: <1521806436.14740.4.camel@oracle.com> References: <1521806436.14740.4.camel@oracle.com> Message-ID: <6ffbb9dd-757a-8ecb-4932-520dfce1d524@oracle.com> Thanks Thomas! Coleen On 3/23/18 8:00 AM, Thomas Schatzl wrote: > Hi Coleen, > > On Fri, 2018-03-23 at 07:55 -0400, coleen.phillimore at oracle.com wrote: >> Summary: We should avoid having global locks buried in cpp files >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198760 >> >> Tested with mach5 tier1 and 2. > looks good to me. > > Thanks, > Thomas > From erik.osterlund at oracle.com Fri Mar 23 12:28:53 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 23 Mar 2018 13:28:53 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: <5AB4F305.4090904@oracle.com> Hi Stefan, Looks good. There is one remaining OOP_NOT_NULL in psScavenge.inline.hpp, but I do not need another webrev. Thanks, /Erik On 2018-03-22 20:28, Stefan Karlsson wrote: > Erik found two places where I didn't use OOP_NOT_NULL, where we > previously used _not_null functions. > > diff --git a/src/hotspot/share/gc/cms/parNewGeneration.cpp > b/src/hotspot/share/gc/cms/parNewGeneration.cpp > --- a/src/hotspot/share/gc/cms/parNewGeneration.cpp > +++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp > @@ -734,7 +734,7 @@ > oop new_obj = obj->is_forwarded() > ? obj->forwardee() > : > _g->DefNewGeneration::copy_to_survivor_space(obj); > - RawAccess<>::oop_store(p, new_obj); > + RawAccess::oop_store(p, new_obj); > } > if (_gc_barrier) { > // If p points to a younger generation, mark the card. > diff --git a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > --- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > +++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > @@ -52,7 +52,7 @@ > new_obj = > ((ParNewGeneration*)_g)->copy_to_survivor_space(_par_scan_state, > obj, obj_sz, m); > } > - RawAccess<>::oop_store(p, new_obj); > + RawAccess::oop_store(p, new_obj); > } > } > > StefanK > > On 2018-03-22 17:01, Stefan Karlsson wrote: >> Hi, >> >> This patch needs Erik's change to the LoadProxies: >> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html >> >> >> to build on fastdebug. 
>> >> Here's a rebased patch: >> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ >> >> Thanks, >> StefanK >> >> On 2018-03-21 18:27, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please review this patch to get rid of the oopDesc::load/store >>> functions and to move the oopDesc::encode/decode functions to a new >>> CompressedOops subsystem. >>> >>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >>> https://bugs.openjdk.java.net/browse/JDK-8199946 >>> >>> When the Access API was introduced many of the usages of >>> oopDesc::load_decode_heap_oop, and friends, were replaced by calls >>> to the Access API. However, there are still some usages of these >>> functions, most notably in the GC code. >>> >>> This patch is two-fold: >>> >>> 1) It replaces the oopDesc load and store calls with RawAccess >>> equivalents. >>> >>> 2) It moves the oopDesc encode and decode functions to a new, >>> separate, subsystem called CompressedOops. A future patch could even >>> move all the Universe::_narrow_oop variables over to CompressedOops. >>> >>> The second part has the nice property that it breaks up a circular >>> dependency between oop.inline.hpp and access.inline.hpp. After the >>> change we have: >>> >>> oop.inline.hpp includes: >>> access.inline.hpp >>> compressedOops.inline.hpp >>> >>> access.inline.hpp includes: >>> compressedOops.inline.hpp >>> >>> Thanks, >>> StefanK > > From stefan.karlsson at oracle.com Fri Mar 23 12:27:23 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 23 Mar 2018 13:27:23 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: <5AB4F305.4090904@oracle.com> References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> <5AB4F305.4090904@oracle.com> Message-ID: On 2018-03-23 13:28, Erik ?sterlund wrote: > Hi Stefan, > > Looks good. There is one remaining OOP_NOT_NULL in > psScavenge.inline.hpp, but I do not need another webrev. Fixed. Thanks for the review! StefanK > > Thanks, > /Erik > > On 2018-03-22 20:28, Stefan Karlsson wrote: >> Erik found two places where I didn't use OOP_NOT_NULL, where we >> previously used _not_null functions. >> >> diff --git a/src/hotspot/share/gc/cms/parNewGeneration.cpp >> b/src/hotspot/share/gc/cms/parNewGeneration.cpp >> --- a/src/hotspot/share/gc/cms/parNewGeneration.cpp >> +++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp >> @@ -734,7 +734,7 @@ >> ?????? oop new_obj = obj->is_forwarded() >> ?????????????????????? ? obj->forwardee() >> ?????????????????????? : >> _g->DefNewGeneration::copy_to_survivor_space(obj); >> -????? RawAccess<>::oop_store(p, new_obj); >> +????? RawAccess::oop_store(p, new_obj); >> ???? } >> ???? if (_gc_barrier) { >> ?????? // If p points to a younger generation, mark the card. >> diff --git a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >> b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >> --- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >> +++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >> @@ -52,7 +52,7 @@ >> ?????? new_obj = >> ((ParNewGeneration*)_g)->copy_to_survivor_space(_par_scan_state, >> obj, obj_sz, m); >> ???? } >> -??? RawAccess<>::oop_store(p, new_obj); >> +??? RawAccess::oop_store(p, new_obj); >> ?? } >> ?} >> >> StefanK >> >> On 2018-03-22 17:01, Stefan Karlsson wrote: >>> Hi, >>> >>> This patch needs Erik's change to the LoadProxies: >>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html >>> >>> >>> to build on fastdebug. 
>>> >>> Here's a rebased patch: >>> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-21 18:27, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please review this patch to get rid of the oopDesc::load/store >>>> functions and to move the oopDesc::encode/decode functions to a new >>>> CompressedOops subsystem. >>>> >>>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >>>> https://bugs.openjdk.java.net/browse/JDK-8199946 >>>> >>>> When the Access API was introduced many of the usages of >>>> oopDesc::load_decode_heap_oop, and friends, were replaced by calls >>>> to the Access API. However, there are still some usages of these >>>> functions, most notably in the GC code. >>>> >>>> This patch is two-fold: >>>> >>>> 1) It replaces the oopDesc load and store calls with RawAccess >>>> equivalents. >>>> >>>> 2) It moves the oopDesc encode and decode functions to a new, >>>> separate, subsystem called CompressedOops. A future patch could even >>>> move all the Universe::_narrow_oop variables over to CompressedOops. >>>> >>>> The second part has the nice property that it breaks up a circular >>>> dependency between oop.inline.hpp and access.inline.hpp. After the >>>> change we have: >>>> >>>> oop.inline.hpp includes: >>>> ?? access.inline.hpp >>>> ?? compressedOops.inline.hpp >>>> >>>> access.inline.hpp includes: >>>> ?? compressedOops.inline.hpp >>>> >>>> Thanks, >>>> StefanK >> >> > From lois.foltan at oracle.com Fri Mar 23 12:30:29 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Fri, 23 Mar 2018 08:30:29 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: <392cea90-a0a5-3645-62a2-b314503309dd@oracle.com> Looks good. Lois On 3/23/2018 7:55 AM, coleen.phillimore at oracle.com wrote: > Summary: We should avoid having global locks buried in cpp files > > open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198760 > > Tested with mach5 tier1 and 2. > > Thanks, > Coleen From robin.westberg at oracle.com Fri Mar 23 12:37:39 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Fri, 23 Mar 2018 13:37:39 +0100 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: Hi Kim & Erik, Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). Best regards, Robin > On 22 Mar 2018, at 16:52, Kim Barrett wrote: > >> On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: >> >> Hi all, >> >> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. 
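For context, a minimal illustration of what the define changes (in the patch it is passed by the build system as a -D flag rather than written in each source file):

// Without WIN32_LEAN_AND_MEAN, <windows.h> drags in <winsock.h>, whose
// declarations clash with <winsock2.h>. Defining it first (or passing
// -DWIN32_LEAN_AND_MEAN on the compiler command line) removes the clash and
// trims how much of the Win32 API every translation unit has to parse.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <winsock2.h>   // now safe to include alongside windows.h

int main() {
  WSADATA data;                        // link with ws2_32.lib
  WSAStartup(MAKEWORD(2, 2), &data);
  WSACleanup();
  return 0;
}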
>> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >> Testing: hs-tier1 >> >> Best regards, >> Robin >> >> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files > > I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build > system, so that it applies everywhere. > From george.triantafillou at oracle.com Fri Mar 23 12:51:01 2018 From: george.triantafillou at oracle.com (George Triantafillou) Date: Fri, 23 Mar 2018 08:51:01 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: Hi Coleen, Looks good. -George On 3/23/2018 7:55 AM, coleen.phillimore at oracle.com wrote: > Summary: We should avoid having global locks buried in cpp files > > open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198760 > > Tested with mach5 tier1 and 2. > > Thanks, > Coleen From coleen.phillimore at oracle.com Fri Mar 23 12:53:30 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 08:53:30 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: <48a01ab8-c636-9ec0-1906-f837724657ba@oracle.com> Thanks George and Lois! Coleen On 3/23/18 8:51 AM, George Triantafillou wrote: > Hi Coleen, > > Looks good. > > -George > > On 3/23/2018 7:55 AM, coleen.phillimore at oracle.com wrote: >> Summary: We should avoid having global locks buried in cpp files >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198760 >> >> Tested with mach5 tier1 and 2. >> >> Thanks, >> Coleen > From erik.joelsson at oracle.com Fri Mar 23 13:58:13 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 23 Mar 2018 06:58:13 -0700 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> I think this looks good, but Magnus is currently refactoring the flags handling in configure so better get his input as well. (adding build-dev) /Erik On 2018-03-23 05:37, Robin Westberg wrote: > Hi Kim & Erik, > > Certainly makes sense to define it from the build system, I?ve updated > the patch accordingly: > > Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ > > Incremental: > http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ > > > (Not quite sure if the definition belongs where I put it or a bit > later where most other windows-specific JVM flags are defined, but > seemed reasonable to put it close to where it is defined for the JDK > libraries). > > Best regards, > Robin > >> On 22 Mar 2018, at 16:52, Kim Barrett > > wrote: >> >>> On Mar 22, 2018, at 10:34 AM, Robin Westberg >>> > wrote: >>> >>> Hi all, >>> >>> Please review the following change that defines WIN32_LEAN_AND_MEAN >>> [1] before including windows.h. This marginally improves build >>> times, and makes it possible to include winsock2.h. 
>>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>> >>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>> >>> >> > >>> Testing: hs-tier1 >>> >>> Best regards, >>> Robin >>> >>> [1] >>> https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >>> >> > >> >> I think the addition of the WIN32_LEAN_AND_MEAN definition should be >> done through the build >> system, so that it applies everywhere. >> > From jan.lahoda at oracle.com Fri Mar 23 15:04:51 2018 From: jan.lahoda at oracle.com (Jan Lahoda) Date: Fri, 23 Mar 2018 16:04:51 +0100 Subject: RFR JDK-8200135: test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java is failing after JDK-8194978 Message-ID: <5AB51793.2030300@oracle.com> Hi, In JDK-8194978, the desugaring of try-with-resources in javac has been made more efficient. This also leads to less entries in the exception table. But the test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java test is expecting a particular number of entries in the table, and so fails. The proposal here is to update the expected number of entries in the table. Bug: https://bugs.openjdk.java.net/browse/JDK-8200135 Webrev: http://cr.openjdk.java.net/~jlahoda/8200135/webrev.00/ Thanks, Jan From thomas.stuefe at gmail.com Fri Mar 23 15:43:03 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 23 Mar 2018 15:43:03 +0000 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: Hi Coleen, Looks good. Can we loose the friend declaration for SpaceManager in ClassloaderMetaspace now? Thomas On Fri 23. Mar 2018 at 12:55, wrote: > Summary: We should avoid having global locks buried in cpp files > > open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198760 > > Tested with mach5 tier1 and 2. > > Thanks, > Coleen > From magnus.ihse.bursie at oracle.com Fri Mar 23 15:43:13 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 23 Mar 2018 16:43:13 +0100 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> Message-ID: This looks good to me. /Magnus > 23 mars 2018 kl. 14:58 skrev Erik Joelsson : > > I think this looks good, but Magnus is currently refactoring the flags handling in configure so better get his input as well. (adding build-dev) > > /Erik > > >> On 2018-03-23 05:37, Robin Westberg wrote: >> Hi Kim & Erik, >> >> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >> >> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >> >> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). >> >> Best regards, >> Robin >> >>>>> On 22 Mar 2018, at 16:52, Kim Barrett > wrote: >>>> >>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg > wrote: >>>> >>>> Hi all, >>>> >>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. 
This marginally improves build times, and makes it possible to include winsock2.h. >>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ > >>>> Testing: hs-tier1 >>>> >>>> Best regards, >>>> Robin >>>> >>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files > >>> >>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>> system, so that it applies everywhere. >>> >> > From vladimir.kozlov at oracle.com Fri Mar 23 15:49:35 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 23 Mar 2018 08:49:35 -0700 Subject: RFR JDK-8200135: test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java is failing after JDK-8194978 In-Reply-To: <5AB51793.2030300@oracle.com> References: <5AB51793.2030300@oracle.com> Message-ID: <00540be1-2101-6684-f8a1-e2032a5daf73@oracle.com> Looks good. Thanks, Vladimir On 3/23/18 8:04 AM, Jan Lahoda wrote: > Hi, > > In JDK-8194978, the desugaring of try-with-resources in javac has been made more efficient. This also leads to less > entries in the exception table. But the test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetExceptionTableTest.java test > is expecting a particular number of entries in the table, and so fails. The proposal here is to update the expected > number of entries in the table. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8200135 > Webrev: http://cr.openjdk.java.net/~jlahoda/8200135/webrev.00/ > > Thanks, > ??? Jan From thomas.schatzl at oracle.com Fri Mar 23 15:50:28 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 23 Mar 2018 16:50:28 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> <5AB4F305.4090904@oracle.com> Message-ID: <1521820228.4147.7.camel@oracle.com> Hi, looks good with minor caveats: There is another at psScavenge.inline.hpp 111 RawAccess<>::oop_store(p, new_obj); and metaspaceShared.cpp 1940 RootAccess::oop_store(p, CompressedOops::decode(o)); has been a decode_heap_oop_not_null() ------ Depending on whether you think this fits this change, it would be nice to fix some code in the following locations. Otherwise I can do a separate change. g1OopClosures.inline.hpp: 244 RawAccess<>::oop_store(p, forwardee); This could be a RawAccess::oop_store() g1ParScanThreadState.inline.hpp: 49 RawAccess<>::oop_store(p, obj); Same here. Thanks, Thomas On Fri, 2018-03-23 at 13:27 +0100, Stefan Karlsson wrote: > On 2018-03-23 13:28, Erik ?sterlund wrote: > > Hi Stefan, > > > > Looks good. There is one remaining OOP_NOT_NULL in > > psScavenge.inline.hpp, but I do not need another webrev. > > > Fixed. Thanks for the review! > > StefanK > > > > > Thanks, > > /Erik > > > > On 2018-03-22 20:28, Stefan Karlsson wrote: > > > Erik found two places where I didn't use OOP_NOT_NULL, where we > > > previously used _not_null functions. > > > > > > diff --git a/src/hotspot/share/gc/cms/parNewGeneration.cpp > > > b/src/hotspot/share/gc/cms/parNewGeneration.cpp > > > --- a/src/hotspot/share/gc/cms/parNewGeneration.cpp > > > +++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp > > > @@ -734,7 +734,7 @@ > > > oop new_obj = obj->is_forwarded() > > > ? 
obj->forwardee() > > > : > > > _g->DefNewGeneration::copy_to_survivor_space(obj); > > > - RawAccess<>::oop_store(p, new_obj); > > > + RawAccess::oop_store(p, new_obj); > > > } > > > if (_gc_barrier) { > > > // If p points to a younger generation, mark the card. > > > diff --git a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > > > b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > > > --- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > > > +++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp > > > @@ -52,7 +52,7 @@ > > > new_obj = > > > ((ParNewGeneration*)_g)->copy_to_survivor_space(_par_scan_state, > > > obj, obj_sz, m); > > > } > > > - RawAccess<>::oop_store(p, new_obj); > > > + RawAccess::oop_store(p, new_obj); > > > } > > > } > > > > > > StefanK > > > > > > On 2018-03-22 17:01, Stefan Karlsson wrote: > > > > Hi, > > > > > > > > This patch needs Erik's change to the LoadProxies: > > > > http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-Marc > > > > h/021504.html > > > > > > > > > > > > to build on fastdebug. > > > > > > > > Here's a rebased patch: > > > > http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ > > > > > > > > Thanks, > > > > StefanK > > > > > > > > On 2018-03-21 18:27, Stefan Karlsson wrote: > > > > > Hi all, > > > > > > > > > > Please review this patch to get rid of the > > > > > oopDesc::load/store > > > > > functions and to move the oopDesc::encode/decode functions to > > > > > a new > > > > > CompressedOops subsystem. > > > > > > > > > > http://cr.openjdk.java.net/~stefank/8199946/webrev.01 > > > > > https://bugs.openjdk.java.net/browse/JDK-8199946 > > > > > > > > > > When the Access API was introduced many of the usages of > > > > > oopDesc::load_decode_heap_oop, and friends, were replaced by > > > > > calls > > > > > to the Access API. However, there are still some usages of > > > > > these > > > > > functions, most notably in the GC code. > > > > > > > > > > This patch is two-fold: > > > > > > > > > > 1) It replaces the oopDesc load and store calls with > > > > > RawAccess > > > > > equivalents. > > > > > > > > > > 2) It moves the oopDesc encode and decode functions to a > > > > > new, > > > > > separate, subsystem called CompressedOops. A future patch > > > > > could even > > > > > move all the Universe::_narrow_oop variables over to > > > > > CompressedOops. > > > > > > > > > > The second part has the nice property that it breaks up a > > > > > circular > > > > > dependency between oop.inline.hpp and access.inline.hpp. > > > > > After the > > > > > change we have: > > > > > > > > > > oop.inline.hpp includes: > > > > > access.inline.hpp > > > > > compressedOops.inline.hpp > > > > > > > > > > access.inline.hpp includes: > > > > > compressedOops.inline.hpp > > > > > > > > > > Thanks, > > > > > StefanK > > > > > > From stefan.karlsson at oracle.com Fri Mar 23 15:54:46 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 23 Mar 2018 16:54:46 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: <1521820228.4147.7.camel@oracle.com> References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> <5AB4F305.4090904@oracle.com> <1521820228.4147.7.camel@oracle.com> Message-ID: <4bfad961-8ece-2f93-facc-7e3a6c12228e@oracle.com> Hi Thomas, On 2018-03-23 16:50, Thomas Schatzl wrote: > Hi, > > looks good with minor caveats: > > There is another at > psScavenge.inline.hpp > 111 RawAccess<>::oop_store(p, new_obj); I'll fix. 
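A rough, self-contained sketch of what an OOP_NOT_NULL-style decorator buys (hypothetical simplified types; the real Access machinery is far more involved): the caller states a property as a template argument, and the implementation can statically skip the corresponding handling.

#include <stdint.h>
#include <assert.h>

typedef uint64_t DecoratorSet;
const DecoratorSet DECORATORS_NONE = 0;
const DecoratorSet OOP_NOT_NULL    = 1;

template <DecoratorSet decorators = DECORATORS_NONE>
struct RawStore {
  template <typename T>
  static void oop_store(T* addr, T value) {
    if (decorators & OOP_NOT_NULL) {
      assert(value != T());   // caller promised a non-null value
    } else {
      // would have to handle a possibly-null value here (e.g. when
      // compressing the pointer)
    }
    *addr = value;
  }
};

int main() {
  int slot = 0;
  RawStore<>::oop_store(&slot, 1);              // value may be null
  RawStore<OOP_NOT_NULL>::oop_store(&slot, 2);  // caller guarantees non-null
  return 0;
}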
> > and > metaspaceShared.cpp > 1940 RootAccess::oop_store(p, > CompressedOops::decode(o)); > > has been a decode_heap_oop_not_null() I'll fix. > > ------ > > Depending on whether you think this fits this change, it would be nice > to fix some code in the following locations. Otherwise I can do a > separate change. > > g1OopClosures.inline.hpp: > 244 RawAccess<>::oop_store(p, forwardee); > > This could be a RawAccess::oop_store() > > g1ParScanThreadState.inline.hpp: > 49 RawAccess<>::oop_store(p, obj); > > Same here. I had changes to do this, but I decided to revert those changes for this patch. A new RFE for this would be good. Thanks, StefanK > > Thanks, > Thomas > > > > On Fri, 2018-03-23 at 13:27 +0100, Stefan Karlsson wrote: >> On 2018-03-23 13:28, Erik ?sterlund wrote: >>> Hi Stefan, >>> >>> Looks good. There is one remaining OOP_NOT_NULL in >>> psScavenge.inline.hpp, but I do not need another webrev. >> >> >> Fixed. Thanks for the review! >> >> StefanK >> >>> >>> Thanks, >>> /Erik >>> >>> On 2018-03-22 20:28, Stefan Karlsson wrote: >>>> Erik found two places where I didn't use OOP_NOT_NULL, where we >>>> previously used _not_null functions. >>>> >>>> diff --git a/src/hotspot/share/gc/cms/parNewGeneration.cpp >>>> b/src/hotspot/share/gc/cms/parNewGeneration.cpp >>>> --- a/src/hotspot/share/gc/cms/parNewGeneration.cpp >>>> +++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp >>>> @@ -734,7 +734,7 @@ >>>> oop new_obj = obj->is_forwarded() >>>> ? obj->forwardee() >>>> : >>>> _g->DefNewGeneration::copy_to_survivor_space(obj); >>>> - RawAccess<>::oop_store(p, new_obj); >>>> + RawAccess::oop_store(p, new_obj); >>>> } >>>> if (_gc_barrier) { >>>> // If p points to a younger generation, mark the card. >>>> diff --git a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >>>> b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >>>> --- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >>>> +++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp >>>> @@ -52,7 +52,7 @@ >>>> new_obj = >>>> ((ParNewGeneration*)_g)->copy_to_survivor_space(_par_scan_state, >>>> obj, obj_sz, m); >>>> } >>>> - RawAccess<>::oop_store(p, new_obj); >>>> + RawAccess::oop_store(p, new_obj); >>>> } >>>> } >>>> >>>> StefanK >>>> >>>> On 2018-03-22 17:01, Stefan Karlsson wrote: >>>>> Hi, >>>>> >>>>> This patch needs Erik's change to the LoadProxies: >>>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-Marc >>>>> h/021504.html >>>>> >>>>> >>>>> to build on fastdebug. >>>>> >>>>> Here's a rebased patch: >>>>> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>> On 2018-03-21 18:27, Stefan Karlsson wrote: >>>>>> Hi all, >>>>>> >>>>>> Please review this patch to get rid of the >>>>>> oopDesc::load/store >>>>>> functions and to move the oopDesc::encode/decode functions to >>>>>> a new >>>>>> CompressedOops subsystem. >>>>>> >>>>>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >>>>>> https://bugs.openjdk.java.net/browse/JDK-8199946 >>>>>> >>>>>> When the Access API was introduced many of the usages of >>>>>> oopDesc::load_decode_heap_oop, and friends, were replaced by >>>>>> calls >>>>>> to the Access API. However, there are still some usages of >>>>>> these >>>>>> functions, most notably in the GC code. >>>>>> >>>>>> This patch is two-fold: >>>>>> >>>>>> 1) It replaces the oopDesc load and store calls with >>>>>> RawAccess >>>>>> equivalents. 
>>>>>> >>>>>> 2) It moves the oopDesc encode and decode functions to a >>>>>> new, >>>>>> separate, subsystem called CompressedOops. A future patch >>>>>> could even >>>>>> move all the Universe::_narrow_oop variables over to >>>>>> CompressedOops. >>>>>> >>>>>> The second part has the nice property that it breaks up a >>>>>> circular >>>>>> dependency between oop.inline.hpp and access.inline.hpp. >>>>>> After the >>>>>> change we have: >>>>>> >>>>>> oop.inline.hpp includes: >>>>>> access.inline.hpp >>>>>> compressedOops.inline.hpp >>>>>> >>>>>> access.inline.hpp includes: >>>>>> compressedOops.inline.hpp >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>> >>>> > From coleen.phillimore at oracle.com Fri Mar 23 16:05:04 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 12:05:04 -0400 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/nativeInst_arm_64.cpp.udiff.html http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/relocInfo_arm.cpp.udiff.html I think the include should be oops/compressedOops.inline.hpp in these. Besides my confusion over whether RawAccess<>::oop_load and store decode the oop or not in the gc code, this looks really good to me.? It's nice to encapsulate the compressedOops code now. Thanks, Coleen On 3/22/18 12:01 PM, Stefan Karlsson wrote: > Hi, > > This patch needs Erik's change to the LoadProxies: > http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html > > > to build on fastdebug. > > Here's a rebased patch: > http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ > > Thanks, > StefanK > > On 2018-03-21 18:27, Stefan Karlsson wrote: >> Hi all, >> >> Please review this patch to get rid of the oopDesc::load/store >> functions and to move the oopDesc::encode/decode functions to a new >> CompressedOops subsystem. >> >> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >> https://bugs.openjdk.java.net/browse/JDK-8199946 >> >> When the Access API was introduced many of the usages of >> oopDesc::load_decode_heap_oop, and friends, were replaced by calls to >> the Access API. However, there are still some usages of these >> functions, most notably in the GC code. >> >> This patch is two-fold: >> >> 1) It replaces the oopDesc load and store calls with RawAccess >> equivalents. >> >> 2) It moves the oopDesc encode and decode functions to a new, >> separate, subsystem called CompressedOops. A future patch could even >> move all the Universe::_narrow_oop variables over to CompressedOops. >> >> The second part has the nice property that it breaks up a circular >> dependency between oop.inline.hpp and access.inline.hpp. After the >> change we have: >> >> oop.inline.hpp includes: >> ?? access.inline.hpp >> ?? compressedOops.inline.hpp >> >> access.inline.hpp includes: >> ?? 
compressedOops.inline.hpp >> >> Thanks, >> StefanK From stefan.karlsson at oracle.com Fri Mar 23 16:08:18 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 23 Mar 2018 17:08:18 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: Thanks Coleen, I've uploaded new webrevs with the latest changes: http://cr.openjdk.java.net/~stefank/8199946/webrev.03.delta http://cr.openjdk.java.net/~stefank/8199946/webrev.03 StefanK On 2018-03-23 17:05, coleen.phillimore at oracle.com wrote: > > http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/nativeInst_arm_64.cpp.udiff.html > > http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/relocInfo_arm.cpp.udiff.html > > > I think the include should be oops/compressedOops.inline.hpp in these. > > Besides my confusion over whether RawAccess<>::oop_load and store decode > the oop or not in the gc code, this looks really good to me.? It's nice > to encapsulate the compressedOops code now. > > Thanks, > Coleen > > > > On 3/22/18 12:01 PM, Stefan Karlsson wrote: >> Hi, >> >> This patch needs Erik's change to the LoadProxies: >> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html >> >> >> to build on fastdebug. >> >> Here's a rebased patch: >> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ >> >> Thanks, >> StefanK >> >> On 2018-03-21 18:27, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please review this patch to get rid of the oopDesc::load/store >>> functions and to move the oopDesc::encode/decode functions to a new >>> CompressedOops subsystem. >>> >>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >>> https://bugs.openjdk.java.net/browse/JDK-8199946 >>> >>> When the Access API was introduced many of the usages of >>> oopDesc::load_decode_heap_oop, and friends, were replaced by calls to >>> the Access API. However, there are still some usages of these >>> functions, most notably in the GC code. >>> >>> This patch is two-fold: >>> >>> 1) It replaces the oopDesc load and store calls with RawAccess >>> equivalents. >>> >>> 2) It moves the oopDesc encode and decode functions to a new, >>> separate, subsystem called CompressedOops. A future patch could even >>> move all the Universe::_narrow_oop variables over to CompressedOops. >>> >>> The second part has the nice property that it breaks up a circular >>> dependency between oop.inline.hpp and access.inline.hpp. After the >>> change we have: >>> >>> oop.inline.hpp includes: >>> ?? access.inline.hpp >>> ?? compressedOops.inline.hpp >>> >>> access.inline.hpp includes: >>> ?? compressedOops.inline.hpp >>> >>> Thanks, >>> StefanK > From coleen.phillimore at oracle.com Fri Mar 23 16:14:02 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 12:14:02 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: On 3/23/18 11:43 AM, Thomas St?fe wrote: > Hi Coleen, > > Looks good. Can we loose the friend declaration for SpaceManager in > ClassloaderMetaspace now? No, I don't think so.? SpaceManager has another lock (the CLD::_metaspace_lock passed down) that ClassloaderMetaspace needs. More refactoring is needed! Thanks, Coleen > > Thomas > > On Fri 23. 
Mar 2018 at 12:55, > wrote: > > Summary: We should avoid having global locks buried in cpp files > > open webrev at > http://cr.openjdk.java.net/~coleenp/8198760.01/webrev > > bug link https://bugs.openjdk.java.net/browse/JDK-8198760 > > Tested with mach5 tier1 and 2. > > Thanks, > Coleen > From thomas.schatzl at oracle.com Fri Mar 23 16:20:15 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 23 Mar 2018 17:20:15 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: <1521822015.4147.9.camel@oracle.com> Hi Stefan, On Fri, 2018-03-23 at 17:08 +0100, Stefan Karlsson wrote: > Thanks Coleen, > > I've uploaded new webrevs with the latest changes: > > http://cr.openjdk.java.net/~stefank/8199946/webrev.03.delta > http://cr.openjdk.java.net/~stefank/8199946/webrev.03 > > StefanK > looks good. Thanks. Thomas From erik.osterlund at oracle.com Fri Mar 23 16:44:11 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Fri, 23 Mar 2018 17:44:11 +0100 Subject: RFR: 8199946: Move load/store and encode/decode out of oopDesc In-Reply-To: References: <5a6ea069-5c78-b10f-1cbc-8d38f0560f9d@oracle.com> Message-ID: <5AB52EDA.7050600@oracle.com> Hi Stefan, Looks good. Thanks, /Erik On 2018-03-23 17:08, Stefan Karlsson wrote: > Thanks Coleen, > > I've uploaded new webrevs with the latest changes: > > http://cr.openjdk.java.net/~stefank/8199946/webrev.03.delta > http://cr.openjdk.java.net/~stefank/8199946/webrev.03 > > StefanK > > On 2018-03-23 17:05, coleen.phillimore at oracle.com wrote: >> >> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/nativeInst_arm_64.cpp.udiff.html >> >> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/src/hotspot/cpu/arm/relocInfo_arm.cpp.udiff.html >> >> >> I think the include should be oops/compressedOops.inline.hpp in these. >> >> Besides my confusion over whether RawAccess<>::oop_load and store >> decode the oop or not in the gc code, this looks really good to me. >> It's nice to encapsulate the compressedOops code now. >> >> Thanks, >> Coleen >> >> >> >> On 3/22/18 12:01 PM, Stefan Karlsson wrote: >>> Hi, >>> >>> This patch needs Erik's change to the LoadProxies: >>> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-March/021504.html >>> >>> >>> to build on fastdebug. >>> >>> Here's a rebased patch: >>> http://cr.openjdk.java.net/~stefank/8199946/webrev.02/ >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-21 18:27, Stefan Karlsson wrote: >>>> Hi all, >>>> >>>> Please review this patch to get rid of the oopDesc::load/store >>>> functions and to move the oopDesc::encode/decode functions to a new >>>> CompressedOops subsystem. >>>> >>>> http://cr.openjdk.java.net/~stefank/8199946/webrev.01 >>>> https://bugs.openjdk.java.net/browse/JDK-8199946 >>>> >>>> When the Access API was introduced many of the usages of >>>> oopDesc::load_decode_heap_oop, and friends, were replaced by calls >>>> to the Access API. However, there are still some usages of these >>>> functions, most notably in the GC code. >>>> >>>> This patch is two-fold: >>>> >>>> 1) It replaces the oopDesc load and store calls with RawAccess >>>> equivalents. >>>> >>>> 2) It moves the oopDesc encode and decode functions to a new, >>>> separate, subsystem called CompressedOops. A future patch could >>>> even move all the Universe::_narrow_oop variables over to >>>> CompressedOops. 
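For readers not familiar with compressed oops, a rough standalone illustration of the kind of logic such a subsystem centralizes (heavily simplified; the real code also deals with a zero base, a shift of 0, null, and more):

#include <stdint.h>
#include <assert.h>

typedef uint32_t narrow_t;

struct CompressedPtrs {
  static char* _base;    // heap base
  static int   _shift;   // log2 of object alignment

  static narrow_t encode(void* p) {
    uint64_t offset = (uint64_t)((char*)p - _base);
    return (narrow_t)(offset >> _shift);
  }
  static void* decode(narrow_t n) {
    return _base + ((uint64_t)n << _shift);
  }
};

char* CompressedPtrs::_base  = 0;
int   CompressedPtrs::_shift = 3;   // 8-byte aligned objects

int main() {
  char heap[1024];
  CompressedPtrs::_base = heap;
  void* obj = heap + 64;
  narrow_t n = CompressedPtrs::encode(obj);
  assert(CompressedPtrs::decode(n) == obj);
  return 0;
}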
>>>> >>>> The second part has the nice property that it breaks up a circular >>>> dependency between oop.inline.hpp and access.inline.hpp. After the >>>> change we have: >>>> >>>> oop.inline.hpp includes: >>>> access.inline.hpp >>>> compressedOops.inline.hpp >>>> >>>> access.inline.hpp includes: >>>> compressedOops.inline.hpp >>>> >>>> Thanks, >>>> StefanK >> From thomas.stuefe at gmail.com Fri Mar 23 17:05:44 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 23 Mar 2018 18:05:44 +0100 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: On Fri, Mar 23, 2018 at 5:14 PM, wrote: > > > On 3/23/18 11:43 AM, Thomas St?fe wrote: > > Hi Coleen, > > Looks good. Can we loose the friend declaration for SpaceManager in > ClassloaderMetaspace now? > > > No, I don't think so. SpaceManager has another lock (the > CLD::_metaspace_lock passed down) that ClassloaderMetaspace needs. More > refactoring is needed! > > Thanks, > Coleen > Ok... thanks for checking. More refactoring will come. Patch is fine and certainly an improvement. Have a nice weekend! ..Thomas > > > > Thomas > > On Fri 23. Mar 2018 at 12:55, wrote: > >> Summary: We should avoid having global locks buried in cpp files >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198760.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198760 >> >> Tested with mach5 tier1 and 2. >> >> Thanks, >> Coleen >> > > From magnus.ihse.bursie at oracle.com Fri Mar 23 17:26:18 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 23 Mar 2018 18:26:18 +0100 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1521554055.3029.4.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> Message-ID: On 2018-03-20 14:54, Edward Nevill wrote: > On Tue, 2018-03-20 at 08:39 +0100, Erik Helin wrote: >> Please review the following webrev >>> Bugid: https://bugs.openjdk.java.net/browse/JDK-8199138 >>> Webrev: http://cr.openjdk.java.net/~enevill/8199138/webrev.00 >> 32 # First, filter out everything that doesn't begin with "aarch64-" >> 33 if ! echo $* | grep '^aarch64-\|^riscv64-' >/dev/null ; then >> >> Could you please update the comment on line 32 to say the same thing as >> the code? >> > Hi Eirk, > > Thanks for this. I have updated the webrev with the above comment. > > http://cr.openjdk.java.net/~enevill/8199138/webrev.01 I note that in platform.m4 (sorry I didn't say this earlier), you set the CPU_ARCH to riscv64 as well, and not just riscv. Now I don't know how likely it is that OpenJDK will ever support the 32-bit version of riscv, but it seems like it would make more sense to define the CPU_ARCH as "riscv", and the CPU as "riscv64". It's just a minor thing, if you like it the way it is, keep it. /Magnus > > I have also fixed a problem encountered with the submit-hs repo where the build machine had older headers which did not define EM_RISCV. > > The solution is to define EM_RISCV if not already defined as is done for aarch64. > > IE. > > #ifndef EM_AARCH64 > #define EM_AARCH64 183 /* ARM AARCH64 */ > #endif > +#ifndef EM_RISCV > + #define EM_RISCV 243 > +#endif > > This now passes the submit-hs tests. > > Does this look OK to push now? > > Thanks, > Ed. 
> From leo.korinth at oracle.com Fri Mar 23 17:56:02 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Fri, 23 Mar 2018 18:56:02 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> References: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> Message-ID: <04b0987c-0de4-1e5e-52be-0c603c1fab10@oracle.com> > New webrev (only change is removing module-info.java change): > http://cr.openjdk.java.net/~lkorinth/8176717/01/ > > Thanks, > Leo Hi! (again) I would like more feedback on my change. Thomas St?fe has proposed some changes (including maybe rename os::fopen_retain and/or possibly move it away from os). I feel it is vital to have a hotspot-wide wrapper function for fopen. More so if core libs can not fix the problem due to legacy concerns. In the long run all "open" operations ought to be handled by wrappers --- in my opinion. I need help to get a solution that is to your satisfaction; so please continue the discussion that Thomas St?fe started! When we agree on a solution, I will also create a bug on core-libs to fix the TRUE on inherits system handles (or document the reason for why it ought to be TRUE). Thanks, Leo From aph at redhat.com Fri Mar 23 18:11:42 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 23 Mar 2018 18:11:42 +0000 Subject: RFD: AOT for AArch64 Message-ID: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> How to build it: Check out jdk-hs. Apply http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ to that checkout then build OpenJDK images. Then $ git checkout https://github.com/theRealAph/graal.git $ cd graal $ git branch aarch64-branch-overflows MAKE SURE that JAVA_HOME is pointing at the jdk-hs you just built: $ export JAVA_HOME=/local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/ Follow the "Building Graal" instructions at https://github.com/theRealAph/graal/tree/aarch64-branch-overflows/compiler My graal is in /local/graal/ and my jdk-hs is in /local/jdk-hs/. To run jaotc, I do something like this: /local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc \ -J--module-path=/local/graal/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/local/graal/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar \ -J--upgrade-module-path=/local/graal/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar \ myjar.jar --output myjar.so Note that the "-J" commands point jaotc at the version of Graal you've just built rather than OpenJDK's built-in version of Graal. Enjoy. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Fri Mar 23 18:13:34 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 23 Mar 2018 18:13:34 +0000 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <12f40038-7a4e-724b-f952-7f28032715d7@redhat.com> On 03/23/2018 06:11 PM, Andrew Haley wrote: > $ git checkout https://github.com/theRealAph/graal.git Sorry, "git clone ..." -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Fri Mar 23 18:31:34 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 23 Mar 2018 14:31:34 -0400 Subject: RFR (S) 8198760: Move global lock SpaceManager::_expand_lock to MutexLocker.cpp In-Reply-To: References: Message-ID: <09a3494a-200c-587e-c974-59e32451914c@oracle.com> On 3/23/18 1:05 PM, Thomas St?fe wrote: > > > On Fri, Mar 23, 2018 at 5:14 PM, > wrote: > > > > On 3/23/18 11:43 AM, Thomas St?fe wrote: >> Hi Coleen, >> >> Looks good. Can we loose the friend declaration for SpaceManager >> in ClassloaderMetaspace now? > > No, I don't think so.? SpaceManager has another lock (the > CLD::_metaspace_lock passed down) that ClassloaderMetaspace > needs.?? More refactoring is needed! > > Thanks, > Coleen > > > Ok... thanks for checking. More refactoring will come. Patch is fine > and certainly an improvement. > > Have a nice weekend! You too, thank you! Coleen > > ..Thomas > > > >> >> Thomas >> >> On Fri 23. Mar 2018 at 12:55, > > wrote: >> >> Summary: We should avoid having global locks buried in cpp files >> >> open webrev at >> http://cr.openjdk.java.net/~coleenp/8198760.01/webrev >> >> bug link https://bugs.openjdk.java.net/browse/JDK-8198760 >> >> >> Tested with mach5 tier1 and 2. >> >> Thanks, >> Coleen >> > > From kim.barrett at oracle.com Fri Mar 23 18:42:31 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 23 Mar 2018 14:42:31 -0400 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: <945D9C5D-FF82-46C2-B6EE-40941CD44FD5@oracle.com> > On Mar 23, 2018, at 8:37 AM, Robin Westberg wrote: > > Hi Kim & Erik, > > Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: > > Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ > Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ > > (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). This looks good to me. > Best regards, > Robin > >> On 22 Mar 2018, at 16:52, Kim Barrett wrote: >> >>> On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: >>> >>> Hi all, >>> >>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. >>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>> Testing: hs-tier1 >>> >>> Best regards, >>> Robin >>> >>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >> >> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >> system, so that it applies everywhere. From leo.korinth at oracle.com Fri Mar 23 19:03:12 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Fri, 23 Mar 2018 20:03:12 +0100 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: <04b0987c-0de4-1e5e-52be-0c603c1fab10@oracle.com> References: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> <04b0987c-0de4-1e5e-52be-0c603c1fab10@oracle.com> Message-ID: <4c216bb1-a3bf-e916-d07a-643431faa341@oracle.com> Hi! 
Cross-posting this to both hotspot-dev and hotspot-runtime-dev to get more attention. Sorry. Original mail conversation can be found here: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030637.html I need feedback to know how to continue. Thanks, Leo On 23/03/18 18:56, Leo Korinth wrote: > >> New webrev (only change is removing module-info.java change): >> http://cr.openjdk.java.net/~lkorinth/8176717/01/ >> >> Thanks, >> Leo > > Hi! (again) > > I would like more feedback on my change. Thomas St?fe has proposed some > changes (including maybe rename os::fopen_retain and/or possibly move it > away from os). > > I feel it is vital to have a hotspot-wide wrapper function for fopen. > More so if core libs can not fix the problem due to legacy concerns. In > the long run all "open" operations ought to be handled by wrappers --- > in my opinion. > > I need help to get a solution that is to your satisfaction; so please > continue the discussion that Thomas St?fe started! When we agree on a > solution, I will also create a bug on core-libs to fix the TRUE on > inherits system handles (or document the reason for why it ought to be > TRUE). > > Thanks, > Leo From vladimir.kozlov at oracle.com Fri Mar 23 23:27:30 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 23 Mar 2018 16:27:30 -0700 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> Hi Andrew, This is great!!! I checked Hotspot changes and they seems fine. At least I build and ran AOT tests on X64 Linux. Please, explain changes in JvmFeatures.gmk. Few question about jaotc changes. TODO part in CodeSectionProcessor.java - you not doing it for aarch64. Why? May be we don't need to for x64 too. Originally it was added because when we used libelf tools to geenrate .o files linker did not patch this memory if it is not 0. Code in AOTCompiledClass.java look strange in try block. Why you need it? Why you made allocate_metadata_index virtual in oopRecorder.hpp? May be we need java property to keep object file. I wasn't able to extract patch from aarch64-branch-overflows branch. I am new to git: $ git branch aarch64-branch-overflows * master $ git diff master aarch64-branch-overflows $ I see changes in https://github.com/oracle/graal/compare/master...theRealAph:aarch64-branch-overflows Can you just send a patch instead? And update it to latest Graal master? Thanks, Vladimir On 3/23/18 11:11 AM, Andrew Haley wrote: > How to build it: > > Check out jdk-hs. Apply > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ to that checkout then > build OpenJDK images. > > Then > > $ git checkout https://github.com/theRealAph/graal.git > $ cd graal > $ git branch aarch64-branch-overflows > > MAKE SURE that JAVA_HOME is pointing at the jdk-hs you just built: > > $ export JAVA_HOME=/local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/ > > Follow the "Building Graal" instructions at > https://github.com/theRealAph/graal/tree/aarch64-branch-overflows/compiler > > My graal is in /local/graal/ and my jdk-hs is in /local/jdk-hs/. 
> To run jaotc, I do something like this: > > /local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc \ > -J--module-path=/local/graal/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/local/graal/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar \ > -J--upgrade-module-path=/local/graal/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar \ > myjar.jar --output myjar.so > > Note that the "-J" commands point jaotc at the version of Graal you've > just built rather than OpenJDK's built-in version of Graal. > > Enjoy. > From edward.nevill at gmail.com Sat Mar 24 00:02:42 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sat, 24 Mar 2018 00:02:42 +0000 Subject: RFR: 8200197: Zero fails to build after 8200105 Message-ID: <1521849762.3186.6.camel@gmail.com> Hi, Please review the following webrev BugID: https://bugs.openjdk.java.net/browse/JDK-8200197 Webrev: http://cr.openjdk.java.net/~enevill/8200197/webrev.00 Zero fails to build after 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp with the error messages below The webrev above gets it building again. Thanks, Ed. --- CUT HERE --- In file included from /home/ed/openjdk/hs/src/hotspot/share/runtime/thread.hpp:29:0, from /home/ed/openjdk/hs/src/hotspot/share/utilities/events.hpp:30, from /home/ed/openjdk/hs/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:27: /home/ed/openjdk/hs/src/hotspot/share/gc/shared/threadLocalAllocBuffer.hpp:129:20: warning: inline function 'HeapWord* ThreadLocalAllocBuffer::allocate(size_t)' used but never defined inline HeapWord* allocate(size_t size); ^~~~~~~~ In file included from /home/ed/openjdk/hs/src/hotspot/share/runtime/thread.hpp:29:0, from /home/ed/openjdk/hs/src/hotspot/share/utilities/events.hpp:30, from /home/ed/openjdk/hs/src/hotspot/share/gc/shared/collectedHeap.hpp:35, from /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:27, from /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/gensrc/jvmtifiles/bytecodeInterpreterWithChecks.cpp:3: /home/ed/openjdk/hs/src/hotspot/share/gc/shared/threadLocalAllocBuffer.hpp:129:20: warning: inline function 'HeapWord* ThreadLocalAllocBuffer::allocate(size_t)' used but never defined inline HeapWord* allocate(size_t size); ^~~~~~~~ /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' collect2: error: ld returned 1 exit status lib/CompileJvm.gmk:212: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so' failed gmake[3]: *** [/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so] Error 1 gmake[3]: *** Waiting for unfinished jobs.... 
/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' collect2: error: ld returned 1 exit status lib/CompileGtest.gmk:65: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed gmake[3]: *** [/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so] Error 1 make/Main.gmk:267: recipe for target 'hotspot-zero-libs' failed gmake[2]: *** [hotspot-zero-libs] Error 2 ERROR: Build failed for target 'jdk' in configuration 'linux-x86_64-normal-zero-release' (exit code 2) === Output from failing command(s) repeated here === * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' collect2: error: ld returned 1 exit status * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' collect2: error: ld returned 1 exit status * All command lines available in /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs. === End of repeated output === === Make failed targets repeated here === lib/CompileJvm.gmk:212: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so' failed lib/CompileGtest.gmk:65: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed make/Main.gmk:267: recipe for target 'hotspot-zero-libs' failed === End of repeated output === Hint: Try searching the build log for the name of the first failed target. 
Hint: See doc/building.html#troubleshooting for assistance. /home/ed/openjdk/hs/make/Init.gmk:291: recipe for target 'main' failed make[1]: *** [main] Error 2 /home/ed/openjdk/hs/make/Init.gmk:186: recipe for target 'jdk' failed make: *** [jdk] Error 2 --- CUT HERE --- From aph at redhat.com Sat Mar 24 09:37:49 2018 From: aph at redhat.com (Andrew Haley) Date: Sat, 24 Mar 2018 09:37:49 +0000 Subject: RFD: AOT for AArch64 In-Reply-To: <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> Message-ID: <9b46df31-94a0-02c9-ea87-1e092fe9d3ef@redhat.com> Hi, On 03/23/2018 11:27 PM, Vladimir Kozlov wrote: > This is great!!! Thanks! My pleasure. > I checked the Hotspot changes and they seem fine. At least I built and ran the AOT tests on X64 Linux. > > Please explain the changes in JvmFeatures.gmk. It's just some stuff I'm doing when debugging. I'll pull it out. > A few questions about the jaotc changes. The TODO part in > CodeSectionProcessor.java - you are not doing it for aarch64. Why? Maybe > we don't need it for x64 either. Originally it was added because, > when we used libelf tools to generate .o files, the linker did not patch > this memory if it was not 0. I don't know why it's needed in x86. We don't need to do it and it patches the wrong place for AArch64. > The code in AOTCompiledClass.java looks strange in the try block. Why do you need it? > Why did you make allocate_metadata_index virtual in oopRecorder.hpp? It's not needed anymore. I forgot to take it out. > Maybe we need a java property to keep the object file. Perhaps so. It's impossible to debug without that file. If you suggest an appropriate name I'll add it. > I wasn't able to extract a patch from the aarch64-branch-overflows > branch. I am new to git: > > $ git branch > aarch64-branch-overflows > * master > $ git diff master aarch64-branch-overflows > $ > > I see the changes in > > https://github.com/oracle/graal/compare/master...theRealAph:aarch64-branch-overflows > > Can you just send a patch instead? git diff -r 74bae05fac60c035bf0387e76e4ece6c5b9119a8 should do it. It's in http://cr.openjdk.java.net/~aph/jaotc/graal-aarch64-0.patch > And update it to the latest Graal master? Sure, I'll do that. I'm a bit nervous about that because every time I update the Graal devs have refactored everything. ;-) -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From edward.nevill at gmail.com Sat Mar 24 16:09:42 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sat, 24 Mar 2018 16:09:42 +0000 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <1521907782.18910.9.camel@gmail.com> On Fri, 2018-03-23 at 18:11 +0000, Andrew Haley wrote: > > /local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc \ > -J--module-path=/local/graal/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/local/graal/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar \ > -J--upgrade-module-path=/local/graal/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar \ > myjar.jar --output myjar.so > > Looks promising, but I get as far as here and then get Exception in thread "main" jdk.vm.ci.common.JVMCIError: expected VM constant not found: CardTableModRefBS::dirty_card at jdk.internal.vm.ci/jdk.vm.ci.hotspot.HotSpotVMConfigAccess.getConstant(HotSpotVMConfigAccess.java:84) at jdk.internal.vm.ci/jdk.vm.ci.hotspot.HotSpotVMConfigAccess.getConstant(HotSpotVMConfigAccess.java:98) at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.GraalHotSpotVMConfig.(GraalHotSpotVMConfig.java:557) at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.AOTGraalHotSpotVMConfig.(AOTGraalHotSpotVMConfig.java:34) at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.HotSpotGraalRuntime.(HotSpotGraalRuntime.java:111) at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.HotSpotGraalCompilerFactory.createCompiler(HotSpotGraalCompilerFactory.java:132) at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:144) at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) The exact command I am executing is ed at ubuntu:~/openjdk$ /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc -J--module-path=/home/ed/openjdk/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/home/ed/openjdk/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar -J--upgrade-module-path=/home/ed/openjdk/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar Queens.class --Output Queens.so My JAVA_HOME is ed at ubuntu:~/openjdk$ echo $JAVA_HOME /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk Any ideas? Thanks, Ed. 
From edward.nevill at gmail.com Sat Mar 24 18:54:04 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Sat, 24 Mar 2018 18:54:04 +0000 Subject: RFD: AOT for AArch64 In-Reply-To: <1521907782.18910.9.camel@gmail.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> Message-ID: <1521917644.2929.5.camel@gmail.com> On Sat, 2018-03-24 at 16:09 +0000, Edward Nevill wrote: > On Fri, 2018-03-23 at 18:11 +0000, Andrew Haley wrote: > > > > > Looks promising, but I get as far as here and then get > > Exception in thread "main" jdk.vm.ci.common.JVMCIError: expected VM constant not found: CardTableModRefBS::dirty_card > It looks like your patch http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ didn't apply cleanly to tip of http://hg.openjdk.java.net/jdk/hs I tried updating to changeset: 48711:e321560ac819 user: adinn date: Thu Jan 25 14:47:27 2018 +0000 files: src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp description: 8195859: AArch64: vtableStubs gtest fails after 8174962 Summary: gtest vtableStubs introduced by 8174962 fails on AArch64 with an invalid insn encoding Reviewed-by: duke Which looks like the rev you are basing your patch on based on the rejected patch. It seems to get further, but now I get ed at ubuntu:~/openjdk$ /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc -J--module-path=/home/ed/openjdk/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/home/ed/openjdk/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar -J--upgrade-module-path=/home/ed/openjdk/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar Queens.class --output Queens.so Error: Failed compilation: Queens.main([Ljava/lang/String;)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 at node: 287|LoadConstantIndirectly Error: Failed compilation: Queens.print([I)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 at node: 1273|LoadConstantIndirectly Exception in thread "main" java.lang.NoSuchMethodError: jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.addressOf(Ljdk/vm/ci/code/Register;)V at jdk.aot/jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.getPLTStaticEntryCode(AArch64ELFMacroAssembler.java:68) at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.addCallStub(CodeSectionProcessor.java:139) at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.process(CodeSectionProcessor.java:117) at jdk.aot/jdk.tools.jaotc.DataBuilder.prepareData(DataBuilder.java:142) at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:188) at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) Is this known/expected? What revision of jdk/hs are you building with? I'd like to see this working. Thanks, Ed. From coleen.phillimore at oracle.com Sun Mar 25 15:24:32 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Sun, 25 Mar 2018 11:24:32 -0400 Subject: RFR: 8200197: Zero fails to build after 8200105 In-Reply-To: <1521849762.3186.6.camel@gmail.com> References: <1521849762.3186.6.camel@gmail.com> Message-ID: Looks good and trivial. 
thanks, Coleen On 3/23/18 8:02 PM, Edward Nevill wrote: > Hi, > > Please review the following webrev > > BugID: https://bugs.openjdk.java.net/browse/JDK-8200197 > Webrev: http://cr.openjdk.java.net/~enevill/8200197/webrev.00 > > Zero fails to build after > > 8200105: Remove cyclic dependency between oop.inline.hpp and collectedHeap.inline.hpp > > with the error messages below > > The webrev above gets it building again. > > Thanks, > Ed. > > --- CUT HERE --- > In file included from /home/ed/openjdk/hs/src/hotspot/share/runtime/thread.hpp:29:0, > from /home/ed/openjdk/hs/src/hotspot/share/utilities/events.hpp:30, > from /home/ed/openjdk/hs/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:27: > /home/ed/openjdk/hs/src/hotspot/share/gc/shared/threadLocalAllocBuffer.hpp:129:20: warning: inline function 'HeapWord* ThreadLocalAllocBuffer::allocate(size_t)' used but never defined > inline HeapWord* allocate(size_t size); > ^~~~~~~~ > In file included from /home/ed/openjdk/hs/src/hotspot/share/runtime/thread.hpp:29:0, > from /home/ed/openjdk/hs/src/hotspot/share/utilities/events.hpp:30, > from /home/ed/openjdk/hs/src/hotspot/share/gc/shared/collectedHeap.hpp:35, > from /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:27, > from /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/gensrc/jvmtifiles/bytecodeInterpreterWithChecks.cpp:3: > /home/ed/openjdk/hs/src/hotspot/share/gc/shared/threadLocalAllocBuffer.hpp:129:20: warning: inline function 'HeapWord* ThreadLocalAllocBuffer::allocate(size_t)' used but never defined > inline HeapWord* allocate(size_t size); > ^~~~~~~~ > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > collect2: error: ld returned 1 exit status > lib/CompileJvm.gmk:212: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so' failed > gmake[3]: *** [/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so] Error 1 > gmake[3]: *** Waiting for unfinished jobs.... 
> /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > collect2: error: ld returned 1 exit status > lib/CompileGtest.gmk:65: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed > gmake[3]: *** [/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so] Error 1 > make/Main.gmk:267: recipe for target 'hotspot-zero-libs' failed > gmake[2]: *** [hotspot-zero-libs] Error 2 > > ERROR: Build failed for target 'jdk' in configuration 'linux-x86_64-normal-zero-release' (exit code 2) > > === Output from failing command(s) repeated here === > * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > collect2: error: ld returned 1 exit status > * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreter.o: In function `BytecodeInterpreter::run(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/objs/bytecodeInterpreterWithChecks.o: In function `BytecodeInterpreter::runWithChecks(BytecodeInterpreter*)': > /home/ed/openjdk/hs/src/hotspot/share/interpreter/bytecodeInterpreter.cpp:2151: undefined reference to `ThreadLocalAllocBuffer::allocate(unsigned long)' > collect2: error: ld returned 1 exit status > > * All command lines available in /home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/make-support/failure-logs. 
> === End of repeated output === > > === Make failed targets repeated here === > lib/CompileJvm.gmk:212: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/support/modules_libs/java.base/server/libjvm.so' failed > lib/CompileGtest.gmk:65: recipe for target '/home/ed/openjdk/hs/build/linux-x86_64-normal-zero-release/hotspot/variant-zero/libjvm/gtest/libjvm.so' failed > make/Main.gmk:267: recipe for target 'hotspot-zero-libs' failed > === End of repeated output === > > Hint: Try searching the build log for the name of the first failed target. > Hint: See doc/building.html#troubleshooting for assistance. > > /home/ed/openjdk/hs/make/Init.gmk:291: recipe for target 'main' failed > make[1]: *** [main] Error 2 > /home/ed/openjdk/hs/make/Init.gmk:186: recipe for target 'jdk' failed > make: *** [jdk] Error 2 > --- CUT HERE --- > From david.holmes at oracle.com Sun Mar 25 23:03:59 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 26 Mar 2018 09:03:59 +1000 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: Hi Robin, On 23/03/2018 10:37 PM, Robin Westberg wrote: > Hi Kim & Erik, > > Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: > > Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ > Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ I'm a little unclear on the hotspot changes. If we define WIN32_LEAN_AND_MEAN then certain APIs like sockets are excluded from windows.h so we then have to include the specific header files like winsock2.h - is that right? src/hotspot/share/interpreter/bytecodes.cpp I'm curious about this change. u_short comes from types.h on non-Windows, is it simply missing on Windows (at least once we have WIN32_LEAN_AND_MEAN defined) ? src/hotspot/share/utilities/ostream.cpp 1029 #endif 1030 #if defined(_WINDOWS) Using elif could be marginally faster given the two sets of conditions are mutually exclusive. Thanks, David > (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). > > Best regards, > Robin > >> On 22 Mar 2018, at 16:52, Kim Barrett wrote: >> >>> On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: >>> >>> Hi all, >>> >>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. >>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>> Testing: hs-tier1 >>> >>> Best regards, >>> Robin >>> >>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >> >> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >> system, so that it applies everywhere. 
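A minimal standalone sketch of the behaviour under discussion - this is illustrative only and is not code from Robin's webrev, which sets the macro via a -D flag in the build system rather than in source. With WIN32_LEAN_AND_MEAN defined before windows.h, the legacy winsock.h (along with wincrypt.h, shellapi.h and friends) is no longer pulled in, so a file that needs socket APIs includes winsock2.h itself and the two socket headers no longer clash:

    // Illustration only; not from the patch under review.
    #define WIN32_LEAN_AND_MEAN   // the webrev passes this as -DWIN32_LEAN_AND_MEAN instead
    #include <windows.h>          // now omits winsock.h, wincrypt.h, shellapi.h, ...
    #include <winsock2.h>         // socket APIs become an explicit opt-in
    #pragma comment(lib, "ws2_32.lib")

    int main() {
      WSADATA wsa_data;
      if (WSAStartup(MAKEWORD(2, 2), &wsa_data) != 0) {
        return 1;                 // Winsock initialization failed
      }
      SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      if (s != INVALID_SOCKET) {
        closesocket(s);
      }
      WSACleanup();
      return 0;
    }

Without the macro, including winsock2.h after windows.h produces redefinition errors because windows.h has already dragged in the older winsock.h; that is the conflict the change removes.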
>> > From per.liden at oracle.com Mon Mar 26 08:17:43 2018 From: per.liden at oracle.com (Per Liden) Date: Mon, 26 Mar 2018 10:17:43 +0200 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: References: <8ad5b65b-9596-bbca-1f50-27c81d8d65a1@oracle.com> Message-ID: <09be0f9f-7ba4-dbc2-cef9-27dec98c5140@oracle.com> Hi, On 03/13/2018 07:46 AM, Thomas St?fe wrote: > Hi Leo, > > On Mon, Mar 12, 2018 at 8:40 PM, Leo Korinth wrote: > >> >> >> On 12/03/18 17:48, Thomas St?fe wrote: >> >>> Hi Leo, >>> >>> On Mon, Mar 12, 2018 at 4:54 PM, Leo Korinth >> > wrote: >>> >>> >>> >>> On 12/03/18 15:29, Thomas St?fe wrote: >>> >>> Hi Leo, >>> >>> This seems weird. >>> >>> This would affect numerous open() calls, not just this GC log, I >>> cannot imagine the correct fix is to change all of them. >>> >>> >>> Sorry, I do not understand what you mean with "numerous open()". >>> This fix will only affect logging -- or am I missing something? >>> os::open does roughly what I try to do in os::fopen_retain. >>> >>> >>> Sorry, I spoke unclear. What I meant was I would expect the problem you >>> found in gc logging to be present for every raw open()/fopen()/CreateFile() >>> call in the VM and in the JDK, which are quite a few. I wondered why we do >>> not see more problems like this. >>> >> >> Oh, now I see, I just *assumed* os::open was used everywhere when in fact >> it is only used in two places where the file in addition seems to be closed >> fast afterwards. I should assume less... >> >> I guess leaking file descriptors is not that big of a problem. It seems to >> *have* been a problem on Solaris (which seems to be the reason for >> os::open) but as the unix file descriptors are closed by core-libs before >> exec is mostly a windows problem. >> >> On windows it is also more of a problem because open files are harder to >> rename or remove, and therefore the bug report. >> >> >> >>> >>> >>> In fact, on Posix platforms we close all file descriptors except >>> the Pipe ones before between fork() and exec() - see >>> unix/native/libjava/ childproc.c. >>> >>> >>> Yes, that is why my test case did not fail before the fix on >>> unix-like systems. I do not know why it is not handled in Windows >>> (possibly a bug, possibly to keep old behaviour???), I had planned >>> to ask that as a follow up question later, maybe open a bug report >>> if it was not for keeping old behaviour. Even though childproc.c >>> does close the file handler, I think it is much nicer to open them >>> with FD_CLOEXEC (in addition to let childproc.c close it). os::open >>> does so, and I would like to handle ::fopen the same way as ::open >>> with a proxy call that ensures that the VM process will retain the >>> file descriptor it opens (in HotSpot at least). >>> >>> Such code is missing on Windows - see >>> windows/native/libjava/ProcessImpl_md.c . There, we do not have >>> fork/exec, but CreateProcess(), and whether we inherit handles >>> or not is controlled via an argument to CreateProcess(). But >>> that flag is TRUE, so child processes inherit handles. 
>>> >>> 331 if (!CreateProcessW( >>> 332 NULL, /* executable name */ >>> 333 (LPWSTR)pcmd, /* command line */ >>> 334 NULL, /* process security >>> attribute */ >>> 335 NULL, /* thread security >>> attribute */ >>> 336 TRUE, /* inherits system >>> handles */ <<<<<< >>> 337 processFlag, /* selected based >>> on exe type */ >>> 338 (LPVOID)penvBlock,/* environment block >>> */ >>> 339 (LPCWSTR)pdir, /* change to the >>> new current directory */ >>> 340 &si, /* (in) startup >>> information */ >>> 341 &pi)) /* (out) process >>> information */ >>> 342 { >>> 343 win32Error(env, L"CreateProcess"); >>> 344 } >>> >>> Maybe this is the real error we should fix? Make Windows >>> Runtime.exec behave like the Posix variant by closing all file >>> descriptors upon CreateProcess > >>> (This seems more of a core-libs question.) >>> >>> >>> I think it is both a core-libs question and a hotspot question. I >>> firmly believe we should retain file descriptors with help of >>> FD_CLOEXEC and its variants in HotSpot. I am unsure (and have no >>> opinion) what to do in core-libs, maybe there is a deeper thought >>> behind line 336? >>> >>> Some reasons for this: >>> >>> - if a process is forked using JNI, it would still be good if the >>> hotspot descriptors would not leak. >>> >>> - if (I have no idea if this is true) the behaviour in core-libs can >>> not be changed because the behaviour is already wildly (ab)used, >>> this is still a correct fix. Remember this will only close file >>> descriptors opened by HotSpot code, and at the moment only logging >>> code. >>> >>> - this will fix the issue in the bug report, and give time for >>> core-libs to consider what is correct (and what can be changed >>> without breaking applications). >>> >>> Thanks, >>> Leo >>> >>> >>> yes, you convinced me. >>> >>> 1 We should fix raw open() calls, because if native code forks via a >>> different code paths than java Runtime.exec(), we run into the same >>> problem. Your patch fixes one instance of the problem. >>> >> Yes, I agree. I now understand that ::open() is much more used in the code >> base. >> >>> >>> 2 And we should fix Windows Runtime.exec() to the same behaviour as on >>> Posix. I can see this being backward-compatible-problematic, but it >>> certainly would be the right thing to do. Would love to know what core-libs >>> says. >>> >> >> Possibly (I am intentionally dodging this question) >> >>> >>> Okay, about your change: I dislike that we add a new function, especially >>> a first class open function, to the os namespace. How about this instead: >>> since we know that os::open() does the right thing on all platforms, why >>> can we not just use os::open() instead? Afterwards call fdopen() to wrap a >>> FILE structure around it, respectively call "FILE* os::open(int fd, const >>> char* mode)" , which seems to be just a wrapped fdopen(). That way you can >>> get what you want with less change and without introducing a new API. >>> >> >> Yes, that might be a better solution. I did consider it, but was afraid >> that, for example the (significant) "w"/"w+" differences in semantics would >> matter. or that: >> >> os::open(os::open(path, WINDOWS_ONLY(_)O_CREAT|WINDOWS_ONLY(_)O_TRUNC, >> flags, mode) mode2) >> >> ...or something similar for fopen(path, "w"), would not be exactly the >> same. For example it would set the file to binary mode on windows. Maybe it >> is exactly the same otherwise? For me, the equality in semantics are not >> obvious. 
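For readers following along, a rough sketch of the mechanism being debated - the name fopen_no_inherit and the layout here are invented and do not match the os::fopen_retain function in the webrev. Leo's original posting (quoted further down) describes the three cases: the Windows CRT accepts an "N" mode flag that makes the underlying handle non-inheritable, glibc and FreeBSD accept "e" to set O_CLOEXEC atomically at open time, and other POSIX platforms can fall back to fcntl(F_SETFD, FD_CLOEXEC) after the fact, at the cost of the small race window mentioned in that posting:

    // Hypothetical sketch; names and structure do not match the webrev.
    #include <stdio.h>
    #ifndef _WIN32
    #include <fcntl.h>
    #endif

    // Open a file whose descriptor/handle should not leak into child processes.
    static FILE* fopen_no_inherit(const char* path, const char* mode) {
    #if defined(_WIN32)
      char m[8];
      snprintf(m, sizeof(m), "%sN", mode);   // "N": non-inheritable handle (MSVC CRT)
      return fopen(path, m);
    #elif defined(__linux__) || defined(__FreeBSD__)
      char m[8];
      snprintf(m, sizeof(m), "%se", mode);   // "e": O_CLOEXEC, applied atomically at open
      return fopen(path, m);
    #else
      FILE* f = fopen(path, mode);           // portable fallback: set FD_CLOEXEC afterwards
      if (f != NULL) {
        fcntl(fileno(f), F_SETFD, FD_CLOEXEC);
      }
      return f;
    #endif
    }

The "e"/"N" variants are preferred where available because the close-on-exec property is established by the open itself, leaving no window in which a concurrently forked child could inherit the descriptor.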
>> >> Also, now when I realized there is only two users of os::open I am less >> sure it always does the right thing... >> >> I prefer the os::fopen_retain way. >> >> > I agree with you on the proposed fix: to open the file - at least on > windows - with the inherit flag turned off. I still disagree with you about > the way this is done. I am not a bit fan on "one trick APIs" being dropped > into the os namespace for one singular purpose - I think we had recently a > similar discussion about an snprintf variant specific for logging only. > > Just counting are a couple of variants I would prefer: > > 1) keep the API locally to logging, do not make it global. It is logging > specific, after all. > > 2) Or even easier, just do this (logFileOutput.cpp): > > const char* const LogFileOutput::FileOpenMode = WINDOWS_ONLY("aN") > NOT_WINDOWS("a"); > > that would fix windows. The other platforms do not have the problem if > spawning via Runtime.exec(), and the problem of > native-forking-and-handle-leaking is, while possible, rather theoretical. > > 2) If you really want a new global API, rename the thing to just > "os::fopen()". Because after all you want to wrap a generic fopen() and > forbid handle inheritance, yes? This is the same thing the os::open() > sister function does too, so if you think you need that, give it a first > class name :) And we could use tests then too (I think we have gtests for > os::open()). In that case I also dislike the many ifdefs, so if you keep > the function in its current form, I'd prefer it fanned out for different > platforms, like os::open() does it. Just offering my 2c on this. I would go with what Thomas suggests in bullet 2 above. /Per > > Just my 5c, and tastes differ, so I'll wait what others say. I'll cc Markus > as the UL owner. > > Oh, I also think the bug desciption is a bit misleading, since this is > about the UL file handle in general, not only gc log. And may it make sense > to post this in hotspot-runtime, not hotspot-dev? > > Thanks and Best Regards, Thomas > > > >> Thanks, >> Leo >> >> >>> Kind Regards, Thomas >>> >>> >>> >>> Kind Regards, Thomas >>> >>> >>> On Mon, Mar 12, 2018 at 2:20 PM, Leo Korinth >>> >>> >> >>> >>> wrote: >>> >>> Hi, >>> >>> This fix is for all operating systems though the problem >>> only seams >>> to appear on windows. >>> >>> I am creating a proxy function for fopen (os::fopen_retain) >>> that >>> appends the non-standard "e" mode for linux and bsds. For >>> windows >>> the "N" mode is used. For other operating systems, I assume >>> that I >>> can use fcntl F_SETFD FD_CLOEXEC. I think this will work >>> for AIX, >>> Solaris and other operating systems that do not support the >>> "e" >>> flag. Feedback otherwise please! >>> >>> The reason that I use the mode "e" and not only fcntl for >>> linux and >>> bsds is threefold. First, I still need to use mode flags on >>> windows >>> as it does not support fcntl. Second, I probably save a >>> system call. >>> Third, the change will be applied directly, and there will >>> be no >>> point in time (between system calls) when the process can >>> leak the >>> file descriptor, so it is safer. >>> >>> The test case forks three VMs in a row. By doing so we know >>> that the >>> second VM is opened with a specific log file. The third VM >>> should >>> have less open file descriptors (as it is does not use >>> logging) >>> which is checked using a UnixOperatingSystemMXBean. 
This is >>> not >>> possible on windows, so on windows I try to rename the >>> file, which >>> will not work if the file is opened (the actual reason the >>> bug was >>> opened). >>> >>> The added test case shows that the bug fix closes the log >>> file on >>> windows. The VM on other operating systems closed the log >>> file even >>> before the fix. >>> >>> Maybe the test case should be moved to a different path? >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8176717 >>> >>> >> > >>> https://bugs.openjdk.java.net/browse/JDK-8176809 >>> >>> >> > >>> >>> Webrev: >>> http://cr.openjdk.java.net/~lkorinth/8176717/00/ >>> >>> >> > >>> >>> Testing: >>> hs-tier1, hs-tier2 and TestInheritFD.java >>> (on 64-bit linux, solaris, windows and mac) >>> >>> Thanks, >>> Leo >>> >>> >>> >>> From erik.osterlund at oracle.com Mon Mar 26 09:31:37 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Mar 2018 11:31:37 +0200 Subject: RFR: 8199417: Modularize interpreter GC barriers Message-ID: <5AB8BDF9.4050106@oracle.com> Hi, The GC barriers for the interpreter are not as modular as they could be. They currently use switch statements to check which GC barrier set is being used, and call this or that barrier based on that, in a way that assumes GCs only use write barriers. This patch modularizes this by generating accesses in the interpreter with declarative semantics. Accesses to the heap may now use store_at and load_at functions of the BarrierSetAssembler, passing along appropriate arguments and decorators. Each concrete BarrierSetAssembler can override the access completely or sprinkle some appropriate GC barriers as necessary. Big thanks go to Martin Doerr and Roman Kennke, who helped plugging this into S390, PPC and AArch64 respectively. 
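To make the shape of this concrete for readers of the archive, here is an illustrative sketch; the class name, helper names and parameter lists are invented and do not match the webrev. The idea is that the interpreter only calls store_at/load_at with decorators, and a concrete BarrierSetAssembler decides what machine code that becomes, typically by letting the base class emit the plain store and wrapping it in its collector's barriers:

    // Illustrative sketch only; signatures and names do not match the webrev.
    // Assumes the usual HotSpot assembler types (MacroAssembler, Register,
    // Address, BasicType, DecoratorSet) from the relevant headers.
    class SomeGCBarrierSetAssembler : public BarrierSetAssembler {
    public:
      virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
                            Address dst, Register val, Register tmp1, Register tmp2) {
        bool is_oop = (type == T_OBJECT || type == T_ARRAY);
        if (is_oop) {
          gen_pre_write_barrier(masm, dst, tmp1);        // e.g. SATB enqueue of the old value
        }
        // The base class emits the plain (possibly compressed-oop) store.
        BarrierSetAssembler::store_at(masm, decorators, type, dst, val, tmp1, tmp2);
        if (is_oop && val != noreg) {                    // storing NULL may not need a post barrier
          gen_post_write_barrier(masm, dst, val, tmp2);  // e.g. card mark / remembered set update
        }
      }

    private:
      // Invented helpers standing in for collector-specific barrier emission.
      void gen_pre_write_barrier(MacroAssembler* masm, Address dst, Register tmp);
      void gen_post_write_barrier(MacroAssembler* masm, Address dst, Register val, Register tmp);
    };

A collector that needs read barriers overrides load_at in the same way, which is what removes the "GCs only use write barriers" assumption from the interpreter itself.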
Webrev: http://cr.openjdk.java.net/~eosterlund/8199417/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8199417 Thanks, /Erik From aph at redhat.com Mon Mar 26 10:00:46 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 26 Mar 2018 11:00:46 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <1521907782.18910.9.camel@gmail.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> Message-ID: <6d7fa270-9e1a-7b25-1c40-c729075bb51f@redhat.com> On 03/24/2018 04:09 PM, Edward Nevill wrote: > On Fri, 2018-03-23 at 18:11 +0000, Andrew Haley wrote: >> >> /local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc \ >> -J--module-path=/local/graal/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/local/graal/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar \ >> -J--upgrade-module-path=/local/graal/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar \ >> myjar.jar --output myjar.so >> >> > > Looks promising, but I get as far as here and then get > > Exception in thread "main" jdk.vm.ci.common.JVMCIError: expected VM constant not found: CardTableModRefBS::dirty_card > at jdk.internal.vm.ci/jdk.vm.ci.hotspot.HotSpotVMConfigAccess.getConstant(HotSpotVMConfigAccess.java:84) > at jdk.internal.vm.ci/jdk.vm.ci.hotspot.HotSpotVMConfigAccess.getConstant(HotSpotVMConfigAccess.java:98) > at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.GraalHotSpotVMConfig.(GraalHotSpotVMConfig.java:557) > at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.AOTGraalHotSpotVMConfig.(AOTGraalHotSpotVMConfig.java:34) > at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.HotSpotGraalRuntime.(HotSpotGraalRuntime.java:111) > at jdk.internal.vm.compiler/org.graalvm.compiler.hotspot.HotSpotGraalCompilerFactory.createCompiler(HotSpotGraalCompilerFactory.java:132) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:144) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) > at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) > > The exact command I am executing is > > ed at ubuntu:~/openjdk$ /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc -J--module-path=/home/ed/openjdk/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/home/ed/openjdk/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar -J--upgrade-module-path=/home/ed/openjdk/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar Queens.class --Output Queens.so > > My JAVA_HOME is > > ed at ubuntu:~/openjdk$ echo $JAVA_HOME > /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk > > Any ideas? Thanks, You must have built Graal with an old JDK. This is the line in question: public final byte dirtyCardValue = isJDK8 ? getFieldValue("CompilerToVM::Data::dirty_card", Byte.class, "int") : getConstant("CardTableModRefBS::dirty_card", Byte.class); -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Mon Mar 26 10:10:57 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 26 Mar 2018 11:10:57 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <6d7fa270-9e1a-7b25-1c40-c729075bb51f@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <6d7fa270-9e1a-7b25-1c40-c729075bb51f@redhat.com> Message-ID: On 03/26/2018 11:00 AM, Andrew Haley wrote: > You must have built Graal with an old JDK. This is the line in question: > > public final byte dirtyCardValue = isJDK8 ? 
getFieldValue("CompilerToVM::Data::dirty_card", Byte.class, "int") : getConstant("CardTableModRefBS::dirty_card", Byte.class); Hold on, that's not right. I'm doing some more digging now. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From adinn at redhat.com Mon Mar 26 10:11:38 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 26 Mar 2018 11:11:38 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> On 23/03/18 18:11, Andrew Haley wrote: > How to build it: > > Check out jdk-hs. Apply > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ to that checkout then > build OpenJDK images. I am still building and testing this. However, I have two initial comments: 1) The hotspot change set includes two changes which are redundant against the latest hs. i) make/autoconf/generated-configure.sh : all modifications to this file are redundant since this file has now been removed from the source tree in HEAD ii) src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp:1050 : extra argument 0 in the call to form_address is already present in the version in HEAD 2) I merged your aarch64-branch-overflows changes into my own aarch64-branch-overflows branch cloned from the latest graal master repo. There were no merge conflicts. So, if you pull the latest master changes into your own local master branch: $ git checkout master; git pull master and then rebase your local aarch64-branch-overflows branch $ git checkout aarch64-branch-overflows ; git rebase master that should reapply your changes on top of the current master without any merge conflicts. Of course, it may not work any more (I am about to test that now :-). regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From aph at redhat.com Mon Mar 26 10:13:57 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 26 Mar 2018 11:13:57 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> Message-ID: <5208812a-d380-4d4a-45bf-d2239db509ed@redhat.com> On 03/26/2018 11:11 AM, Andrew Dinn wrote: > $ git checkout master; git pull master What string did you use for ? -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From adinn at redhat.com Mon Mar 26 10:20:45 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 26 Mar 2018 11:20:45 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <5208812a-d380-4d4a-45bf-d2239db509ed@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <5208812a-d380-4d4a-45bf-d2239db509ed@redhat.com> Message-ID: <643c0f23-9f79-9e07-ad45-13c5cdf1d00d@redhat.com> On 26/03/18 11:13, Andrew Haley wrote: > On 03/26/2018 11:11 AM, Andrew Dinn wrote: >> $ git checkout master; git pull master > > What string did you use for ? 
In my case it is upstream :-) That's because in my graal repo I have the origin repo set to my own github repo and the upstream repo set to the Oracle repo: [adinn at sputstik compiler]$ git remote -v origin git at github.com:adinn/graal.git (fetch) origin git at github.com:adinn/graal.git (push) upstream https://github.com/graalvm/graal.git (fetch) upstream https://github.com/graalvm/graal.git (push) origin gets set by default when you clone the repo. I set upstream explicitly using: $ git remote add upstream https://github.com/graalvm/graal.git regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From shade at redhat.com Mon Mar 26 10:36:31 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 26 Mar 2018 12:36:31 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) Message-ID: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> Bug: https://bugs.openjdk.java.net/browse/JDK-8200232 Webrev: http://cr.openjdk.java.net/~shade/8200232/webrev.01/ Testing: x86_32 build I only saw failures on x86_32, but maybe PPC folks see the breakage in other configs too? Thanks, -Aleksey From aph at redhat.com Mon Mar 26 11:08:04 2018 From: aph at redhat.com (Andrew Haley) Date: Mon, 26 Mar 2018 12:08:04 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> Message-ID: <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> On 03/26/2018 11:11 AM, Andrew Dinn wrote: > Of course, it may not work any more (I am about to > test that now :-). OK. I'll wait for that. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From stefan.karlsson at oracle.com Mon Mar 26 12:21:51 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 26 Mar 2018 14:21:51 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> Message-ID: <5cefd7ee-1917-c5d3-3bb6-c9f5ff44a902@oracle.com> Looks good. StefanK On 2018-03-26 12:36, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200232 > > Webrev: > http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > > Testing: x86_32 build > > I only saw failures on x86_32, but maybe PPC folks see the breakage in other configs too? > > Thanks, > -Aleksey > From rkennke at redhat.com Mon Mar 26 12:39:55 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 26 Mar 2018 14:39:55 +0200 Subject: RFR: 8199417: Modularize interpreter GC barriers In-Reply-To: <5AB8BDF9.4050106@oracle.com> References: <5AB8BDF9.4050106@oracle.com> Message-ID: <46f52e97-47e5-9c37-ddd8-8380e97f5ac1@redhat.com> Hi Erik, I like it. As far as I can see, the patch currently only handles oop stores and loads. Are you planning to add support for primitive access in the near future? Or else I can probably do that, or help with it (I know where to hook this up, based on Shenandoah experience). Thanks, Roman > The GC barriers for the interpreter are not as modular as they could be. 
> They currently use switch statements to check which GC barrier set is > being used, and call this or that barrier based on that, in a way that > assumes GCs only use write barriers. > > This patch modularizes this by generating accesses in the interpreter > with declarative semantics. Accesses to the heap may now use store_at > and load_at functions of the BarrierSetAssembler, passing along > appropriate arguments and decorators. Each concrete BarrierSetAssembler > can override the access completely or sprinkle some appropriate GC > barriers as necessary. > > Big thanks go to Martin Doerr and Roman Kennke, who helped plugging this > into S390, PPC and AArch64 respectively. > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8199417/webrev.00/ > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8199417 From thomas.schatzl at oracle.com Mon Mar 26 12:48:01 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 26 Mar 2018 14:48:01 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> Message-ID: <1522068481.8723.8.camel@oracle.com> Hi, On Mon, 2018-03-26 at 12:36 +0200, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200232 > > Webrev: > http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > > Testing: x86_32 build > > I only saw failures on x86_32, but maybe PPC folks see the breakage > in other configs too? looks good, but maybe wait for the ppc folks to comment too. Thanks, Thomas From david.holmes at oracle.com Mon Mar 26 12:54:08 2018 From: david.holmes at oracle.com (David Holmes) Date: Mon, 26 Mar 2018 22:54:08 +1000 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> Message-ID: <50a17b9e-91d2-3a87-a649-1592380348c7@oracle.com> Seems perfectly reasonable. These include file changes are really error prone. :( Thanks, David On 26/03/2018 8:36 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200232 > > Webrev: > http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > > Testing: x86_32 build > > I only saw failures on x86_32, but maybe PPC folks see the breakage in other configs too? > > Thanks, > -Aleksey > From erik.osterlund at oracle.com Mon Mar 26 12:59:23 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Mar 2018 14:59:23 +0200 Subject: RFR: 8199417: Modularize interpreter GC barriers In-Reply-To: <46f52e97-47e5-9c37-ddd8-8380e97f5ac1@redhat.com> References: <5AB8BDF9.4050106@oracle.com> <46f52e97-47e5-9c37-ddd8-8380e97f5ac1@redhat.com> Message-ID: <5AB8EEAB.900@oracle.com> Hi Roman, Thank you. As for primitive accesses, I think you have the best knowledge about where these hooks must be in the code base, based on your experience with Shenandoah. Therefore, I think it is probably best if you add those hooks. Thanks, /Erik On 2018-03-26 14:39, Roman Kennke wrote: > Hi Erik, > > I like it. > > As far as I can see, the patch currently only handles oop stores and > loads. Are you planning to add support for primitive access in the near > future? Or else I can probably do that, or help with it (I know where to > hook this up, based on Shenandoah experience). 
> > Thanks, Roman > > >> The GC barriers for the interpreter are not as modular as they could be. >> They currently use switch statements to check which GC barrier set is >> being used, and call this or that barrier based on that, in a way that >> assumes GCs only use write barriers. >> >> This patch modularizes this by generating accesses in the interpreter >> with declarative semantics. Accesses to the heap may now use store_at >> and load_at functions of the BarrierSetAssembler, passing along >> appropriate arguments and decorators. Each concrete BarrierSetAssembler >> can override the access completely or sprinkle some appropriate GC >> barriers as necessary. >> >> Big thanks go to Martin Doerr and Roman Kennke, who helped plugging this >> into S390, PPC and AArch64 respectively. >> >> Webrev: >> http://cr.openjdk.java.net/~eosterlund/8199417/webrev.00/ >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8199417 > > > From rkennke at redhat.com Mon Mar 26 13:10:16 2018 From: rkennke at redhat.com (Roman Kennke) Date: Mon, 26 Mar 2018 15:10:16 +0200 Subject: RFR: 8199417: Modularize interpreter GC barriers In-Reply-To: <5AB8EEAB.900@oracle.com> References: <5AB8BDF9.4050106@oracle.com> <46f52e97-47e5-9c37-ddd8-8380e97f5ac1@redhat.com> <5AB8EEAB.900@oracle.com> Message-ID: <9b89afac-368c-4a05-a259-1a58e378e1da@redhat.com> Ok, will do that. Your changes look ok to me (x86 and aarch64). Thanks, Roman > Thank you. As for primitive accesses, I think you have the best > knowledge about where these hooks must be in the code base, based on > your experience with Shenandoah. Therefore, I think it is probably best > if you add those hooks. > > Thanks, > /Erik > > On 2018-03-26 14:39, Roman Kennke wrote: >> Hi Erik, >> >> I like it. >> >> As far as I can see, the patch currently only handles oop stores and >> loads. Are you planning to add support for primitive access in the near >> future? Or else I can probably do that, or help with it (I know where to >> hook this up, based on Shenandoah experience). >> >> Thanks, Roman >> >> >>> The GC barriers for the interpreter are not as modular as they could be. >>> They currently use switch statements to check which GC barrier set is >>> being used, and call this or that barrier based on that, in a way that >>> assumes GCs only use write barriers. >>> >>> This patch modularizes this by generating accesses in the interpreter >>> with declarative semantics. Accesses to the heap may now use store_at >>> and load_at functions of the BarrierSetAssembler, passing along >>> appropriate arguments and decorators. Each concrete BarrierSetAssembler >>> can override the access completely or sprinkle some appropriate GC >>> barriers as necessary. >>> >>> Big thanks go to Martin Doerr and Roman Kennke, who helped plugging this >>> into S390, PPC and AArch64 respectively. >>> >>> Webrev: >>> http://cr.openjdk.java.net/~eosterlund/8199417/webrev.00/ >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8199417 >> >> >> > From glaubitz at physik.fu-berlin.de Mon Mar 26 13:55:46 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Mon, 26 Mar 2018 22:55:46 +0900 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft Message-ID: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> Hi! Zero currently fails to build from source on linux-ia64 due to some remaining ia64-specific cruft which was previously part of the native ia64 port. 
The error - shown here for jdk7u - is the same on all versions of OpenJDK after 7u: g++-6 -DLINUX -D_GNU_SOURCE -DCC_INTERP -DZERO -DIA64 -DZERO_LIBARCH=\"ia64\" -DPRODUCT -I. -I/<>/build/openjdk-boot/hotspot/src/share/vm/prims -I/<>/build/openjdk-boot/hotspot/src/share/vm -I/<>/build/openjdk-boot/hotspot/src/share/vm/precompiled -I/<>/build/openjdk-boot/hotspot/src/cpu/zero/vm -I/<>/build/openjdk-boot/hotspot/src/os_cpu/linux_zero/vm -I/<>/build/openjdk-boot/hotspot/src/os/linux/vm -I/<>/build/openjdk-boot/hotspot/src/os/posix/vm -I../generated -DHOTSPOT_RELEASE_VERSION="\"24.161-b01\"" -DHOTSPOT_BUILD_TARGET="\"product\"" -DHOTSPOT_BUILD_USER="\"buildd2\"" -DHOTSPOT_LIB_ARCH=\"ia64\" -DHOTSPOT_VM_DISTRO="\"OpenJDK\"" -DDERIVATIVE_ID="\"IcedTea 2.6.12\"" -DDEB_MULTIARCH="\"ia64-linux-gnu\"" -DDISTRIBUTION_ID="\"Debian GNU/Linux unstable (sid), package 7u161-2.6.12-1\"" -DTARGET_OS_FAMILY_linux -DTARGET_ARCH_zero -DTARGET_ARCH_MODEL_zero -DTARGET_OS_ARCH_linux_zero -DTARGET_OS_ARCH_MODEL_linux_zero -DTARGET_COMPILER_gcc -std=gnu++98 -fpic -fno-rtti -fno-exceptions -D_REENTRANT -fcheck-new -fvisibility=hidden -fno-delete-null-pointer-checks -fno-lifetime-dse -pipe -g -O2 -finline-functions -fno-strict-aliasing -DVM_LITTLE_ENDIAN -D_LP64=1 -DINCLUDE_TRACE=1 -Wpointer-arith -Wsign-compare -Wno-deprecated-declarations -g -fdebug-prefix-map=/<>=. -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -c -fpch-deps -MMD -MP -MF ../generated/dependencies/orderAccess.o.d -o orderAccess.o /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/orderAccess.cpp /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp: In static member function 'static bool os::is_first_C_frame(frame*)': /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp:1019:15: error: 'class Thread' has no member named 'register_stack_base'; did you mean 'set_stack_base'? thread->register_stack_base() HPUX_ONLY(+ 0x0) LINUX_ONLY(+ 0x50)) { ^~~~~~~~~~~~~~~~~~~ /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp:1019:37: error: expected ')' before 'HPUX_ONLY' thread->register_stack_base() HPUX_ONLY(+ 0x0) LINUX_ONLY(+ 0x50)) { ^~~~~~~~~ /<>/build/openjdk-boot/hotspot/make/linux/makefiles/rules.make:150: recipe for target 'os.o' failed make[8]: *** [os.o] Error 1 make[8]: *** Waiting for unfinished jobs.... make[8]: Leaving directory '/<>/build/openjdk.build-boot/hotspot/outputdir/linux_ia64_zero/product' /<>/build/openjdk-boot/hotspot/make/linux/makefiles/top.make:119: recipe for target 'the_vm' failed make[7]: *** [the_vm] Error 2 The referenced method register_stack_base() was part of the ia64-specific implementation of the Thread class which is no longer part of OpenJDK. Thus, the best way to fix this is just removing this remaining cruft. This fixes the Zero build on linux-ia64 for me. Please review the change in [1]. Thanks, Adrian > [1] http://cr.openjdk.java.net/~glaubitz/8200245/webrev.00/ -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From erik.osterlund at oracle.com Mon Mar 26 14:37:16 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Mon, 26 Mar 2018 16:37:16 +0200 Subject: RFR: 8199417: Modularize interpreter GC barriers In-Reply-To: <9b89afac-368c-4a05-a259-1a58e378e1da@redhat.com> References: <5AB8BDF9.4050106@oracle.com> <46f52e97-47e5-9c37-ddd8-8380e97f5ac1@redhat.com> <5AB8EEAB.900@oracle.com> <9b89afac-368c-4a05-a259-1a58e378e1da@redhat.com> Message-ID: <5AB9059C.5030608@oracle.com> Hi Roman, On 2018-03-26 15:10, Roman Kennke wrote: > Ok, will do that. Great, thanks. > Your changes look ok to me (x86 and aarch64). Thank you for the review. /Erik > Thanks, > Roman > >> Thank you. As for primitive accesses, I think you have the best >> knowledge about where these hooks must be in the code base, based on >> your experience with Shenandoah. Therefore, I think it is probably best >> if you add those hooks. >> >> Thanks, >> /Erik >> >> On 2018-03-26 14:39, Roman Kennke wrote: >>> Hi Erik, >>> >>> I like it. >>> >>> As far as I can see, the patch currently only handles oop stores and >>> loads. Are you planning to add support for primitive access in the near >>> future? Or else I can probably do that, or help with it (I know where to >>> hook this up, based on Shenandoah experience). >>> >>> Thanks, Roman >>> >>> >>>> The GC barriers for the interpreter are not as modular as they could be. >>>> They currently use switch statements to check which GC barrier set is >>>> being used, and call this or that barrier based on that, in a way that >>>> assumes GCs only use write barriers. >>>> >>>> This patch modularizes this by generating accesses in the interpreter >>>> with declarative semantics. Accesses to the heap may now use store_at >>>> and load_at functions of the BarrierSetAssembler, passing along >>>> appropriate arguments and decorators. Each concrete BarrierSetAssembler >>>> can override the access completely or sprinkle some appropriate GC >>>> barriers as necessary. >>>> >>>> Big thanks go to Martin Doerr and Roman Kennke, who helped plugging this >>>> into S390, PPC and AArch64 respectively. >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~eosterlund/8199417/webrev.00/ >>>> >>>> Bug: >>>> https://bugs.openjdk.java.net/browse/JDK-8199417 >>> >>> > From stuart.monteith at linaro.org Mon Mar 26 14:38:53 2018 From: stuart.monteith at linaro.org (Stuart Monteith) Date: Mon, 26 Mar 2018 15:38:53 +0100 Subject: [aarch64-port-dev ] RFD: AOT for AArch64 In-Reply-To: <1521917644.2929.5.camel@gmail.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <1521917644.2929.5.camel@gmail.com> Message-ID: Hi Ed, Just to be explicit, did you originally see something like: # after -XX: or in .hotspotrc: SuppressErrorAt=/macroAssembler_aarch64.cpp:804 # # A fatal error has been detected by the Java Runtime Environment: # # Internal Error (/home/stuart/repos/hs/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp:804), pid=40733, tid=40752 # assert(is_NativeCallTrampolineStub_at(stub_start_addr)) failed: doesn't look like a trampoline # # JRE version: OpenJDK Runtime Environment (11.0) (fastdebug build 11-internal+0-adhoc.stuart.hs) # Java VM: OpenJDK 64-Bit Server VM (fastdebug 11-internal+0-adhoc.stuart.hs, mixed mode, aot, tiered, jvmci, compressed oops, g1 gc, linux-aarch64) # No core dump will be written. 
Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again # This is with a tip from today: changeset: 49465:d7c83c8e4e65 tag: tip user: roland date: Tue Mar 20 15:38:00 2018 +0100 summary: 8197931: Null pointer dereference in Unique_Node_List::push of node.hpp:1510 On 24 March 2018 at 18:54, Edward Nevill wrote: > On Sat, 2018-03-24 at 16:09 +0000, Edward Nevill wrote: >> On Fri, 2018-03-23 at 18:11 +0000, Andrew Haley wrote: >> > >> > >> Looks promising, but I get as far as here and then get >> >> Exception in thread "main" jdk.vm.ci.common.JVMCIError: expected VM constant not found: CardTableModRefBS::dirty_card >> > > It looks like your patch > > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ > > didn't apply cleanly to tip of http://hg.openjdk.java.net/jdk/hs > > I tried updating to > > changeset: 48711:e321560ac819 > user: adinn > date: Thu Jan 25 14:47:27 2018 +0000 > files: src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp > description: > 8195859: AArch64: vtableStubs gtest fails after 8174962 > Summary: gtest vtableStubs introduced by 8174962 fails on AArch64 with an invalid insn encoding > Reviewed-by: duke > > Which looks like the rev you are basing your patch on based on the rejected patch. > > It seems to get further, but now I get > > ed at ubuntu:~/openjdk$ /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc -J--module-path=/home/ed/openjdk/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/home/ed/openjdk/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar -J--upgrade-module-path=/home/ed/openjdk/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar Queens.class --output Queens.so > Error: Failed compilation: Queens.main([Ljava/lang/String;)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 > at node: 287|LoadConstantIndirectly > Error: Failed compilation: Queens.print([I)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 > at node: 1273|LoadConstantIndirectly > Exception in thread "main" java.lang.NoSuchMethodError: jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.addressOf(Ljdk/vm/ci/code/Register;)V > at jdk.aot/jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.getPLTStaticEntryCode(AArch64ELFMacroAssembler.java:68) > at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.addCallStub(CodeSectionProcessor.java:139) > at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.process(CodeSectionProcessor.java:117) > at jdk.aot/jdk.tools.jaotc.DataBuilder.prepareData(DataBuilder.java:142) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:188) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) > at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) > > Is this known/expected? What revision of jdk/hs are you building with? I'd like to see this working. > > Thanks, > Ed. 
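(For completeness, once jaotc does produce a usable Queens.so, exercising it should just be a matter of something like the following - these are the stock JEP 295 flags, nothing specific to this patch:

  java -XX:AOTLibrary=./Queens.so Queens
  java -XX:AOTLibrary=./Queens.so -XX:+PrintAOT Queens

The -XX:+PrintAOT output listing the loaded library and the AOT-resolved methods is the quickest sanity check that the precompiled code is actually being used rather than silently falling back to the interpreter and JIT.)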
> From adinn at redhat.com Mon Mar 26 14:46:43 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 26 Mar 2018 15:46:43 +0100 Subject: [aarch64-port-dev ] RFD: AOT for AArch64 In-Reply-To: References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <1521917644.2929.5.camel@gmail.com> Message-ID: Hi Stuart, On 26/03/18 15:38, Stuart Monteith wrote: > Hi Ed, > Just to be explicit, did you originally see something like: > > # after -XX: or in .hotspotrc: SuppressErrorAt=/macroAssembler_aarch64.cpp:804 > # > # A fatal error has been detected by the Java Runtime Environment: > # > # Internal Error > (/home/stuart/repos/hs/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp:804), > pid=40733, tid=40752 > # assert(is_NativeCallTrampolineStub_at(stub_start_addr)) failed: > doesn't look like a trampoline > # > # JRE version: OpenJDK Runtime Environment (11.0) (fastdebug build > 11-internal+0-adhoc.stuart.hs) > # Java VM: OpenJDK 64-Bit Server VM (fastdebug > 11-internal+0-adhoc.stuart.hs, mixed mode, aot, tiered, jvmci, > compressed oops, g1 gc, linux-aarch64) > # No core dump will be written. Core dumps have been disabled. To > enable core dumping, try "ulimit -c unlimited" before starting Java > again > # Yes, I just sorted that out with Andrew. That happens if you use a slowdebug build. There is an over-aggressive assert in the trampoline code that doesn't allow for Labels being skipped when C2 checks code size. Andrew has a patch for that. regards, Andrew Dinn ----------- From matthias.baesken at sap.com Mon Mar 26 14:55:17 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 26 Mar 2018 14:55:17 +0000 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl Message-ID: <16201e338dda41b0913bbd16286b6e2d@sap.com> Hello, after recent adjustments of src/hotspot/share/trace/traceEventClasses.xsl in jdk/hs ( see 8196337: Add commit methods that take all event properties as argument ) the AIX build fails. The xlC compiler is not happy with using TraceEvent::commit; in traceEventClasses.xsl (looks like correct C++ but xlc 12.1 refuses to compile ). Error messages : /nightly/output-jdk-hs/hotspot/variant-server/gensrc/tracefiles/traceEventClasses.hpp", line 226.9: 1540-1113 (S) The class template name "TraceEvent" must be followed by a < in this context. Bug : https://bugs.openjdk.java.net/browse/JDK-8200246 Adding the template parameter to TraceEvent makes xlc happy too. http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ Are you fine with this change ? Best regards, Matthias From adinn at redhat.com Mon Mar 26 14:56:15 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 26 Mar 2018 15:56:15 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> Message-ID: <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> On 26/03/18 12:08, Andrew Haley wrote: > On 03/26/2018 11:11 AM, Andrew Dinn wrote: >> Of course, it may not work any more (I am about to >> test that now :-). > > OK. I'll wait for that. 
Ok, so modulo the two omitted changes mentioned earlier the only other problem I found using the hs tip is an over-aggressive assert in emit_trampoline_stub at macroAssembler_aarch64.cpp:804 assert(is_NativeCallTrampolineStub_at(stub_start_addr), "doesn't look like a trampoline"); This asserts if you run using a slowdebug build when a C2 compile tries to guess the size of a MachNode for a branch by runnign the emit routine with Compile::current()->in_scratch_emit_size() set to return true. The label does not compile in offset 8, the assert test finds offset 0 and blammo! I patched the assert to assert((Compile::current()->in_scratch_emit_size() || is_NativeCallTrampolineStub_at(stub_start_addr)), "doesn't look like a trampoline"); and slowdebug works. Otherwise I managed to do the following: 1) compile a simple test program from a jar into AOT lib test.so and run the code from test.so. 2) compile a simple test program plus module java.base from a jar into AOT lib test.so and run the code from test.so. (n.b. step 2 was using a release build :-). Ship it! regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From robin.westberg at oracle.com Mon Mar 26 15:00:15 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Mon, 26 Mar 2018 17:00:15 +0200 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> Message-ID: Hi Erik, Thanks for reviewing! Best regards, Robin > On 23 Mar 2018, at 14:58, Erik Joelsson wrote: > > I think this looks good, but Magnus is currently refactoring the flags handling in configure so better get his input as well. (adding build-dev) > /Erik > > On 2018-03-23 05:37, Robin Westberg wrote: >> Hi Kim & Erik, >> >> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >> >> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >> >> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). >> >> Best regards, >> Robin >> >>> On 22 Mar 2018, at 16:52, Kim Barrett > wrote: >>> >>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg > wrote: >>>> >>>> Hi all, >>>> >>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. >>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 > >>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ > >>>> Testing: hs-tier1 >>>> >>>> Best regards, >>>> Robin >>>> >>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files > >>> >>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>> system, so that it applies everywhere. 
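(To spell out what the macro buys us - illustration only, not a hunk from the webrev: with the define supplied globally by the build, e.g. as -DWIN32_LEAN_AND_MEAN on the compiler command line, windows.h stops dragging in winsock.h, DDE, OLE and friends, so a file that really wants sockets has to ask for them explicitly:

  #define WIN32_LEAN_AND_MEAN   // normally injected by the build system instead of spelled out here
  #include <winsock2.h>         // now safe: no clash with the legacy winsock.h
  #include <windows.h>          // and considerably less to parse per translation unit

Files that never touch those APIs simply compile against a smaller windows.h, which is where the build-time saving comes from.)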
>>> >> > From robin.westberg at oracle.com Mon Mar 26 15:00:44 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Mon, 26 Mar 2018 17:00:44 +0200 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <52afd5ec-e136-b924-8127-d3e33bf04428@oracle.com> Message-ID: <8BF38F0F-59E6-4BAF-8A1E-E5D26B23A9C3@oracle.com> Hi Magnus, Thanks for the review! Best regards, Robin > On 23 Mar 2018, at 16:43, Magnus Ihse Bursie wrote: > > This looks good to me. > > /Magnus > >> 23 mars 2018 kl. 14:58 skrev Erik Joelsson : >> >> I think this looks good, but Magnus is currently refactoring the flags handling in configure so better get his input as well. (adding build-dev) >> >> /Erik >> >> >>> On 2018-03-23 05:37, Robin Westberg wrote: >>> Hi Kim & Erik, >>> >>> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >>> >>> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >>> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >>> >>> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). >>> >>> Best regards, >>> Robin >>> >>>>>> On 22 Mar 2018, at 16:52, Kim Barrett > wrote: >>>>> >>>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg > wrote: >>>>> >>>>> Hi all, >>>>> >>>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. >>>>> >>>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ > >>>>> Testing: hs-tier1 >>>>> >>>>> Best regards, >>>>> Robin >>>>> >>>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files > >>>> >>>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>>> system, so that it applies everywhere. >>>> >>> >> > From robin.westberg at oracle.com Mon Mar 26 15:00:54 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Mon, 26 Mar 2018 17:00:54 +0200 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <945D9C5D-FF82-46C2-B6EE-40941CD44FD5@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <945D9C5D-FF82-46C2-B6EE-40941CD44FD5@oracle.com> Message-ID: Hi Kim, Thanks for reviewing! Best regards, Robin > On 23 Mar 2018, at 19:42, Kim Barrett wrote: > >> On Mar 23, 2018, at 8:37 AM, Robin Westberg wrote: >> >> Hi Kim & Erik, >> >> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >> >> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >> >> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). > > This looks good to me. > >> Best regards, >> Robin >> >>> On 22 Mar 2018, at 16:52, Kim Barrett wrote: >>> >>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: >>>> >>>> Hi all, >>>> >>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. 
This marginally improves build times, and makes it possible to include winsock2.h. >>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>>> Testing: hs-tier1 >>>> >>>> Best regards, >>>> Robin >>>> >>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >>> >>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>> system, so that it applies everywhere. > > From robin.westberg at oracle.com Mon Mar 26 15:01:10 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Mon, 26 Mar 2018 17:01:10 +0200 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> Message-ID: <9DA6FFD9-1DD0-4C39-9D81-DF6FA49EDF45@oracle.com> Hi David, Thanks for taking a look! > On 26 Mar 2018, at 01:03, David Holmes wrote: > > Hi Robin, > > On 23/03/2018 10:37 PM, Robin Westberg wrote: >> Hi Kim & Erik, >> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ > > I'm a little unclear on the hotspot changes. If we define WIN32_LEAN_AND_MEAN then certain APIs like sockets are excluded from windows.h so we then have to include the specific header files like winsock2.h - is that right? Yep that?s correct, headers like winsock, dde, ole, shellapi and a few other uncommon ones are no longer included from windows.h when this is defined. > src/hotspot/share/interpreter/bytecodes.cpp > > I'm curious about this change. u_short comes from types.h on non-Windows, is it simply missing on Windows (at least once we have WIN32_LEAN_AND_MEAN defined) ? Yeah, on Windows these comes from winsock(2).h: /* * Basic system type definitions, taken from the BSD file sys/types.h. */ typedef unsigned char u_char; typedef unsigned short u_short; typedef unsigned int u_int; typedef unsigned long u_long; I noticed that one of these (u_char) is also defined in globalDefinitions.hpp so could perhaps define u_short there, or include winsock2.h globally again. But since it was only used in a single place in the existing code it seemed simple enough to just expand the typedef there. > src/hotspot/share/utilities/ostream.cpp > > 1029 #endif > 1030 #if defined(_WINDOWS) > > Using elif could be marginally faster given the two sets of conditions are mutually exclusive. Good point, will change it. I also had to move the flag definition to adapt to the latest changes in the hs repo, cc?ing build-dev again to make sure I got it right. Updated webrev (full): http://cr.openjdk.java.net/~rwestberg/8199736/webrev.02/ Updated webrev (incremental): http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01-02/ Best regards, Robin > > Thanks, > David > >> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). >> Best regards, >> Robin >>> On 22 Mar 2018, at 16:52, Kim Barrett wrote: >>> >>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg wrote: >>>> >>>> Hi all, >>>> >>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. 
This marginally improves build times, and makes it possible to include winsock2.h. >>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>>> Testing: hs-tier1 >>>> >>>> Best regards, >>>> Robin >>>> >>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >>> >>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>> system, so that it applies everywhere. >>> From robin.westberg at oracle.com Mon Mar 26 15:01:19 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Mon, 26 Mar 2018 17:01:19 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX Message-ID: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> Hi all, Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ Testing: building with/without precompiled headers, hs-tier1 Best regards, Robin From adinn at redhat.com Mon Mar 26 15:02:20 2018 From: adinn at redhat.com (Andrew Dinn) Date: Mon, 26 Mar 2018 16:02:20 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> Message-ID: <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> On 26/03/18 15:56, Andrew Dinn wrote: > Ship it! Ok, so I know this really needs a code audit too. I'm working on that now. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From erik.joelsson at oracle.com Mon Mar 26 15:50:40 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 26 Mar 2018 08:50:40 -0700 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> Message-ID: <7e318372-2c04-be32-dd0d-7c0fe092c98f@oracle.com> Looks good. /Erik On 2018-03-26 08:01, Robin Westberg wrote: > Hi all, > > Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. > > Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 > Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ > Testing: building with/without precompiled headers, hs-tier1 > > Best regards, > Robin From stewartd.qdt at qualcommdatacenter.com Mon Mar 26 16:24:18 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Mon, 26 Mar 2018 16:24:18 +0000 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag Message-ID: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> Please review this webrev [1] which attempts to bring the AArch64::CPUFeature enum (Java) in sync with VM_Version::Feature_Flag enum (C++ enum) for aarch64. This is in preparation for creating AArch64 some intrinsics for Graal. But I found that the CPUFeature enum was not being transferred over to Graal for AArch64. In attempting to do that I then found out that CPUFeatures was not in sync with the VM_Version::Feature_Flag enum. 
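To make the mismatch concrete, the two sides look roughly like this (the member names below are illustrative only, not copied from the real lists):

  // C++ side, vm_version_aarch64.hpp: each feature is a bit in a mask
  enum Feature_Flag {
    CPU_FP    = 1 << 0,
    CPU_ASIMD = 1 << 1,
    CPU_AES   = 1 << 2,
    CPU_CRC32 = 1 << 3
    // ...
  };

The Java enum jdk.vm.ci.aarch64.AArch64.CPUFeature needs to describe the same set of features so that the JVMCI backend can translate the VM's feature mask into the EnumSet that Graal consults; a flag that exists only on the C++ side never reaches the compiler, which is exactly the drift this webrev corrects.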
The bug report is filed at [2]. I am happy to modify the patch as necessary. Regards, Daniel Stewart [1] - http://cr.openjdk.java.net/~dstewart/8200251/webrev.00/ [2] - https://bugs.openjdk.java.net/browse/JDK-8200251 From kim.barrett at oracle.com Mon Mar 26 16:34:15 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 26 Mar 2018 12:34:15 -0400 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> Message-ID: <446F6608-6FC0-4962-AAD3-CC8CF36F60F7@oracle.com> > On Mar 26, 2018, at 11:01 AM, Robin Westberg wrote: > > Hi all, > > Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. > > Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 > Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ > Testing: building with/without precompiled headers, hs-tier1 > > Best regards, > Robin Looks good. This change will have a (easy to resolve) merge conflict with your fix for JDK-8199736, right? From vladimir.kozlov at oracle.com Mon Mar 26 16:47:30 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 26 Mar 2018 09:47:30 -0700 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> Message-ID: Good. Thanks, Vladimir On 3/26/18 9:24 AM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to bring the AArch64::CPUFeature enum (Java) in sync with VM_Version::Feature_Flag enum (C++ enum) for aarch64. > > This is in preparation for creating AArch64 some intrinsics for Graal. But I found that the CPUFeature enum was not being transferred over to Graal for AArch64. In attempting to do that I then found out that CPUFeatures was not in sync with the VM_Version::Feature_Flag enum. > > The bug report is filed at [2]. > > I am happy to modify the patch as necessary. > > Regards, > > Daniel Stewart > > [1] - http://cr.openjdk.java.net/~dstewart/8200251/webrev.00/ > [2] - https://bugs.openjdk.java.net/browse/JDK-8200251 > > From coleen.phillimore at oracle.com Mon Mar 26 17:26:43 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 26 Mar 2018 13:26:43 -0400 Subject: RFR (M) 8198313: Wrap holder object for ClassLoaderData in a WeakHandle Message-ID: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> Summary: Use WeakHandle for ClassLoaderData::_holder so that is_alive closure is not needed The purpose of WeakHandle is to encapsulate weak oops within the runtime code in the vm.? The class was initially written by StefanK.?? The weak handles are pointers to OopStorage.?? This code is a basis for future work to move direct pointers to the heap (oops) from runtime structures like the StringTable, into pointers into an area that the GC efficiently manages, in parallel and/or concurrently. Tested with mach5 tier 1-5.? Performance tested with internal dev-submit performance tests, and locally. 
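For readers who have not seen the class, the shape of the abstraction is roughly the following (conceptual sketch only, not the exact declaration in the webrev):

  // A WeakHandle hides the weak oop behind a slot allocated from OopStorage,
  // so runtime data structures never hold a raw oop to the holder object.
  class WeakHandle {
    oop* _obj;             // slot owned by an OopStorage instance
   public:
    oop resolve() const;   // NULL once the referent has been collected
    void release();        // hand the slot back to the OopStorage
  };

Because the slot lives in OopStorage the GC can process it like any other weak root, in parallel or concurrently, and ClassLoaderData no longer needs an is_alive closure to find out whether its holder is still reachable.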
open webrev at http://cr.openjdk.java.net/~coleenp/8198313.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8198313 Thanks, Coleen From shade at redhat.com Mon Mar 26 17:31:34 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 26 Mar 2018 19:31:34 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <1522068481.8723.8.camel@oracle.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> Message-ID: On 03/26/2018 02:48 PM, Thomas Schatzl wrote: > On Mon, 2018-03-26 at 12:36 +0200, Aleksey Shipilev wrote: >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8200232 >> >> Webrev: >> http://cr.openjdk.java.net/~shade/8200232/webrev.01/ >> >> Testing: x86_32 build >> >> I only saw failures on x86_32, but maybe PPC folks see the breakage >> in other configs too? > > looks good, but maybe wait for the ppc folks to comment too. Yup. Paging Martin Doerr? Mr. Doerr to the build clinic. -Aleksey From thomas.stuefe at gmail.com Mon Mar 26 18:13:47 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Mar 2018 20:13:47 +0200 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl In-Reply-To: <16201e338dda41b0913bbd16286b6e2d@sap.com> References: <16201e338dda41b0913bbd16286b6e2d@sap.com> Message-ID: Hi Matthias, thanks for fixing this! Patch works and looks ok. Best Regards, Thomas On Mon, Mar 26, 2018 at 4:55 PM, Baesken, Matthias wrote: > Hello, after recent adjustments of src/hotspot/share/trace/ > traceEventClasses.xsl > in jdk/hs ( see 8196337: Add commit methods that take all event > properties as argument ) the AIX build fails. > > The xlC compiler is not happy with > > using TraceEvent::commit; > > in traceEventClasses.xsl (looks like correct C++ but xlc 12.1 refuses to > compile ). > Error messages : > > /nightly/output-jdk-hs/hotspot/variant-server/gensrc/ > tracefiles/traceEventClasses.hpp", line 226.9: 1540-1113 (S) The class > template name "TraceEvent" must be followed by a < in this context. > > > Bug : > > https://bugs.openjdk.java.net/browse/JDK-8200246 > > Adding the template parameter to TraceEvent makes xlc happy too. > > http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ > > > Are you fine with this change ? > > Best regards, Matthias > > From thomas.stuefe at gmail.com Mon Mar 26 18:29:18 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Mar 2018 20:29:18 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> Message-ID: Hi, On Mon, Mar 26, 2018 at 7:31 PM, Aleksey Shipilev wrote: > On 03/26/2018 02:48 PM, Thomas Schatzl wrote: > > On Mon, 2018-03-26 at 12:36 +0200, Aleksey Shipilev wrote: > >> Bug: > >> https://bugs.openjdk.java.net/browse/JDK-8200232 > >> > >> Webrev: > >> http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > >> > >> Testing: x86_32 build > >> > >> I only saw failures on x86_32, but maybe PPC folks see the breakage > >> in other configs too? > > > > looks good, but maybe wait for the ppc folks to comment too. > > Yup. > > Paging Martin Doerr? Mr. Doerr to the build clinic. 
> > -Aleksey > > Well, he can't, he is is out sick himself :( I just tested AIX, linux ppc64 and s390, all build fine (had unrelated build errors on AIX, but that was the jdk, libjvm was successfully built). Note that we do not build ppc 32bit. I also did not test non-pch builds. But from my view, this change is fine, thanks for fixing. ..Thomas From shade at redhat.com Mon Mar 26 18:35:07 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 26 Mar 2018 20:35:07 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> Message-ID: <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> On 03/26/2018 08:29 PM, Thomas St?fe wrote: > On Mon, Mar 26, 2018 at 7:31 PM, Aleksey Shipilev > wrote: > On 03/26/2018 02:48 PM, Thomas Schatzl wrote: > > On Mon, 2018-03-26 at 12:36 +0200, Aleksey Shipilev wrote: > >> Bug: > >>? ?https://bugs.openjdk.java.net/browse/JDK-8200232 > >> > >> Webrev: > >>? ?http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > > >> > >> Testing: x86_32 build > >> > >> I only saw failures on x86_32, but maybe PPC folks see the breakage > >> in other configs too? > > > >? ?looks good, but maybe wait for the ppc folks to comment too. > > Yup. > > Paging Martin Doerr? Mr. Doerr to the build clinic. > > Well, he can't, he is is out sick himself :(? > > I just tested AIX, linux ppc64 and s390, all build fine (had unrelated build errors on AIX, but that > was the jdk, libjvm was successfully built). Note that we do not build ppc 32bit. I also did not > test non-pch builds. Thanks! Yeah, I wanted to see if Windows/32 and PPC/32 are affected too. Martin had fixed build issues there before, as these seem to be built by SAP routinely. I can wait a day for someone to look into CI and give a clear-go. If that does not happen, I'll push the x86_32 fix at least. Thanks, -Aleksey From thomas.stuefe at gmail.com Mon Mar 26 19:13:01 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Mar 2018 21:13:01 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> Message-ID: On Mon, Mar 26, 2018 at 8:35 PM, Aleksey Shipilev wrote: > On 03/26/2018 08:29 PM, Thomas St?fe wrote: > > On Mon, Mar 26, 2018 at 7:31 PM, Aleksey Shipilev > wrote: > > On 03/26/2018 02:48 PM, Thomas Schatzl wrote: > > > On Mon, 2018-03-26 at 12:36 +0200, Aleksey Shipilev wrote: > > >> Bug: > > >> https://bugs.openjdk.java.net/browse/JDK-8200232 < > https://bugs.openjdk.java.net/browse/JDK-8200232> > > >> > > >> Webrev: > > >> http://cr.openjdk.java.net/~shade/8200232/webrev.01/ > > > > >> > > >> Testing: x86_32 build > > >> > > >> I only saw failures on x86_32, but maybe PPC folks see the > breakage > > >> in other configs too? > > > > > > looks good, but maybe wait for the ppc folks to comment too. > > > > Yup. > > > > Paging Martin Doerr? Mr. Doerr to the build clinic. > > > > Well, he can't, he is is out sick himself :( > > > > I just tested AIX, linux ppc64 and s390, all build fine (had unrelated > build errors on AIX, but that > > was the jdk, libjvm was successfully built). Note that we do not build > ppc 32bit. I also did not > > test non-pch builds. > > Thanks! 
> > Yeah, I wanted to see if Windows/32 and PPC/32 are affected too. Martin > had fixed build issues there > before, as these seem to be built by SAP routinely. I can wait a day for > someone to look into CI and > give a clear-go. If that does not happen, I'll push the x86_32 fix at > least. > > We do not build windows 32bit anymore, at least not for openjdk11. That was Martins private thing over the last weeks. We never did build ppc32 bit to my knowledge - this should be zero only, no? Might be worth checking with Adrian, he builds zero regularly. Anyway, I had a quick look through the sources and it seems you should fix os_solaris_x86.cpp and os_bsd_x86.cpp too, same error there. All the other os_xx_xx.cpp seem fine to me. Thanks, Thomas Thanks, > -Aleksey > > From shade at redhat.com Mon Mar 26 20:39:41 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 26 Mar 2018 22:39:41 +0200 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> Message-ID: <0b35e3dd-3c57-9fe3-c131-b7718db9adb8@redhat.com> On 03/26/2018 09:13 PM, Thomas St?fe wrote: > We do not build windows 32bit anymore, at least not for openjdk11. That was Martins private thing > over the last weeks. > > We never did build ppc32 bit to my knowledge - this should be zero only, no? Might be worth checking > with Adrian, he builds zero regularly. > > Anyway, I had a quick look through the sources and it seems you should fix os_solaris_x86.cpp and > os_bsd_x86.cpp too, same error there. All the other os_xx_xx.cpp seem fine to me. Right. I double checked all os_*.cpp things too. This is a new webrev: http://cr.openjdk.java.net/~shade/8200232/webrev.02/ I shall run it through hs-submit to make sure. Thanks, -Aleksey From thomas.stuefe at gmail.com Mon Mar 26 20:46:05 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Mon, 26 Mar 2018 20:46:05 +0000 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: <0b35e3dd-3c57-9fe3-c131-b7718db9adb8@redhat.com> References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> <0b35e3dd-3c57-9fe3-c131-b7718db9adb8@redhat.com> Message-ID: On Mon 26. Mar 2018 at 22:40, Aleksey Shipilev wrote: > On 03/26/2018 09:13 PM, Thomas St?fe wrote: > > We do not build windows 32bit anymore, at least not for openjdk11. That > was Martins private thing > > over the last weeks. > > > > We never did build ppc32 bit to my knowledge - this should be zero only, > no? Might be worth checking > > with Adrian, he builds zero regularly. > > > > Anyway, I had a quick look through the sources and it seems you should > fix os_solaris_x86.cpp and > > os_bsd_x86.cpp too, same error there. All the other os_xx_xx.cpp seem > fine to me. > > Right. I double checked all os_*.cpp things too. This is a new webrev: > http://cr.openjdk.java.net/~shade/8200232/webrev.02/ > > I shall run it through hs-submit to make sure. > > Thanks, > -Aleksey > > Change looks good and trivial. 
Thanks, Thomas From vladimir.kozlov at oracle.com Mon Mar 26 21:21:31 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 26 Mar 2018 14:21:31 -0700 Subject: RFD: AOT for AArch64 In-Reply-To: <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> Message-ID: <36e42949-634a-fce4-9fd5-d6385bd16aa7@oracle.com> I will wait final changes. Are all suggested Graal changes pushed into main repo https://github.com/oracle/graal? You should start there. Thanks, Vladimir On 3/26/18 8:02 AM, Andrew Dinn wrote: > On 26/03/18 15:56, Andrew Dinn wrote: >> Ship it! > Ok, so I know this really needs a code audit too. I'm working on that now. > > regards, > > > Andrew Dinn > ----------- > Senior Principal Software Engineer > Red Hat UK Ltd > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander > From magnus.ihse.bursie at oracle.com Mon Mar 26 21:24:10 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 26 Mar 2018 23:24:10 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> Message-ID: On 2018-03-26 17:01, Robin Westberg wrote: > Hi all, > > Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. > > Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 > Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ Looks good to me. /Magnus > Testing: building with/without precompiled headers, hs-tier1 > > Best regards, > Robin From coleen.phillimore at oracle.com Mon Mar 26 21:46:46 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 26 Mar 2018 17:46:46 -0400 Subject: RFR: 8200232: Build failures after JDK-8200106 (Move NoSafepointVerifier out from gcLocker.hpp) In-Reply-To: References: <90be57aa-827e-5ca8-d86b-4d112667dd42@redhat.com> <1522068481.8723.8.camel@oracle.com> <416364b7-9cb6-2439-5388-33d675d3a079@redhat.com> <0b35e3dd-3c57-9fe3-c131-b7718db9adb8@redhat.com> Message-ID: <1b78fbbf-c474-2983-5aaa-376ebfe85262@oracle.com> I agree that it's a trivial change and I reviewed it too. Thanks, Coleen On 3/26/18 4:46 PM, Thomas St?fe wrote: > On Mon 26. Mar 2018 at 22:40, Aleksey Shipilev wrote: > >> On 03/26/2018 09:13 PM, Thomas St?fe wrote: >>> We do not build windows 32bit anymore, at least not for openjdk11. That >> was Martins private thing >>> over the last weeks. >>> >>> We never did build ppc32 bit to my knowledge - this should be zero only, >> no? Might be worth checking >>> with Adrian, he builds zero regularly. >>> >>> Anyway, I had a quick look through the sources and it seems you should >> fix os_solaris_x86.cpp and >>> os_bsd_x86.cpp too, same error there. All the other os_xx_xx.cpp seem >> fine to me. >> >> Right. I double checked all os_*.cpp things too. This is a new webrev: >> http://cr.openjdk.java.net/~shade/8200232/webrev.02/ >> >> I shall run it through hs-submit to make sure. >> >> Thanks, >> -Aleksey >> >> > Change looks good and trivial. 
> > Thanks, Thomas From coleen.phillimore at oracle.com Mon Mar 26 23:56:32 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 26 Mar 2018 19:56:32 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes Message-ID: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed in these files. open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8200276 Tested tier1 on oracle platforms: linux-x64, solaris-sparc, macos-x86, windows-x86.? Tested open-only with --disable-precompiled-headers.? Built zero on linux x64. Thanks, Coleen From david.holmes at oracle.com Tue Mar 27 00:19:15 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 27 Mar 2018 10:19:15 +1000 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> Message-ID: Hi Adrian, On 26/03/2018 11:55 PM, John Paul Adrian Glaubitz wrote: > Hi! > > Zero currently fails to build from source on linux-ia64 due to some remaining > ia64-specific cruft which was previously part of the native ia64 port. The Everytime I think we already cleaned all this out I get reminded that we didn't :( Your specific fix looks fine. I'm unclear whether any of the other ia64 cruft pertains to the "native port" or to zero? Thanks, David > error - shown here for jdk7u - is the same on all versions of OpenJDK after > 7u: > > g++-6 -DLINUX -D_GNU_SOURCE -DCC_INTERP -DZERO -DIA64 -DZERO_LIBARCH=\"ia64\" -DPRODUCT -I. -I/<>/build/openjdk-boot/hotspot/src/share/vm/prims > -I/<>/build/openjdk-boot/hotspot/src/share/vm -I/<>/build/openjdk-boot/hotspot/src/share/vm/precompiled > -I/<>/build/openjdk-boot/hotspot/src/cpu/zero/vm -I/<>/build/openjdk-boot/hotspot/src/os_cpu/linux_zero/vm > -I/<>/build/openjdk-boot/hotspot/src/os/linux/vm -I/<>/build/openjdk-boot/hotspot/src/os/posix/vm -I../generated > -DHOTSPOT_RELEASE_VERSION="\"24.161-b01\"" -DHOTSPOT_BUILD_TARGET="\"product\"" -DHOTSPOT_BUILD_USER="\"buildd2\"" -DHOTSPOT_LIB_ARCH=\"ia64\" > -DHOTSPOT_VM_DISTRO="\"OpenJDK\"" -DDERIVATIVE_ID="\"IcedTea 2.6.12\"" -DDEB_MULTIARCH="\"ia64-linux-gnu\"" -DDISTRIBUTION_ID="\"Debian GNU/Linux unstable > (sid), package 7u161-2.6.12-1\"" -DTARGET_OS_FAMILY_linux -DTARGET_ARCH_zero -DTARGET_ARCH_MODEL_zero -DTARGET_OS_ARCH_linux_zero > -DTARGET_OS_ARCH_MODEL_linux_zero -DTARGET_COMPILER_gcc -std=gnu++98 -fpic -fno-rtti -fno-exceptions -D_REENTRANT -fcheck-new -fvisibility=hidden > -fno-delete-null-pointer-checks -fno-lifetime-dse -pipe -g -O2 -finline-functions -fno-strict-aliasing -DVM_LITTLE_ENDIAN -D_LP64=1 -DINCLUDE_TRACE=1 > -Wpointer-arith -Wsign-compare -Wno-deprecated-declarations -g -fdebug-prefix-map=/<>=. -Wformat -Werror=format-security -Wdate-time > -D_FORTIFY_SOURCE=2 -c -fpch-deps -MMD -MP -MF ../generated/dependencies/orderAccess.o.d -o orderAccess.o > /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/orderAccess.cpp > /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp: In static member function 'static bool os::is_first_C_frame(frame*)': > /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp:1019:15: error: 'class Thread' has no member named 'register_stack_base'; did you mean > 'set_stack_base'? 
> thread->register_stack_base() HPUX_ONLY(+ 0x0) LINUX_ONLY(+ 0x50)) { > ^~~~~~~~~~~~~~~~~~~ > /<>/build/openjdk-boot/hotspot/src/share/vm/runtime/os.cpp:1019:37: error: expected ')' before 'HPUX_ONLY' > thread->register_stack_base() HPUX_ONLY(+ 0x0) LINUX_ONLY(+ 0x50)) { > ^~~~~~~~~ > /<>/build/openjdk-boot/hotspot/make/linux/makefiles/rules.make:150: recipe for target 'os.o' failed > make[8]: *** [os.o] Error 1 > make[8]: *** Waiting for unfinished jobs.... > make[8]: Leaving directory '/<>/build/openjdk.build-boot/hotspot/outputdir/linux_ia64_zero/product' > /<>/build/openjdk-boot/hotspot/make/linux/makefiles/top.make:119: recipe for target 'the_vm' failed > make[7]: *** [the_vm] Error 2 > > The referenced method register_stack_base() was part of the ia64-specific implementation > of the Thread class which is no longer part of OpenJDK. Thus, the best way to fix this > is just removing this remaining cruft. This fixes the Zero build on linux-ia64 for me. > > Please review the change in [1]. > > Thanks, > Adrian > >> [1] http://cr.openjdk.java.net/~glaubitz/8200245/webrev.00/ > From david.holmes at oracle.com Tue Mar 27 00:57:42 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 27 Mar 2018 10:57:42 +1000 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <9DA6FFD9-1DD0-4C39-9D81-DF6FA49EDF45@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <9DA6FFD9-1DD0-4C39-9D81-DF6FA49EDF45@oracle.com> Message-ID: <29b9be70-5855-fa0d-e1b2-73565e76d052@oracle.com> Looks good to me. Thanks, David On 27/03/2018 1:01 AM, Robin Westberg wrote: > Hi David, > > Thanks for taking a look! > >> On 26 Mar 2018, at 01:03, David Holmes > > wrote: >> >> Hi Robin, >> >> On 23/03/2018 10:37 PM, Robin Westberg wrote: >>> Hi Kim & Erik, >>> Certainly makes sense to define it from the build system, I?ve >>> updated the patch accordingly: >>> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >>> >>> Incremental: >>> http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >>> >> >> I'm a little unclear on the hotspot changes. If we define >> WIN32_LEAN_AND_MEAN then certain APIs like sockets are excluded from >> windows.h so we then have to include the specific header files like >> winsock2.h - is that right? > > Yep that?s correct, headers like winsock, dde, ole, shellapi and a few > other uncommon ones are no longer included from windows.h when this is > defined. > >> src/hotspot/share/interpreter/bytecodes.cpp >> >> I'm curious about this change. u_short comes from types.h on >> non-Windows, is it simply missing on Windows (at least once we have >> WIN32_LEAN_AND_MEAN defined) ? > > Yeah, on Windows these comes from winsock(2).h: > > /* > ?* Basic system type definitions, taken from the BSD file sys/types.h. > ?*/ > typedef unsigned char ? u_char; > typedef unsigned short ?u_short; > typedef unsigned int ? ?u_int; > typedef unsigned long ? u_long; > > I noticed that one of these (u_char) is also defined in > globalDefinitions.hpp so could perhaps define u_short there, or include > winsock2.h globally again. But since it was only used in a single place > in the existing code it seemed simple enough to just expand the typedef > there. > >> src/hotspot/share/utilities/ostream.cpp >> >> 1029 #endif >> 1030 #if defined(_WINDOWS) >> >> Using elif could be marginally faster given the two sets of conditions >> are mutually exclusive. > > Good point, will change it. 
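(Just to spell the suggestion out for the archive - a sketch only, the real conditions are whatever ostream.cpp already guards on:

  #if defined(LINUX) || defined(SOLARIS) || defined(AIX) || defined(_ALLBSD_SOURCE)
    // poll()-based networking helpers
  #elif defined(_WINDOWS)
    // winsock-based helpers
  #endif

With #elif the preprocessor never has to evaluate the Windows test on platforms that already matched the first branch, hence the marginal win.)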
> > I also had to move the flag definition to adapt to the latest changes in > the hs repo, cc?ing build-dev again to make sure I got it right. > > Updated webrev (full): > http://cr.openjdk.java.net/~rwestberg/8199736/webrev.02/ > Updated webrev (incremental): > http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01-02/ > > Best regards, > Robin > >> >> Thanks, >> David >> >>> (Not quite sure if the definition belongs where I put it or a bit >>> later where most other windows-specific JVM flags are defined, but >>> seemed reasonable to put it close to where it is defined for the JDK >>> libraries). >>> Best regards, >>> Robin >>>> On 22 Mar 2018, at 16:52, Kim Barrett >>> > wrote: >>>> >>>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg >>>>> > wrote: >>>>> >>>>> Hi all, >>>>> >>>>> Please review the following change that defines WIN32_LEAN_AND_MEAN >>>>> [1] before including windows.h. This marginally improves build >>>>> times, and makes it possible to include winsock2.h. >>>>> >>>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>>> >>>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>>>> >>>>> Testing: hs-tier1 >>>>> >>>>> Best regards, >>>>> Robin >>>>> >>>>> [1] >>>>> https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >>>>> >>>> >>>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be >>>> done through the build >>>> system, so that it applies everywhere. >>>> > From john.r.rose at oracle.com Tue Mar 27 01:06:00 2018 From: john.r.rose at oracle.com (John Rose) Date: Mon, 26 Mar 2018 18:06:00 -0700 Subject: RFC: 8185062: os::is_MP() sometimes returns false In-Reply-To: References: Message-ID: On Aug 1, 2017, at 9:35 PM, David Holmes wrote: > > 3. Assume the universe is MP and only allow MP to be built I vote #3. My conservative first impulse was to drive towards an is_MP that returns true always, but remains for the sake of commenting the MP dependencies in our logic. But there are so few uses of it, that a few well written comments could do the same job. I left a comment on the bug with a few more details. https://bugs.openjdk.java.net/browse/JDK-8185062?focusedCommentId=14167081 ? John From john.r.rose at oracle.com Tue Mar 27 01:07:08 2018 From: john.r.rose at oracle.com (John Rose) Date: Mon, 26 Mar 2018 18:07:08 -0700 Subject: RFC: 8185062: os::is_MP() sometimes returns false In-Reply-To: References: Message-ID: <6775CB01-428C-4214-B722-FD21FCEF93F5@oracle.com> (Wow the miracle of full-text search over one's inbox. Sorry about the spam.) On Mar 26, 2018, at 6:06 PM, John Rose wrote: > > On Aug 1, 2017, at 9:35 PM, David Holmes > wrote: >> >> 3. Assume the universe is MP and only allow MP to be built > > I vote #3. My conservative first impulse was to drive towards an is_MP > that returns true always, but remains for the sake of commenting the MP > dependencies in our logic. But there are so few uses of it, that a few well > written comments could do the same job. I left a comment on the bug with > a few more details. > > https://bugs.openjdk.java.net/browse/JDK-8185062?focusedCommentId=14167081 > > ? 
John From dean.long at oracle.com Tue Mar 27 01:21:56 2018 From: dean.long at oracle.com (dean.long at oracle.com) Date: Mon, 26 Mar 2018 18:21:56 -0700 Subject: RFD: AOT for AArch64 In-Reply-To: <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> Message-ID: <3f069df6-a1ec-df1b-a51a-711c7e291f4f@oracle.com> On 3/23/18 4:27 PM, Vladimir Kozlov wrote: > Code in AOTCompiledClass.java look strange in try block. Why you need it? I've seen that code fail as well, but thought it was due to me doing something wrong, because it always went away with the final version of my changes.? It would be great to fix this issue once and for all.? Andrew, if you have a test case to reproduce the problem, it would be great to have it as a regression test. dl From david.holmes at oracle.com Tue Mar 27 01:34:22 2018 From: david.holmes at oracle.com (David Holmes) Date: Tue, 27 Mar 2018 11:34:22 +1000 Subject: RFC: 8185062: os::is_MP() sometimes returns false In-Reply-To: <6775CB01-428C-4214-B722-FD21FCEF93F5@oracle.com> References: <6775CB01-428C-4214-B722-FD21FCEF93F5@oracle.com> Message-ID: <0a663580-17ab-413c-416d-c3af4773d8e4@oracle.com> On 27/03/2018 11:07 AM, John Rose wrote: > (Wow the miracle of full-text search over one's inbox. ?Sorry about the > spam.) Not at all John - this is good input! Thanks. Progress on this stalled for a number of reasons, but one was the resurgence of UP environments through containers. David > On Mar 26, 2018, at 6:06 PM, John Rose > wrote: >> >> On Aug 1, 2017, at 9:35 PM, David Holmes > > wrote: >>> >>> 3. Assume the universe is MP and only allow MP to be built >> >> I vote #3. ?My conservative first impulse was to drive towards an is_MP >> that returns true always, but remains for the sake of commenting the MP >> dependencies in our logic. ?But there are so few uses of it, that a >> few well >> written comments could do the same job. ?I left a comment on the bug with >> a few more details. >> >> https://bugs.openjdk.java.net/browse/JDK-8185062?focusedCommentId=14167081 >> >> ? John > From glaubitz at physik.fu-berlin.de Tue Mar 27 03:27:10 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 27 Mar 2018 12:27:10 +0900 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> Message-ID: <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> On 03/27/2018 09:19 AM, David Holmes wrote: > Everytime I think we already cleaned all this out I get reminded that we didn't :( Happens ;). > Your specific fix looks fine. Thanks. > I'm unclear whether any of the other ia64 cruft pertains to the "native port" or to zero? The only other part that I know of is part of Zero and deals with the odd stack configuration on ia64. Since it's part of Zero, we do still need it. The fact that with just this patch applied, Zero builds and works fine on ia64, makes me confident that this is the only cruft that is still left to be removed. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From thomas.stuefe at gmail.com Tue Mar 27 04:25:24 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 06:25:24 +0200 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> Message-ID: Looks good to me too. Interesting, Sun did have an HPUX port? Best, Thomas On Tue, Mar 27, 2018 at 5:27 AM, John Paul Adrian Glaubitz < glaubitz at physik.fu-berlin.de> wrote: > On 03/27/2018 09:19 AM, David Holmes wrote: > > Everytime I think we already cleaned all this out I get reminded that we > didn't :( > > Happens ;). > > > Your specific fix looks fine. > > Thanks. > > > I'm unclear whether any of the other ia64 cruft pertains to the "native > port" or to zero? > > The only other part that I know of is part of Zero and deals with the odd > stack > configuration on ia64. Since it's part of Zero, we do still need it. > > The fact that with just this patch applied, Zero builds and works fine on > ia64, > makes me confident that this is the only cruft that is still left to be > removed. > > Adrian > > -- > .''`. John Paul Adrian Glaubitz > : :' : Debian Developer - glaubitz at debian.org > `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de > `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 > From glaubitz at physik.fu-berlin.de Tue Mar 27 05:10:31 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 27 Mar 2018 14:10:31 +0900 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> Message-ID: <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de> On 03/24/2018 02:26 AM, Magnus Ihse Bursie wrote: > > On 2018-03-20 14:54, Edward Nevill wrote: >> Thanks for this. I have updated the webrev with the above comment. >> >> http://cr.openjdk.java.net/~enevill/8199138/webrev.01 > I note that in platform.m4 (sorry I didn't say this earlier), you set the CPU_ARCH to riscv64 as well, and not just riscv. Now I don't know how likely it is > that OpenJDK will ever support the 32-bit version of riscv, but it seems like it would make more sense to define the CPU_ARCH as "riscv", and the CPU as "riscv64". > > It's just a minor thing, if you like it the way it is, keep it. I agree, this is a bit odd. @Edward: Is this correct as it currently is? Would be great if this changeset could finally get merged as Debian just recently bootstrapped riscv64 and is now building packages on real hardware with 10 build machines running: > https://buildd.debian.org/status/architecture.php?a=riscv64&suite=sid I assume the build dependencies for OpenJDK in Debian will be built in around a week or so. Until then, we should have sorted this patch out so I can add a (backported) patch to Debian's openjdk-8/9/10/11 packages. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From ningsheng.jian at linaro.org Tue Mar 27 06:11:04 2018 From: ningsheng.jian at linaro.org (Ningsheng Jian) Date: Tue, 27 Mar 2018 14:11:04 +0800 Subject: [aarch64-port-dev ] RFD: AOT for AArch64 In-Reply-To: <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> Message-ID: Hi Andrew, Nice work! On 26 March 2018 at 22:56, Andrew Dinn wrote: > On 26/03/18 12:08, Andrew Haley wrote: >> On 03/26/2018 11:11 AM, Andrew Dinn wrote: >>> Of course, it may not work any more (I am about to >>> test that now :-). >> >> OK. I'll wait for that. > > 2) compile a simple test program plus module java.base from a jar into > AOT lib test.so and run the code from test.so. > > (n.b. step 2 was using a release build :-). > I have tried the fastdebug build for java.base module with some test programs (e.g. SPECjvm) on my AArch64 system and I also checked the PrintAOT outputs. It worked fine! Thanks, Ningsheng From matthias.baesken at sap.com Tue Mar 27 07:22:38 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 27 Mar 2018 07:22:38 +0000 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl In-Reply-To: References: <16201e338dda41b0913bbd16286b6e2d@sap.com> Message-ID: <2fdb6d8d1a8748f5b6c3798a17f1d14d@sap.com> Hi Thomas , thanks for the review . Can I have a second review please ? Best regards, Matthias From: Thomas St?fe [mailto:thomas.stuefe at gmail.com] Sent: Montag, 26. M?rz 2018 20:14 To: Baesken, Matthias Cc: build-dev at openjdk.java.net; hotspot-dev at openjdk.java.net; Simonis, Volker ; Doerr, Martin Subject: Re: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl Hi Matthias, thanks for fixing this! Patch works and looks ok. Best Regards, Thomas On Mon, Mar 26, 2018 at 4:55 PM, Baesken, Matthias > wrote: Hello, after recent adjustments of src/hotspot/share/trace/traceEventClasses.xsl in jdk/hs ( see 8196337: Add commit methods that take all event properties as argument ) the AIX build fails. The xlC compiler is not happy with using TraceEvent::commit; in traceEventClasses.xsl (looks like correct C++ but xlc 12.1 refuses to compile ). Error messages : /nightly/output-jdk-hs/hotspot/variant-server/gensrc/tracefiles/traceEventClasses.hpp", line 226.9: 1540-1113 (S) The class template name "TraceEvent" must be followed by a < in this context. Bug : https://bugs.openjdk.java.net/browse/JDK-8200246 Adding the template parameter to TraceEvent makes xlc happy too. http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ Are you fine with this change ? Best regards, Matthias From christoph.langer at sap.com Tue Mar 27 07:33:21 2018 From: christoph.langer at sap.com (Langer, Christoph) Date: Tue, 27 Mar 2018 07:33:21 +0000 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl In-Reply-To: <2fdb6d8d1a8748f5b6c3798a17f1d14d@sap.com> References: <16201e338dda41b0913bbd16286b6e2d@sap.com> <2fdb6d8d1a8748f5b6c3798a17f1d14d@sap.com> Message-ID: <2cd633e5431345cf92c9eb5ef1795efb@sap.com> Hi Matthias, looks good to me, too. Best regards Christoph From: Baesken, Matthias Sent: Dienstag, 27. 
M?rz 2018 09:23 To: Thomas St?fe Cc: build-dev at openjdk.java.net; hotspot-dev at openjdk.java.net; Simonis, Volker ; Doerr, Martin ; Langer, Christoph Subject: RE: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl Hi Thomas , thanks for the review . Can I have a second review please ? Best regards, Matthias From: Thomas St?fe [mailto:thomas.stuefe at gmail.com] Sent: Montag, 26. M?rz 2018 20:14 To: Baesken, Matthias > Cc: build-dev at openjdk.java.net; hotspot-dev at openjdk.java.net; Simonis, Volker >; Doerr, Martin > Subject: Re: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl Hi Matthias, thanks for fixing this! Patch works and looks ok. Best Regards, Thomas On Mon, Mar 26, 2018 at 4:55 PM, Baesken, Matthias > wrote: Hello, after recent adjustments of src/hotspot/share/trace/traceEventClasses.xsl in jdk/hs ( see 8196337: Add commit methods that take all event properties as argument ) the AIX build fails. The xlC compiler is not happy with using TraceEvent::commit; in traceEventClasses.xsl (looks like correct C++ but xlc 12.1 refuses to compile ). Error messages : /nightly/output-jdk-hs/hotspot/variant-server/gensrc/tracefiles/traceEventClasses.hpp", line 226.9: 1540-1113 (S) The class template name "TraceEvent" must be followed by a < in this context. Bug : https://bugs.openjdk.java.net/browse/JDK-8200246 Adding the template parameter to TraceEvent makes xlc happy too. http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ Are you fine with this change ? Best regards, Matthias From stefan.karlsson at oracle.com Tue Mar 27 08:04:05 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 27 Mar 2018 10:04:05 +0200 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> Message-ID: Hi Coleen, This file is using CHeapObj and StackObj, and should keep including allocation.hpp. http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html These files are using MetaspaceObj: http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html This file is using ReallocMark: http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html Thanks, StefanK On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: > These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed in > these files. > > open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8200276 > > Tested tier1 on oracle platforms: linux-x64, solaris-sparc, macos-x86, > windows-x86.? Tested open-only with --disable-precompiled-headers. Built > zero on linux x64. 
> > Thanks,
> Coleen

From adinn at redhat.com  Tue Mar 27 08:18:48 2018
From: adinn at redhat.com (Andrew Dinn)
Date: Tue, 27 Mar 2018 09:18:48 +0100
Subject: RFD: AOT for AArch64
In-Reply-To: <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com>
References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com>
 <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com>
 <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com>
 <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com>
 <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com>
Message-ID: <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com>

On 26/03/18 16:02, Andrew Dinn wrote:
> On 26/03/18 15:56, Andrew Dinn wrote:
>> Ship it!
> Ok, so I know this really needs a code audit too. I'm working on that now.
I am still trying to understand all the details of this patch so this is
really just preliminary feedback. Many of the comments below are
questions, posed mostly as requests for clarification rather than
suggestions for any improvement.

Also, I'm now switching to look at the Graal changes so I can tie these
two patches together.

regards,

Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No. 03798903
Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

One overall comment: copyrights need updating!

1) make/autoconf/generated-configure.sh

you can scratch all these changes as the file is now deleted

2) make/hotspot/lib/JvmFeatures.gmk

is this a necessary part of the AOT change or just an extra cleanup?

3) src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp

I believe this is the cause of the slowdebug assert . . .

@@ -738,25 +738,25 @@
-  if (far_branches() && !Compile::current()->in_scratch_emit_size()) {
+  if (UseAOT || (far_branches() && !Compile::current()->in_scratch_emit_size())) {
     address stub = emit_trampoline_stub(start_offset, entry.target());
     if (stub == NULL) {
       return NULL; // CodeCache is full
     }

. . . and should be

@@ -738,25 +738,25 @@
-  if (far_branches() && !Compile::current()->in_scratch_emit_size()) {
+  if ((UseAOT || far_branches()) && !Compile::current()->in_scratch_emit_size()) {
     address stub = emit_trampoline_stub(start_offset, entry.target());
     if (stub == NULL) {
       return NULL; // CodeCache is full
     }

Also, as mentioned earlier this next change is redundant as it is
already in the latest hs

@@ -1048,11 +1048,11 @@
                                Address::lsl(LogBytesPerWord)));
     ldr(method_result, Address(method_result, vtable_offset_in_bytes));
   } else {
     vtable_offset_in_bytes += vtable_index.as_constant() * wordSize;
     ldr(method_result,
-        form_address(rscratch1, recv_klass, vtable_offset_in_bytes));
+        form_address(rscratch1, recv_klass, vtable_offset_in_bytes, 0));

4) nativeInst_aarch64.hpp

 147 class NativePltCall: public NativeInstruction {
 148  public:
 149   enum Intel_specific_constants {

Hmm, Intel? Shurely AArch64?

5) src/hotspot/share/code/oopRecorder.hpp
   src/hotspot/share/jvmci/jvmciCodeInstaller.cpp

Why is there a need to make these changes to shared code? In particular:

why does allocate_metadata need to be virtual?

what magic lies behind the argument 64 passed into the GrowableArray
created by AOTOopRecorder::AOTOopRecorder()?
6) src/jdk.aot/share/classes/jdk.tools.jaotc.binformat/src/jdk/tools/jaotc/binformat/elf/ElfTargetInfo.java if (archStr.equals("amd64") || archStr.equals("x86_64")) { arch = Elf64_Ehdr.EM_X86_64; + } else if (archStr.equals("amd64") || archStr.equals("aarch64")) { + arch = Elf64_Ehdr.EM_AARCH64; Should be if (archStr.equals("amd64") || archStr.equals("x86_64")) { arch = Elf64_Ehdr.EM_X86_64; + } else if (archStr.equals("aarch64")) { + arch = Elf64_Ehdr.EM_AARCH64; 7) src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/AOTCompiledClass.java static String metadataName(HotSpotResolvedObjectType type) { AOTKlassData data = getAOTKlassData(type); assert data != null : "no data for " + type; + try { + AOTKlassData t = getAOTKlassData(type); + t.getMetadataName(); + } catch (NullPointerException e) { + return type.getName(); + } return getAOTKlassData(type).getMetadataName(); } This looks a bit like last night's left-overs? Why is getAOTKlassData(type) being called again? Is this just dev-time scaffolding you didn't delete? Or is the catch actually doing something necessary? 8) src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CodeSectionProcessor.java final Call callInfopoint = (Call) infopoint; - if (callInfopoint.target instanceof HotSpotForeignCallLinkage) { + if (callInfopoint.target instanceof HotSpotForeignCallLinkage && + target.arch instanceof AMD64) { // TODO 4 is x86 size of relative displacement. So, why can AArch64 simply not worry about zeroing out destinations? Is this because of using PLTs and CompiledPltStaticCall::set_stub_to_clean? 9) src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CompiledMethodInfo.java for (Mark m : compilationResult.getMarks()) { + int adjOffset = m.pcOffset; + if (archStr.equals("aarch64")) { + // FIXME: This is very ugly. + // The mark is at the end of a group of three instructions: + // adrp; add; ldr + adjOffset += 12; + } else { // TODO: X64-specific code. I'm not really sure why this is so ugly. Can you explain? 10) /jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java - if (objFile.exists()) { + if (objFile.exists() && System.getenv("DO_NOT_DELETE_PRECIOUS_FILE") == null) { Is this intended to remain? From adinn at redhat.com Tue Mar 27 08:22:49 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 27 Mar 2018 09:22:49 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <36e42949-634a-fce4-9fd5-d6385bd16aa7@oracle.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <36e42949-634a-fce4-9fd5-d6385bd16aa7@oracle.com> Message-ID: <8405ed8c-916d-ee08-86b3-ea72acd4bc02@redhat.com> Hi Vladimir, I have done a first pass through the hotspot changes and posted a few queries about them. I am now working through the Graal changes in my own clone of Andrew's git branch rebased to the latest Graal master (n.b. the rebase was clean). regards, Andrew Dinn ----------- On 26/03/18 22:21, Vladimir Kozlov wrote: > I will wait final changes. > > Are all suggested Graal changes pushed into main repo > https://github.com/oracle/graal? You should start there. > > Thanks, > Vladimir > > On 3/26/18 8:02 AM, Andrew Dinn wrote: >> On 26/03/18 15:56, Andrew Dinn wrote: >>> Ship it! >> Ok, so I know this really needs a code audit too. I'm working on that >> now. 
>> >> regards, >> >> >> Andrew Dinn >> ----------- >> Senior Principal Software Engineer >> Red Hat UK Ltd >> Registered in England and Wales under Company Registration No. 03798903 >> Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander >> > -- regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From edward.nevill at gmail.com Tue Mar 27 08:23:59 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Tue, 27 Mar 2018 09:23:59 +0100 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de> Message-ID: <1522139039.7098.11.camel@gmail.com> Hi, On Tue, 2018-03-27 at 14:10 +0900, John Paul Adrian Glaubitz wrote: > On 03/24/2018 02:26 AM, Magnus Ihse Bursie wrote: > > > > On 2018-03-20 14:54, Edward Nevill wrote: > > > Thanks for this. I have updated the webrev with the above comment. > > > > > > http://cr.openjdk.java.net/~enevill/8199138/webrev.01 > > > > I note that in platform.m4 (sorry I didn't say this earlier), you set the CPU_ARCH to riscv64 as well, and not just riscv. Now I don't know how likely it is > > that OpenJDK will ever support the 32-bit version of riscv, but it seems like it would make more sense to define the CPU_ARCH as "riscv", and the CPU as "riscv64". > > > > It's just a minor thing, if you like it the way it is, keep it. > > I agree, this is a bit odd. > > @Edward: Is this correct as it currently is? Would be great if this changeset > could finally get merged as Debian just recently bootstrapped riscv64 and > is now building packages on real hardware with 10 build machines running: > Sorry for the delay. I was doing another test build on qemu which takes about 3 days. 
Please review the following webrev http://cr.openjdk.java.net/~enevill/8199138/webrev.02 This has the following additional changes over the previous webrev 1) Add comment in os_linux.cpp @@ -1733,6 +1733,9 @@ #ifndef EM_AARCH64 #define EM_AARCH64 183 /* ARM AARCH64 */ #endif +#ifndef EM_RISCV /* RISCV */ + #define EM_RISCV 243 +#endif static const arch_t arch_array[]={ {EM_386, EM_386, ELFCLASS32, ELFDATA2LSB, (char*)"IA 32"}, 2) Add RISCV to the #error list in os_linux.cpp @@ -1794,7 +1800,7 @@ static Elf32_Half running_arch_code=EM_SH; #else #error Method os::dll_load requires that one of following is defined:\ - AARCH64, ALPHA, ARM, AMD64, IA32, IA64, M68K, MIPS, MIPSEL, PARISC, __powerpc__, __powerpc64__, S390, SH, __sparc + AARCH64, ALPHA, ARM, AMD64, IA32, IA64, M68K, MIPS, MIPSEL, PARISC, __powerpc__, __powerpc64__, S390, SH, __sparc, RISCV #endif // Identify compatability class for VM's architecture and library's architecture 3) Use 'riscv' instead of 'riscv64' for VAR_CPU_ARCH in platform.m4 @@ -114,6 +114,12 @@ VAR_CPU_BITS=64 VAR_CPU_ENDIAN=little ;; + riscv64) + VAR_CPU=riscv64 + VAR_CPU_ARCH=riscv + VAR_CPU_BITS=64 + VAR_CPU_ENDIAN=little + ;; 4) Add riscv to the list of arch which do not have -m64 in flags.m4 @@ -237,7 +237,8 @@ MACHINE_FLAG="-q${OPENJDK_TARGET_CPU_BITS}" elif test "x$TOOLCHAIN_TYPE" != xmicrosoft; then if test "x$OPENJDK_TARGET_CPU" != xaarch64 && - test "x$OPENJDK_TARGET_CPU" != xarm; then + test "x$OPENJDK_TARGET_CPU" != xarm && + test "x$OPENJDK_TARGET_CPU" != xriscv64; then MACHINE_FLAG="-m${OPENJDK_TARGET_CPU_BITS}" fi fi (This is necessary to get it building again. The previous webrev was based on a rev which did not have the -m64 problem) I have run this through submit-hs with no problems and as mentioned have also done a complete rebuild under qemu for riscv. Thanks for your patience, Ed. From glaubitz at physik.fu-berlin.de Tue Mar 27 08:46:11 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Tue, 27 Mar 2018 17:46:11 +0900 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <1522139039.7098.11.camel@gmail.com> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de> <1522139039.7098.11.camel@gmail.com> Message-ID: <62079618-0fdd-af0b-a3ee-3134555326bb@physik.fu-berlin.de> On 03/27/2018 05:23 PM, Edward Nevill wrote: > Sorry for the delay. I was doing another test build on qemu which takes about 3 days. > > Please review the following webrev > > http://cr.openjdk.java.net/~enevill/8199138/webrev.02 > > This has the following additional changes over the previous webrev > > 1) Add comment in os_linux.cpp > > @@ -1733,6 +1733,9 @@ > #ifndef EM_AARCH64 > #define EM_AARCH64 183 /* ARM AARCH64 */ > #endif > +#ifndef EM_RISCV /* RISCV */ > + #define EM_RISCV 243 > +#endif What confuses me: Why RISCV here and not RISCV64? In particular this hunk: @@ -1758,6 +1761,7 @@ {EM_PARISC, EM_PARISC, ELFCLASS32, ELFDATA2MSB, (char*)"PARISC"}, {EM_68K, EM_68K, ELFCLASS32, ELFDATA2MSB, (char*)"M68k"}, {EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, (char*)"AARCH64"}, + {EM_RISCV, EM_RISCV, ELFCLASS64, ELFDATA2LSB, (char*)"RISCV"}, }; I know there is already 32-bit RISC-V and there are actually plans for using it. So, it looks to me you would be breaking 32-bit RISC-V here. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. 
`' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From aph at redhat.com Tue Mar 27 08:54:20 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 09:54:20 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> Message-ID: <3b1675a4-4fe8-ee75-da83-eb119ff0256a@redhat.com> On 03/27/2018 09:18 AM, Andrew Dinn wrote: > On 26/03/18 16:02, Andrew Dinn wrote: >> On 26/03/18 15:56, Andrew Dinn wrote: >>> Ship it! >> Ok, so I know this really needs a code audit too. I'm working on that now. > I am still trying to understand all the details of this patch so this is > really just preliminary feedback. Many of the comments below are > questions, posed mostly as requests for clarification rather than > suggestions for any improvement. Cool. > One overall comment: copyrights need updating! I really need some way to automate that. :-) > 1) make/autoconf/generated-configure.sh > > you can scratch all these changes as the file is now deleted > > 2) make/hotspot/lib/JvmFeatures.gmk > > is this a necessary part of the AOT change or just an extra cleanup? It's a hangover from debug code. > 3) src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp > > I believe this is the cause of the slowdebug assert . . . > > @@ -738,25 +738,25 @@ > - if (far_branches() && !Compile::current()->in_scratch_emit_size()) { > + if (UseAOT || (far_branches() && > !Compile::current()->in_scratch_emit_size())) { > address stub = emit_trampoline_stub(start_offset, entry.target()); > if (stub == NULL) { > return NULL; // CodeCache is full > } > > . . . and should be > > @@ -738,25 +738,25 @@ > - if (far_branches() && !Compile::current()->in_scratch_emit_size()) { > + if ((UseAOT || (far_branches()) && > !Compile::current()->in_scratch_emit_size()) { > address stub = emit_trampoline_stub(start_offset, entry.target()); > if (stub == NULL) { > return NULL; // CodeCache is full > } Yep, got that. > Also, as mentioned earlier this next change is redundant as it is > already in the latest hs > > @@ -1048,11 +1048,11 @@ > Address::lsl(LogBytesPerWord))); > ldr(method_result, Address(method_result, vtable_offset_in_bytes)); > } else { > vtable_offset_in_bytes += vtable_index.as_constant() * wordSize; > ldr(method_result, > - form_address(rscratch1, recv_klass, vtable_offset_in_bytes)); > + form_address(rscratch1, recv_klass, vtable_offset_in_bytes, 0)); Right. > 4) nativeInst_aarch64.hpp > > 147 class NativePltCall: public NativeInstruction { > 148 public: > 149 enum Intel_specific_constants { > > Hmm, Intel? Shurely AArch64? LOL! There's still some cruft in there. > 5) src/hotspot/share/code/oopRecorder.hpp > src/hotspot/share/jvmci/jvmciCodeInstaller.cpp > > Why is there a need to make these changes to shared code? In particular: > > why does allocate_metadata need to be virtual? That's another hangover. > what magic lies behind the argument 64 passed into the GrowableArray > created by AOTOopRecorder::AOTOopRecorder()? That too. 
> 6) > src/jdk.aot/share/classes/jdk.tools.jaotc.binformat/src/jdk/tools/jaotc/binformat/elf/ElfTargetInfo.java > > if (archStr.equals("amd64") || archStr.equals("x86_64")) { > arch = Elf64_Ehdr.EM_X86_64; > + } else if (archStr.equals("amd64") || archStr.equals("aarch64")) { > + arch = Elf64_Ehdr.EM_AARCH64; > > Should be > > if (archStr.equals("amd64") || archStr.equals("x86_64")) { > arch = Elf64_Ehdr.EM_X86_64; > + } else if (archStr.equals("aarch64")) { > + arch = Elf64_Ehdr.EM_AARCH64; Good catch! I didn't see that one. > 7) > src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/AOTCompiledClass.java > > static String metadataName(HotSpotResolvedObjectType type) { > AOTKlassData data = getAOTKlassData(type); > assert data != null : "no data for " + type; > + try { > + AOTKlassData t = getAOTKlassData(type); > + t.getMetadataName(); > + } catch (NullPointerException e) { > + return type.getName(); > + } > return getAOTKlassData(type).getMetadataName(); > } > > This looks a bit like last night's left-overs? Why is > getAOTKlassData(type) being called again? Is this just dev-time > scaffolding you didn't delete? Or is the catch actually doing something > necessary? I'll dig. > 8) > src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CodeSectionProcessor.java > > final Call callInfopoint = (Call) infopoint; > - if (callInfopoint.target instanceof > HotSpotForeignCallLinkage) { > + if (callInfopoint.target instanceof > HotSpotForeignCallLinkage && > + target.arch instanceof AMD64) { > // TODO 4 is x86 size of relative displacement. > > So, why can AArch64 simply not worry about zeroing out destinations? Is > this because of using PLTs and CompiledPltStaticCall::set_stub_to_clean? It's not necessary on AARch64. There's no need to do an AArch64 version of this code. > 9) > src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CompiledMethodInfo.java > > for (Mark m : compilationResult.getMarks()) { > + int adjOffset = m.pcOffset; > + if (archStr.equals("aarch64")) { > + // FIXME: This is very ugly. > + // The mark is at the end of a group of three instructions: > + // adrp; add; ldr > + adjOffset += 12; > + } else { > // TODO: X64-specific code. > > I'm not really sure why this is so ugly. Can you explain? Oh, it just sucks to have CPU-specific code there. I guess I was in a bad mood. > 10) /jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java > > - if (objFile.exists()) { > + if (objFile.exists() && > System.getenv("DO_NOT_DELETE_PRECIOUS_FILE") == null) { > > Is this intended to remain? Yes. Vladimir suggested we use a property for this, and I'm hoping someone will come up with a suggested name. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Tue Mar 27 09:51:20 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 10:51:20 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <36e42949-634a-fce4-9fd5-d6385bd16aa7@oracle.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <36e42949-634a-fce4-9fd5-d6385bd16aa7@oracle.com> Message-ID: <2adc4d0c-8ecd-cbec-f69f-f60e6fab1602@redhat.com> On 03/26/2018 10:21 PM, Vladimir Kozlov wrote: > Are all suggested Graal changes pushed into main repo > https://github.com/oracle/graal? You should start there. 
Not yet. I'm getting a few people here to test stuff. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From shade at redhat.com Tue Mar 27 11:22:47 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 27 Mar 2018 13:22:47 +0200 Subject: RFR (S) 8200299: Non-PCH build for aarch64 fails Message-ID: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> Bug: https://bugs.openjdk.java.net/browse/JDK-8200299 Webrev: http://cr.openjdk.java.net/~shade/8200299/webrev.01/ It touches a shared file, but I think it is trivial, and so I haven't run it through submit-hs. Testing: x86_64 build, aarch64 cross-build Thanks, -Aleksey From tobias.hartmann at oracle.com Tue Mar 27 11:36:01 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 27 Mar 2018 13:36:01 +0200 Subject: RFR (S) 8200299: Non-PCH build for aarch64 fails In-Reply-To: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> References: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> Message-ID: <2742e7c3-549a-14e6-5832-d57d61b93686@oracle.com> Hi Aleksey, looks good to me. Best regards, Tobias On 27.03.2018 13:22, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200299 > > Webrev: > http://cr.openjdk.java.net/~shade/8200299/webrev.01/ > > It touches a shared file, but I think it is trivial, and so I haven't run it through submit-hs. > > Testing: x86_64 build, aarch64 cross-build > > Thanks, > -Aleksey > From leo.korinth at oracle.com Tue Mar 27 12:01:45 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Tue, 27 Mar 2018 14:01:45 +0200 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl In-Reply-To: <16201e338dda41b0913bbd16286b6e2d@sap.com> References: <16201e338dda41b0913bbd16286b6e2d@sap.com> Message-ID: > > Adding the template parameter to TraceEvent makes xlc happy too. > > http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ > > > Are you fine with this change ? Observe that this is not a review. I have tested that your fix should compile on linux, solaris, windows and mac. Thanks, Leo > Best regards, Matthias > From thomas.stuefe at gmail.com Tue Mar 27 12:22:01 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 14:22:01 +0200 Subject: RFR (S) 8200299: Non-PCH build for aarch64 fails In-Reply-To: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> References: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> Message-ID: Looks good. ..Thomas On Tue, Mar 27, 2018 at 1:22 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200299 > > Webrev: > http://cr.openjdk.java.net/~shade/8200299/webrev.01/ > > It touches a shared file, but I think it is trivial, and so I haven't run > it through submit-hs. > > Testing: x86_64 build, aarch64 cross-build > > Thanks, > -Aleksey > > From thomas.stuefe at gmail.com Tue Mar 27 12:25:40 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 14:25:40 +0200 Subject: RFR (S) 8200299: Non-PCH build for aarch64 fails In-Reply-To: References: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> Message-ID: ... aand I just hit it myself on ppc. Thanks for fixing :) On Tue, Mar 27, 2018 at 2:22 PM, Thomas St?fe wrote: > Looks good. 
> > ..Thomas > > On Tue, Mar 27, 2018 at 1:22 PM, Aleksey Shipilev > wrote: > >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8200299 >> >> Webrev: >> http://cr.openjdk.java.net/~shade/8200299/webrev.01/ >> >> It touches a shared file, but I think it is trivial, and so I haven't run >> it through submit-hs. >> >> Testing: x86_64 build, aarch64 cross-build >> >> Thanks, >> -Aleksey >> >> > From coleen.phillimore at oracle.com Tue Mar 27 12:41:38 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Mar 2018 08:41:38 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> Message-ID: <78987137-0608-9e87-f742-73e909bab453@oracle.com> On 3/27/18 4:04 AM, Stefan Karlsson wrote: > Hi Coleen, > > This file is using CHeapObj and StackObj, and should keep including > allocation.hpp. > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html > Yes, that's why I added allocation.hpp to this one. > > These files are using MetaspaceObj: > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html > > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html > > > This file is using ReallocMark: > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html > My script didn't look for these.? I added them back. open webrev at http://cr.openjdk.java.net/~coleenp/8200276.02/webrev And checked the rest. thanks, Coleen Thanks, Coleen > > Thanks, > StefanK > > On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: >> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed >> in these files. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >> >> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, >> macos-x86, windows-x86.? Tested open-only with >> --disable-precompiled-headers. Built zero on linux x64. >> >> Thanks, >> Coleen From shade at redhat.com Tue Mar 27 12:41:54 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Tue, 27 Mar 2018 14:41:54 +0200 Subject: RFR (S) 8200299: Non-PCH build for aarch64 fails In-Reply-To: References: <6bcca6a2-7f12-e489-8aea-277fe44917f7@redhat.com> Message-ID: <619c88b6-56d9-426f-ff4a-56401d5c970b@redhat.com> No problem, pushed. -Aleksey On 03/27/2018 02:25 PM, Thomas St?fe wrote: > ... aand I just hit it myself on ppc. Thanks for fixing :) > > On Tue, Mar 27, 2018 at 2:22 PM, Thomas St?fe > wrote: > > Looks good. > > ..Thomas > > On Tue, Mar 27, 2018 at 1:22 PM, Aleksey Shipilev > > wrote: > > Bug: > ?https://bugs.openjdk.java.net/browse/JDK-8200299 > > > Webrev: > ?http://cr.openjdk.java.net/~shade/8200299/webrev.01/ > > > It touches a shared file, but I think it is trivial, and so I haven't run it through submit-hs. 
> > Testing: x86_64 build, aarch64 cross-build > > Thanks, > -Aleksey > > > From stefan.karlsson at oracle.com Tue Mar 27 12:42:10 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 27 Mar 2018 14:42:10 +0200 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> Message-ID: <3f7e98e1-0baf-7635-2a7e-47435c373bc1@oracle.com> On 2018-03-27 10:04, Stefan Karlsson wrote: > Hi Coleen, > > This file is using CHeapObj and StackObj, and should keep including > allocation.hpp. > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html > Disregard this comment. Coleen pointed at that the patch *adds* allocation.hpp to this file. StefanK > > These files are using MetaspaceObj: > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html > > > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html > > > This file is using ReallocMark: > http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html > > > Thanks, > StefanK > > On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: >> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed >> in these files. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >> >> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, macos-x86, >> windows-x86.? Tested open-only with --disable-precompiled-headers. >> Built zero on linux x64. >> >> Thanks, >> Coleen From matthias.baesken at sap.com Tue Mar 27 12:46:21 2018 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 27 Mar 2018 12:46:21 +0000 Subject: 8200246 : AIX build fails after adjustments of src/hotspot/share/trace/traceEventClasses.xsl In-Reply-To: References: <16201e338dda41b0913bbd16286b6e2d@sap.com> Message-ID: <4026c422f2ae4a068d8632e8d55243f3@sap.com> Thanks Leo . We have tested the same here locally in our OpenJDK night builds . Best regards, Matthias > -----Original Message----- > From: Leo Korinth [mailto:leo.korinth at oracle.com] > Sent: Dienstag, 27. M?rz 2018 14:02 > To: Baesken, Matthias ; 'build- > dev at openjdk.java.net' ; 'hotspot- > dev at openjdk.java.net' > Cc: Simonis, Volker > Subject: Re: 8200246 : AIX build fails after adjustments of > src/hotspot/share/trace/traceEventClasses.xsl > > > > > Adding the template parameter to TraceEvent makes xlc happy too. > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8200246/ > > > > > > Are you fine with this change ? > > Observe that this is not a review. I have tested that your fix should > compile on linux, solaris, windows and mac. > > Thanks, > Leo > > Best regards, Matthias > > From aph at redhat.com Tue Mar 27 12:58:35 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 13:58:35 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <3f069df6-a1ec-df1b-a51a-711c7e291f4f@oracle.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> <3f069df6-a1ec-df1b-a51a-711c7e291f4f@oracle.com> Message-ID: <02e1f2e8-96f6-ac57-98dc-443c0e766fb0@redhat.com> On 27/03/18 02:21, dean.long at oracle.com wrote: > On 3/23/18 4:27 PM, Vladimir Kozlov wrote: >> Code in AOTCompiledClass.java look strange in try block. Why you need it? 
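(The try/catch there is effectively a guard against getAOTKlassData() returning null; the NPE it catches comes from calling getMetadataName() on the missing data. Written as an explicit check, the same fallback would read roughly as below. This is only a sketch, untested, and it leaves open whether the null case is legitimate or a missing registration:

    static String metadataName(HotSpotResolvedObjectType type) {
        AOTKlassData data = getAOTKlassData(type);
        if (data == null) {
            // No AOTKlassData was registered for this type; fall back to the plain type name.
            return type.getName();
        }
        return data.getMetadataName();
    }
)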
> > I've seen that code fail as well, but thought it was due to me doing > something wrong, because it always went away with the final version of > my changes. It would be great to fix this issue once and for all. > Andrew, if you have a test case to reproduce the problem, it would be > great to have it as a regression test. It always happens for me when processing stubs: HotSpotResolvedJavaMethod m = HotSpotMethod type = m.getDeclaringClass() = HotSpotType type.getName() = Lorg/graalvm/compiler/hotspot/stubs/ExceptionHandlerStub; getAOTKlassData(name) = null I can't see where addAOTKlassData() is ever called for stubs code. If I run with assertions I get: Exception in thread "main" java.lang.AssertionError: no data for HotSpotType at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:441) at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:450) at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:456) at jdk.aot/jdk.tools.jaotc.MetadataBuilder.addMetadataEntries(MetadataBuilder.java:185) at jdk.aot/jdk.tools.jaotc.MetadataBuilder.createMethodMetadata(MetadataBuilder.java:119) If anyone knows where stubs are supposed to be added to AOTCompiledClass.klassData I'll have a look, but as far as I can see the answer is "nowhere". -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Tue Mar 27 13:29:34 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Mar 2018 09:29:34 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: <78987137-0608-9e87-f742-73e909bab453@oracle.com> References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> <78987137-0608-9e87-f742-73e909bab453@oracle.com> Message-ID: On 3/27/18 8:41 AM, coleen.phillimore at oracle.com wrote: > > > On 3/27/18 4:04 AM, Stefan Karlsson wrote: >> Hi Coleen, >> >> This file is using CHeapObj and StackObj, and should keep including >> allocation.hpp. >> >> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html >> > > Yes, that's why I added allocation.hpp to this one. >> >> These files are using MetaspaceObj: >> >> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html >> >> >> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html >> >> >> This file is using ReallocMark: >> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html >> > > My script didn't look for these.? I added them back. > > open webrev at http://cr.openjdk.java.net/~coleenp/8200276.02/webrev The incremental is that I reverted cpCache.hpp, array.hpp and exceptionHandlerTable.hpp changes, so they're not present in the new webrev. Thanks, Coleen > > And checked the rest. > > thanks, > Coleen > > Thanks, > Coleen >> >> Thanks, >> StefanK >> >> On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: >>> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC >>> removed in these files. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >>> >>> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, >>> macos-x86, windows-x86.? Tested open-only with >>> --disable-precompiled-headers. Built zero on linux x64. 
>>> >>> Thanks, >>> Coleen > From edward.nevill at gmail.com Tue Mar 27 13:47:32 2018 From: edward.nevill at gmail.com (Edward Nevill) Date: Tue, 27 Mar 2018 14:47:32 +0100 Subject: RFR: 8199138: Add RISC-V support to Zero In-Reply-To: <62079618-0fdd-af0b-a3ee-3134555326bb@physik.fu-berlin.de> References: <1521313360.26308.4.camel@gmail.com> <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com> <1521554055.3029.4.camel@gmail.com> <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de> <1522139039.7098.11.camel@gmail.com> <62079618-0fdd-af0b-a3ee-3134555326bb@physik.fu-berlin.de> Message-ID: <1522158452.7098.16.camel@gmail.com> On Tue, 2018-03-27 at 17:46 +0900, John Paul Adrian Glaubitz wrote: > On 03/27/2018 05:23 PM, Edward Nevill wrote: > > @@ -1733,6 +1733,9 @@ > > #ifndef EM_AARCH64 > > #define EM_AARCH64 183 /* ARM AARCH64 */ > > #endif > > +#ifndef EM_RISCV /* RISCV */ > > + #define EM_RISCV 243 > > +#endif > > What confuses me: Why RISCV here and not RISCV64? > > In particular this hunk: > > @@ -1758,6 +1761,7 @@ > {EM_PARISC, EM_PARISC, ELFCLASS32, ELFDATA2MSB, > (char*)"PARISC"}, > {EM_68K, EM_68K, ELFCLASS32, ELFDATA2MSB, > (char*)"M68k"}, > {EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, > (char*)"AARCH64"}, > + {EM_RISCV, EM_RISCV, ELFCLASS64, ELFDATA2LSB, > (char*)"RISCV"}, > }; > > I know there is already 32-bit RISC-V and there are actually plans > for > using it. So, it looks to me you would be breaking 32-bit RISC-V > here. > Because that is what is defined in elf.h >From /usr/include/elf.h #define EM_RISCV 243 /* RISC-V */ There is no EM_RISCV32 or EM_RISCV64 in elf.h All the best, Ed. From vladimir.kozlov at oracle.com Tue Mar 27 14:20:31 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 27 Mar 2018 07:20:31 -0700 Subject: RFD: AOT for AArch64 In-Reply-To: <3b1675a4-4fe8-ee75-da83-eb119ff0256a@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> <3b1675a4-4fe8-ee75-da83-eb119ff0256a@redhat.com> Message-ID: <9ccf24ea-ef2b-8602-f79a-aa6bd45bc9a6@oracle.com> >> System.getenv("DO_NOT_DELETE_PRECIOUS_FILE") == null) { >> >> Is this intended to remain? > >Yes. Vladimir suggested we use a property for this, and I'm hoping >someone will come up with a suggested name. How about aot.keep.objFile? jaotc --output libtest.so -J-Daot.keep.objFile=true test.class And I found that on Linux we forgot to add '.o' suffix. diff -r 898ef81cbc0e src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java --- a/src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java +++ b/src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java @@ -69,6 +69,7 @@ if (name.endsWith(".so")) { objectFileName = name.substring(0, name.length() - ".so".length()); } + objectFileName = objectFileName + ".o"; linkerPath = (options.linkerpath != null) ? 
options.linkerpath : "ld"; linkerCmd = linkerPath + " -shared -z noexecstack -o " + libraryFileName + " " + objectFileName; linkerCheck = linkerPath + " -v"; @@ -130,7 +131,8 @@ throw new InternalError(errorMessage); } File objFile = new File(objectFileName); - if (objFile.exists()) { + boolean keepObjFile = Boolean.parseBoolean(System.getProperty("aot.keep.objFile", "false")); + if (objFile.exists() && !keepObjFile) { if (!objFile.delete()) { throw new InternalError("Failed to delete " + objectFileName + " file"); } Thanks, Vladimir On 3/27/18 1:54 AM, Andrew Haley wrote: > On 03/27/2018 09:18 AM, Andrew Dinn wrote: >> On 26/03/18 16:02, Andrew Dinn wrote: >>> On 26/03/18 15:56, Andrew Dinn wrote: >>>> Ship it! >>> Ok, so I know this really needs a code audit too. I'm working on that now. >> I am still trying to understand all the details of this patch so this is >> really just preliminary feedback. Many of the comments below are >> questions, posed mostly as requests for clarification rather than >> suggestions for any improvement. > > Cool. > >> One overall comment: copyrights need updating! > > I really need some way to automate that. :-) > >> 1) make/autoconf/generated-configure.sh >> >> you can scratch all these changes as the file is now deleted >> >> 2) make/hotspot/lib/JvmFeatures.gmk >> >> is this a necessary part of the AOT change or just an extra cleanup? > > It's a hangover from debug code. > >> 3) src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp >> >> I believe this is the cause of the slowdebug assert . . . >> >> @@ -738,25 +738,25 @@ >> - if (far_branches() && !Compile::current()->in_scratch_emit_size()) { >> + if (UseAOT || (far_branches() && >> !Compile::current()->in_scratch_emit_size())) { >> address stub = emit_trampoline_stub(start_offset, entry.target()); >> if (stub == NULL) { >> return NULL; // CodeCache is full >> } >> >> . . . and should be >> >> @@ -738,25 +738,25 @@ >> - if (far_branches() && !Compile::current()->in_scratch_emit_size()) { >> + if ((UseAOT || (far_branches()) && >> !Compile::current()->in_scratch_emit_size()) { >> address stub = emit_trampoline_stub(start_offset, entry.target()); >> if (stub == NULL) { >> return NULL; // CodeCache is full >> } > > Yep, got that. > >> Also, as mentioned earlier this next change is redundant as it is >> already in the latest hs >> >> @@ -1048,11 +1048,11 @@ >> Address::lsl(LogBytesPerWord))); >> ldr(method_result, Address(method_result, vtable_offset_in_bytes)); >> } else { >> vtable_offset_in_bytes += vtable_index.as_constant() * wordSize; >> ldr(method_result, >> - form_address(rscratch1, recv_klass, vtable_offset_in_bytes)); >> + form_address(rscratch1, recv_klass, vtable_offset_in_bytes, 0)); > > Right. > >> 4) nativeInst_aarch64.hpp >> >> 147 class NativePltCall: public NativeInstruction { >> 148 public: >> 149 enum Intel_specific_constants { >> >> Hmm, Intel? Shurely AArch64? > > LOL! There's still some cruft in there. > >> 5) src/hotspot/share/code/oopRecorder.hpp >> src/hotspot/share/jvmci/jvmciCodeInstaller.cpp >> >> Why is there a need to make these changes to shared code? In particular: >> >> why does allocate_metadata need to be virtual? > > That's another hangover. > >> what magic lies behind the argument 64 passed into the GrowableArray >> created by AOTOopRecorder::AOTOopRecorder()? > > That too. 
> >> 6) >> src/jdk.aot/share/classes/jdk.tools.jaotc.binformat/src/jdk/tools/jaotc/binformat/elf/ElfTargetInfo.java >> >> if (archStr.equals("amd64") || archStr.equals("x86_64")) { >> arch = Elf64_Ehdr.EM_X86_64; >> + } else if (archStr.equals("amd64") || archStr.equals("aarch64")) { >> + arch = Elf64_Ehdr.EM_AARCH64; >> >> Should be >> >> if (archStr.equals("amd64") || archStr.equals("x86_64")) { >> arch = Elf64_Ehdr.EM_X86_64; >> + } else if (archStr.equals("aarch64")) { >> + arch = Elf64_Ehdr.EM_AARCH64; > > Good catch! I didn't see that one. > >> 7) >> src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/AOTCompiledClass.java >> >> static String metadataName(HotSpotResolvedObjectType type) { >> AOTKlassData data = getAOTKlassData(type); >> assert data != null : "no data for " + type; >> + try { >> + AOTKlassData t = getAOTKlassData(type); >> + t.getMetadataName(); >> + } catch (NullPointerException e) { >> + return type.getName(); >> + } >> return getAOTKlassData(type).getMetadataName(); >> } >> >> This looks a bit like last night's left-overs? Why is >> getAOTKlassData(type) being called again? Is this just dev-time >> scaffolding you didn't delete? Or is the catch actually doing something >> necessary? > > I'll dig. > >> 8) >> src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CodeSectionProcessor.java >> >> final Call callInfopoint = (Call) infopoint; >> - if (callInfopoint.target instanceof >> HotSpotForeignCallLinkage) { >> + if (callInfopoint.target instanceof >> HotSpotForeignCallLinkage && >> + target.arch instanceof AMD64) { >> // TODO 4 is x86 size of relative displacement. >> >> So, why can AArch64 simply not worry about zeroing out destinations? Is >> this because of using PLTs and CompiledPltStaticCall::set_stub_to_clean? > > It's not necessary on AARch64. There's no need to do an AArch64 version of > this code. > >> 9) >> src/jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/CompiledMethodInfo.java >> >> for (Mark m : compilationResult.getMarks()) { >> + int adjOffset = m.pcOffset; >> + if (archStr.equals("aarch64")) { >> + // FIXME: This is very ugly. >> + // The mark is at the end of a group of three instructions: >> + // adrp; add; ldr >> + adjOffset += 12; >> + } else { >> // TODO: X64-specific code. >> >> I'm not really sure why this is so ugly. Can you explain? > > Oh, it just sucks to have CPU-specific code there. I guess I was in a bad > mood. > >> 10) /jdk.aot/share/classes/jdk.tools.jaotc/src/jdk/tools/jaotc/Linker.java >> >> - if (objFile.exists()) { >> + if (objFile.exists() && >> System.getenv("DO_NOT_DELETE_PRECIOUS_FILE") == null) { >> >> Is this intended to remain? > > Yes. Vladimir suggested we use a property for this, and I'm hoping someone will > come up with a suggested name. > From adinn at redhat.com Tue Mar 27 15:08:00 2018 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 27 Mar 2018 16:08:00 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> Message-ID: <8342caa0-e4d1-0250-02ab-c8f9cf8085bd@redhat.com> On 27/03/18 09:18, Andrew Dinn wrote: > On 26/03/18 16:02, Andrew Dinn wrote: >> On 26/03/18 15:56, Andrew Dinn wrote: >>> Ship it! 
>> Ok, so I know this really needs a code audit too. I'm working on that now.
> I am still trying to understand all the details of this patch so this is
> really just preliminary feedback. Many of the comments below are
> questions, posed mostly as requests for clarification rather than
> suggestions for any improvement.
>
> Also, I'm now switching to look at the Graal changes so I can tie these
> two patches together.

Most of the answers to the last round involved removing the stuff that was
asked about. So, I am now quite happy with the remaining hotspot changes
(I'm still not clear why x86 has to zero its callInfopoint entries but
clearly AArch64 /doesn't/ need to -- we know it works -- so that can pass).

Below is initial feedback on the Graal changes. I didn't find anything much
that needed changing nor come up with any real questions about the code --
it's mostly unneeded imports. However, I think I still need to spend some
more time piecing this together with the hotspot changes. I'll try to get
final comments plus a yea or nay (well, ok I guess it's going to be a yea)
posted by late tomorrow morning.

regards,

Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No. 03798903
Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

1) compiler/src/org.graalvm.compiler.debug/src/org/graalvm/compiler/debug/GraalError.java

   /**
    * This constructor creates a {@link GraalError} with a message assembled via
    * {@link String#format(String, Object...)}. It always uses the ENGLISH locale in order to
-   * always generate the same output.
-   *
+internal is
    *
    * @param msg the message that will be associated with the error, in String.format syntax
    * @param args parameters to String.format - parameters that implement {@link Iterable} will be
    *            expanded into a [x, x, ...] representation.

This looks like an accidental paste-over!

2) compiler/src/org.graalvm.compiler.hotspot.aarch64/src/org/graalvm/compiler/hotspot/aarch64/AArch64HotSpotLIRGenerator.java

Two import issues:

+import static jdk.vm.ci.amd64.AMD64.rbp;

This import is not needed

 import org.graalvm.compiler.core.common.calc.Condition;
+import org.graalvm.compiler.core.common.spi.ForeignCallDescriptor;
+import org.graalvm.compiler.core.common.spi.ForeignCallDescriptor;

ForeignCallDescriptor is imported twice.

3) compiler/src/org.graalvm.compiler.hotspot.aarch64/src/org/graalvm/compiler/hotspot/aarch64/AArch64HotSpotLoadAddressOp.java

redundant include

import static org.graalvm.compiler.asm.aarch64.AArch64Address.*;

4) compiler/src/org.graalvm.compiler.hotspot.aarch64/src/org/graalvm/compiler/hotspot/aarch64/AArch64HotSpotMove.java

@@ -117,7 +149,7 @@ public class AArch64HotSpotMove {
         } else if (nonNull) {
             masm.sub(64, resultRegister, ptr, base);
             if (encoding.hasShift()) {
-                masm.shl(64, resultRegister, resultRegister, encoding.getShift());
+                masm.lshr(64, resultRegister, resultRegister, encoding.getShift());
             }
         } else {

CompressPointer was doing an shl before? Really?
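For reference, the usual compressed-oop arithmetic (just a from-memory sketch to make the point, not the actual Graal code): compressing subtracts the heap base and shifts right, decompressing shifts left and adds the base back, so the encode path really does want lshr and only the decode path uses shl:

    // encode: narrow = (oop - heapBase) >>> shift   (logical shift right, i.e. lshr)
    static long compress(long oop, long heapBase, int shift) {
        return (oop - heapBase) >>> shift;
    }

    // decode: oop = heapBase + (narrow << shift)    (shift left, i.e. shl)
    static long uncompress(long narrow, long heapBase, int shift) {
        return heapBase + (narrow << shift);
    }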
5) compiler/src/org.graalvm.compiler.lir.aarch64/src/org/graalvm/compiler/lir/aarch64/AArch64Call.java redundant import: import static jdk.vm.ci.aarch64.AArch64.lr; 6) compiler/src/org.graalvm.compiler.lir.aarch64/src/org/graalvm/compiler/lir/aarch64/AArch64Move.java redundant import: import static jdk.vm.ci.meta.JavaKind.Int; 7) compiler/src/org.graalvm.compiler.lir.aarch64/src/org/graalvm/compiler/lir/aarch64/AArch64RestoreRegistersOp.java redundant import: import jdk.vm.ci.aarch64.AArch64Kind; 8) compiler/src/org.graalvm.compiler.lir.aarch64/src/org/graalvm/compiler/lir/aarch64/AArch64SaveRegistersOp.java redundant import: import org.graalvm.collections.EconomicSet; From dmitry.chuyko at bell-sw.com Tue Mar 27 15:08:45 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Tue, 27 Mar 2018 18:08:45 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: <1521917644.2929.5.camel@gmail.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <1521917644.2929.5.camel@gmail.com> Message-ID: <927c88df-eb7a-9f3a-9340-1bd9b9833d84@bell-sw.com> Andrew, thank you, great work! On 03/24/2018 09:54 PM, Edward Nevill wrote: > On Sat, 2018-03-24 at 16:09 +0000, Edward Nevill wrote: >> On Fri, 2018-03-23 at 18:11 +0000, Andrew Haley wrote: >>> >> Looks promising, but I get as far as here and then get >> >> Exception in thread "main" jdk.vm.ci.common.JVMCIError: expected VM constant not found: CardTableModRefBS::dirty_card >> > It looks like your patch > > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ > > didn't apply cleanly to tip of http://hg.openjdk.java.net/jdk/hs > > I tried updating to > > changeset: 48711:e321560ac819 Thanks Ed, that works for me either. > ........... > ed at ubuntu:~/openjdk$ /home/ed/openjdk/hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc -J--module-path=/home/ed/openjdk/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/home/ed/openjdk/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar -J--upgrade-module-path=/home/ed/openjdk/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar Queens.class --output Queens.so > Error: Failed compilation: Queens.main([Ljava/lang/String;)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 > at node: 287|LoadConstantIndirectly > Error: Failed compilation: Queens.print([I)V: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 > at node: 1273|LoadConstantIndirectly > Exception in thread "main" java.lang.NoSuchMethodError: jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.addressOf(Ljdk/vm/ci/code/Register;)V > at jdk.aot/jdk.tools.jaotc.aarch64.AArch64ELFMacroAssembler.getPLTStaticEntryCode(AArch64ELFMacroAssembler.java:68) > at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.addCallStub(CodeSectionProcessor.java:139) > at jdk.aot/jdk.tools.jaotc.CodeSectionProcessor.process(CodeSectionProcessor.java:117) > at jdk.aot/jdk.tools.jaotc.DataBuilder.prepareData(DataBuilder.java:142) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:188) > at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) > at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) > > Is this known/expected? What revision of jdk/hs are you building with? I'd like to see this working. > > Thanks, > Ed. 
> Ed, in case you haven't made it yet: I manually applied 2 parts of the patch to graal sources from Andrew's repo: http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64Assembler.java.patch http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64MacroAssembler.java.patch We use those built modules as a replacement for ones from JDK but the changes are not present in aarch64-branch-overflows. Now java.base can be AOT'ed on machines we have. In the logs I see a lot (~37k) of failed compilations with a message like following: Error: Failed compilation: com.sun.crypto.provider.GCMParameters.engineToString()Ljava/lang/String;: org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load an object address is not currently supported on aarch64 ??????? at node: 2058|LoadConstantIndirectly -Dmitry From coleen.phillimore at oracle.com Tue Mar 27 15:17:17 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Mar 2018 11:17:17 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: <6be32e7a-30a1-e21a-4df7-14d672a9c90b@oracle.com> References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> <78987137-0608-9e87-f742-73e909bab453@oracle.com> <6be32e7a-30a1-e21a-4df7-14d672a9c90b@oracle.com> Message-ID: <85689dab-1f24-d97a-f654-e6728ca51930@oracle.com> Thanks for the code review. Coleen On 3/27/18 11:03 AM, Stefan Karlsson wrote: > > > On 2018-03-27 15:29, coleen.phillimore at oracle.com wrote: >> >> >> On 3/27/18 8:41 AM, coleen.phillimore at oracle.com wrote: >>> >>> >>> On 3/27/18 4:04 AM, Stefan Karlsson wrote: >>>> Hi Coleen, >>>> >>>> This file is using CHeapObj and StackObj, and should keep including >>>> allocation.hpp. >>>> >>>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html >>>> >>> >>> Yes, that's why I added allocation.hpp to this one. >>>> >>>> These files are using MetaspaceObj: >>>> >>>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html >>>> >>>> >>>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html >>>> >>>> >>>> This file is using ReallocMark: >>>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html >>>> >>> >>> My script didn't look for these.? I added them back. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.02/webrev >> >> The incremental is that I reverted cpCache.hpp, array.hpp and >> exceptionHandlerTable.hpp changes, so they're not present in the new >> webrev. > > Sounds good. > > StefanK > >> >> Thanks, >> Coleen >> >>> >>> And checked the rest. >>> >>> thanks, >>> Coleen >>> >>> Thanks, >>> Coleen >>>> >>>> Thanks, >>>> StefanK >>>> >>>> On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: >>>>> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC >>>>> removed in these files. >>>>> >>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >>>>> >>>>> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, >>>>> macos-x86, windows-x86.? Tested open-only with >>>>> --disable-precompiled-headers. 
Built zero on linux x64. >>>>> >>>>> Thanks, >>>>> Coleen >>> >> From aph at redhat.com Tue Mar 27 15:23:48 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 16:23:48 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <927c88df-eb7a-9f3a-9340-1bd9b9833d84@bell-sw.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <1521917644.2929.5.camel@gmail.com> <927c88df-eb7a-9f3a-9340-1bd9b9833d84@bell-sw.com> Message-ID: <7d2ad5a8-ca31-20c1-8f0d-819f4f2eba1a@redhat.com> On 27/03/18 16:08, Dmitry Chuyko wrote: > I manually applied 2 parts of the patch to graal sources from Andrew's repo: > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64Assembler.java.patch > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64MacroAssembler.java.patch > We use those built modules as a replacement for ones from JDK but the > changes are not present in aarch64-branch-overflows. What is this all about? I don't know what you've done or why. > Now java.base can be AOT'ed on machines we have. > In the logs I see a lot (~37k) of failed compilations with a message > like following: > > Error: Failed compilation: > com.sun.crypto.provider.GCMParameters.engineToString()Ljava/lang/String;: > org.graalvm.compiler.graph.GraalGraphError: > org.graalvm.compiler.debug.GraalError: Emitting code to load an object > address is not currently supported on aarch64 > at node: 2058|LoadConstantIndirectly Interesting. I haven't seen that one. Some methods fail to AOT because AOT compilation tries to initialize the classes, and some of the classes are inaccessible for security reasons. I'll look at rebasing to current Graal sources and push again. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Tue Mar 27 15:26:18 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 16:26:18 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <8342caa0-e4d1-0250-02ab-c8f9cf8085bd@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> <8342caa0-e4d1-0250-02ab-c8f9cf8085bd@redhat.com> Message-ID: <36704e2b-c81c-2533-0cca-da6225b34d2e@redhat.com> On 27/03/18 16:08, Andrew Dinn wrote: > CompressPointer was doing an shl before? Really? Sure as shit. Really. :-) Thanks for the rest. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From stefan.karlsson at oracle.com Tue Mar 27 15:03:53 2018 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 27 Mar 2018 17:03:53 +0200 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> <78987137-0608-9e87-f742-73e909bab453@oracle.com> Message-ID: <6be32e7a-30a1-e21a-4df7-14d672a9c90b@oracle.com> On 2018-03-27 15:29, coleen.phillimore at oracle.com wrote: > > > On 3/27/18 8:41 AM, coleen.phillimore at oracle.com wrote: >> >> >> On 3/27/18 4:04 AM, Stefan Karlsson wrote: >>> Hi Coleen, >>> >>> This file is using CHeapObj and StackObj, and should keep including >>> allocation.hpp. >>> >>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/gc/parallel/psVirtualspace.hpp.frames.html >>> >> >> Yes, that's why I added allocation.hpp to this one. >>> >>> These files are using MetaspaceObj: >>> >>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/cpCache.hpp.frames.html >>> >>> >>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/oops/array.hpp.frames.html >>> >>> >>> This file is using ReallocMark: >>> http://cr.openjdk.java.net/~coleenp/8200276.01/webrev/src/hotspot/share/code/exceptionHandlerTable.hpp.frames.html >>> >> >> My script didn't look for these.? I added them back. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.02/webrev > > The incremental is that I reverted cpCache.hpp, array.hpp and > exceptionHandlerTable.hpp changes, so they're not present in the new > webrev. Sounds good. StefanK > > Thanks, > Coleen > >> >> And checked the rest. >> >> thanks, >> Coleen >> >> Thanks, >> Coleen >>> >>> Thanks, >>> StefanK >>> >>> On 2018-03-27 01:56, coleen.phillimore at oracle.com wrote: >>>> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC >>>> removed in these files. >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >>>> >>>> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, >>>> macos-x86, windows-x86.? Tested open-only with >>>> --disable-precompiled-headers. Built zero on linux x64. >>>> >>>> Thanks, >>>> Coleen >> > From harold.seigel at oracle.com Tue Mar 27 15:46:23 2018 From: harold.seigel at oracle.com (harold seigel) Date: Tue, 27 Mar 2018 11:46:23 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> Message-ID: Hi Coleen, This looks good. Thanks, Harold On 3/26/2018 7:56 PM, coleen.phillimore at oracle.com wrote: > These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed > in these files. > > open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8200276 > > Tested tier1 on oracle platforms: linux-x64, solaris-sparc, macos-x86, > windows-x86.? Tested open-only with --disable-precompiled-headers.? > Built zero on linux x64. > > Thanks, > Coleen From thomas.stuefe at gmail.com Tue Mar 27 15:47:53 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 17:47:53 +0200 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors Message-ID: Hi all, Please review these small, ppc and s390 only build fixes. 
Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 Webrev: http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390-nonpch-build-broken/webrev.00/webrev/ Thanks, Thomas From coleen.phillimore at oracle.com Tue Mar 27 15:50:26 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Mar 2018 11:50:26 -0400 Subject: RFR (S,trivial) 8200276: Cleanup allocation.hpp includes In-Reply-To: References: <11afbff5-2450-b9b3-4bce-7487a17697c1@oracle.com> Message-ID: Thanks Harold! Coleen On 3/27/18 11:46 AM, harold seigel wrote: > Hi Coleen, > > This looks good. > > Thanks, Harold > > > On 3/26/2018 7:56 PM, coleen.phillimore at oracle.com wrote: >> These includes are no longer needed with VALUE_OBJ_CLASS_SPEC removed >> in these files. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8200276.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8200276 >> >> Tested tier1 on oracle platforms: linux-x64, solaris-sparc, >> macos-x86, windows-x86.? Tested open-only with >> --disable-precompiled-headers.? Built zero on linux x64. >> >> Thanks, >> Coleen > From coleen.phillimore at oracle.com Tue Mar 27 15:51:09 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 27 Mar 2018 11:51:09 -0400 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors In-Reply-To: References: Message-ID: This looks good.? Thanks Thomas, sorry I broke it. Coleen On 3/27/18 11:47 AM, Thomas St?fe wrote: > Hi all, > > Please review these small, ppc and s390 only build fixes. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 > Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390-nonpch-build-broken/webrev.00/webrev/ > > > Thanks, Thomas From lois.foltan at oracle.com Tue Mar 27 15:52:11 2018 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 27 Mar 2018 11:52:11 -0400 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors In-Reply-To: References: Message-ID: <69674b58-5dde-72a0-32b1-052481ff3912@oracle.com> Looks good. Lois On 3/27/2018 11:47 AM, Thomas St?fe wrote: > Hi all, > > Please review these small, ppc and s390 only build fixes. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 > Webrev: > http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390-nonpch-build-broken/webrev.00/webrev/ > > > Thanks, Thomas From thomas.stuefe at gmail.com Tue Mar 27 16:06:35 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 18:06:35 +0200 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors In-Reply-To: References: Message-ID: Thanks Coleen! On Tue, Mar 27, 2018 at 5:51 PM, wrote: > This looks good. Thanks Thomas, sorry I broke it. > Coleen > > > On 3/27/18 11:47 AM, Thomas St?fe wrote: > >> Hi all, >> >> Please review these small, ppc and s390 only build fixes. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 >> Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390- >> nonpch-build-broken/webrev.00/webrev/ >> >> >> Thanks, Thomas >> > > From thomas.stuefe at gmail.com Tue Mar 27 16:06:48 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 18:06:48 +0200 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors In-Reply-To: <69674b58-5dde-72a0-32b1-052481ff3912@oracle.com> References: <69674b58-5dde-72a0-32b1-052481ff3912@oracle.com> Message-ID: Thank you! On Tue, Mar 27, 2018 at 5:52 PM, Lois Foltan wrote: > Looks good. 
> Lois > > > On 3/27/2018 11:47 AM, Thomas St?fe wrote: > >> Hi all, >> >> Please review these small, ppc and s390 only build fixes. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 >> Webrev: >> http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390- >> nonpch-build-broken/webrev.00/webrev/ >> >> >> Thanks, Thomas >> > > From dmitry.chuyko at bell-sw.com Tue Mar 27 16:17:06 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Tue, 27 Mar 2018 19:17:06 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: <7d2ad5a8-ca31-20c1-8f0d-819f4f2eba1a@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <1521907782.18910.9.camel@gmail.com> <1521917644.2929.5.camel@gmail.com> <927c88df-eb7a-9f3a-9340-1bd9b9833d84@bell-sw.com> <7d2ad5a8-ca31-20c1-8f0d-819f4f2eba1a@redhat.com> Message-ID: On 03/27/2018 06:23 PM, Andrew Haley wrote: > On 27/03/18 16:08, Dmitry Chuyko wrote: >> I manually applied 2 parts of the patch to graal sources from Andrew's repo: >> http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64Assembler.java.patch >> http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.asm.aarch64/src/org/graalvm/compiler/asm/aarch64/AArch64MacroAssembler.java.patch >> We use those built modules as a replacement for ones from JDK but the >> changes are not present in aarch64-branch-overflows. > What is this all about? I don't know what you've done or why. Your initial email contains command line for jaotc where --module-path and --upgrade-module-path point to jars built in forked Graal sources. I did so and got exactly the same exception when running jaotc. -Dmitry > >> Now java.base can be AOT'ed on machines we have. >> In the logs I see a lot (~37k) of failed compilations with a message >> like following: >> >> Error: Failed compilation: >> com.sun.crypto.provider.GCMParameters.engineToString()Ljava/lang/String;: >> org.graalvm.compiler.graph.GraalGraphError: >> org.graalvm.compiler.debug.GraalError: Emitting code to load an object >> address is not currently supported on aarch64 >> at node: 2058|LoadConstantIndirectly > Interesting. I haven't seen that one. > > Some methods fail to AOT because AOT compilation tries to initialize > the classes, and some of the classes are inaccessible for security > reasons. I'll look at rebasing to current Graal sources and push again. > From dean.long at oracle.com Tue Mar 27 16:18:08 2018 From: dean.long at oracle.com (dean.long at oracle.com) Date: Tue, 27 Mar 2018 09:18:08 -0700 Subject: RFD: AOT for AArch64 In-Reply-To: <02e1f2e8-96f6-ac57-98dc-443c0e766fb0@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <9bb3d296-323c-07b4-0a91-aa24992282c6@oracle.com> <3f069df6-a1ec-df1b-a51a-711c7e291f4f@oracle.com> <02e1f2e8-96f6-ac57-98dc-443c0e766fb0@redhat.com> Message-ID: We did have a regression in this area recently.? If you rebase to the latest Graal then this fix should help: https://github.com/oracle/graal/commit/3a3570cff497bf91eaf190aa91cc77f4cb9f9f5f dl On 3/27/18 5:58 AM, Andrew Haley wrote: > On 27/03/18 02:21, dean.long at oracle.com wrote: >> On 3/23/18 4:27 PM, Vladimir Kozlov wrote: >>> Code in AOTCompiledClass.java look strange in try block. Why you need it? >> I've seen that code fail as well, but thought it was due to me doing >> something wrong, because it always went away with the final version of >> my changes. 
It would be great to fix this issue once and for all. >> Andrew, if you have a test case to reproduce the problem, it would be >> great to have it as a regression test. > It always happens for me when processing stubs: > > HotSpotResolvedJavaMethod m = HotSpotMethod > type = m.getDeclaringClass() = HotSpotType > type.getName() = Lorg/graalvm/compiler/hotspot/stubs/ExceptionHandlerStub; > getAOTKlassData(name) = null > > I can't see where addAOTKlassData() is ever called for stubs code. > > If I run with assertions I get: > > Exception in thread "main" java.lang.AssertionError: no data for HotSpotType > at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:441) > at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:450) > at jdk.aot/jdk.tools.jaotc.AOTCompiledClass.metadataName(AOTCompiledClass.java:456) > at jdk.aot/jdk.tools.jaotc.MetadataBuilder.addMetadataEntries(MetadataBuilder.java:185) > at jdk.aot/jdk.tools.jaotc.MetadataBuilder.createMethodMetadata(MetadataBuilder.java:119) > > If anyone knows where stubs are supposed to be added to > AOTCompiledClass.klassData I'll have a look, but as far as I can see > the answer is "nowhere". > From thomas.stuefe at gmail.com Tue Mar 27 16:28:25 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Tue, 27 Mar 2018 18:28:25 +0200 Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors In-Reply-To: References: Message-ID: Since I have my two reviews now, and since this only affecting our platforms, I invoke the trivial rule and push. Sorry for not making this clear in the original review. I'm still learning :) ..Thomas On Tue, Mar 27, 2018 at 5:47 PM, Thomas St?fe wrote: > Hi all, > > Please review these small, ppc and s390 only build fixes. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8200302 > Webrev: http://cr.openjdk.java.net/~stuefe/webrevs/ > 8200302-ppc-s390-nonpch-build-broken/webrev.00/webrev/ > > > Thanks, Thomas > > From dmitry.chuyko at bell-sw.com Tue Mar 27 16:39:53 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Tue, 27 Mar 2018 19:39:53 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> Andrew, There are some jtreg failures. Some are related to "Emitting code to load a metaspace address is not currently supported on aarch64" that was spotted before. I'll send you the logs in private FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeDynamic2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeInterface2InterpretedTest.java Error: compiler/aot/calls/fromAot/AotInvokeInterface2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2InterpretedTest.java Error:? 
compiler/aot/calls/fromAot/AotInvokeSpecial2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeStatic2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeVirtual2NativeTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeVirtual2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeSpecial2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeStatic2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeVirtual2AotTest.java Passed: compiler/aot/cli/jaotc/ClasspathOptionUnknownClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassWithDebugTest.java Passed: compiler/aot/cli/jaotc/CompileDirectoryTest.java Passed: compiler/aot/cli/jaotc/CompileJarTest.java FAILED: compiler/aot/cli/jaotc/CompileModuleTest.java Passed: compiler/aot/cli/jaotc/ListOptionNotExistingTest.java FAILED: compiler/aot/cli/jaotc/ListOptionTest.java Passed: compiler/aot/cli/jaotc/ListOptionWrongFileTest.java Passed: compiler/aot/cli/DisabledAOTWithLibraryTest.java Passed: compiler/aot/cli/IncorrectAOTLibraryTest.java Passed: compiler/aot/cli/MultipleAOTLibraryTest.java Passed: compiler/aot/cli/NonExistingAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTOptionTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/directory/DirectorySourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/jar/JarSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/module/ModuleSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSearchTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSourceTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/SearchPathTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/NativeOrderOutputStreamTest.java Passed: compiler/aot/verification/vmflags/NotTrackedFlagTest.java Passed: compiler/aot/verification/vmflags/TrackedFlagTest.java Passed: compiler/aot/verification/ClassAndLibraryNotMatchTest.java FAILED: compiler/aot/DeoptimizationTest.java FAILED: compiler/aot/RecompilationTest.java Passed: compiler/aot/SharedUsageTest.java Test results: passed: 39; failed: 14; error: 8 -Dmitry From 
coleen.phillimore at oracle.com Tue Mar 27 17:13:49 2018
From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com)
Date: Tue, 27 Mar 2018 13:13:49 -0400
Subject: RFR(xxs, ppc/s390 only): 8200302: ppc, s390 (non-pch) build errors
In-Reply-To:
References:
Message-ID:

This seems to be an appropriate use of the "trivial rule".

thanks,
Coleen

On 3/27/18 12:28 PM, Thomas Stüfe wrote:
> Since I have my two reviews now, and since this only affecting our
> platforms, I invoke the trivial rule and push.
>
> Sorry for not making this clear in the original review. I'm still
> learning :)
>
> ..Thomas
>
> On Tue, Mar 27, 2018 at 5:47 PM, Thomas Stüfe
> wrote:
>
> Hi all,
>
> Please review these small, ppc and s390 only build fixes.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8200302
> Webrev:
> http://cr.openjdk.java.net/~stuefe/webrevs/8200302-ppc-s390-nonpch-build-broken/webrev.00/webrev/
>
> Thanks, Thomas
>

From volker.simonis at gmail.com Tue Mar 27 17:37:17 2018
From: volker.simonis at gmail.com (Volker Simonis)
Date: Tue, 27 Mar 2018 19:37:17 +0200
Subject: RFR(S): 8198915: [Graal] 3rd testcase of compiler/types/TestMeetIncompatibleInterfaceArrays.java takes more than 10 mins
Message-ID:

Hi,

can I please have a review for the following test-only change:

http://cr.openjdk.java.net/~simonis/webrevs/2018/8198915
https://bugs.openjdk.java.net/browse/JDK-8198915

When I wrote this test back in 2015, the WhiteBox API was not powerful
enough (or I simply wasn't smart enough to use it :) so I used a lot
of -XX options in order to run a method three times in a round where
every execution was done at another compilation level (i.e.
interpreted, C1 and C2). Unfortunately, the required redefinition of
compiler counters and thresholds massively slows down Graal as can be
seen in the bug report.

I've therefore changed the test to use the WhiteBox API to achieve the
same test compilation without the need to redefine the global JIT
compiler counters and thresholds.

Thank you and best regards,
Volker
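[Illustrative aside, not from the original thread and not the code in the
8198915 webrev: the sketch below only shows the general WhiteBox pattern the
message above describes -- driving one method through the interpreter, C1 and
C2 in turn by asking the compile broker directly, instead of re-tuning the
global compiler thresholds on the command line. The class and the 'payload'
method are invented for the example, and it assumes the usual jtreg WhiteBox
setup (whitebox classes on the boot class path, -XX:+UnlockDiagnosticVMOptions
-XX:+WhiteBoxAPI); the real test's structure differs.]

import java.lang.reflect.Method;
import sun.hotspot.WhiteBox;

public class TierSteppingSketch {
    private static final WhiteBox WB = WhiteBox.getWhiteBox();
    private static final int C1_SIMPLE = 1; // CompLevel_simple
    private static final int C2_FULL   = 4; // CompLevel_full_optimization

    static int payload(int i) { return i * 2 + 1; }   // method under test

    public static void main(String[] args) throws Exception {
        Method m = TierSteppingSketch.class.getDeclaredMethod("payload", int.class);

        // Round 1: interpreted -- make sure no compiled code is installed.
        WB.deoptimizeMethod(m);
        payload(1);

        // Round 2: C1 -- request the compilation explicitly and wait for it.
        WB.enqueueMethodForCompilation(m, C1_SIMPLE);
        while (!WB.isMethodCompiled(m)) { Thread.onSpinWait(); }
        payload(2);

        // Round 3: C2 -- discard the C1 code and recompile at the top tier.
        WB.deoptimizeMethod(m);
        WB.enqueueMethodForCompilation(m, C2_FULL);
        while (!WB.isMethodCompiled(m)) { Thread.onSpinWait(); }
        payload(3);
    }
}

Because the compilations are requested per method, no global
threshold/counter flags have to be redefined, which is what avoids the
slowdown reported for Graal.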
From vladimir.kozlov at oracle.com Tue Mar 27 17:50:22 2018
From: vladimir.kozlov at oracle.com (Vladimir Kozlov)
Date: Tue, 27 Mar 2018 10:50:22 -0700
Subject: RFR(S): 8198915: [Graal] 3rd testcase of compiler/types/TestMeetIncompatibleInterfaceArrays.java takes more than 10 mins
In-Reply-To:
References:
Message-ID: <6d134a87-b081-769e-d574-99f25294a4b2@oracle.com>

Looks good. Is there difference between "C2 (tier 4)" and "C2 (tier4)" in
tier[][] except space?

Thanks,
Vladimir

On 3/27/18 10:37 AM, Volker Simonis wrote:
> Hi,
>
> can I please have a review for the following test-only change:
>
> http://cr.openjdk.java.net/~simonis/webrevs/2018/8198915
> https://bugs.openjdk.java.net/browse/JDK-8198915
>
> When I wrote this test back in 2015, the WhiteBox API was not powerful
> enough (or I simply wasn't smart enough to use it :) so I used a lot
> of -XX options in order to run a method three times in a round where
> every execution was done at another compilation level (i.e.
> interpreted, C1 and C2). Unfortunately, the required redefinition of
> compiler counters and thresholds massively slows down Graal as can be
> seen in the bug report.
>
> Thank you and best regards,
> Volker
>

From edward.nevill at gmail.com Tue Mar 27 18:12:02 2018
From: edward.nevill at gmail.com (Edward Nevill)
Date: Tue, 27 Mar 2018 19:12:02 +0100
Subject: RFR: 8199138: Add RISC-V support to Zero
In-Reply-To: <62079618-0fdd-af0b-a3ee-3134555326bb@physik.fu-berlin.de>
References: <1521313360.26308.4.camel@gmail.com>
 <259e05b8-dbb1-4aa4-f451-6b7078eeb2ff@oracle.com>
 <1521554055.3029.4.camel@gmail.com>
 <9b675cad-0aec-dfbe-1540-15417b58aaea@physik.fu-berlin.de>
 <1522139039.7098.11.camel@gmail.com>
 <62079618-0fdd-af0b-a3ee-3134555326bb@physik.fu-berlin.de>
Message-ID: <1522174322.23521.4.camel@gmail.com>

On Tue, 2018-03-27 at 17:46 +0900, John Paul Adrian Glaubitz wrote:
> On 03/27/2018 05:23 PM, Edward Nevill wrote:
> > Sorry for the delay. I was doing another test build on qemu which takes about 3 days.
> >
> > > What confuses me: Why RISCV here and not RISCV64?
>
> In particular this hunk:
>
> @@ -1758,6 +1761,7 @@
>   {EM_PARISC, EM_PARISC, ELFCLASS32, ELFDATA2MSB, (char*)"PARISC"},
>   {EM_68K, EM_68K, ELFCLASS32, ELFDATA2MSB, (char*)"M68k"},
>   {EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, (char*)"AARCH64"},
> + {EM_RISCV, EM_RISCV, ELFCLASS64, ELFDATA2LSB, (char*)"RISCV"},
> };
>
> I know there is already 32-bit RISC-V and there are actually plans for
> using it. So, it looks to me you would be breaking 32-bit RISC-V here.
>

We could do something like

{EM_RISCV, EM_RISCV, LP64_ONLY(ELFCLASS64) NOT_LP64(ELFCLASS32), ELFDATA2LSB, (char*)"RISCV"},

Would this work?

All the best,
Ed.

From dmitry.chuyko at bell-sw.com Tue Mar 27 18:14:41 2018
From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko)
Date: Tue, 27 Mar 2018 21:14:41 +0300
Subject: RFD: AOT for AArch64
In-Reply-To: <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com>
References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com>
 <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com>
Message-ID: <53daf9a7-5108-06ce-bb05-6b10612c48d8@bell-sw.com>

Some more results. Extra 14 AOT tests fail on AArch64 compared to x86.

==== AOT x86 ====
Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2CompiledTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2InterpretedTest.java
Error: compiler/aot/calls/fromAot/AotInvokeDynamic2NativeTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeInterface2CompiledTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeInterface2InterpretedTest.java
Error: compiler/aot/calls/fromAot/AotInvokeInterface2NativeTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2CompiledTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2InterpretedTest.java
Error: compiler/aot/calls/fromAot/AotInvokeSpecial2NativeTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeStatic2AotTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeStatic2CompiledTest.java
Passed: compiler/aot/calls/fromAot/AotInvokeStatic2InterpretedTest.java
Error:
compiler/aot/calls/fromAot/AotInvokeStatic2NativeTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2InterpretedTest.java Error: compiler/aot/calls/fromAot/AotInvokeVirtual2NativeTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeVirtual2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeSpecial2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeStatic2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeVirtual2AotTest.java Passed: compiler/aot/cli/jaotc/ClasspathOptionUnknownClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassWithDebugTest.java Passed: compiler/aot/cli/jaotc/CompileDirectoryTest.java Passed: compiler/aot/cli/jaotc/CompileJarTest.java Passed: compiler/aot/cli/jaotc/CompileModuleTest.java Passed: compiler/aot/cli/jaotc/ListOptionNotExistingTest.java Passed: compiler/aot/cli/jaotc/ListOptionTest.java Passed: compiler/aot/cli/jaotc/ListOptionWrongFileTest.java Passed: compiler/aot/cli/DisabledAOTWithLibraryTest.java Passed: compiler/aot/cli/IncorrectAOTLibraryTest.java Passed: compiler/aot/cli/MultipleAOTLibraryTest.java Passed: compiler/aot/cli/NonExistingAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTOptionTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/directory/DirectorySourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/jar/JarSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/module/ModuleSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSearchTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSourceTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/SearchPathTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/NativeOrderOutputStreamTest.java Passed: compiler/aot/verification/vmflags/NotTrackedFlagTest.java Passed: compiler/aot/verification/vmflags/TrackedFlagTest.java Passed: compiler/aot/verification/ClassAndLibraryNotMatchTest.java Passed: compiler/aot/DeoptimizationTest.java Passed: compiler/aot/RecompilationTest.java Passed: compiler/aot/SharedUsageTest.java Test results: passed: 53; error: 8 ==== JVMCI AArch64 ==== Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Passed: compiler/jvmci/compilerToVM/AsResolvedJavaMethodTest.java Passed: compiler/jvmci/compilerToVM/CollectCountersTest.java Passed: compiler/jvmci/compilerToVM/DebugOutputTest.java Passed: 
compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java Passed: compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java Passed: compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/FindUniqueConcreteMethodTest.java Passed: compiler/jvmci/compilerToVM/GetBytecodeTest.java Passed: compiler/jvmci/compilerToVM/GetClassInitializerTest.java Passed: compiler/jvmci/compilerToVM/GetConstantPoolTest.java Passed: compiler/jvmci/compilerToVM/GetExceptionTableTest.java Passed: compiler/jvmci/compilerToVM/GetFlagValueTest.java Passed: compiler/jvmci/compilerToVM/GetImplementorTest.java Passed: compiler/jvmci/compilerToVM/GetLineNumberTableTest.java Passed: compiler/jvmci/compilerToVM/GetLocalVariableTableTest.java Passed: compiler/jvmci/compilerToVM/GetMaxCallTargetOffsetTest.java Passed: compiler/jvmci/compilerToVM/GetNextStackFrameTest.java Passed: compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java FAILED: compiler/jvmci/compilerToVM/GetResolvedJavaTypeTest.java Passed: compiler/jvmci/compilerToVM/GetStackTraceElementTest.java Passed: compiler/jvmci/compilerToVM/GetSymbolTest.java Passed: compiler/jvmci/compilerToVM/GetVtableIndexForInterfaceTest.java Passed: compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java Passed: compiler/jvmci/compilerToVM/HasFinalizableSubclassTest.java Passed: compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java FAILED: compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/IsCompilableTest.java Passed: compiler/jvmci/compilerToVM/IsMatureTest.java Passed: compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java Passed: compiler/jvmci/compilerToVM/JVM_RegisterJVMCINatives.java Passed: compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupTypeTest.java Passed: compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java Passed: compiler/jvmci/compilerToVM/MethodIsIgnoredBySecurityStackWalkTest.java Passed: compiler/jvmci/compilerToVM/ReadConfigurationTest.java Passed: compiler/jvmci/compilerToVM/ReprofileTest.java Passed: compiler/jvmci/compilerToVM/ResolveConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveMethodTest.java Passed: compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java Passed: compiler/jvmci/compilerToVM/ShouldDebugNonSafepointsTest.java Passed: compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java Passed: compiler/jvmci/errors/TestInvalidCompilationResult.java Passed: compiler/jvmci/errors/TestInvalidDebugInfo.java Passed: compiler/jvmci/errors/TestInvalidOopMap.java Passed: compiler/jvmci/events/JvmciNotifyBootstrapFinishedEventTest.java Passed: compiler/jvmci/events/JvmciNotifyInstallEventTest.java Passed: compiler/jvmci/events/JvmciShutdownEventTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/HotSpotConstantReflectionProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java Passed: 
compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MethodHandleAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ConstantTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/RedefineClassTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveConcreteMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestConstantReflectionProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaType.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestMetaAccessProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Passed: compiler/jvmci/meta/StableFieldTest.java Passed: compiler/jvmci/JVM_GetJVMCIRuntimeTest.java Passed: compiler/jvmci/SecurityRestrictionsTest.java Passed: compiler/jvmci/TestJVMCIPrintProperties.java Passed: compiler/jvmci/TestValidateModules.java Test results: passed: 73; failed: 2 ==== JVMCI on x86 ==== Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Passed: compiler/jvmci/compilerToVM/AsResolvedJavaMethodTest.java Passed: compiler/jvmci/compilerToVM/CollectCountersTest.java Passed: compiler/jvmci/compilerToVM/DebugOutputTest.java Passed: compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java Passed: compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java Passed: compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/FindUniqueConcreteMethodTest.java Passed: compiler/jvmci/compilerToVM/GetBytecodeTest.java Passed: compiler/jvmci/compilerToVM/GetClassInitializerTest.java Passed: compiler/jvmci/compilerToVM/GetConstantPoolTest.java Passed: compiler/jvmci/compilerToVM/GetExceptionTableTest.java Passed: compiler/jvmci/compilerToVM/GetFlagValueTest.java Passed: compiler/jvmci/compilerToVM/GetImplementorTest.java Passed: compiler/jvmci/compilerToVM/GetLineNumberTableTest.java Passed: compiler/jvmci/compilerToVM/GetLocalVariableTableTest.java Passed: compiler/jvmci/compilerToVM/GetMaxCallTargetOffsetTest.java Passed: compiler/jvmci/compilerToVM/GetNextStackFrameTest.java Passed: compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java FAILED: compiler/jvmci/compilerToVM/GetResolvedJavaTypeTest.java Passed: compiler/jvmci/compilerToVM/GetStackTraceElementTest.java Passed: compiler/jvmci/compilerToVM/GetSymbolTest.java Passed: compiler/jvmci/compilerToVM/GetVtableIndexForInterfaceTest.java Passed: compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java Passed: compiler/jvmci/compilerToVM/HasFinalizableSubclassTest.java Passed: compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java FAILED: compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/IsCompilableTest.java Passed: compiler/jvmci/compilerToVM/IsMatureTest.java Passed: compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java Passed: 
compiler/jvmci/compilerToVM/JVM_RegisterJVMCINatives.java Passed: compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupTypeTest.java Passed: compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java Passed: compiler/jvmci/compilerToVM/MethodIsIgnoredBySecurityStackWalkTest.java Passed: compiler/jvmci/compilerToVM/ReadConfigurationTest.java Passed: compiler/jvmci/compilerToVM/ReprofileTest.java Passed: compiler/jvmci/compilerToVM/ResolveConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveMethodTest.java Passed: compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java Passed: compiler/jvmci/compilerToVM/ShouldDebugNonSafepointsTest.java Passed: compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java Passed: compiler/jvmci/errors/TestInvalidCompilationResult.java Passed: compiler/jvmci/errors/TestInvalidDebugInfo.java Passed: compiler/jvmci/errors/TestInvalidOopMap.java Passed: compiler/jvmci/events/JvmciNotifyBootstrapFinishedEventTest.java Passed: compiler/jvmci/events/JvmciNotifyInstallEventTest.java Passed: compiler/jvmci/events/JvmciShutdownEventTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/DataPatchTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/InterpreterFrameSizeTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/MaxOopMapStackOffsetTest.java Error: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/NativeCallTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/SimpleCodeInstallationTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/SimpleDebugInfoTest.java Passed: compiler/jvmci/jdk.vm.ci.code.test/src/jdk/vm/ci/code/test/VirtualObjectDebugInfoTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/HotSpotConstantReflectionProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MethodHandleAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ConstantTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/RedefineClassTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveConcreteMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestConstantReflectionProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaType.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestMetaAccessProvider.java Passed: 
compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Passed: compiler/jvmci/meta/StableFieldTest.java Passed: compiler/jvmci/JVM_GetJVMCIRuntimeTest.java Passed: compiler/jvmci/SecurityRestrictionsTest.java Passed: compiler/jvmci/TestJVMCIPrintProperties.java Passed: compiler/jvmci/TestValidateModules.java Test results: passed: 79; failed: 2; error: 1 On 03/27/2018 07:39 PM, Dmitry Chuyko wrote: > Andrew, > > There are some jtreg failures. Some are related to "Emitting code to > load a metaspace address is not currently supported on aarch64" that > was spotted before. I'll send you the logs in private > > FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java > Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2CompiledTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2InterpretedTest.java > Error: compiler/aot/calls/fromAot/AotInvokeDynamic2NativeTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java > Passed: compiler/aot/calls/fromAot/AotInvokeInterface2CompiledTest.java > FAILED: > compiler/aot/calls/fromAot/AotInvokeInterface2InterpretedTest.java > Error: compiler/aot/calls/fromAot/AotInvokeInterface2NativeTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java > Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2CompiledTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2InterpretedTest.java > Error: compiler/aot/calls/fromAot/AotInvokeSpecial2NativeTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2AotTest.java > Passed: compiler/aot/calls/fromAot/AotInvokeStatic2CompiledTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2InterpretedTest.java > Error:? 
compiler/aot/calls/fromAot/AotInvokeStatic2NativeTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java > Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2CompiledTest.java > FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2InterpretedTest.java > Error: compiler/aot/calls/fromAot/AotInvokeVirtual2NativeTest.java > Passed: > compiler/aot/calls/fromCompiled/CompiledInvokeDynamic2AotTest.java > Passed: > compiler/aot/calls/fromCompiled/CompiledInvokeInterface2AotTest.java > Passed: > compiler/aot/calls/fromCompiled/CompiledInvokeSpecial2AotTest.java > Passed: compiler/aot/calls/fromCompiled/CompiledInvokeStatic2AotTest.java > Passed: > compiler/aot/calls/fromCompiled/CompiledInvokeVirtual2AotTest.java > Passed: > compiler/aot/calls/fromInterpreted/InterpretedInvokeDynamic2AotTest.java > Passed: > compiler/aot/calls/fromInterpreted/InterpretedInvokeInterface2AotTest.java > Passed: > compiler/aot/calls/fromInterpreted/InterpretedInvokeSpecial2AotTest.java > Passed: > compiler/aot/calls/fromInterpreted/InterpretedInvokeStatic2AotTest.java > Passed: > compiler/aot/calls/fromInterpreted/InterpretedInvokeVirtual2AotTest.java > Error: compiler/aot/calls/fromNative/NativeInvokeSpecial2AotTest.java > Error: compiler/aot/calls/fromNative/NativeInvokeStatic2AotTest.java > Error: compiler/aot/calls/fromNative/NativeInvokeVirtual2AotTest.java > Passed: compiler/aot/cli/jaotc/ClasspathOptionUnknownClassTest.java > Passed: compiler/aot/cli/jaotc/CompileClassTest.java > Passed: compiler/aot/cli/jaotc/CompileClassWithDebugTest.java > Passed: compiler/aot/cli/jaotc/CompileDirectoryTest.java > Passed: compiler/aot/cli/jaotc/CompileJarTest.java > FAILED: compiler/aot/cli/jaotc/CompileModuleTest.java > Passed: compiler/aot/cli/jaotc/ListOptionNotExistingTest.java > FAILED: compiler/aot/cli/jaotc/ListOptionTest.java > Passed: compiler/aot/cli/jaotc/ListOptionWrongFileTest.java > Passed: compiler/aot/cli/DisabledAOTWithLibraryTest.java > Passed: compiler/aot/cli/IncorrectAOTLibraryTest.java > Passed: compiler/aot/cli/MultipleAOTLibraryTest.java > Passed: compiler/aot/cli/NonExistingAOTLibraryTest.java > Passed: compiler/aot/cli/SingleAOTLibraryTest.java > Passed: compiler/aot/cli/SingleAOTOptionTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/directory/DirectorySourceProviderTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/jar/JarSourceProviderTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/module/ModuleSourceProviderTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSearchTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSourceTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/SearchPathTest.java > Passed: > compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/NativeOrderOutputStreamTest.java > Passed: compiler/aot/verification/vmflags/NotTrackedFlagTest.java > Passed: compiler/aot/verification/vmflags/TrackedFlagTest.java > Passed: compiler/aot/verification/ClassAndLibraryNotMatchTest.java > FAILED: compiler/aot/DeoptimizationTest.java > FAILED: compiler/aot/RecompilationTest.java > Passed: compiler/aot/SharedUsageTest.java > Test results: passed: 39; failed: 14; error: 8 > > -Dmitry From aph at redhat.com Tue Mar 27 18:20:18 2018 From: aph at redhat.com (Andrew Haley) Date: Tue, 27 Mar 2018 19:20:18 +0100 Subject: RFD: AOT for AArch64 
In-Reply-To: <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> Message-ID: On 27/03/18 17:39, Dmitry Chuyko wrote: > There are some jtreg failures. Some are related to "Emitting code to > load a metaspace address is not currently supported on aarch64" that was > spotted before. I'll send you the logs in private Very cool, thank you. Please also let me know the commands you used to run the AOT jtreg tests. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From ioi.lam at oracle.com Tue Mar 27 23:46:54 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Tue, 27 Mar 2018 16:46:54 -0700 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext Message-ID: Hi please review this very small change: http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ https://bugs.openjdk.java.net/browse/JDK-8183238 The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and all uses of it have been removed from the test cases. Thanks - Ioi From david.holmes at oracle.com Tue Mar 27 23:55:59 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 28 Mar 2018 09:55:59 +1000 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: References: Message-ID: Hi Ioi, On 28/03/2018 9:46 AM, Ioi Lam wrote: > Hi please review this very small change: > > http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ > > https://bugs.openjdk.java.net/browse/JDK-8183238 > > The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and > all uses of it have been removed from the test cases. Looks fine. But isn't check_non_empty_dirs(const char* path) unused now? Thanks, David > Thanks > - Ioi From ioi.lam at oracle.com Wed Mar 28 00:55:15 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Tue, 27 Mar 2018 17:55:15 -0700 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: References: Message-ID: You?re right. I?ll remove that function when I push. Thanks for the review. Ioi > On Mar 27, 2018, at 4:55 PM, David Holmes wrote: > > Hi Ioi, > >> On 28/03/2018 9:46 AM, Ioi Lam wrote: >> Hi please review this very small change: >> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ https://bugs.openjdk.java.net/browse/JDK-8183238 >> The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and >> all uses of it have been removed from the test cases. > > Looks fine. But isn't check_non_empty_dirs(const char* path) unused now? > > Thanks, > David > >> Thanks >> - Ioi From kim.barrett at oracle.com Wed Mar 28 02:06:00 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 27 Mar 2018 22:06:00 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API Message-ID: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> Please review this change to the JNIHandles class to use the Access API. The handle construction, deletion, and value access (via resolve &etc) are updated to use the Access API. 
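[Illustrative aside, not the code in the webrev below. HotSpot's real Access
API (decorators, barrier sets) is considerably richer, and the actual change
is in the linked patch; this stand-alone C++ toy only shows the shape of the
refactor: reads and writes of JNI handle slots go through one templated
access layer, where a collector can hook load/store barriers, instead of
through raw pointer dereferences scattered over JNIHandles. All names here
are invented for the sketch.]

#include <cstdint>
#include <cstdio>

using oop = std::uintptr_t;   // stand-in for a Java object reference
using handle_t = oop*;        // a JNI handle: pointer to an oop slot

// Decorator bits select the access flavour at compile time.
enum Decorators : unsigned { AS_RAW = 1u << 0, IN_NATIVE_ROOT = 1u << 1 };

template <unsigned decorators>
struct Access {
    static oop oop_load(oop* addr) {
        if (decorators & IN_NATIVE_ROOT) {
            // in a real VM a GC load barrier / root hook would run here
            std::printf("barrier hook: load from handle slot %p\n", (void*)addr);
        }
        return *addr;
    }
    static void oop_store(oop* addr, oop value) {
        if (decorators & IN_NATIVE_ROOT) {
            std::printf("barrier hook: store to handle slot %p\n", (void*)addr);
        }
        *addr = value;
    }
};

// Before the refactor: resolve() is a bare dereference.
static oop resolve_raw(handle_t h) { return h == nullptr ? 0 : *h; }

// After the refactor: the load is funnelled through the access layer.
static oop resolve_via_access(handle_t h) {
    return h == nullptr ? 0 : Access<IN_NATIVE_ROOT>::oop_load(h);
}

int main() {
    oop slot = 0x1234;
    std::printf("raw:    %#llx\n", (unsigned long long)resolve_raw(&slot));
    std::printf("access: %#llx\n", (unsigned long long)resolve_via_access(&slot));
    Access<IN_NATIVE_ROOT>::oop_store(&slot, 0x5678);
    return 0;
}

The practical point of routing handle access through such a layer is that a
collector needing barriers on native roots only has to specialise the access
layer, not every JNIHandles call site.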
CR: https://bugs.openjdk.java.net/browse/JDK-8195972 Webrev: http://cr.openjdk.java.net/~kbarrett/8195972/open.00/ Testing: local jck-runtime:vm/jni, tonga:nsk.jvmti jtreg:fromTonga_nsk_coverage, jtreg:fromTonga_vm_runtime, jtreg/runtime/jni Mach5 {jdk,hs}-tier{1,2,3} Mach5 hs-tier{5,6,7}-rt (tiers containing additional JNI tests) From glaubitz at physik.fu-berlin.de Wed Mar 28 03:14:28 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 28 Mar 2018 12:14:28 +0900 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> Message-ID: On 03/27/2018 01:25 PM, Thomas St?fe wrote: > Looks good to me to. Great, thank you. Just to confirm I get the procedure now right: So I have Thomas (stuefe) and David (dholmes) now as reviewers, with dholmes having an official reviewer role for "jdk". I will push the changes to the submit-hs repository next to test it. Then run hg jtreg and then push it to master. Correct? > Interesting, Sun did have an HPUX port?? Doesn't surprise me. I think HP still officially supports HPUX on Itanium. They apparently have people paying them lots of money for that. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From mandy.chung at oracle.com Wed Mar 28 03:10:17 2018 From: mandy.chung at oracle.com (mandy chung) Date: Wed, 28 Mar 2018 11:10:17 +0800 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: References: Message-ID: <4905c9c2-3dd8-08a1-1ef5-ae44264e3742@oracle.com> On 3/28/18 7:55 AM, David Holmes wrote: > Hi Ioi, > > On 28/03/2018 9:46 AM, Ioi Lam wrote: >> Hi please review this very small change: >> >> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ >> >> https://bugs.openjdk.java.net/browse/JDK-8183238 >> >> The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and >> all uses of it have been removed from the test cases. > > Looks fine. But isn't check_non_empty_dirs(const char* path) unused now? Looks good.? check_non_empty_dirs is not needed and can be removed. Mandy From david.holmes at oracle.com Wed Mar 28 03:19:32 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 28 Mar 2018 13:19:32 +1000 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> Message-ID: <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> On 28/03/2018 1:14 PM, John Paul Adrian Glaubitz wrote: > On 03/27/2018 01:25 PM, Thomas St?fe wrote: >> Looks good to me to. > > Great, thank you. Just to confirm I get the procedure now right: > > So I have Thomas (stuefe) and David (dholmes) now as reviewers, with > dholmes having an official reviewer role for "jdk". I will push the > changes to the submit-hs repository next to test it. Then run hg jtreg > and then push it to master. > > Correct? Not sure what "hg jtreg" is :) hg commit with appropriate changeset comment then hg push >> Interesting, Sun did have an HPUX port? > > Doesn't surprise me. I think HP still officially supports HPUX on > Itanium. 
They apparently have people paying them lots of money > for that. https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPUXJDKJRE80 Cheers, David ----- > Adrian > From glaubitz at physik.fu-berlin.de Wed Mar 28 03:33:26 2018 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Wed, 28 Mar 2018 12:33:26 +0900 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> Message-ID: On 03/28/2018 12:19 PM, David Holmes wrote: >> Correct? > > Not sure what "hg jtreg" is :) Oops, sorry. I meant "hg jcheck". > hg commit with appropriate changeset comment then hg push Ok. >>> Interesting, Sun did have an HPUX port? >> >> Doesn't surprise me. I think HP still officially supports HPUX on >> Itanium. They apparently have people paying them lots of money >> for that. > > https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPUXJDKJRE80 Wow, even up-to-date jdk8 update version ;). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaubitz at debian.org `. `' Freie Universitaet Berlin - glaubitz at physik.fu-berlin.de `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From david.holmes at oracle.com Wed Mar 28 04:06:37 2018 From: david.holmes at oracle.com (David Holmes) Date: Wed, 28 Mar 2018 14:06:37 +1000 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> Message-ID: <4aaa237a-094b-efa4-b938-c593509ce2e3@oracle.com> On 28/03/2018 1:33 PM, John Paul Adrian Glaubitz wrote: > On 03/28/2018 12:19 PM, David Holmes wrote: >>> Correct? >> >> Not sure what "hg jtreg" is :) > > Oops, sorry. I meant "hg jcheck". Note jcheck only runs against commited changesets, and also as part of the commit, so there's no need to run it manually as long as you have it enabled as a hook: [hooks] pretxnchangegroup = python:jcheck.hook pretxncommit = python:jcheck.hook David >> hg commit with appropriate changeset comment then hg push > > Ok. > >>>> Interesting, Sun did have an HPUX port? >>> >>> Doesn't surprise me. I think HP still officially supports HPUX on >>> Itanium. They apparently have people paying them lots of money >>> for that. >> >> https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPUXJDKJRE80 > > Wow, even up-to-date jdk8 update version ;). > > Adrian > From thomas.stuefe at gmail.com Wed Mar 28 05:11:31 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Wed, 28 Mar 2018 07:11:31 +0200 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: <4aaa237a-094b-efa4-b938-c593509ce2e3@oracle.com> References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> <4aaa237a-094b-efa4-b938-c593509ce2e3@oracle.com> Message-ID: Hi Adrian, for the complete writeup of the rules see Jespers mail from march 13: http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030656.html He promised to put this in the OpenJdk Wiki sometime. 
Which would be quite helpful . ..Thomas On Wed, Mar 28, 2018 at 6:06 AM, David Holmes wrote: > On 28/03/2018 1:33 PM, John Paul Adrian Glaubitz wrote: > >> On 03/28/2018 12:19 PM, David Holmes wrote: >> >>> Correct? >>>> >>> >>> Not sure what "hg jtreg" is :) >>> >> >> Oops, sorry. I meant "hg jcheck". >> > > Note jcheck only runs against commited changesets, and also as part of the > commit, so there's no need to run it manually as long as you have it > enabled as a hook: > > [hooks] > pretxnchangegroup = python:jcheck.hook > pretxncommit = python:jcheck.hook > > David > > > hg commit with appropriate changeset comment then hg push >>> >> >> Ok. >> >> Interesting, Sun did have an HPUX port? >>>>> >>>> >>>> Doesn't surprise me. I think HP still officially supports HPUX on >>>> Itanium. They apparently have people paying them lots of money >>>> for that. >>>> >>> >>> https://h20392.www2.hpe.com/portal/swdepot/displayProductInf >>> o.do?productNumber=HPUXJDKJRE80 >>> >> >> Wow, even up-to-date jdk8 update version ;). >> >> Adrian >> >> From shafi.s.ahmad at oracle.com Wed Mar 28 06:23:03 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Tue, 27 Mar 2018 23:23:03 -0700 (PDT) Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev Message-ID: Hi, Please review the backport of ' JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same' to jdk8u-dev. Please note that this is not a clean backport because file src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I ignore this change as I am not seeing relevant code in other file. webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 original patch pushed to jdk9: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 Test: Run jprt -testset hotspot Regards, Shafi From Alan.Bateman at oracle.com Wed Mar 28 07:11:53 2018 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Wed, 28 Mar 2018 08:11:53 +0100 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: References: Message-ID: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> On 28/03/2018 00:46, Ioi Lam wrote: > Hi please review this very small change: > > http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ > > https://bugs.openjdk.java.net/browse/JDK-8183238 > > The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and > all uses of it have been removed from the test cases. This looks good. I assumed you've checked that we don't have any tests or man pages or other references to this option. -Alan From tobias.hartmann at oracle.com Wed Mar 28 08:36:46 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 28 Mar 2018 10:36:46 +0200 Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev In-Reply-To: References: Message-ID: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> Hi Shafi, looks good to me. Best regards, Tobias On 28.03.2018 08:23, Shafi Ahmad wrote: > Hi, > > Please review the backport of ' JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same' to jdk8u-dev. 
> Please note that this is not a clean backport because file src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I ignore this change as I am not seeing relevant code in other file. > > webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ > jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 > original patch pushed to jdk9: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 > > Test: Run jprt -testset hotspot > > Regards, > Shafi > From tobias.hartmann at oracle.com Wed Mar 28 08:37:53 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 28 Mar 2018 10:37:53 +0200 Subject: RFR(S): 8198915: [Graal] 3rd testcase of compiler/types/TestMeetIncompatibleInterfaceArrays.java takes more than 10 mins In-Reply-To: References: Message-ID: Hi Volker, looks good to me. Thanks for fixing. Best regards, Tobias On 27.03.2018 19:37, Volker Simonis wrote: > Hi, > > can I please have a review for the following test-only change: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8198915 > https://bugs.openjdk.java.net/browse/JDK-8198915 > > When I wrote this test back in 2015, the WhiteBox API was not powerful > enough (or I simply wasn't smart enough to use it :) so I used a lot > of -XX options in order to run a method three times in a round where > every execution was done at another compilation level (i.e. > interpreted, C1 and C2). Unfortunately, the required redefinition of > compiler counters and thresholds massively slows down Graal as can be > seen in the bug report. > > I've therefore changed the test to use the Whitbox API to achieve the > same test compilation without the need to redefine the global JIT > compiler counters and thresholds. > > Thank you and best regards, > Volker > From shafi.s.ahmad at oracle.com Wed Mar 28 08:50:14 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Wed, 28 Mar 2018 01:50:14 -0700 (PDT) Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev In-Reply-To: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> References: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> Message-ID: <9717d6f4-2ddd-4f33-8cfc-9e2fd46645e7@default> Thank you Tobias. Regards, Shafi > -----Original Message----- > From: Tobias Hartmann > Sent: Wednesday, March 28, 2018 2:07 PM > To: Shafi Ahmad ; hotspot- > dev at openjdk.java.net > Cc: Vladimir Kozlov ; Douglas Simon > > Subject: Re: [8u] RFR for backport of "JDK-8164480: Crash with > assert(handler_address == > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > same" to jdk8u-dev > > Hi Shafi, > > looks good to me. > > Best regards, > Tobias > > On 28.03.2018 08:23, Shafi Ahmad wrote: > > Hi, > > > > Please review the backport of ' JDK-8164480: Crash with > assert(handler_address == > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > same' to jdk8u-dev. > > Please note that this is not a clean backport because file > src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I ignore this > change as I am not seeing relevant code in other file. 
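To make the WhiteBox-based approach discussed in the TestMeetIncompatibleInterfaceArrays thread above more concrete, here is a minimal sketch of driving a single method through the interpreter, C1 and C2 with the WhiteBox API instead of redefining global compiler thresholds. The class and method names are illustrative assumptions, not the actual test code, and it has to be run with -Xbatch -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI so that the WhiteBox calls are available and compilations finish synchronously.

import java.lang.reflect.Method;
import sun.hotspot.WhiteBox;

public class CompileAtLevelsSketch {
    // Tiered compilation levels: 0 = interpreter, 1 = C1 simple, 4 = C2.
    static final int LEVEL_C1 = 1;
    static final int LEVEL_C2 = 4;

    static int work(int i) { return i * 2; }   // stand-in for the real test method

    public static void main(String[] args) throws Exception {
        WhiteBox wb = WhiteBox.getWhiteBox();
        Method m = CompileAtLevelsSketch.class.getDeclaredMethod("work", int.class);

        // Round 1: nothing enqueued, so the method runs interpreted.
        work(1);

        // Round 2: request a C1 compilation; with -Xbatch the enqueue blocks
        // until the compile has finished, so the level can be checked directly.
        wb.enqueueMethodForCompilation(m, LEVEL_C1);
        if (wb.getMethodCompilationLevel(m) != LEVEL_C1) {
            throw new RuntimeException("expected a tier 1 compilation");
        }
        work(2);

        // Round 3: throw the C1 code away and recompile at C2 (tier 4).
        wb.deoptimizeMethod(m);
        wb.enqueueMethodForCompilation(m, LEVEL_C2);
        if (wb.getMethodCompilationLevel(m) != LEVEL_C2) {
            throw new RuntimeException("expected a tier 4 compilation");
        }
        work(3);
    }
}

Because only this one method is steered explicitly, the rest of the VM keeps its default compilation policy, which is what avoids the Graal slowdown described above.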
> > > > webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ > > jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 > > original patch pushed to jdk9: > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 > > > > Test: Run jprt -testset hotspot > > > > Regards, > > Shafi > > From adinn at redhat.com Wed Mar 28 10:51:15 2018 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 28 Mar 2018 11:51:15 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <8342caa0-e4d1-0250-02ab-c8f9cf8085bd@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <15f51315-3497-6c01-2ecb-1f758762e99e@redhat.com> <849a8fcf-85cd-b8c6-1134-9aea3877e9b7@redhat.com> <2c42186a-523e-6912-6eb0-36c7ee2be09f@redhat.com> <8b4a211e-44af-ed28-5abf-697d722771d1@redhat.com> <97d96f0e-43b0-beca-3b31-3dabc6073f3a@redhat.com> <8342caa0-e4d1-0250-02ab-c8f9cf8085bd@redhat.com> Message-ID: On 27/03/18 16:08, Andrew Dinn wrote: > I'll try to get final comments plus a yea or nay (well, ok I > guess it's going to be a yea) posted by late tomorrow morning. Well, this is all very interesting. I used a Java debugger to step through code under jdk.tools.jaotc.Main and gdb to step through an AOT-compiled Test.main([Ljava/lang/String;)V and that made a lot of stuff clearer about how this all ties together. It still doesn't really put me in any better position to critique the patch further but it certainly makes me feel like a lot happier as to what the patch is doing and why. Since this works to AOT-compile and then run simple programs plus java.base, I guess it caters for most (maybe all?) of the cases that are likely to turn up in generation and subsequent execution. I think whatever may be missing will be found by i) committing this and then testing it on larger apps ii) letting you, me and others play with it until we get to the point where we can spot any rare omissions that testing does not uncover. I have only one further comment on the Graal code: 1) compiler/src/org.graalvm.compiler.lir.aarch64/src/org/graalvm/compiler/lir/aarch64/AArch64Call.java @@ public class AArch64Call { public static void directJmp(CompilationResultBuilder crb, AArch64MacroAssembler masm, InvokeTarget callTarget) { try (AArch64MacroAssembler.ScratchRegister scratch = masm.getScratchRegister()) { int before = masm.position(); - masm.movNativeAddress(scratch.getRegister(), 0L); - masm.jmp(scratch.getRegister()); + if (GeneratePIC.getValue(crb.getOptions())) { + masm.jmp(); + } else { + masm.movNativeAddress(scratch.getRegister(), 0L); + masm.jmp(scratch.getRegister()); + } int after = masm.position(); crb.recordDirectCall(before, after, callTarget, null); masm.ensureUniquePC(); The else branch here omits the helpful comment provided in earlier method directCall which it might actually be useful to include. However, in both cases it might also help to add a comment in the if branch to explain that the call is guaranteed to fit into 28 bits because in the PIC case far jumps/calls indirect through a PLT i.e. for directJmp: @@ public class AArch64Call { public static void directJmp(CompilationResultBuilder crb, AArch64MacroAssembler masm, InvokeTarget callTarget) { try (AArch64MacroAssembler.ScratchRegister scratch = masm.getScratchRegister()) { int before = masm.position(); - masm.movNativeAddress(scratch.getRegister(), 0L); - masm.jmp(scratch.getRegister()); + if (GeneratePIC.getValue(crb.getOptions())) { + /* + * Offset must fit into a 28-bit immediate as far jumps + * require indirection through a PLT. 
+ * generate a PC-relative jump fixed up by the linker + */ + masm.jmp(); + } else { + /* + * Offset might not fit into a 28-bit immediate, generate an indirect call with a 64-bit + * immediate address which is fixed up by HotSpot. + */ + masm.movNativeAddress(scratch.getRegister(), 0L); + masm.jmp(scratch.getRegister()); + } int after = masm.position(); crb.recordDirectCall(before, after, callTarget, null); masm.ensureUniquePC(); and ditto for directCall. Otherwise I'm happy to see a patch with all updates accumulated so far pushed to Graal and hs. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From dmitry.chuyko at bell-sw.com Wed Mar 28 11:08:44 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Wed, 28 Mar 2018 14:08:44 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> Message-ID: <066ab6cb-f7a3-524e-ab25-96b9b1f674bb@bell-sw.com> On 03/27/2018 09:20 PM, Andrew Haley wrote: > On 27/03/18 17:39, Dmitry Chuyko wrote: >> There are some jtreg failures. Some are related to "Emitting code to >> load a metaspace address is not currently supported on aarch64" that was >> spotted before. I'll send you the logs in private > Very cool, thank you. Please also let me know the commands you used > to run the AOT jtreg tests. > The command line itself is quite straightforward, i.e. export JT_JAVA=/home/dchuyko/jdk-aot jtreg -jdk:$JT_JAVA -verbose:summary -w work -r report /home/dchuyko/jdk-aot/test/hotspot/jtreg/compiler/aot But I also use 2 preparatory things: ? * jaotc was replaced with a wrapper script that calls original binary with all passed arguments plus --module-path and --upgrade-module-path pointing to Graal stuff built with mx. ? * jtreg version 4.2-b11 -Dmitry From volker.simonis at gmail.com Wed Mar 28 12:32:41 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 28 Mar 2018 14:32:41 +0200 Subject: RFR(S): 8198915: [Graal] 3rd testcase of compiler/types/TestMeetIncompatibleInterfaceArrays.java takes more than 10 mins In-Reply-To: References: Message-ID: Hi Tobias, thanks for the review, Volker On Wed, Mar 28, 2018 at 10:37 AM, Tobias Hartmann wrote: > Hi Volker, > > looks good to me. Thanks for fixing. > > Best regards, > Tobias > > On 27.03.2018 19:37, Volker Simonis wrote: >> Hi, >> >> can I please have a review for the following test-only change: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8198915 >> https://bugs.openjdk.java.net/browse/JDK-8198915 >> >> When I wrote this test back in 2015, the WhiteBox API was not powerful >> enough (or I simply wasn't smart enough to use it :) so I used a lot >> of -XX options in order to run a method three times in a round where >> every execution was done at another compilation level (i.e. >> interpreted, C1 and C2). Unfortunately, the required redefinition of >> compiler counters and thresholds massively slows down Graal as can be >> seen in the bug report. >> >> I've therefore changed the test to use the Whitbox API to achieve the >> same test compilation without the need to redefine the global JIT >> compiler counters and thresholds. 
>> >> Thank you and best regards, >> Volker >> From volker.simonis at gmail.com Wed Mar 28 12:34:30 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 28 Mar 2018 14:34:30 +0200 Subject: RFR(S): 8198915: [Graal] 3rd testcase of compiler/types/TestMeetIncompatibleInterfaceArrays.java takes more than 10 mins In-Reply-To: <6d134a87-b081-769e-d574-99f25294a4b2@oracle.com> References: <6d134a87-b081-769e-d574-99f25294a4b2@oracle.com> Message-ID: On Tue, Mar 27, 2018 at 7:50 PM, Vladimir Kozlov wrote: > Looks good. > > Is there difference between "C2 (tier 4)" and "C2 (tier4)" in tier[][] > except space? It's just labels for the output (i.e. no semantic) but I've adjusted it in the final version which I've pushed. Thanks, Volker > > Thanks, > Vladimir > > > On 3/27/18 10:37 AM, Volker Simonis wrote: >> >> Hi, >> >> can I please have a review for the following test-only change: >> >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8198915 >> https://bugs.openjdk.java.net/browse/JDK-8198915 >> >> When I wrote this test back in 2015, the WhiteBox API was not powerful >> enough (or I simply wasn't smart enough to use it :) so I used a lot >> of -XX options in order to run a method three times in a round where >> every execution was done at another compilation level (i.e. >> interpreted, C1 and C2). Unfortunately, the required redefinition of >> compiler counters and thresholds massively slows down Graal as can be >> seen in the bug report. >> >> I've therefore changed the test to use the Whitbox API to achieve the >> same test compilation without the need to redefine the global JIT >> compiler counters and thresholds. >> >> Thank you and best regards, >> Volker >> > From stewartd.qdt at qualcommdatacenter.com Wed Mar 28 13:02:06 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Wed, 28 Mar 2018 13:02:06 +0000 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> Message-ID: <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> Please see the updated webrev, where I have also added some flags that were not getting sent over JVMCI. http://cr.openjdk.java.net/~dstewart/8200251/webrev.01/ Thank you, Daniel -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Monday, March 26, 2018 12:48 PM To: stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag Good. Thanks, Vladimir On 3/26/18 9:24 AM, stewartd.qdt wrote: > Please review this webrev [1] which attempts to bring the AArch64::CPUFeature enum (Java) in sync with VM_Version::Feature_Flag enum (C++ enum) for aarch64. > > This is in preparation for creating AArch64 some intrinsics for Graal. But I found that the CPUFeature enum was not being transferred over to Graal for AArch64. In attempting to do that I then found out that CPUFeatures was not in sync with the VM_Version::Feature_Flag enum. > > The bug report is filed at [2]. > > I am happy to modify the patch as necessary. 
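As background for the 8200251 thread quoted above: the underlying problem is two hand-maintained mirrors of the same feature list, one in Java and one in C++. The sketch below only hints at the shape involved; the constant names are a partial, assumed subset, not the real declarations.

// JVMCI/Graal side: a Java enum whose constants must correspond to the
// feature bits the VM reports. Partial, illustrative subset only.
public enum CPUFeature {
    FP,
    ASIMD,
    AES,
    SHA1,
    SHA2,
    CRC32,
    LSE
}

// The C++ side (VM_Version::Feature_Flag in vm_version_aarch64.hpp) encodes
// the same features as bits in a mask that is handed up through JVMCI, so a
// feature added on one side but not the other is silently never reported.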
> > Regards, > > Daniel Stewart > > [1] - http://cr.openjdk.java.net/~dstewart/8200251/webrev.00/ > [2] - https://bugs.openjdk.java.net/browse/JDK-8200251 > > From erik.osterlund at oracle.com Wed Mar 28 13:40:00 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 28 Mar 2018 15:40:00 +0200 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> Message-ID: <5ABB9B30.7080808@oracle.com> Hi Kim, I noticed that jobjects are now IN_CONCURRENT_ROOT in this patch. I wonder if this is the right time to upgrade them to IN_CONCURRENT_ROOT. Until there is at least one GC that actually scans these concurrently, this will only impose extra overheads (unnecessary G1 SATB-enqueue barriers on the store required to release jobjects) with no obvious gains. The platform specific code needs to go along with this. I have a patch out to generalize interpreter code. In there, I am treating resolve jobject as a normal strong root. That would probably need to change. It is also troubling that jniFastGetField shoots raw loads into (hopefully) the heap, dodging all GC barriers, hoping that is okay. I wonder if starting to actually scan jobjects concurrently would force us to disable that optimization completely to be generally useful to all collectors. For example, an IN_CONCURRENT_ROOT load access for ZGC might require a slowpath. But in jniFastGetField, there is no frame, and hence any code that runs in there must not call anything in the runtime. Therefore, with IN_CONCURRENT_ROOT, it is not generally safe to use jniFastGetField, without doing... something about that code. I would like to hear your thoughts about this. Perhaps the intention is just to take incremental steps towards being able to scan jobjects concurrently, and this is just the first step? Still, I would be interested to hear about what you think about the next steps. If we decide to go with IN_CONCURRENT_ROOT now already, then I should change my interpreter changes that are out for review to do the same so that we are consistent. Otherwise, this looks great, and I am glad we finally have jni handles accessorized. Thanks, /Erik On 2018-03-28 04:06, Kim Barrett wrote: > Please review this change to the JNIHandles class to use the Access > API. The handle construction, deletion, and value access (via resolve > &etc) are updated to use the Access API. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8195972 > > Webrev: > http://cr.openjdk.java.net/~kbarrett/8195972/open.00/ > > Testing: > local jck-runtime:vm/jni, tonga:nsk.jvmti > jtreg:fromTonga_nsk_coverage, jtreg:fromTonga_vm_runtime, > jtreg/runtime/jni > Mach5 {jdk,hs}-tier{1,2,3} > Mach5 hs-tier{5,6,7}-rt (tiers containing additional JNI tests) > > > From robin.westberg at oracle.com Wed Mar 28 13:43:02 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Wed, 28 Mar 2018 15:43:02 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> Message-ID: <8C4AC71F-25E8-4DD2-9DBC-D452118464C6@oracle.com> Hi Magnus, Thanks for the review! Best regards, Robin > On 26 Mar 2018, at 23:24, Magnus Ihse Bursie wrote: > > On 2018-03-26 17:01, Robin Westberg wrote: >> Hi all, >> >> Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. 
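As a reminder of what NOMINMAX buys in the change being reviewed above: without it, windows.h defines min and max as macros, which breaks ordinary uses of std::min, std::max and std::numeric_limits. A tiny illustration, not taken from the webrev:

// With NOMINMAX defined before <windows.h> (the change under review does this
// from the build system rather than in individual source files), the standard
// library names below compile as written instead of colliding with macros.
#define NOMINMAX
#include <windows.h>

#include <algorithm>
#include <limits>

static size_t clamp_to_int_max(size_t n) {
  return std::min(n, static_cast<size_t>(std::numeric_limits<int>::max()));
}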
>> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 >> Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ > Looks good to me. > > /Magnus > >> Testing: building with/without precompiled headers, hs-tier1 >> >> Best regards, >> Robin > From robin.westberg at oracle.com Wed Mar 28 13:43:07 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Wed, 28 Mar 2018 15:43:07 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: <7e318372-2c04-be32-dd0d-7c0fe092c98f@oracle.com> References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> <7e318372-2c04-be32-dd0d-7c0fe092c98f@oracle.com> Message-ID: <08B716A1-C16E-494D-9C14-11FA4A81387F@oracle.com> Hi Erik, Thanks for reviewing! Best regards, Robin > On 26 Mar 2018, at 17:50, Erik Joelsson wrote: > > Looks good. > > /Erik > > > On 2018-03-26 08:01, Robin Westberg wrote: >> Hi all, >> >> Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 >> Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ >> Testing: building with/without precompiled headers, hs-tier1 >> >> Best regards, >> Robin > From robin.westberg at oracle.com Wed Mar 28 13:43:13 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Wed, 28 Mar 2018 15:43:13 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: <446F6608-6FC0-4962-AAD3-CC8CF36F60F7@oracle.com> References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> <446F6608-6FC0-4962-AAD3-CC8CF36F60F7@oracle.com> Message-ID: Hi Kim, > On 26 Mar 2018, at 18:34, Kim Barrett wrote: > >> On Mar 26, 2018, at 11:01 AM, Robin Westberg wrote: >> >> Hi all, >> >> Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. >> >> Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 >> Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ >> Testing: building with/without precompiled headers, hs-tier1 >> >> Best regards, >> Robin > > Looks good. Thanks for reviewing! > This change will have a (easy to resolve) merge conflict with your fix for JDK-8199736, right? Indeed, the flag definitions should go on a single line I think. I?ll try to get this one in first and rebase 8199736 afterwards. So, if anyone would be willing to sponsor this change, here?s an updated webrev with a proper mercurial changeset (no other changes): http://cr.openjdk.java.net/~rwestberg/8199619/webrev.01/ Best regards, Robin From robin.westberg at oracle.com Wed Mar 28 13:43:22 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Wed, 28 Mar 2018 15:43:22 +0200 Subject: RFR: 8199736: Define WIN32_LEAN_AND_MEAN before including windows.h In-Reply-To: <29b9be70-5855-fa0d-e1b2-73565e76d052@oracle.com> References: <7F114FA8-59F0-42F1-A0B8-33D8C22DE81A@oracle.com> <9DA6FFD9-1DD0-4C39-9D81-DF6FA49EDF45@oracle.com> <29b9be70-5855-fa0d-e1b2-73565e76d052@oracle.com> Message-ID: <6CE48055-3AEA-4766-9C90-E0071CCEDB58@oracle.com> Hi David, Thanks for reviewing! I?ll delay progressing this one a bit until 8199619 is integrated. Best regards, Robin > On 27 Mar 2018, at 02:57, David Holmes wrote: > > Looks good to me. > > Thanks, > David > > On 27/03/2018 1:01 AM, Robin Westberg wrote: >> Hi David, >> Thanks for taking a look! 
>>> On 26 Mar 2018, at 01:03, David Holmes > wrote: >>> >>> Hi Robin, >>> >>> On 23/03/2018 10:37 PM, Robin Westberg wrote: >>>> Hi Kim & Erik, >>>> Certainly makes sense to define it from the build system, I?ve updated the patch accordingly: >>>> Full: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01/ >>>> Incremental: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00-01/ >>> >>> I'm a little unclear on the hotspot changes. If we define WIN32_LEAN_AND_MEAN then certain APIs like sockets are excluded from windows.h so we then have to include the specific header files like winsock2.h - is that right? >> Yep that?s correct, headers like winsock, dde, ole, shellapi and a few other uncommon ones are no longer included from windows.h when this is defined. >>> src/hotspot/share/interpreter/bytecodes.cpp >>> >>> I'm curious about this change. u_short comes from types.h on non-Windows, is it simply missing on Windows (at least once we have WIN32_LEAN_AND_MEAN defined) ? >> Yeah, on Windows these comes from winsock(2).h: >> /* >> * Basic system type definitions, taken from the BSD file sys/types.h. >> */ >> typedef unsigned char u_char; >> typedef unsigned short u_short; >> typedef unsigned int u_int; >> typedef unsigned long u_long; >> I noticed that one of these (u_char) is also defined in globalDefinitions.hpp so could perhaps define u_short there, or include winsock2.h globally again. But since it was only used in a single place in the existing code it seemed simple enough to just expand the typedef there. >>> src/hotspot/share/utilities/ostream.cpp >>> >>> 1029 #endif >>> 1030 #if defined(_WINDOWS) >>> >>> Using elif could be marginally faster given the two sets of conditions are mutually exclusive. >> Good point, will change it. >> I also had to move the flag definition to adapt to the latest changes in the hs repo, cc?ing build-dev again to make sure I got it right. >> Updated webrev (full): http://cr.openjdk.java.net/~rwestberg/8199736/webrev.02/ >> Updated webrev (incremental): http://cr.openjdk.java.net/~rwestberg/8199736/webrev.01-02/ >> Best regards, >> Robin >>> >>> Thanks, >>> David >>> >>>> (Not quite sure if the definition belongs where I put it or a bit later where most other windows-specific JVM flags are defined, but seemed reasonable to put it close to where it is defined for the JDK libraries). >>>> Best regards, >>>> Robin >>>>> On 22 Mar 2018, at 16:52, Kim Barrett > wrote: >>>>> >>>>>> On Mar 22, 2018, at 10:34 AM, Robin Westberg > wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> Please review the following change that defines WIN32_LEAN_AND_MEAN [1] before including windows.h. This marginally improves build times, and makes it possible to include winsock2.h. >>>>>> >>>>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199736 >>>>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199736/webrev.00/ >>>>>> Testing: hs-tier1 >>>>>> >>>>>> Best regards, >>>>>> Robin >>>>>> >>>>>> [1] https://msdn.microsoft.com/en-us/library/windows/desktop/aa383745%28v=vs.85%29.aspx#faster_builds_with_smaller_header_files >>>>> >>>>> I think the addition of the WIN32_LEAN_AND_MEAN definition should be done through the build >>>>> system, so that it applies everywhere. 
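For readers skimming the thread above, the whole WIN32_LEAN_AND_MEAN discussion reduces to a pattern like the following (illustrative snippet, not code from the webrev):

// With WIN32_LEAN_AND_MEAN defined (through the build system, as suggested
// above), <windows.h> no longer pulls in winsock, DDE, OLE and friends, so
// code that actually needs sockets must include them explicitly -- and can
// now use the newer <winsock2.h>, which also supplies typedefs like u_short.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <winsock2.h>   // required explicitly once the lean define is active

static u_short to_network_order(u_short port) {
  return htons(port);
}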
>>>>> From rkennke at redhat.com Wed Mar 28 13:55:35 2018 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 28 Mar 2018 15:55:35 +0200 Subject: RFR(XS): JDK-8199780: SetMemory0 and CopyMemory0 in unsafe.cpp need to resolve their operands In-Reply-To: <3b8ce496-c053-4dc0-ff54-ccd9a6e74198@redhat.com> References: <00c5000d-3e1f-b0cd-5426-c8b14d7516c7@redhat.com> <5AB0E0D2.7090303@oracle.com> <20b2bcd2-66f9-f80d-4eb9-bd0ee44d5261@redhat.com> <5AB0E9BA.5000002@oracle.com> <15133453-3d69-020f-780c-b04c5f820bb8@redhat.com> <5AB135F6.3020508@oracle.com> <3b8ce496-c053-4dc0-ff54-ccd9a6e74198@redhat.com> Message-ID: Am 21.03.2018 um 20:18 schrieb Roman Kennke: >> >>> Diff: >>> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01.diff/ >>> Full: >>> http://cr.openjdk.java.net/~rkennke/JDK-8199780/webrev.01/ >>> >>> Better now? >> >> Yes, much better. It looks good now. Thank you. > > Erik: Thank you for reviewing. > > I believe this needs one more review, doesn't it? > > Thanks, Roman > Ping? Can I get one more review for this? Roman From volker.simonis at gmail.com Wed Mar 28 14:49:32 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 28 Mar 2018 16:49:32 +0200 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" Message-ID: Hi, can I please have a review for the following tiny test fix: http://cr.openjdk.java.net/~simonis/webrevs/2018/8200360/ https://bugs.openjdk.java.net/browse/JDK-8200360 With the fix for JDK-8198915 I changed the test TestMeetIncompatibleInterfaceArrays.java to more thoroughly verify if the test method has been really compiled on the requested compilation level. This works fine in general, but if the JTreg tests are run in special configurations like for example "-server -XX:-TieredCompilation", C1 compilations doesn't not work. There are two possible solution for this problem: either relax the check for the correct compilation level or explicitly set "-XX:+TieredCompilation" in the "@run" action. The second variant is possible because the options passed in the "@run" action come AFTER the global JTreg options passed with with "-vmoptions" on the final command line and thus can override them. I went for the second variant in this fix because the test in question actually wants to check the different compilation levels and it doesn't make sense in other configurations. Finally, I now also explicitly set "-XX:TieredStopAtLevel=4" just for the case you have other test configurations which limit the compilations levels. It would be great if somebody could run this trough the "hs-tier2" test suite and any other relevant internal test suites because the problem doesn't show up in the tier1 tests. I also won't be available any more today, so if this is urgent and your fine with the fix, please feel free to push it. Otherwise I'll do it tomorrow morning (CET). Thank you and best regards, Volker From vladimir.kozlov at oracle.com Wed Mar 28 15:43:31 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 08:43:31 -0700 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: Message-ID: Okay. 
Thanks, Vladimir On 3/28/18 7:49 AM, Volker Simonis wrote: > Hi, > can I please have a review for the following tiny test fix: > > http://cr.openjdk.java.net/~simonis/webrevs/2018/8200360/ > https://bugs.openjdk.java.net/browse/JDK-8200360 > > With the fix for JDK-8198915 I changed the test > TestMeetIncompatibleInterfaceArrays.java to more thoroughly verify if > the test method has been really compiled on the requested compilation > level. This works fine in general, but if the JTreg tests are run in > special configurations like for example "-server > -XX:-TieredCompilation", C1 compilations doesn't not work. > > There are two possible solution for this problem: either relax the > check for the correct compilation level or explicitly set > "-XX:+TieredCompilation" in the "@run" action. The second variant is > possible because the options passed in the "@run" action come AFTER > the global JTreg options passed with with "-vmoptions" on the final > command line and thus can override them. > > I went for the second variant in this fix because the test in question > actually wants to check the different compilation levels and it > doesn't make sense in other configurations. Finally, I now also > explicitly set "-XX:TieredStopAtLevel=4" just for the case you have > other test configurations which limit the compilations levels. > > It would be great if somebody could run this trough the "hs-tier2" > test suite and any other relevant internal test suites because the > problem doesn't show up in the tier1 tests. > > I also won't be available any more today, so if this is urgent and > your fine with the fix, please feel free to push it. Otherwise I'll do > it tomorrow morning (CET). > > Thank you and best regards, > Volker > From erik.joelsson at oracle.com Wed Mar 28 15:47:13 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 28 Mar 2018 08:47:13 -0700 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> <446F6608-6FC0-4962-AAD3-CC8CF36F60F7@oracle.com> Message-ID: I will sponsor the change. /Erik On 2018-03-28 06:43, Robin Westberg wrote: > Hi Kim, > >> On 26 Mar 2018, at 18:34, Kim Barrett wrote: >> >>> On Mar 26, 2018, at 11:01 AM, Robin Westberg wrote: >>> >>> Hi all, >>> >>> Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. >>> >>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 >>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ >>> Testing: building with/without precompiled headers, hs-tier1 >>> >>> Best regards, >>> Robin >> Looks good. > Thanks for reviewing! > >> This change will have a (easy to resolve) merge conflict with your fix for JDK-8199736, right? > Indeed, the flag definitions should go on a single line I think. I?ll try to get this one in first and rebase 8199736 afterwards. 
> > So, if anyone would be willing to sponsor this change, here?s an updated webrev with a proper mercurial changeset (no other changes): > http://cr.openjdk.java.net/~rwestberg/8199619/webrev.01/ > > Best regards, > Robin > From vladimir.kozlov at oracle.com Wed Mar 28 16:52:02 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 09:52:02 -0700 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> Message-ID: Looks good. Thanks, Vladimir On 3/28/18 6:02 AM, stewartd.qdt wrote: > Please see the updated webrev, where I have also added some flags that were not getting sent over JVMCI. > > http://cr.openjdk.java.net/~dstewart/8200251/webrev.01/ > > Thank you, > Daniel > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Monday, March 26, 2018 12:48 PM > To: stewartd.qdt ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag > > Good. > > Thanks, > Vladimir > > On 3/26/18 9:24 AM, stewartd.qdt wrote: >> Please review this webrev [1] which attempts to bring the AArch64::CPUFeature enum (Java) in sync with VM_Version::Feature_Flag enum (C++ enum) for aarch64. >> >> This is in preparation for creating AArch64 some intrinsics for Graal. But I found that the CPUFeature enum was not being transferred over to Graal for AArch64. In attempting to do that I then found out that CPUFeatures was not in sync with the VM_Version::Feature_Flag enum. >> >> The bug report is filed at [2]. >> >> I am happy to modify the patch as necessary. >> >> Regards, >> >> Daniel Stewart >> >> [1] - http://cr.openjdk.java.net/~dstewart/8200251/webrev.00/ >> [2] - https://bugs.openjdk.java.net/browse/JDK-8200251 >> >> From vladimir.kozlov at oracle.com Wed Mar 28 21:01:22 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 14:01:22 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions Message-ID: http://cr.openjdk.java.net/~kvn/8200383/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8200383 Changes for JDK-8200303 added calls to log2f() math function and it hit problem building Hotspot on SPARC - it can't find this function. For Hotspot build on Solaris we had hack to support old Solaris versions which did not have libm.so.2: http://hg.openjdk.java.net/jdk6/jdk6/hotspot/file/a74480137e6e/make/solaris/makefiles/vm.make#l102 We don't support Solaris 8 and 9 anymore and it is safe to remove the hack. Tested with tier1 Builds and Hotspot testing (similar to submit-hs). -- Thanks, Vladimir From erik.joelsson at oracle.com Wed Mar 28 21:10:41 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 28 Mar 2018 14:10:41 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: References: Message-ID: <1e2ce416-53b1-c9f4-49d9-84dbd61c64f3@oracle.com> Looks good. /Erik On 2018-03-28 14:01, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8200383/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8200383 > > Changes for JDK-8200303 added calls to log2f() math function and it > hit problem building Hotspot on SPARC - it can't find this function. 
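For context on the 8200383 link failure described above, the dependency is no more than this (illustrative snippet, not the JDK-8200303 code):

// log2f() lives in the math library, not in libc, so any object file that
// calls it makes the final link depend on libm (libm.so.2 on Solaris).
// A build that deliberately avoids linking libm leaves the symbol unresolved.
#include <math.h>

static float magnitude(float x) {
  return log2f(x);
}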
> > For Hotspot build on Solaris we had hack to support old Solaris > versions which did not have libm.so.2: > > http://hg.openjdk.java.net/jdk6/jdk6/hotspot/file/a74480137e6e/make/solaris/makefiles/vm.make#l102 > > > We don't support Solaris 8 and 9 anymore and it is safe to remove the > hack. > > Tested with tier1 Builds and Hotspot testing (similar to submit-hs). > From kim.barrett at oracle.com Wed Mar 28 21:12:26 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Mar 2018 17:12:26 -0400 Subject: RFR (M) 8198313: Wrap holder object for ClassLoaderData in a WeakHandle In-Reply-To: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> References: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> Message-ID: <9C48FEF4-59DD-4415-AF18-B95ADBDFACB4@oracle.com> > On Mar 26, 2018, at 1:26 PM, coleen.phillimore at oracle.com wrote: > > Summary: Use WeakHandle for ClassLoaderData::_holder so that is_alive closure is not needed > > The purpose of WeakHandle is to encapsulate weak oops within the runtime code in the vm. The class was initially written by StefanK. The weak handles are pointers to OopStorage. This code is a basis for future work to move direct pointers to the heap (oops) from runtime structures like the StringTable, into pointers into an area that the GC efficiently manages, in parallel and/or concurrently. > > Tested with mach5 tier 1-5. Performance tested with internal dev-submit performance tests, and locally. > > open webrev at http://cr.openjdk.java.net/~coleenp/8198313.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8198313 > > Thanks, > Coleen ------------------------------------------------------------------------------ src/hotspot/share/oops/weakHandle.cpp 59 void WeakHandle::release() const { 60 Universe::vm_weak_oop_storage()->release(_obj); Is WeakHandle::release ever called with a handle that has not been cleared by GC? The only caller I found is ~ClassLoaderData. Do we ever construct a CLD with filled-in holder, and then decide we don't want the CLD after all? I'm thinking of something like an error during class loading or the like, but without much knowledge. ------------------------------------------------------------------------------ src/hotspot/share/classfile/classLoaderData.cpp 59 #include "gc/shared/oopStorage.hpp" Why is this include needed? Maybe I missed something, but it looks like all the OopStorage usage is wrapped up in WeakHandle. ------------------------------------------------------------------------------ src/hotspot/share/oops/instanceKlass.cpp 1903 void InstanceKlass::clean_implementors_list(BoolObjectClosure* is_alive) { 1904 assert(class_loader_data()->is_alive(), "this klass should be live"); ... 1909 if (!impl->is_loader_alive(is_alive)) { I'm kind of surprised we still need the is_alive closure here. But there are no changes in is_loader_alive. I think I'm not understanding something. ------------------------------------------------------------------------------ src/hotspot/share/classfile/classLoaderData.hpp 224 oop _class_loader; // oop used to uniquely identify a class loader 225 // class loader or a canonical class path [Not part of the change, but adjacent to one, so it caught my eye.] "class loader \n class loader" in the comment looks redundant? ------------------------------------------------------------------------------ src/hotspot/share/classfile/classLoaderData.cpp 516 assert(_holder.peek() == NULL, "never replace holders"); I think peek is the wrong test here. Shouldn't it be _holder.is_null()? If not, e.g. 
if !_holder.is_null() can be true here, then I think that would be a leak when _holder is (re)assigned. Of course, this goes back to my earlier private comment that I find WeakHandle::is_null() a bit confusing, because I keep thinking it's about the value of *_handle._obj rather than _handle._obj. ------------------------------------------------------------------------------ src/hotspot/share/classfile/classLoaderData.cpp 632 bool alive = keep_alive() // null class loader and incomplete anonymous klasses. 633 || _holder.is_null() 634 || (_holder.peek() != NULL); // not cleaned by weak reference processing I was initially guessing that _holder.is_null() was for null class loader and/or anonymous classes, but that's covered by the preceeding keep_alive(). So I don't know why a null holder => alive. ------------------------------------------------------------------------------ From vladimir.kozlov at oracle.com Wed Mar 28 21:13:10 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 14:13:10 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: <1e2ce416-53b1-c9f4-49d9-84dbd61c64f3@oracle.com> References: <1e2ce416-53b1-c9f4-49d9-84dbd61c64f3@oracle.com> Message-ID: <168ddf24-a8a3-f86d-a0c9-f49103cd29cc@oracle.com> Thank you, Erik Vladimir On 3/28/18 2:10 PM, Erik Joelsson wrote: > Looks good. > > /Erik > > > On 2018-03-28 14:01, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8200383/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8200383 >> >> Changes for JDK-8200303 added calls to log2f() math function and it >> hit problem building Hotspot on SPARC - it can't find this function. >> >> For Hotspot build on Solaris we had hack to support old Solaris >> versions which did not have libm.so.2: >> >> http://hg.openjdk.java.net/jdk6/jdk6/hotspot/file/a74480137e6e/make/solaris/makefiles/vm.make#l102 >> >> >> We don't support Solaris 8 and 9 anymore and it is safe to remove the >> hack. >> >> Tested with tier1 Builds and Hotspot testing (similar to submit-hs). >> > From magnus.ihse.bursie at oracle.com Wed Mar 28 21:51:02 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Wed, 28 Mar 2018 23:51:02 +0200 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: References: Message-ID: On 2018-03-28 23:01, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8200383/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8200383 > > Changes for JDK-8200303 added calls to log2f() math function and it > hit problem building Hotspot on SPARC - it can't find this function. > > For Hotspot build on Solaris we had hack to support old Solaris > versions which did not have libm.so.2: > > http://hg.openjdk.java.net/jdk6/jdk6/hotspot/file/a74480137e6e/make/solaris/makefiles/vm.make#l102 > > > We don't support Solaris 8 and 9 anymore and it is safe to remove the > hack. > > Tested with tier1 Builds and Hotspot testing (similar to submit-hs). Yay, that's wonderful to get it cleaned out! However, there's a similar hack in Awt2dLibraries.gmk: LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2 Maybe you can/should remove that as well? 
/Magnus From vladimir.kozlov at oracle.com Wed Mar 28 22:35:28 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 15:35:28 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: References: Message-ID: <992f56d6-2a85-9a81-8acb-2cd69fe81033@oracle.com> On 3/28/18 2:51 PM, Magnus Ihse Bursie wrote: > On 2018-03-28 23:01, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8200383/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8200383 >> >> Changes for JDK-8200303 added calls to log2f() math function and it >> hit problem building Hotspot on SPARC - it can't find this function. >> >> For Hotspot build on Solaris we had hack to support old Solaris >> versions which did not have libm.so.2: >> >> http://hg.openjdk.java.net/jdk6/jdk6/hotspot/file/a74480137e6e/make/solaris/makefiles/vm.make#l102 >> >> >> We don't support Solaris 8 and 9 anymore and it is safe to remove the >> hack. >> >> Tested with tier1 Builds and Hotspot testing (similar to submit-hs). > > Yay, that's wonderful to get it cleaned out! Thank you. > > However, there's a similar hack in Awt2dLibraries.gmk: > LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2 Based on history it was done for 6307603 in 2010 and reason is the same: http://hg.openjdk.java.net/jdk/hs/rev/1a5e995a710b Is next fix correct?: make/lib/Awt2dLibraries.gmk @@ -409,8 +409,8 @@ LDFLAGS := $(LDFLAGS_JDKLIB) \ $(call SET_SHARED_LIBRARY_ORIGIN), \ LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ - LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ + LIBS_solaris := $(LIBM), \ LIBS_linux := $(LIBM), \ LIBS_macosx := $(LIBM), \ LIBS_aix := $(LIBM),\ or make/lib/Awt2dLibraries.gmk @@ -409,11 +409,7 @@ LDFLAGS := $(LDFLAGS_JDKLIB) \ $(call SET_SHARED_LIBRARY_ORIGIN), \ LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ - LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ - LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ - LIBS_linux := $(LIBM), \ - LIBS_macosx := $(LIBM), \ - LIBS_aix := $(LIBM),\ + LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS) $(LIBM), \ LIBS_windows := $(WIN_AWT_LIB) $(WIN_JAVA_LIB), \ )) Thanks, Vladimir > > Maybe you can/should remove that as well? > > /Magnus > From erik.joelsson at oracle.com Wed Mar 28 22:48:27 2018 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 28 Mar 2018 15:48:27 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: <992f56d6-2a85-9a81-8acb-2cd69fe81033@oracle.com> References: <992f56d6-2a85-9a81-8acb-2cd69fe81033@oracle.com> Message-ID: On 2018-03-28 15:35, Vladimir Kozlov wrote: > Based on history it was done for 6307603 in 2010 and reason is the same: > > http://hg.openjdk.java.net/jdk/hs/rev/1a5e995a710b > > Is next fix correct?: > > make/lib/Awt2dLibraries.gmk > @@ -409,8 +409,8 @@ > ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ > ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ > ???? LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ > -??? LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ > ???? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ > +??? LIBS_solaris := $(LIBM), \ > ???? LIBS_linux := $(LIBM), \ > ???? LIBS_macosx := $(LIBM), \ > ???? LIBS_aix := $(LIBM),\ > > or > > make/lib/Awt2dLibraries.gmk > @@ -409,11 +409,7 @@ > ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ > ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ > ???? 
LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ > -??? LDFLAGS_solaris := /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ > -??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ > -??? LIBS_linux := $(LIBM), \ > -??? LIBS_macosx := $(LIBM), \ > -??? LIBS_aix := $(LIBM),\ > +??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS) $(LIBM), \ > ???? LIBS_windows := $(WIN_AWT_LIB) $(WIN_JAVA_LIB), \ > ?)) > I would say the second one is better. Thanks, /Erik > Thanks, > Vladimir > >> >> Maybe you can/should remove that as well? >> >> /Magnus >> From magnus.ihse.bursie at oracle.com Wed Mar 28 22:50:40 2018 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Thu, 29 Mar 2018 00:50:40 +0200 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: References: <992f56d6-2a85-9a81-8acb-2cd69fe81033@oracle.com> Message-ID: <4894301a-2750-8ce1-5a0e-1da13533c637@oracle.com> On 2018-03-29 00:48, Erik Joelsson wrote: > > > On 2018-03-28 15:35, Vladimir Kozlov wrote: >> Based on history it was done for 6307603 in 2010 and reason is the same: >> >> http://hg.openjdk.java.net/jdk/hs/rev/1a5e995a710b >> >> Is next fix correct?: >> >> make/lib/Awt2dLibraries.gmk >> @@ -409,8 +409,8 @@ >> ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ >> ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ >> ???? LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ >> -??? LDFLAGS_solaris := >> /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ >> ???? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ >> +??? LIBS_solaris := $(LIBM), \ >> ???? LIBS_linux := $(LIBM), \ >> ???? LIBS_macosx := $(LIBM), \ >> ???? LIBS_aix := $(LIBM),\ >> >> or >> >> make/lib/Awt2dLibraries.gmk >> @@ -409,11 +409,7 @@ >> ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ >> ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ >> ???? LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ >> -??? LDFLAGS_solaris := >> /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ >> -??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ >> -??? LIBS_linux := $(LIBM), \ >> -??? LIBS_macosx := $(LIBM), \ >> -??? LIBS_aix := $(LIBM),\ >> +??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS) $(LIBM), \ >> ???? LIBS_windows := $(WIN_AWT_LIB) $(WIN_JAVA_LIB), \ >> ?)) >> > I would say the second one is better. Agree. With this, the fix looks good to me too. /Magnus > Thanks, > /Erik >> Thanks, >> Vladimir >> >>> >>> Maybe you can/should remove that as well? >>> >>> /Magnus >>> > From vladimir.kozlov at oracle.com Wed Mar 28 22:52:34 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 15:52:34 -0700 Subject: [11] RFR(S) 8200383: Can't build on SPARC Hotspot with code which use math functions In-Reply-To: <4894301a-2750-8ce1-5a0e-1da13533c637@oracle.com> References: <992f56d6-2a85-9a81-8acb-2cd69fe81033@oracle.com> <4894301a-2750-8ce1-5a0e-1da13533c637@oracle.com> Message-ID: <2a89a050-15ba-0b3a-d638-f73b44475acb@oracle.com> Thank you. I will use second version and do testing again before push. Vladimir On 3/28/18 3:50 PM, Magnus Ihse Bursie wrote: > On 2018-03-29 00:48, Erik Joelsson wrote: >> >> >> On 2018-03-28 15:35, Vladimir Kozlov wrote: >>> Based on history it was done for 6307603 in 2010 and reason is the same: >>> >>> http://hg.openjdk.java.net/jdk/hs/rev/1a5e995a710b >>> >>> Is next fix correct?: >>> >>> make/lib/Awt2dLibraries.gmk >>> @@ -409,8 +409,8 @@ >>> ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ >>> ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ >>> ???? LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ >>> -??? 
LDFLAGS_solaris := >>> /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ >>> ???? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ >>> +??? LIBS_solaris := $(LIBM), \ >>> ???? LIBS_linux := $(LIBM), \ >>> ???? LIBS_macosx := $(LIBM), \ >>> ???? LIBS_aix := $(LIBM),\ >>> >>> or >>> >>> make/lib/Awt2dLibraries.gmk >>> @@ -409,11 +409,7 @@ >>> ???? LDFLAGS := $(LDFLAGS_JDKLIB) \ >>> ???????? $(call SET_SHARED_LIBRARY_ORIGIN), \ >>> ???? LDFLAGS_unix := -L$(INSTALL_LIBRARIES_HERE), \ >>> -??? LDFLAGS_solaris := >>> /usr/lib$(OPENJDK_TARGET_CPU_ISADIR)/libm.so.2, \ >>> -??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS), \ >>> -??? LIBS_linux := $(LIBM), \ >>> -??? LIBS_macosx := $(LIBM), \ >>> -??? LIBS_aix := $(LIBM),\ >>> +??? LIBS_unix := -lawt -ljvm -ljava $(LCMS_LIBS) $(LIBM), \ >>> ???? LIBS_windows := $(WIN_AWT_LIB) $(WIN_JAVA_LIB), \ >>> ?)) >>> >> I would say the second one is better. > Agree. With this, the fix looks good to me too. > > /Magnus > >> Thanks, >> /Erik >>> Thanks, >>> Vladimir >>> >>>> >>>> Maybe you can/should remove that as well? >>>> >>>> /Magnus >>>> >> > From kim.barrett at oracle.com Wed Mar 28 23:35:02 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 28 Mar 2018 19:35:02 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: <5ABB9B30.7080808@oracle.com> References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: > On Mar 28, 2018, at 9:40 AM, Erik ?sterlund wrote: > > Hi Kim, > > I noticed that jobjects are now IN_CONCURRENT_ROOT in this patch. I wonder if this is the right time to upgrade them to IN_CONCURRENT_ROOT. Until there is at least one GC that actually scans these concurrently, this will only impose extra overheads (unnecessary G1 SATB-enqueue barriers on the store required to release jobjects) with no obvious gains. > > The platform specific code needs to go along with this. I have a patch out to generalize interpreter code. In there, I am treating resolve jobject as a normal strong root. That would probably need to change. It is also troubling that jniFastGetField shoots raw loads into (hopefully) the heap, dodging all GC barriers, hoping that is okay. I wonder if starting to actually scan jobjects concurrently would force us to disable that optimization completely to be generally useful to all collectors. For example, an IN_CONCURRENT_ROOT load access for ZGC might require a slowpath. But in jniFastGetField, there is no frame, and hence any code that runs in there must not call anything in the runtime. Therefore, with IN_CONCURRENT_ROOT, it is not generally safe to use jniFastGetField, without doing... something about that code. > > I would like to hear your thoughts about this. Perhaps the intention is just to take incremental steps towards being able to scan jobjects concurrently, and this is just the first step? Still, I would be interested to hear about what you think about the next steps. If we decide to go with IN_CONCURRENT_ROOT now already, then I should change my interpreter changes that are out for review to do the same so that we are consistent. > > Otherwise, this looks great, and I am glad we finally have jni handles accessorized. With this change in place I think it should be straight-forward for G1 to do JNI global handle marking concurrently, rather than during a pause. This change does come with some costs. (1) For G1 (and presumably Shenandoah), a SATB enqueue barrier when setting a global handle's value to NULL as part of releasing the handle. 
(2) For other collectors, selection between the above barrier and do-nothing code. (3) For ZGC, a read barrier when resolving the value of a non-weak handle. (4) For other collectors (when ZGC is present), selection between the above barrier and do-nothing code. (1) and (2) are wasted costs until G1 is changed to do that marking concurrently. But the cost is pretty small. I think (3) and (4) don't apply to the jdk repo yet. And even in the zgc repo the impact should be small. All of these are costs that we expect to be taking eventually anyway. The real costs today are that we're not getting the pause-time benefit from these changes yet. Even those (temporary) costs could be mitigated if we weren't forced to use the overly generic IN_CONCURRENT_ROOT decorator, and could instead provide more precise information to the GC-specific backends (e.g. something like IN_JNI_GLOBAL_ROOT), letting each GC defer its extra barrier work until the changes to get the pause-time benefits are being made. I'd forgotten about jniFastGetField. This was discussed when Mikael and I were adding the jweak tag support. At the time it was decided it was acceptable (though dirty) for G1 to not ensure the base object was kept alive when fetching a primitive field value from it. http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-March/026231.html I suspect that choice was driven by the difficulties you noted, and knowing that something better to solve all our problems (Access!) was coming soon :) Unfortunately, that (among other things here) really doesn't work for ZGC, even though it seems okay for all the other collectors, at least for now. Any idea how important an optimization jniFastGetField might be? How bad would it be to turn it off for ZGC? For the interpreter, I think you are referring to 8199417? I hadn't looked at that before (I'll try to review it tomorrow). Yes, I think those should be using IN_CONCURRENT_ROOT too, so that eventually ZGC can do JNI global marking concurrently. And there are two other pre-existing uses of IN_CONCURRENT_ROOT, both of which seem suspicious to me. - In ClassLoaderData::remove_handle() we have // This root is not walked in safepoints, and hence requires an appropriate // decorator that e.g. maintains the SATB invariant in SATB collectors. RootAccess::oop_store(ptr, oop(NULL)); But there aren't any corresponding IN_CONCURRENT_ROOT loads, nor is the initializing store (CLD::ChunkedHandleList::add), which seems inconsistent. (To be pedantic, the initializing store should probably be using the new RootAccess rather than a raw store.) Oh, the load is OopHandle::resolve; and I think OopHandle is still pending accessorizing (and probably needs the access.inline.hpp cleanup...). - In InstanceKlass::klass_holder_phantom() we have return RootAccess::oop_load(addr); My understanding of it is that IN_CONCURRENT_ROOT is not correct here. I think this is similar to jweaks, where I only used ON_PHANTOM_OOP_REF. From vladimir.kozlov at oracle.com Thu Mar 29 00:35:02 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 28 Mar 2018 17:35:02 -0700 Subject: 8200391: clean up test/hotspot/jtreg/ProblemList.txt (compiler related) Message-ID: http://cr.openjdk.java.net/~kvn/8200391/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8200391 I removed tests from ProblemList-graal.txt for 8195632 and 8196626 bugs which were fixed. I changed 8197446which was closed as duplicate of 8181753. I verified that tests listed in 8196626 and 8195632 are passed now. 
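For anyone not familiar with the file being cleaned up here: ProblemList-graal.txt, like the main ProblemList.txt, is a plain list of test paths, the bug id under which each test is excluded, and the platforms the exclusion applies to. The entries below are made-up examples of the format, not the actual lines touched by this webrev:

# <test path>                          <bug id>   <platforms>
compiler/example/TestFoo.java          8196626    generic-all
compiler/example/TestBar.java          8181753    windows-x64,linux-x64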
-- Thanks, Vladimir From jesper.wilhelmsson at oracle.com Thu Mar 29 01:03:09 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 29 Mar 2018 03:03:09 +0200 Subject: RFR: 8200245: Zero fails to build on linux-ia64 due to ia64-specific cruft In-Reply-To: References: <31e26c1c-4549-e04d-18fb-70a03acde103@physik.fu-berlin.de> <9f97f4c3-d8a9-174e-a8ce-aa1b8d1b510e@physik.fu-berlin.de> <3c037f7e-6fd4-2a6d-cc9d-04d42c2a72b8@oracle.com> <4aaa237a-094b-efa4-b938-c593509ce2e3@oracle.com> Message-ID: <86ABDB49-7E53-46FA-A752-03C5581BB99D@oracle.com> > On 28 Mar 2018, at 07:11, Thomas St?fe wrote: > > Hi Adrian, > > for the complete writeup of the rules see Jespers mail from march 13: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030656.html > > He promised to put this in the OpenJdk Wiki sometime. Which would be quite helpful . https://wiki.openjdk.java.net/display/HotSpot/Pushing+a+HotSpot+change /Jesper > > ..Thomas > > On Wed, Mar 28, 2018 at 6:06 AM, David Holmes > wrote: > On 28/03/2018 1:33 PM, John Paul Adrian Glaubitz wrote: > On 03/28/2018 12:19 PM, David Holmes wrote: > Correct? > > Not sure what "hg jtreg" is :) > > Oops, sorry. I meant "hg jcheck". > > Note jcheck only runs against commited changesets, and also as part of the commit, so there's no need to run it manually as long as you have it enabled as a hook: > > [hooks] > pretxnchangegroup = python:jcheck.hook > pretxncommit = python:jcheck.hook > > David > > > hg commit with appropriate changeset comment then hg push > > Ok. > > Interesting, Sun did have an HPUX port? > > Doesn't surprise me. I think HP still officially supports HPUX on > Itanium. They apparently have people paying them lots of money > for that. > > https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPUXJDKJRE80 > > Wow, even up-to-date jdk8 update version ;). > > Adrian > > From jesper.wilhelmsson at oracle.com Thu Mar 29 01:26:59 2018 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 29 Mar 2018 03:26:59 +0200 Subject: How to work with HotSpot Message-ID: Hi all happy HotSpot developers! I have moved a large part of the HotSpot wiki pages from our internal wiki to the OpenJDK wiki and aded some new stuff as well. I hope that these pages will be helpful both for describing how to do things, and for explaining some of our internal process that may affect how things work. You will find the start page here: https://wiki.openjdk.java.net/display/HotSpot/How+to+work+with+HotSpot As I moved these pages I had to update all links etc that binds these pages together. If you find a broken link, or something that looks like it should be a link but isn't, please let me know. Also feel free to send me requests for content if you have questions that are not covered by these pages. Enjoy! /Jesper From shafi.s.ahmad at oracle.com Thu Mar 29 06:17:35 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Wed, 28 Mar 2018 23:17:35 -0700 (PDT) Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev In-Reply-To: <9717d6f4-2ddd-4f33-8cfc-9e2fd46645e7@default> References: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> <9717d6f4-2ddd-4f33-8cfc-9e2fd46645e7@default> Message-ID: <2f244940-0263-4908-b550-5bdf1ee3cd39@default> Hi All, May I get the second thumps up. 
Regards, Shafi > -----Original Message----- > From: Shafi Ahmad > Sent: Wednesday, March 28, 2018 2:20 PM > To: Tobias Hartmann ; hotspot- > dev at openjdk.java.net > Cc: Vladimir Kozlov ; Douglas Simon > > Subject: RE: [8u] RFR for backport of "JDK-8164480: Crash with > assert(handler_address == > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > same" to jdk8u-dev > > Thank you Tobias. > > Regards, > Shafi > > > > -----Original Message----- > > From: Tobias Hartmann > > Sent: Wednesday, March 28, 2018 2:07 PM > > To: Shafi Ahmad ; hotspot- > > dev at openjdk.java.net > > Cc: Vladimir Kozlov ; Douglas Simon > > > > Subject: Re: [8u] RFR for backport of "JDK-8164480: Crash with > > assert(handler_address == > > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > > same" to jdk8u-dev > > > > Hi Shafi, > > > > looks good to me. > > > > Best regards, > > Tobias > > > > On 28.03.2018 08:23, Shafi Ahmad wrote: > > > Hi, > > > > > > Please review the backport of ' JDK-8164480: Crash with > > assert(handler_address == > > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > > same' to jdk8u-dev. > > > Please note that this is not a clean backport because file > > src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I ignore > > this change as I am not seeing relevant code in other file. > > > > > > webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ > > > jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 > > > original patch pushed to jdk9: > > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 > > > > > > Test: Run jprt -testset hotspot > > > > > > Regards, > > > Shafi > > > From shafi.s.ahmad at oracle.com Thu Mar 29 07:07:58 2018 From: shafi.s.ahmad at oracle.com (Shafi Ahmad) Date: Thu, 29 Mar 2018 00:07:58 -0700 (PDT) Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev In-Reply-To: <0427fcd0-220a-0219-0181-4f9b7f8bb78c@oracle.com> References: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> <9717d6f4-2ddd-4f33-8cfc-9e2fd46645e7@default> <2f244940-0263-4908-b550-5bdf1ee3cd39@default> <0427fcd0-220a-0219-0181-4f9b7f8bb78c@oracle.com> Message-ID: <90b28fb5-7c11-4c5d-92d5-0ce32a0767df@default> Thank you Vladimir. Regards, Shafi > -----Original Message----- > From: Vladimir Kozlov > Sent: Thursday, March 29, 2018 12:37 PM > To: Shafi Ahmad ; Tobias Hartmann > ; hotspot-dev at openjdk.java.net > Cc: Douglas Simon > Subject: Re: [8u] RFR for backport of "JDK-8164480: Crash with > assert(handler_address == > SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > same" to jdk8u-dev > > Reviewed. > > Vladimir K > > On 3/28/18 11:17 PM, Shafi Ahmad wrote: > > Hi All, > > > > May I get the second thumps up. > > > > Regards, > > Shafi > > > >> -----Original Message----- > >> From: Shafi Ahmad > >> Sent: Wednesday, March 28, 2018 2:20 PM > >> To: Tobias Hartmann ; hotspot- > >> dev at openjdk.java.net > >> Cc: Vladimir Kozlov ; Douglas Simon > >> > >> Subject: RE: [8u] RFR for backport of "JDK-8164480: Crash with > >> assert(handler_address == > >> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > >> same" to jdk8u-dev > >> > >> Thank you Tobias. 
> >> > >> Regards, > >> Shafi > >> > >> > >>> -----Original Message----- > >>> From: Tobias Hartmann > >>> Sent: Wednesday, March 28, 2018 2:07 PM > >>> To: Shafi Ahmad ; hotspot- > >>> dev at openjdk.java.net > >>> Cc: Vladimir Kozlov ; Douglas Simon > >>> > >>> Subject: Re: [8u] RFR for backport of "JDK-8164480: Crash with > >>> assert(handler_address == > >>> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > >>> same" to jdk8u-dev > >>> > >>> Hi Shafi, > >>> > >>> looks good to me. > >>> > >>> Best regards, > >>> Tobias > >>> > >>> On 28.03.2018 08:23, Shafi Ahmad wrote: > >>>> Hi, > >>>> > >>>> Please review the backport of ' JDK-8164480: Crash with > >>> assert(handler_address == > >>> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the > >>> same' to jdk8u-dev. > >>>> Please note that this is not a clean backport because file > >>> src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I > >>> ignore this change as I am not seeing relevant code in other file. > >>>> > >>>> webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ > >>>> jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 > >>>> original patch pushed to jdk9: > >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 > >>>> > >>>> Test: Run jprt -testset hotspot > >>>> > >>>> Regards, > >>>> Shafi > >>>> From vladimir.kozlov at oracle.com Thu Mar 29 07:07:11 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 29 Mar 2018 00:07:11 -0700 Subject: [8u] RFR for backport of "JDK-8164480: Crash with assert(handler_address == SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the same" to jdk8u-dev In-Reply-To: <2f244940-0263-4908-b550-5bdf1ee3cd39@default> References: <20b15f6f-d7ca-09eb-a37e-37ce35a59ff5@oracle.com> <9717d6f4-2ddd-4f33-8cfc-9e2fd46645e7@default> <2f244940-0263-4908-b550-5bdf1ee3cd39@default> Message-ID: <0427fcd0-220a-0219-0181-4f9b7f8bb78c@oracle.com> Reviewed. Vladimir K On 3/28/18 11:17 PM, Shafi Ahmad wrote: > Hi All, > > May I get the second thumps up. > > Regards, > Shafi > >> -----Original Message----- >> From: Shafi Ahmad >> Sent: Wednesday, March 28, 2018 2:20 PM >> To: Tobias Hartmann ; hotspot- >> dev at openjdk.java.net >> Cc: Vladimir Kozlov ; Douglas Simon >> >> Subject: RE: [8u] RFR for backport of "JDK-8164480: Crash with >> assert(handler_address == >> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the >> same" to jdk8u-dev >> >> Thank you Tobias. >> >> Regards, >> Shafi >> >> >>> -----Original Message----- >>> From: Tobias Hartmann >>> Sent: Wednesday, March 28, 2018 2:07 PM >>> To: Shafi Ahmad ; hotspot- >>> dev at openjdk.java.net >>> Cc: Vladimir Kozlov ; Douglas Simon >>> >>> Subject: Re: [8u] RFR for backport of "JDK-8164480: Crash with >>> assert(handler_address == >>> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the >>> same" to jdk8u-dev >>> >>> Hi Shafi, >>> >>> looks good to me. >>> >>> Best regards, >>> Tobias >>> >>> On 28.03.2018 08:23, Shafi Ahmad wrote: >>>> Hi, >>>> >>>> Please review the backport of ' JDK-8164480: Crash with >>> assert(handler_address == >>> SharedRuntime::compute_compiled_exc_handler(..) failed: Must be the >>> same' to jdk8u-dev. >>>> Please note that this is not a clean backport because file >>> src/share/vm/jvmci/jvmciRuntime.cpp is not in jdk8u repo and I ignore >>> this change as I am not seeing relevant code in other file. 
>>>> >>>> webrev: http://cr.openjdk.java.net/~shshahma/8164480/webrev.00/ >>>> jdk9 bug: https://bugs.openjdk.java.net/browse/JDK-8164480 >>>> original patch pushed to jdk9: >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b9b1b54d53b2 >>>> >>>> Test: Run jprt -testset hotspot >>>> >>>> Regards, >>>> Shafi >>>> From tobias.hartmann at oracle.com Thu Mar 29 07:12:02 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 29 Mar 2018 09:12:02 +0200 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: Message-ID: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Hi Volker, On 28.03.2018 16:49, Volker Simonis wrote: > http://cr.openjdk.java.net/~simonis/webrevs/2018/8200360/ > https://bugs.openjdk.java.net/browse/JDK-8200360 Looks good to me. > It would be great if somebody could run this trough the "hs-tier2" > test suite and any other relevant internal test suites because the > problem doesn't show up in the tier1 tests. I'll run the relevant internal testing. Best regards, Tobias From volker.simonis at gmail.com Thu Mar 29 08:19:21 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 29 Mar 2018 10:19:21 +0200 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> References: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Message-ID: On Thu, Mar 29, 2018 at 9:12 AM, Tobias Hartmann wrote: > Hi Volker, > > On 28.03.2018 16:49, Volker Simonis wrote: >> http://cr.openjdk.java.net/~simonis/webrevs/2018/8200360/ >> https://bugs.openjdk.java.net/browse/JDK-8200360 > > Looks good to me. > Thanks! >> It would be great if somebody could run this trough the "hs-tier2" >> test suite and any other relevant internal test suites because the >> problem doesn't show up in the tier1 tests. > > I'll run the relevant internal testing. > I just saw that Daniel added the "hs-tier3" tag to the bug. What does this mean? Are there other (different?) problems when running the "hs-tier3" suite? I'll wait with the push until I get the OK from you that all internal tests have passed. Thanks, Volker > Best regards, > Tobias From shade at redhat.com Thu Mar 29 10:44:43 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 12:44:43 +0200 Subject: RFR 8200423: Non-PCH build for x86_32 fails Message-ID: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> Bug: https://bugs.openjdk.java.net/browse/JDK-8200423 The fix is to add the same header sharedRuntime_x86_64.cpp uses: 8200423: Non-PCH build for x86_32 fails Reviewed-by: XXX diff -r 2ad3212a7dd9 -r 663e1ccf9d4e src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp --- a/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 10:38:29 2018 +0200 +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 12:43:52 2018 +0200 @@ -27,6 +27,7 @@ #include "asm/macroAssembler.inline.hpp" #include "code/debugInfoRec.hpp" #include "code/icBuffer.hpp" +#include "code/nativeInst.hpp" #include "code/vtableStubs.hpp" #include "gc/shared/gcLocker.hpp" #include "interpreter/interpreter.hpp" I think it falls under triviality rule. 
Testing: x86_32 build Thanks, -Aleksey From rkennke at redhat.com Thu Mar 29 10:49:16 2018 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 29 Mar 2018 12:49:16 +0200 Subject: RFR 8200423: Non-PCH build for x86_32 fails In-Reply-To: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> References: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> Message-ID: Am 29.03.2018 um 12:44 schrieb Aleksey Shipilev: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200423 > > The fix is to add the same header sharedRuntime_x86_64.cpp uses: > > 8200423: Non-PCH build for x86_32 fails > Reviewed-by: XXX > > diff -r 2ad3212a7dd9 -r 663e1ccf9d4e src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 10:38:29 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 12:43:52 2018 +0200 > @@ -27,6 +27,7 @@ > #include "asm/macroAssembler.inline.hpp" > #include "code/debugInfoRec.hpp" > #include "code/icBuffer.hpp" > +#include "code/nativeInst.hpp" > #include "code/vtableStubs.hpp" > #include "gc/shared/gcLocker.hpp" > #include "interpreter/interpreter.hpp" > > > I think it falls under triviality rule. > > Testing: x86_32 build Sure, go for it From thomas.stuefe at gmail.com Thu Mar 29 10:50:37 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 29 Mar 2018 12:50:37 +0200 Subject: RFR 8200423: Non-PCH build for x86_32 fails In-Reply-To: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> References: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> Message-ID: Looks good. ..Thomas On Thu, Mar 29, 2018 at 12:44 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200423 > > The fix is to add the same header sharedRuntime_x86_64.cpp uses: > > 8200423: Non-PCH build for x86_32 fails > Reviewed-by: XXX > > diff -r 2ad3212a7dd9 -r 663e1ccf9d4e src/hotspot/cpu/x86/ > sharedRuntime_x86_32.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 > 10:38:29 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp Thu Mar 29 > 12:43:52 2018 +0200 > @@ -27,6 +27,7 @@ > #include "asm/macroAssembler.inline.hpp" > #include "code/debugInfoRec.hpp" > #include "code/icBuffer.hpp" > +#include "code/nativeInst.hpp" > #include "code/vtableStubs.hpp" > #include "gc/shared/gcLocker.hpp" > #include "interpreter/interpreter.hpp" > > > I think it falls under triviality rule. > > Testing: x86_32 build > > Thanks, > -Aleksey > > From shade at redhat.com Thu Mar 29 10:57:26 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 12:57:26 +0200 Subject: RFR 8200423: Non-PCH build for x86_32 fails In-Reply-To: References: <8e4e83be-54ce-9f98-efc8-699552c22730@redhat.com> Message-ID: <2c73cd1e-ed06-88f5-3af6-7df2c2878fea@redhat.com> Thanks guys, pushed. -Aleksey On 03/29/2018 12:50 PM, Thomas St?fe wrote: > Looks good. > > ..Thomas > > On Thu, Mar 29, 2018 at 12:44 PM, Aleksey Shipilev > wrote: > > Bug: > ? https://bugs.openjdk.java.net/browse/JDK-8200423 > > > The fix is to add the same header sharedRuntime_x86_64.cpp uses: > > 8200423: Non-PCH build for x86_32 fails > Reviewed-by: XXX > > diff -r 2ad3212a7dd9 -r 663e1ccf9d4e src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp? ? ? Thu Mar 29 10:38:29 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp? ? ? 
Thu Mar 29 12:43:52 2018 +0200 > @@ -27,6 +27,7 @@ > ?#include "asm/macroAssembler.inline.hpp" > ?#include "code/debugInfoRec.hpp" > ?#include "code/icBuffer.hpp" > +#include "code/nativeInst.hpp" > ?#include "code/vtableStubs.hpp" > ?#include "gc/shared/gcLocker.hpp" > ?#include "interpreter/interpreter.hpp" > > > I think it falls under triviality rule. > > Testing: x86_32 build > > Thanks, > -Aleksey > > From tobias.hartmann at oracle.com Thu Mar 29 12:17:54 2018 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 29 Mar 2018 14:17:54 +0200 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Message-ID: Hi Volker On 29.03.2018 10:19, Volker Simonis wrote: > I just saw that Daniel added the "hs-tier3" tag to the bug. What does > this mean? Are there other (different?) problems when running the > "hs-tier3" suite? It just means that the problem showed up in tier3 as well. > I'll wait with the push until I get the OK from you that all internal > tests have passed. I've executed the test on tier1-3, no failures. Please go ahead and push! Best regards, Tobias From volker.simonis at gmail.com Thu Mar 29 12:54:57 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 29 Mar 2018 14:54:57 +0200 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Message-ID: On Thu, Mar 29, 2018 at 2:17 PM, Tobias Hartmann wrote: > Hi Volker > > On 29.03.2018 10:19, Volker Simonis wrote: >> I just saw that Daniel added the "hs-tier3" tag to the bug. What does >> this mean? Are there other (different?) problems when running the >> "hs-tier3" suite? > > It just means that the problem showed up in tier3 as well. > >> I'll wait with the push until I get the OK from you that all internal >> tests have passed. > > I've executed the test on tier1-3, no failures. Please go ahead and push! > Thanks, pushed. > Best regards, > Tobias From erik.osterlund at oracle.com Thu Mar 29 13:54:08 2018 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 29 Mar 2018 15:54:08 +0200 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: Hi Kim, On 2018-03-29 01:35, Kim Barrett wrote: >> On Mar 28, 2018, at 9:40 AM, Erik ?sterlund wrote: >> >> Hi Kim, >> >> I noticed that jobjects are now IN_CONCURRENT_ROOT in this patch. I wonder if this is the right time to upgrade them to IN_CONCURRENT_ROOT. Until there is at least one GC that actually scans these concurrently, this will only impose extra overheads (unnecessary G1 SATB-enqueue barriers on the store required to release jobjects) with no obvious gains. >> >> The platform specific code needs to go along with this. I have a patch out to generalize interpreter code. In there, I am treating resolve jobject as a normal strong root. That would probably need to change. It is also troubling that jniFastGetField shoots raw loads into (hopefully) the heap, dodging all GC barriers, hoping that is okay. I wonder if starting to actually scan jobjects concurrently would force us to disable that optimization completely to be generally useful to all collectors. 
For example, an IN_CONCURRENT_ROOT load access for ZGC might require a slowpath. But in jniFastGetField, there is no frame, and hence any code that runs in there must not call anything in the runtime. Therefore, with IN_CONCURRENT_ROOT, it is not generally safe to use jniFastGetField, without doing... something about that code. >> >> I would like to hear your thoughts about this. Perhaps the intention is just to take incremental steps towards being able to scan jobjects concurrently, and this is just the first step? Still, I would be interested to hear about what you think about the next steps. If we decide to go with IN_CONCURRENT_ROOT now already, then I should change my interpreter changes that are out for review to do the same so that we are consistent. >> >> Otherwise, this looks great, and I am glad we finally have jni handles accessorized. > With this change in place I think it should be straight-forward for G1 > to do JNI global handle marking concurrently, rather than during a > pause. > > This change does come with some costs. > > (1) For G1 (and presumably Shenandoah), a SATB enqueue barrier when > setting a global handle's value to NULL as part of releasing the > handle. > > (2) For other collectors, selection between the above barrier and > do-nothing code. > > (3) For ZGC, a read barrier when resolving the value of a non-weak > handle. > > (4) For other collectors (when ZGC is present), selection between the > above barrier and do-nothing code. > > (1) and (2) are wasted costs until G1 is changed to do that marking > concurrently. But the cost is pretty small. > > I think (3) and (4) don't apply to the jdk repo yet. And even in the > zgc repo the impact should be small. > > All of these are costs that we expect to be taking eventually anyway. > The real costs today are that we're not getting the pause-time benefit > from these changes yet. Fair enough. > Even those (temporary) costs could be mitigated if we weren't forced > to use the overly generic IN_CONCURRENT_ROOT decorator, and could > instead provide more precise information to the GC-specific backends > (e.g. something like IN_JNI_GLOBAL_ROOT), letting each GC defer its > extra barrier work until the changes to get the pause-time benefits > are being made. Sure. But I'd like to avoid overly specific decorators that describe the exact root being accessed, rather than its semantic properties, unless there are very compelling reasons to do so. > I'd forgotten about jniFastGetField. This was discussed when Mikael > and I were adding the jweak tag support. At the time it was decided > it was acceptable (though dirty) for G1 to not ensure the base object > was kept alive when fetching a primitive field value from it. > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-March/026231.html > > I suspect that choice was driven by the difficulties you noted, and > knowing that something better to solve all our problems (Access!) was > coming soon :) Unfortunately, that (among other things here) really > doesn't work for ZGC, even though it seems okay for all the other > collectors, at least for now. Any idea how important an optimization > jniFastGetField might be? How bad would it be to turn it off for ZGC? I have no idea what the cost would be of turning it off. But I feel more compelled to fix it so that we do not have to turn it off. Should not be impossible, but can be done outside of this RFE. > For the interpreter, I think you are referring to 8199417? 
I hadn't > looked at that before (I'll try to review it tomorrow). Yes, I think > those should be using IN_CONCURRENT_ROOT too, so that eventually ZGC > can do JNI global marking concurrently. Yeah. > And there are two other pre-existing uses of IN_CONCURRENT_ROOT, both > of which seem suspicious to me. > > - In ClassLoaderData::remove_handle() we have > > // This root is not walked in safepoints, and hence requires an appropriate > // decorator that e.g. maintains the SATB invariant in SATB collectors. > RootAccess::oop_store(ptr, oop(NULL)); > > But there aren't any corresponding IN_CONCURRENT_ROOT loads, nor is > the initializing store (CLD::ChunkedHandleList::add), which seems > inconsistent. (To be pedantic, the initializing store should probably > be using the new RootAccess rather than a raw > store.) Oh, the load is OopHandle::resolve; and I think OopHandle is > still pending accessorizing (and probably needs the access.inline.hpp > cleanup...). > > - In InstanceKlass::klass_holder_phantom() we have > > return RootAccess::oop_load(addr); > > My understanding of it is that IN_CONCURRENT_ROOT is not correct > here. I think this is similar to jweaks, where I only used > ON_PHANTOM_OOP_REF. Yes, that sounds like it should be fixed in separate RFEs. Your JNI handle changes look good. Thanks, /Erik From stewartd.qdt at qualcommdatacenter.com Thu Mar 29 14:48:21 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 29 Mar 2018 14:48:21 +0000 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> Message-ID: <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> Could I get another review and a sponsor for this? It's fairly innocuous and paves the way for both some Hotspot-side updates and Graal-side updates. Thank you, Daniel -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Wednesday, March 28, 2018 12:52 PM To: stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag Looks good. Thanks, Vladimir On 3/28/18 6:02 AM, stewartd.qdt wrote: > Please see the updated webrev, where I have also added some flags that were not getting sent over JVMCI. > > http://cr.openjdk.java.net/~dstewart/8200251/webrev.01/ > > Thank you, > Daniel > > -----Original Message----- > From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] > Sent: Monday, March 26, 2018 12:48 PM > To: stewartd.qdt ; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag > > Good. > > Thanks, > Vladimir > > On 3/26/18 9:24 AM, stewartd.qdt wrote: >> Please review this webrev [1] which attempts to bring the AArch64::CPUFeature enum (Java) in sync with VM_Version::Feature_Flag enum (C++ enum) for aarch64. >> >> This is in preparation for creating AArch64 some intrinsics for Graal. But I found that the CPUFeature enum was not being transferred over to Graal for AArch64. In attempting to do that I then found out that CPUFeatures was not in sync with the VM_Version::Feature_Flag enum. >> >> The bug report is filed at [2]. >> >> I am happy to modify the patch as necessary. 
>> >> Regards, >> >> Daniel Stewart >> >> [1] - http://cr.openjdk.java.net/~dstewart/8200251/webrev.00/ >> [2] - https://bugs.openjdk.java.net/browse/JDK-8200251 >> >> From coleen.phillimore at oracle.com Thu Mar 29 15:04:17 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Mar 2018 11:04:17 -0400 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files Message-ID: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8200430 Tested in local repository. Thanks, Coleen From dmitry.chuyko at bell-sw.com Thu Mar 29 15:08:54 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Thu, 29 Mar 2018 18:08:54 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: Andrew, java.base can be compiled and used successfully in basic scenarios, I'll try something more complicated. But here is some statistics about aot-compiled methods in modules I got: Non-tiered java.base: 13537 methods compiled, 37750 methods failed Tiered (less successful) java.base: 12 methods compiled, 51275 methods failed jdk.compiler: 0 methods compiled, 12495 methods failed jdk.internal.vm.ci: 0 methods compiled, 1792 methods failed jdk.scripting.nashorn: 4 methods compiled, 11877 methods failed Vast majority of failed compilations is org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace address is not currently supported on aarch64 also there are many org.graalvm.compiler.graph.GraalGraphError: org.graalvm.compiler.debug.GraalError: Emitting code to load an object address is not currently supported on aarch64 -Dmitry On 03/23/2018 09:11 PM, Andrew Haley wrote: > How to build it: > > Check out jdk-hs. Apply > http://cr.openjdk.java.net/~aph/jaotc/jdk-hs-1/ to that checkout then > build OpenJDK images. > > Then > > $ git checkout https://github.com/theRealAph/graal.git > $ cd graal > $ git branch aarch64-branch-overflows > > MAKE SURE that JAVA_HOME is pointing at the jdk-hs you just built: > > $ export JAVA_HOME=/local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/ > > Follow the "Building Graal" instructions at > https://github.com/theRealAph/graal/tree/aarch64-branch-overflows/compiler > > My graal is in /local/graal/ and my jdk-hs is in /local/jdk-hs/. > To run jaotc, I do something like this: > > /local/jdk-hs/build/linux-aarch64-normal-server-release/images/jdk/bin/jaotc \ > -J--module-path=/local/graal/graal/sdk/mxbuild/modules/org.graalvm.graal_sdk.jar:/local/graal/graal/truffle/mxbuild/modules/com.oracle.truffle.truffle_api.jar \ > -J--upgrade-module-path=/local/graal/graal/compiler/mxbuild/modules/jdk.internal.vm.compiler.jar \ > myjar.jar --output myjar.so > > Note that the "-J" commands point jaotc at the version of Graal you've > just built rather than OpenJDK's built-in version of Graal. > > Enjoy. 
> From shade at redhat.com Thu Mar 29 15:12:18 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 17:12:18 +0200 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files In-Reply-To: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> Message-ID: <4b5b4f00-e23a-2b88-80ac-81966efb1540@redhat.com> On 03/29/2018 05:04 PM, coleen.phillimore at oracle.com wrote: > open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8200430 Looks good. -Aleksey From aph at redhat.com Thu Mar 29 15:36:05 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 29 Mar 2018 16:36:05 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> Message-ID: <5ae9dbad-a1c0-083d-3aa7-bcb918ceb8b5@redhat.com> On 03/29/2018 04:08 PM, Dmitry Chuyko wrote: > java.base can be compiled and used successfully in basic scenarios, I'll > try something more complicated. > > But here is some statistics about aot-compiled methods in modules I got: > > Non-tiered > > java.base: 13537 methods compiled, 37750 methods failed > > Tiered (less successful) > > java.base: 12 methods compiled, 51275 methods failed > jdk.compiler: 0 methods compiled, 12495 methods failed > jdk.internal.vm.ci: 0 methods compiled, 1792 methods failed > jdk.scripting.nashorn: 4 methods compiled, 11877 methods failed > > Vast majority of failed compilations is > > org.graalvm.compiler.graph.GraalGraphError: > org.graalvm.compiler.debug.GraalError: Emitting code to load a metaspace > address is not currently supported on aarch64 That's what you get if you don't pick up the external Graal build. > also there are many > > org.graalvm.compiler.graph.GraalGraphError: > org.graalvm.compiler.debug.GraalError: Emitting code to load an object > address is not currently supported on aarch64 I can't replicate that. I'm wondering if you're picking up the correct Graal build. I really need you to tell me *exactly* how you run your tests. Scripts, everything. Otherwise I waste a huge amount of time trying to guess what you're doing. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From leo.korinth at oracle.com Thu Mar 29 15:41:20 2018 From: leo.korinth at oracle.com (Leo Korinth) Date: Thu, 29 Mar 2018 17:41:20 +0200 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: <4c216bb1-a3bf-e916-d07a-643431faa341@oracle.com> References: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> <04b0987c-0de4-1e5e-52be-0c603c1fab10@oracle.com> <4c216bb1-a3bf-e916-d07a-643431faa341@oracle.com> Message-ID: <796998bd-78f5-1a2a-a38e-fc5da8f10b7a@oracle.com> On 23/03/18 20:03, Leo Korinth wrote: > Hi! > > Cross-posting this to both hotspot-dev and hotspot-runtime-dev to get > more attention. Sorry. > > Original mail conversation can be found here: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030637.html > > I need feedback to know how to continue. > > Thanks, > Leo Hi! Below is a new webrev with Thomas' and Per's suggested name change from: os::fopen_retain(const char* path, const char* mode) to: os::fopen(const char* path, const char* mode) Full webrev: http://cr.openjdk.java.net/~lkorinth/8176717/02/ Review and/or comments please! 
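For readers without access to the webrev: the general technique such an
fopen wrapper uses is to open the stream and then mark the underlying
descriptor close-on-exec, so the log file handle is not inherited by child
processes. The sketch below is illustrative only (POSIX-style, hypothetical
name); it is not the code in the webrev, which also has to deal with other
platforms:

  #include <stdio.h>
  #include <fcntl.h>

  // Hypothetical sketch of an fopen wrapper, not the actual patch.
  static FILE* fopen_cloexec(const char* path, const char* mode) {
    FILE* file = fopen(path, mode);
    if (file != NULL) {
      int fd = fileno(file);
      // Set FD_CLOEXEC so the descriptor is closed across exec() in children.
      fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
    }
    return file;
  }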
Thanks, Leo From coleen.phillimore at oracle.com Thu Mar 29 15:42:56 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Mar 2018 11:42:56 -0400 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files In-Reply-To: <4b5b4f00-e23a-2b88-80ac-81966efb1540@redhat.com> References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> <4b5b4f00-e23a-2b88-80ac-81966efb1540@redhat.com> Message-ID: Thanks Aleksey.? I'll wait to see if anyone objects and push as a trivial change.? It's been ok'ed offlist. Coleen On 3/29/18 11:12 AM, Aleksey Shipilev wrote: > On 03/29/2018 05:04 PM, coleen.phillimore at oracle.com wrote: >> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8200430 > Looks good. > > -Aleksey > > From aph at redhat.com Thu Mar 29 15:46:30 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 29 Mar 2018 16:46:30 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <53daf9a7-5108-06ce-bb05-6b10612c48d8@bell-sw.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> <53daf9a7-5108-06ce-bb05-6b10612c48d8@bell-sw.com> Message-ID: There's a problem with assertions being fired which is apparently due to a recent Graal bug. Try running without assertions: diff -r ee513596f3ee test/hotspot/jtreg/compiler/aot/AotCompiler.java --- a/test/hotspot/jtreg/compiler/aot/AotCompiler.java Tue Jan 30 16:41:40 2018 +0100 +++ b/test/hotspot/jtreg/compiler/aot/AotCompiler.java Thu Mar 29 16:37:21 2018 +0100 @@ -114,8 +114,8 @@ args.add(linker); } // Execute with asserts - args.add("-J-ea"); - args.add("-J-esa"); + // args.add("-J-ea"); + // args.add("-J-esa"); return launchJaotc(args, extraopts); } With that change, I get two failures. If you're seeing a lot more failures than that, please look t your test configuration. 
Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2NativeTest.java Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2InterpretedTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2InterpretedTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2NativeTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2NativeTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2InterpretedTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2InterpretedTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2NativeTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2CompiledTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2InterpretedTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2NativeTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromNative/NativeInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromNative/NativeInvokeSpecial2AotTest.java Passed: compiler/aot/cli/jaotc/ClasspathOptionUnknownClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassTest.java Passed: compiler/aot/calls/fromNative/NativeInvokeVirtual2AotTest.java Passed: compiler/aot/cli/jaotc/CompileClassWithDebugTest.java Passed: compiler/aot/cli/jaotc/CompileDirectoryTest.java Passed: compiler/aot/cli/jaotc/CompileModuleTest.java Passed: compiler/aot/cli/jaotc/CompileJarTest.java Passed: compiler/aot/cli/jaotc/ListOptionNotExistingTest.java Passed: compiler/aot/cli/jaotc/ListOptionTest.java Passed: compiler/aot/cli/jaotc/ListOptionWrongFileTest.java Passed: compiler/aot/cli/DisabledAOTWithLibraryTest.java Passed: compiler/aot/cli/IncorrectAOTLibraryTest.java Passed: compiler/aot/cli/NonExistingAOTLibraryTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/directory/DirectorySourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/jar/JarSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/module/ModuleSourceProviderTest.java Passed: compiler/aot/cli/SingleAOTLibraryTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSearchTest.java Passed: 
compiler/aot/cli/MultipleAOTLibraryTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSourceTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/SearchPathTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/NativeOrderOutputStreamTest.java Passed: compiler/aot/verification/vmflags/NotTrackedFlagTest.java Passed: compiler/aot/cli/SingleAOTOptionTest.java Passed: compiler/aot/verification/vmflags/TrackedFlagTest.java Passed: compiler/aot/verification/ClassAndLibraryNotMatchTest.java Passed: compiler/aot/SharedUsageTest.java TEST RESULT: Failed. Execution failed: `main' threw exception: java.lang.RuntimeException: Method is unexpectedly compiled after deoptimization: expected false, was true -------------------------------------------------- Passed: compiler/aot/RecompilationTest.java Test results: passed: 60; failed: 1 Report written to /local/jdk-hs/build/linux-aarch64-normal-server-release/test-results/jtreg_test_hotspot_jtreg_compiler_aot/html/report.html Results written to /local/jdk-hs/build/linux-aarch64-normal-server-release/test-support/jtreg_test_hotspot_jtreg_compiler_aot Error: Some tests failed or other problems occurred. Finished running test 'jtreg:test/hotspot/jtreg/compiler/aot' Test report is stored in build/linux-aarch64-normal-server-release/test-results/jtreg_test_hotspot_jtreg_compiler_aot ============================== Test summary ============================== TEST TOTAL PASS FAIL ERROR >> jtreg:test/hotspot/jtreg/compiler/aot 61 60 1 0 << ============================== TEST FAILURE * jtreg:test/hotspot/jtreg/compiler/jvmci Running test 'jtreg:test/hotspot/jtreg/compiler/jvmci' Passed: compiler/jvmci/compilerToVM/AsResolvedJavaMethodTest.java Passed: compiler/jvmci/compilerToVM/CollectCountersTest.java Passed: compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java -------------------------------------------------- ACTION: main -- Failed. 
Execution failed: `main' threw exception: java.lang.RuntimeException: CompileCodeTestCase{executable=public default int compiler.jvmci.compilerToVM.CompileCodeTestCase$Interface.defaultMethod(java.lang.Object), bci=-1} : 2nd invocation returned different value: expected -------------------------------------------------- Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Passed: compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/FindUniqueConcreteMethodTest.java Passed: compiler/jvmci/compilerToVM/DebugOutputTest.java Passed: compiler/jvmci/compilerToVM/GetBytecodeTest.java Passed: compiler/jvmci/compilerToVM/GetClassInitializerTest.java Passed: compiler/jvmci/compilerToVM/GetConstantPoolTest.java Passed: compiler/jvmci/compilerToVM/GetExceptionTableTest.java Passed: compiler/jvmci/compilerToVM/GetImplementorTest.java Passed: compiler/jvmci/compilerToVM/GetLineNumberTableTest.java Passed: compiler/jvmci/compilerToVM/GetFlagValueTest.java Passed: compiler/jvmci/compilerToVM/GetMaxCallTargetOffsetTest.java Passed: compiler/jvmci/compilerToVM/GetLocalVariableTableTest.java Passed: compiler/jvmci/compilerToVM/GetNextStackFrameTest.java Passed: compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java Passed: compiler/jvmci/compilerToVM/GetStackTraceElementTest.java Passed: compiler/jvmci/compilerToVM/GetVtableIndexForInterfaceTest.java Passed: compiler/jvmci/compilerToVM/GetSymbolTest.java Passed: compiler/jvmci/compilerToVM/HasFinalizableSubclassTest.java Passed: compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java Passed: compiler/jvmci/compilerToVM/IsCompilableTest.java Passed: compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java Passed: compiler/jvmci/compilerToVM/IsMatureTest.java Passed: compiler/jvmci/compilerToVM/JVM_RegisterJVMCINatives.java Passed: compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java Passed: compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupTypeTest.java Passed: compiler/jvmci/compilerToVM/ReadConfigurationTest.java Passed: compiler/jvmci/compilerToVM/MethodIsIgnoredBySecurityStackWalkTest.java Passed: compiler/jvmci/compilerToVM/ResolveConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveMethodTest.java Passed: compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ReprofileTest.java Passed: compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java Passed: compiler/jvmci/compilerToVM/ShouldDebugNonSafepointsTest.java Passed: compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java Passed: compiler/jvmci/errors/TestInvalidCompilationResult.java Passed: compiler/jvmci/errors/TestInvalidDebugInfo.java Passed: compiler/jvmci/errors/TestInvalidOopMap.java Passed: compiler/jvmci/events/JvmciNotifyBootstrapFinishedEventTest.java Passed: compiler/jvmci/events/JvmciShutdownEventTest.java Passed: compiler/jvmci/events/JvmciNotifyInstallEventTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/HotSpotConstantReflectionProviderTest.java Passed: 
compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MethodHandleAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveConcreteMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ConstantTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/RedefineClassTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestConstantReflectionProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaType.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestMetaAccessProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java Passed: compiler/jvmci/meta/StableFieldTest.java Passed: compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java Passed: compiler/jvmci/TestJVMCIPrintProperties.java Passed: compiler/jvmci/JVM_GetJVMCIRuntimeTest.java Passed: compiler/jvmci/TestValidateModules.java Passed: compiler/jvmci/SecurityRestrictionsTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Test results: passed: 72; failed: 1 Report written to /local/jdk-hs/build/linux-aarch64-normal-server-release/test-results/jtreg_test_hotspot_jtreg_compiler_jvmci/html/report.html Results written to /local/jdk-hs/build/linux-aarch64-normal-server-release/test-support/jtreg_test_hotspot_jtreg_compiler_jvmci Error: Some tests failed or other problems occurred. Finished running test 'jtreg:test/hotspot/jtreg/compiler/jvmci' Test report is stored in build/linux-aarch64-normal-server-release/test-results/jtreg_test_hotspot_jtreg_compiler_jvmci ============================== Test summary ============================== TEST TOTAL PASS FAIL ERROR >> jtreg:test/hotspot/jtreg/compiler/jvmci 73 72 1 0 << ============================== TEST FAILURE From aph at redhat.com Thu Mar 29 16:18:29 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 29 Mar 2018 17:18:29 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> <53daf9a7-5108-06ce-bb05-6b10612c48d8@bell-sw.com> Message-ID: On 03/29/2018 04:46 PM, Andrew Haley wrote: > -------------------------------------------------- > ACTION: main -- Failed. Execution failed: `main' threw exception: java.lang.RuntimeException: CompileCodeTestCase{executable=public default int compiler.jvmci.compilerToVM.CompileCodeTestCase$Interface.defaultMethod(java.lang.Object), bci=-1} : 2nd invocation returned different value: expected This failing JVMCI test is because the first time that the Disassembler runs it outputs the string "[Disassembling for mach='aarch64']\n" I don't know why the disassembler produces that string, but it certainly isn't a bug in JVMCI. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From vladimir.kozlov at oracle.com Thu Mar 29 16:09:05 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 29 Mar 2018 09:09:05 -0700 Subject: [11]RFR(XS) 8200391: clean up test/hotspot/jtreg/ProblemList.txt (compiler related) In-Reply-To: References: Message-ID: <131baf86-2769-92e7-055d-fa937a440b2a@oracle.com> Sent with wrong Subject before. Thanks, Vladimir On 3/28/18 5:35 PM, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8200391/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8200391 > > I removed tests from ProblemList-graal.txt for 8195632 and 8196626 bugs > which were fixed. > I changed 8197446which was closed as duplicate of 8181753. > > I verified that tests listed in 8196626 and 8195632 are passed now. > From shade at redhat.com Thu Mar 29 16:21:54 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 18:21:54 +0200 Subject: RFR/RFC: Non-PCH x86_32 build failure: err_msg is not defined Message-ID: Bug: https://bugs.openjdk.java.net/browse/JDK-8200438 Obvious fix: diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 17:15:26 2018 +0200 +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 18:17:58 2018 +0200 @@ -41,6 +41,7 @@ #include "runtime/sharedRuntime.hpp" #include "runtime/vframeArray.hpp" #include "utilities/align.hpp" +#include "utilities/formatBuffer.hpp" #include "vm_version_x86.hpp" #include "vmreg_x86.inline.hpp" #ifdef COMPILER1 The non-obvious part (and thus, "RFC") is why x86_64 build works fine in the same config. I don't have the answer for that. Testing: x86_32 build Thanks, -Aleksey From mikhailo.seledtsov at oracle.com Thu Mar 29 16:27:20 2018 From: mikhailo.seledtsov at oracle.com (mikhailo) Date: Thu, 29 Mar 2018 09:27:20 -0700 Subject: [11]RFR(XS) 8200391: clean up test/hotspot/jtreg/ProblemList.txt (compiler related) In-Reply-To: <131baf86-2769-92e7-055d-fa937a440b2a@oracle.com> References: <131baf86-2769-92e7-055d-fa937a440b2a@oracle.com> Message-ID: <5d1a15d7-de8c-2724-c0bf-8f1679a4b01c@oracle.com> Looks good, Misha On 03/29/2018 09:09 AM, Vladimir Kozlov wrote: > Sent with wrong Subject before. > > Thanks, > Vladimir > > On 3/28/18 5:35 PM, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8200391/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8200391 >> >> I removed tests from ProblemList-graal.txt for 8195632 and 8196626 >> bugs which were fixed. >> I changed 8197446which was closed as duplicate of 8181753. >> >> I verified that tests listed in 8196626 and 8195632 are passed now. >> From aph at redhat.com Thu Mar 29 16:28:47 2018 From: aph at redhat.com (Andrew Haley) Date: Thu, 29 Mar 2018 17:28:47 +0100 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> Message-ID: On 03/29/2018 03:48 PM, stewartd.qdt wrote: > Could I get another review and a sponsor for this? You don't need another reviewer for this. It can be pushed. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From shade at redhat.com Thu Mar 29 16:30:11 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 18:30:11 +0200 Subject: RFR/RFC 8200438: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: References: Message-ID: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> (correct subject, referencing bug id) Maybe one of the reasons is that x86_32 is the cross-compiled build, but x86_64 is native, and this is why x86_32 fails, when x86_64 is not. -Aleksey On 03/29/2018 06:21 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200438 > > Obvious fix: > > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 17:15:26 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 18:17:58 2018 +0200 > @@ -41,6 +41,7 @@ > #include "runtime/sharedRuntime.hpp" > #include "runtime/vframeArray.hpp" > #include "utilities/align.hpp" > +#include "utilities/formatBuffer.hpp" > #include "vm_version_x86.hpp" > #include "vmreg_x86.inline.hpp" > #ifdef COMPILER1 > > > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in the same config. I don't > have the answer for that. > > Testing: x86_32 build > > Thanks, > -Aleksey > From thomas.stuefe at gmail.com Thu Mar 29 16:33:59 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 29 Mar 2018 18:33:59 +0200 Subject: RFR/RFC 8200438: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> References: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> Message-ID: I am more confused that your 32bit build pulls sharedRuntime_x86_64.cpp .. Thomas On Thu, Mar 29, 2018 at 6:30 PM, Aleksey Shipilev wrote: > (correct subject, referencing bug id) > > Maybe one of the reasons is that x86_32 is the cross-compiled build, but > x86_64 is native, and this > is why x86_32 fails, when x86_64 is not. > > -Aleksey > > On 03/29/2018 06:21 PM, Aleksey Shipilev wrote: > > Bug: > > https://bugs.openjdk.java.net/browse/JDK-8200438 > > > > Obvious fix: > > > > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 17:15:26 2018 +0200 > > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 18:17:58 2018 +0200 > > @@ -41,6 +41,7 @@ > > #include "runtime/sharedRuntime.hpp" > > #include "runtime/vframeArray.hpp" > > #include "utilities/align.hpp" > > +#include "utilities/formatBuffer.hpp" > > #include "vm_version_x86.hpp" > > #include "vmreg_x86.inline.hpp" > > #ifdef COMPILER1 > > > > > > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in > the same config. I don't > > have the answer for that. > > > > Testing: x86_32 build > > > > Thanks, > > -Aleksey > > > > > From coleen.phillimore at oracle.com Thu Mar 29 16:37:38 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Mar 2018 12:37:38 -0400 Subject: RFR (M) 8198313: Wrap holder object for ClassLoaderData in a WeakHandle In-Reply-To: <9C48FEF4-59DD-4415-AF18-B95ADBDFACB4@oracle.com> References: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> <9C48FEF4-59DD-4415-AF18-B95ADBDFACB4@oracle.com> Message-ID: Hi Kim, Thank you for reviewing this. 
On 3/28/18 5:12 PM, Kim Barrett wrote: >> On Mar 26, 2018, at 1:26 PM, coleen.phillimore at oracle.com wrote: >> >> Summary: Use WeakHandle for ClassLoaderData::_holder so that is_alive closure is not needed >> >> The purpose of WeakHandle is to encapsulate weak oops within the runtime code in the vm. The class was initially written by StefanK. The weak handles are pointers to OopStorage. This code is a basis for future work to move direct pointers to the heap (oops) from runtime structures like the StringTable, into pointers into an area that the GC efficiently manages, in parallel and/or concurrently. >> >> Tested with mach5 tier 1-5. Performance tested with internal dev-submit performance tests, and locally. >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8198313.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8198313 >> >> Thanks, >> Coleen > ------------------------------------------------------------------------------ > src/hotspot/share/oops/weakHandle.cpp > > 59 void WeakHandle::release() const { > 60 Universe::vm_weak_oop_storage()->release(_obj); > > Is WeakHandle::release ever called with a handle that has not been > cleared by GC? The only caller I found is ~ClassLoaderData. Do we > ever construct a CLD with filled-in holder, and then decide we don't > want the CLD after all? I'm thinking of something like an error > during class loading or the like, but without much knowledge. We call WeakHandle::release in ~ClassLoaderData only.? The oop is always null.?? There's a race when adding the ClassLoaderData to the class_loader oop.? If we win this race, we create the WeakHandle. See lines 982-5 of this change. I wanted to avoid creating the WeakHandle and destroying it if we lose this race, so I did not create it in the ClassLoaderData constructor. I have a follow-on change that moves it there however. > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/classLoaderData.cpp > 59 #include "gc/shared/oopStorage.hpp" > > Why is this include needed? Maybe I missed something, but it looks > like all the OopStorage usage is wrapped up in WeakHandle. It is not needed, removed. > > ------------------------------------------------------------------------------ > src/hotspot/share/oops/instanceKlass.cpp > 1903 void InstanceKlass::clean_implementors_list(BoolObjectClosure* is_alive) { > 1904 assert(class_loader_data()->is_alive(), "this klass should be live"); > ... > 1909 if (!impl->is_loader_alive(is_alive)) { > > I'm kind of surprised we still need the is_alive closure here. But > there are no changes in is_loader_alive. I think I'm not > understanding something. We do not need the is_alive closure here.?? I have a follow on change in my patch queue that removes these. > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/classLoaderData.hpp > 224 oop _class_loader; // oop used to uniquely identify a class loader > 225 // class loader or a canonical class path > > [Not part of the change, but adjacent to one, so it caught my eye.] > > "class loader \n class loader" in the comment looks redundant? That was left over from the early days.? Rewrote as below.? I have a change to remove this later too. ? oop _class_loader;????????? // The instance of java/lang/ClassLoader associated with ??????????????????????????????????????? 
// this ClassLoaderData > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/classLoaderData.cpp > 516 assert(_holder.peek() == NULL, "never replace holders"); > > I think peek is the wrong test here. Shouldn't it be _holder.is_null()? > If not, e.g. if !_holder.is_null() can be true here, then I think > that would be a leak when _holder is (re)assigned. > > Of course, this goes back to my earlier private comment that I find > WeakHandle::is_null() a bit confusing, because I keep thinking it's > about the value of *_handle._obj rather than _handle._obj. is_null() is for a holder that hasn't been set yet or is zero as for the_null_class_loader_data(). This case should definitely be is_null(). > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/classLoaderData.cpp > 632 bool alive = keep_alive() // null class loader and incomplete anonymous klasses. > 633 || _holder.is_null() > 634 || (_holder.peek() != NULL); // not cleaned by weak reference processing > > I was initially guessing that _holder.is_null() was for null class > loader and/or anonymous classes, but that's covered by the preceeding > keep_alive(). So I don't know why a null holder => alive. Yes, I believe it is redundant.?? Or did I add it for a special case that I can't remember.? I'll verify. Thanks, Coleen > > ------------------------------------------------------------------------------ > From shade at redhat.com Thu Mar 29 16:37:57 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 18:37:57 +0200 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> Message-ID: <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com> On 03/29/2018 06:28 PM, Andrew Haley wrote: > On 03/29/2018 03:48 PM, stewartd.qdt wrote: >> Could I get another review and a sponsor for this? > > You don't need another reviewer for this. It can be pushed. 
I can sponsor this, note the Contributed-by line: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag Reviewed-by: kvn, aph, shade Contributed-by: Daniel Stewart diff -r 5a757c0326c7 -r 940ab7917a49 src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java --- a/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java Thu Mar 29 17:15:26 2018 +0200 +++ b/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java Thu Mar 29 18:35:13 2018 +0200 @@ -171,6 +171,8 @@ SHA1, SHA2, CRC32, + LSE, + STXR_PREFETCH, A53MAC, DMB_ATOMICS } @@ -183,7 +185,11 @@ public enum Flag { UseBarriersForVolatile, UseCRC32, - UseNeon + UseNeon, + UseSIMDForMemoryOps, + AvoidUnalignedAccesses, + UseLSE, + UseBlockZeroing } private final EnumSet flags; Thanks, -Aleksey From vladimir.kozlov at oracle.com Thu Mar 29 16:41:49 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 29 Mar 2018 09:41:49 -0700 Subject: [11]RFR(XS) 8200391: clean up test/hotspot/jtreg/ProblemList.txt (compiler related) In-Reply-To: <5d1a15d7-de8c-2724-c0bf-8f1679a4b01c@oracle.com> References: <131baf86-2769-92e7-055d-fa937a440b2a@oracle.com> <5d1a15d7-de8c-2724-c0bf-8f1679a4b01c@oracle.com> Message-ID: Thank you, Misha Vladimir K On 3/29/18 9:27 AM, mikhailo wrote: > Looks good, > > Misha > > > On 03/29/2018 09:09 AM, Vladimir Kozlov wrote: >> Sent with wrong Subject before. >> >> Thanks, >> Vladimir >> >> On 3/28/18 5:35 PM, Vladimir Kozlov wrote: >>> http://cr.openjdk.java.net/~kvn/8200391/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8200391 >>> >>> I removed tests from ProblemList-graal.txt for 8195632 and 8196626 >>> bugs which were fixed. >>> I changed 8197446which was closed as duplicate of 8181753. >>> >>> I verified that tests listed in 8196626 and 8195632 are passed now. >>> > From kim.barrett at oracle.com Thu Mar 29 16:44:31 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 29 Mar 2018 12:44:31 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: > On Mar 29, 2018, at 9:54 AM, Erik ?sterlund wrote: > > Hi Kim, > > On 2018-03-29 01:35, Kim Barrett wrote: >>> On Mar 28, 2018, at 9:40 AM, Erik ?sterlund wrote: >> Even those (temporary) costs could be mitigated if we weren't forced >> to use the overly generic IN_CONCURRENT_ROOT decorator, and could >> instead provide more precise information to the GC-specific backends >> (e.g. something like IN_JNI_GLOBAL_ROOT), letting each GC defer its >> extra barrier work until the changes to get the pause-time benefits >> are being made. > > Sure. But I'd like to avoid overly specific decorators that describe the exact root being accessed, rather than its semantic properties, unless there are very compelling reasons to do so. I understand that. The problems with that are (1) (I think) the semantics are pretty non-obvious to someone not a GC expert, while I thought part of the point of the GC interface (including Access) is to help non-GC folks to write ?runtime? code that will correctly interact with the GC, and (2) whether those semantics apply may vary even between GCs that do (some) work concurrently. Maybe we won?t have any really interesting non-temporary cases of the latter? Like I expect this one to be temporary. 
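To make the distinction concrete, a minimal sketch (handle_ptr is just a placeholder name here, and IN_JNI_GLOBAL_ROOT is the hypothetical precise decorator mentioned above, not one that exists in the Access API today; only the IN_CONCURRENT_ROOT form appears in the current code):

  // today: generic concurrent-root decorator, so e.g. G1 must emit the SATB pre-barrier
  // on the store that clears a released JNI global handle
  RootAccess<IN_CONCURRENT_ROOT>::oop_store(handle_ptr, oop(NULL));

  // hypothetical: a more precise decorator would let each GC backend decide whether
  // any extra barrier work is needed for this particular root set
  RootAccess<IN_JNI_GLOBAL_ROOT>::oop_store(handle_ptr, oop(NULL));
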
And maybe there will be very few uses of this decorator, and we?ve already identified (nearly) all of them? >> I'd forgotten about jniFastGetField. [,,,] Any idea how important an optimization >> jniFastGetField might be? How bad would it be to turn it off for ZGC? > > I have no idea what the cost would be of turning it off. But I feel more compelled to fix it so that we do not have to turn it off. Should not be impossible, but can be done outside of this RFE. Fixing it should be better. I was just wondering if we have any idea how important these optimizations really are. On the surface they look like they could be fairly important, but... >> And there are two other pre-existing uses of IN_CONCURRENT_ROOT, both >> of which seem suspicious to me. >> >> [?] > > Yes, that sounds like it should be fixed in separate RFEs. > > Your JNI handle changes look good. > > Thanks, > /Erik Thanks. I?ll file those additional RFEs (though maybe there is already one for accessorizing OopHandle). And I will try to get a review of 8199417 done today. From zgu at redhat.com Thu Mar 29 16:54:56 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 29 Mar 2018 12:54:56 -0400 Subject: RFR/RFC: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: References: Message-ID: <8595ffd8-7145-d203-9776-41d0a965d862@redhat.com> Looks good. -Zhengyu On 03/29/2018 12:21 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200438 > > Obvious fix: > > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 17:15:26 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 18:17:58 2018 +0200 > @@ -41,6 +41,7 @@ > #include "runtime/sharedRuntime.hpp" > #include "runtime/vframeArray.hpp" > #include "utilities/align.hpp" > +#include "utilities/formatBuffer.hpp" > #include "vm_version_x86.hpp" > #include "vmreg_x86.inline.hpp" > #ifdef COMPILER1 > > > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in the same config. I don't > have the answer for that. > > Testing: x86_32 build > > Thanks, > -Aleksey > From zgu at redhat.com Thu Mar 29 17:01:22 2018 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 29 Mar 2018 13:01:22 -0400 Subject: RFR/RFC: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: <8595ffd8-7145-d203-9776-41d0a965d862@redhat.com> References: <8595ffd8-7145-d203-9776-41d0a965d862@redhat.com> Message-ID: Ah, Thomas is right! Should not see x84_64 in 32 build. -Zhengyu On 03/29/2018 12:54 PM, Zhengyu Gu wrote: > Looks good. > > -Zhengyu > > On 03/29/2018 12:21 PM, Aleksey Shipilev wrote: >> Bug: >> ? https://bugs.openjdk.java.net/browse/JDK-8200438 >> >> Obvious fix: >> >> diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp >> --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp??? Thu Mar 29 >> 17:15:26 2018 +0200 >> +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp??? Thu Mar 29 >> 18:17:58 2018 +0200 >> @@ -41,6 +41,7 @@ >> ? #include "runtime/sharedRuntime.hpp" >> ? #include "runtime/vframeArray.hpp" >> ? #include "utilities/align.hpp" >> +#include "utilities/formatBuffer.hpp" >> ? #include "vm_version_x86.hpp" >> ? #include "vmreg_x86.inline.hpp" >> ? #ifdef COMPILER1 >> >> >> The non-obvious part (and thus, "RFC") is why x86_64 build works fine >> in the same config. I don't >> have the answer for that. 
>> >> Testing: x86_32 build >> >> Thanks, >> -Aleksey >> From shade at redhat.com Thu Mar 29 17:04:44 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 19:04:44 +0200 Subject: RFR/RFC 8200438: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: References: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> Message-ID: <563093fb-96d5-273c-637f-ac28bb961d27@redhat.com> Me too! Still trying to figure how did that happen. Probably a bug in cross-build? -Aleksey On 03/29/2018 06:33 PM, Thomas St?fe wrote: > I am more confused that your 32bit build pulls?sharedRuntime_x86_64.cpp .. > > Thomas > > On Thu, Mar 29, 2018 at 6:30 PM, Aleksey Shipilev > wrote: > > (correct subject, referencing bug id) > > Maybe one of the reasons is that x86_32 is the cross-compiled build, but x86_64 is native, and this > is why x86_32 fails, when x86_64 is not. > > -Aleksey > > On 03/29/2018 06:21 PM, Aleksey Shipilev wrote: > > Bug: > >? https://bugs.openjdk.java.net/browse/JDK-8200438 > > > > > Obvious fix: > > > > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp? ? Thu Mar 29 17:15:26 2018 +0200 > > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp? ? Thu Mar 29 18:17:58 2018 +0200 > > @@ -41,6 +41,7 @@ > >? #include "runtime/sharedRuntime.hpp" > >? #include "runtime/vframeArray.hpp" > >? #include "utilities/align.hpp" > > +#include "utilities/formatBuffer.hpp" > >? #include "vm_version_x86.hpp" > >? #include "vmreg_x86.inline.hpp" > >? #ifdef COMPILER1 > > > > > > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in the same config. I don't > > have the answer for that. > > > > Testing: x86_32 build > > > > Thanks, > > -Aleksey > > > > > From mark.reinhold at oracle.com Thu Mar 29 17:12:23 2018 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Thu, 29 Mar 2018 10:12:23 -0700 (PDT) Subject: JEP 331: Low-Overhead Heap Profiling Message-ID: <20180329171223.64A4C1973B3@eggemoggin.niobe.net> New JEP Candidate: http://openjdk.java.net/jeps/331 - Mark From dmitry.chuyko at bell-sw.com Thu Mar 29 17:27:54 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Thu, 29 Mar 2018 20:27:54 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <6f669937-6e59-b4e8-e77b-1fe727dfc9e5@bell-sw.com> <53daf9a7-5108-06ce-bb05-6b10612c48d8@bell-sw.com> Message-ID: <548af1e3-da00-1096-dfa9-1f56b2ba4b56@bell-sw.com> On 03/29/2018 07:18 PM, Andrew Haley wrote: > On 03/29/2018 04:46 PM, Andrew Haley wrote: >> -------------------------------------------------- >> ACTION: main -- Failed. Execution failed: `main' threw exception: java.lang.RuntimeException: CompileCodeTestCase{executable=public default int compiler.jvmci.compilerToVM.CompileCodeTestCase$Interface.defaultMethod(java.lang.Object), bci=-1} : 2nd invocation returned different value: expected > This failing JVMCI test is because the first time that the Disassembler > runs it outputs the string > > "[Disassembling for mach='aarch64']\n" > > I don't know why the disassembler produces that string, but it certainly > isn't a bug in JVMCI. > Looks more like a test bug. Just to mention, there are also some x86 specific tests that pretend being a jvmci compiler and thus they are excluded for other platforms. I realized that something went wrong when I checked out from Github and some of your fresh changes were lost. 
Thanks for giving a hint that the problem was with Graal modules. Things become better after re-taking Graal steps from the start. Updated jtreg results below. -Dmitry AOT. Test results: passed: 41; failed: 12; error: 8 JVMCI. Test results: passed: 73; failed: 2 ==== AOT AArch64 ==== FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeDynamic2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeDynamic2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeDynamic2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeInterface2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeInterface2InterpretedTest.java Error: compiler/aot/calls/fromAot/AotInvokeInterface2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeSpecial2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeSpecial2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeSpecial2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeStatic2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeStatic2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeStatic2NativeTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromAot/AotInvokeVirtual2CompiledTest.java FAILED: compiler/aot/calls/fromAot/AotInvokeVirtual2InterpretedTest.java Error:? compiler/aot/calls/fromAot/AotInvokeVirtual2NativeTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromCompiled/CompiledInvokeVirtual2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeDynamic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeInterface2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeSpecial2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeStatic2AotTest.java Passed: compiler/aot/calls/fromInterpreted/InterpretedInvokeVirtual2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeSpecial2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeStatic2AotTest.java Error: compiler/aot/calls/fromNative/NativeInvokeVirtual2AotTest.java Passed: compiler/aot/cli/jaotc/ClasspathOptionUnknownClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassTest.java Passed: compiler/aot/cli/jaotc/CompileClassWithDebugTest.java Passed: compiler/aot/cli/jaotc/CompileDirectoryTest.java Passed: compiler/aot/cli/jaotc/CompileJarTest.java Passed: compiler/aot/cli/jaotc/CompileModuleTest.java Passed: compiler/aot/cli/jaotc/ListOptionNotExistingTest.java Passed: compiler/aot/cli/jaotc/ListOptionTest.java Passed: compiler/aot/cli/jaotc/ListOptionWrongFileTest.java Passed: compiler/aot/cli/DisabledAOTWithLibraryTest.java Passed: compiler/aot/cli/IncorrectAOTLibraryTest.java Passed: compiler/aot/cli/MultipleAOTLibraryTest.java Passed: compiler/aot/cli/NonExistingAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTLibraryTest.java Passed: compiler/aot/cli/SingleAOTOptionTest.java Passed: 
compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/directory/DirectorySourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/jar/JarSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/module/ModuleSourceProviderTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSearchTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/ClassSourceTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/collect/SearchPathTest.java Passed: compiler/aot/jdk.tools.jaotc.test/src/jdk/tools/jaotc/test/NativeOrderOutputStreamTest.java Passed: compiler/aot/verification/vmflags/NotTrackedFlagTest.java Passed: compiler/aot/verification/vmflags/TrackedFlagTest.java Passed: compiler/aot/verification/ClassAndLibraryNotMatchTest.java FAILED: compiler/aot/DeoptimizationTest.java FAILED: compiler/aot/RecompilationTest.java Passed: compiler/aot/SharedUsageTest.java ==== JVMCI AArch64 ==== Passed: compiler/jvmci/compilerToVM/AllocateCompileIdTest.java Passed: compiler/jvmci/compilerToVM/AsResolvedJavaMethodTest.java Passed: compiler/jvmci/compilerToVM/CollectCountersTest.java Passed: compiler/jvmci/compilerToVM/DebugOutputTest.java Passed: compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java Passed: compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java Passed: compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/FindUniqueConcreteMethodTest.java Passed: compiler/jvmci/compilerToVM/GetBytecodeTest.java Passed: compiler/jvmci/compilerToVM/GetClassInitializerTest.java Passed: compiler/jvmci/compilerToVM/GetConstantPoolTest.java Passed: compiler/jvmci/compilerToVM/GetExceptionTableTest.java Passed: compiler/jvmci/compilerToVM/GetFlagValueTest.java Passed: compiler/jvmci/compilerToVM/GetImplementorTest.java Passed: compiler/jvmci/compilerToVM/GetLineNumberTableTest.java Passed: compiler/jvmci/compilerToVM/GetLocalVariableTableTest.java Passed: compiler/jvmci/compilerToVM/GetMaxCallTargetOffsetTest.java Passed: compiler/jvmci/compilerToVM/GetNextStackFrameTest.java Passed: compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java FAILED: compiler/jvmci/compilerToVM/GetResolvedJavaTypeTest.java Passed: compiler/jvmci/compilerToVM/GetStackTraceElementTest.java Passed: compiler/jvmci/compilerToVM/GetSymbolTest.java Passed: compiler/jvmci/compilerToVM/GetVtableIndexForInterfaceTest.java Passed: compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java Passed: compiler/jvmci/compilerToVM/HasFinalizableSubclassTest.java Passed: compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java FAILED: compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java Passed: compiler/jvmci/compilerToVM/IsCompilableTest.java Passed: compiler/jvmci/compilerToVM/IsMatureTest.java Passed: compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java Passed: compiler/jvmci/compilerToVM/JVM_RegisterJVMCINatives.java Passed: compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupNameInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java Passed: compiler/jvmci/compilerToVM/LookupTypeTest.java Passed: 
compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java Passed: compiler/jvmci/compilerToVM/MethodIsIgnoredBySecurityStackWalkTest.java Passed: compiler/jvmci/compilerToVM/ReadConfigurationTest.java Passed: compiler/jvmci/compilerToVM/ReprofileTest.java Passed: compiler/jvmci/compilerToVM/ResolveConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveMethodTest.java Passed: compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java Passed: compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java Passed: compiler/jvmci/compilerToVM/ShouldDebugNonSafepointsTest.java Passed: compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java Passed: compiler/jvmci/errors/TestInvalidCompilationResult.java Passed: compiler/jvmci/errors/TestInvalidDebugInfo.java Passed: compiler/jvmci/errors/TestInvalidOopMap.java Passed: compiler/jvmci/events/JvmciNotifyBootstrapFinishedEventTest.java Passed: compiler/jvmci/events/JvmciNotifyInstallEventTest.java Passed: compiler/jvmci/events/JvmciShutdownEventTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/HotSpotConstantReflectionProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MethodHandleAccessProviderTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ConstantTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/RedefineClassTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveConcreteMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/ResolvedJavaTypeResolveMethodTest.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestConstantReflectionProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestJavaType.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestMetaAccessProvider.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java Passed: compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Passed: compiler/jvmci/meta/StableFieldTest.java Passed: compiler/jvmci/JVM_GetJVMCIRuntimeTest.java Passed: compiler/jvmci/SecurityRestrictionsTest.java Passed: compiler/jvmci/TestJVMCIPrintProperties.java Passed: compiler/jvmci/TestValidateModules.java From shade at redhat.com Thu Mar 29 17:46:52 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 19:46:52 +0200 Subject: RFR/RFC 8200438: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: <563093fb-96d5-273c-637f-ac28bb961d27@redhat.com> References: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> <563093fb-96d5-273c-637f-ac28bb961d27@redhat.com> Message-ID: <05d16b26-e74d-b950-4994-30f19df3e89e@redhat.com> Submitted: https://bugs.openjdk.java.net/browse/JDK-8200441 -Aleksey On 03/29/2018 07:04 PM, Aleksey Shipilev wrote: > Me too! Still trying to figure how did that happen. Probably a bug in cross-build? 
> > -Aleksey > > On 03/29/2018 06:33 PM, Thomas St?fe wrote: >> I am more confused that your 32bit build pulls?sharedRuntime_x86_64.cpp .. >> >> Thomas >> >> On Thu, Mar 29, 2018 at 6:30 PM, Aleksey Shipilev > wrote: >> >> (correct subject, referencing bug id) >> >> Maybe one of the reasons is that x86_32 is the cross-compiled build, but x86_64 is native, and this >> is why x86_32 fails, when x86_64 is not. >> >> -Aleksey >> >> On 03/29/2018 06:21 PM, Aleksey Shipilev wrote: >> > Bug: >> >? https://bugs.openjdk.java.net/browse/JDK-8200438 >> >> > >> > Obvious fix: >> > >> > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp >> > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp? ? Thu Mar 29 17:15:26 2018 +0200 >> > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp? ? Thu Mar 29 18:17:58 2018 +0200 >> > @@ -41,6 +41,7 @@ >> >? #include "runtime/sharedRuntime.hpp" >> >? #include "runtime/vframeArray.hpp" >> >? #include "utilities/align.hpp" >> > +#include "utilities/formatBuffer.hpp" >> >? #include "vm_version_x86.hpp" >> >? #include "vmreg_x86.inline.hpp" >> >? #ifdef COMPILER1 >> > >> > >> > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in the same config. I don't >> > have the answer for that. >> > >> > Testing: x86_32 build >> > >> > Thanks, >> > -Aleksey >> > >> >> >> > > From dmitry.chuyko at bell-sw.com Thu Mar 29 17:49:02 2018 From: dmitry.chuyko at bell-sw.com (Dmitry Chuyko) Date: Thu, 29 Mar 2018 20:49:02 +0300 Subject: RFD: AOT for AArch64 In-Reply-To: <5ae9dbad-a1c0-083d-3aa7-bcb918ceb8b5@redhat.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <5ae9dbad-a1c0-083d-3aa7-bcb918ceb8b5@redhat.com> Message-ID: <3d21e36b-9cb1-0c29-2400-656b618da386@bell-sw.com> On 03/29/2018 06:36 PM, Andrew Haley wrote: > .................... > That's what you get if you don't pick up the external Graal build. Almost that. Some changesets were missing. > >> also there are many >> >> org.graalvm.compiler.graph.GraalGraphError: >> org.graalvm.compiler.debug.GraalError: Emitting code to load an object >> address is not currently supported on aarch64 > I can't replicate that. > > ................ It's gone now. I see almost all methods in java.base are aot'ed, excluding the same ones as for x86. 
Non-tiered .so is created but for --compile-for-tiered there's a recurring linkage error: Exception in thread "main" java.lang.InternalError: ava.base-coop2: In function `java.io.CharArrayWriter.toCharArray()[C':(.text+0x6e030c): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.io.CharArrayWriter.append(C)Ljava/io/CharArrayWriter;':(.text+0x6e09fc): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.io.CharArrayWriter.write(I)V':(.text+0x6e132c): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.io.CharArrayWriter.append(Ljava/lang/CharSequence;)Ljava/io/CharArrayWriter;':(.text+0x6e1d90): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.io.CharArrayWriter.write([CII)V':(.text+0x6e2d4c): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.io.CharArrayWriter.write(Ljava/lang/String;II)V':(.text+0x6e3da8): relocation truncated to fit: R_AARCH64_CALL26 against `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: In function `java.lang.invoke.VarHandleByteArrayAsInts$ByteBufferHandle.indexRO(Ljava/nio/ByteBuffer;I)I':(.text+0x38ac): relocation truncated to fit: R_AARCH64_CALL26 against `Stub'java.base-coop2: In function `java.lang.invoke.VarHandleByteArrayAsInts$ByteBufferHandle.indexRO(Ljava/nio/ByteBuffer;I)I':(.text+0x38c4): relocation truncated to fit: R_AARCH64_CALL26 against `Stub'java.base-coop2: In function `java.lang.invoke.VarHandleByteArrayAsInts$ByteBufferHandle.set(Ljava/lang/invoke/VarHandleByteArrayAsInts$ByteBufferHandle;Ljava/lang/Object;II)V':(.text+0x5d90): relocation truncated to fit: R_AARCH64_CALL26 against `Stub'java.base-coop2: In function `java.lang.invoke.VarHandleByteArrayAsInts$ByteBufferHandle.set(Ljava/lang/invoke/VarHandleByteArrayAsInts$ByteBufferHandle;Ljava/lang/Object;II)V':(.text+0x5dc4): relocation truncated to fit: R_AARCH64_CALL26 against `Stub'java.base-coop2: In function `java.lang.invoke.VarHandleByteArrayAsInts$ByteBufferHandle.setOpaque(Ljava/lang/invoke/VarHandleByteArrayAsInts$ByteBufferHandle;Ljava/lang/Object;II)V':(.text+0x74e8): additional relocation overflows omitted from the output ??? at jdk.aot/jdk.tools.jaotc.Linker.link(Linker.java:131) ??? at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:220) ??? at jdk.aot/jdk.tools.jaotc.Main.run(Main.java:101) ??? at jdk.aot/jdk.tools.jaotc.Main.main(Main.java:80) -Dmitry From thomas.stuefe at gmail.com Thu Mar 29 17:57:39 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 29 Mar 2018 19:57:39 +0200 Subject: RFR/RFC 8200438: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: <05d16b26-e74d-b950-4994-30f19df3e89e@redhat.com> References: <9ef41581-84fa-72a2-9ce1-ca4f1571b456@redhat.com> <563093fb-96d5-273c-637f-ac28bb961d27@redhat.com> <05d16b26-e74d-b950-4994-30f19df3e89e@redhat.com> Message-ID: I did not even know this was possible. I still keep a 32bit Ubuntu vm around just to build 32bit. 
Will try this sometime (well, when it is fixed :) ..Thomas On Thu, Mar 29, 2018 at 7:46 PM, Aleksey Shipilev wrote: > Submitted: > https://bugs.openjdk.java.net/browse/JDK-8200441 > > -Aleksey > > On 03/29/2018 07:04 PM, Aleksey Shipilev wrote: > > Me too! Still trying to figure how did that happen. Probably a bug in > cross-build? > > > > -Aleksey > > > > On 03/29/2018 06:33 PM, Thomas St?fe wrote: > >> I am more confused that your 32bit build pulls sharedRuntime_x86_64.cpp > .. > >> > >> Thomas > >> > >> On Thu, Mar 29, 2018 at 6:30 PM, Aleksey Shipilev > wrote: > >> > >> (correct subject, referencing bug id) > >> > >> Maybe one of the reasons is that x86_32 is the cross-compiled > build, but x86_64 is native, and this > >> is why x86_32 fails, when x86_64 is not. > >> > >> -Aleksey > >> > >> On 03/29/2018 06:21 PM, Aleksey Shipilev wrote: > >> > Bug: > >> > https://bugs.openjdk.java.net/browse/JDK-8200438 > >> > >> > > >> > Obvious fix: > >> > > >> > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > >> > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 17:15:26 2018 +0200 > >> > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 18:17:58 2018 +0200 > >> > @@ -41,6 +41,7 @@ > >> > #include "runtime/sharedRuntime.hpp" > >> > #include "runtime/vframeArray.hpp" > >> > #include "utilities/align.hpp" > >> > +#include "utilities/formatBuffer.hpp" > >> > #include "vm_version_x86.hpp" > >> > #include "vmreg_x86.inline.hpp" > >> > #ifdef COMPILER1 > >> > > >> > > >> > The non-obvious part (and thus, "RFC") is why x86_64 build works > fine in the same config. I don't > >> > have the answer for that. > >> > > >> > Testing: x86_32 build > >> > > >> > Thanks, > >> > -Aleksey > >> > > >> > >> > >> > > > > > > > From thomas.stuefe at gmail.com Thu Mar 29 18:02:10 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 29 Mar 2018 20:02:10 +0200 Subject: RFR/RFC: Non-PCH x86_32 build failure: err_msg is not defined In-Reply-To: References: Message-ID: The fix is of course ok, regardless of the x64 confusion. ..Thomas On Thu, Mar 29, 2018 at 6:21 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8200438 > > Obvious fix: > > diff -r 5a757c0326c7 src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp > --- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 17:15:26 2018 +0200 > +++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Thu Mar 29 > 18:17:58 2018 +0200 > @@ -41,6 +41,7 @@ > #include "runtime/sharedRuntime.hpp" > #include "runtime/vframeArray.hpp" > #include "utilities/align.hpp" > +#include "utilities/formatBuffer.hpp" > #include "vm_version_x86.hpp" > #include "vmreg_x86.inline.hpp" > #ifdef COMPILER1 > > > The non-obvious part (and thus, "RFC") is why x86_64 build works fine in > the same config. I don't > have the answer for that. > > Testing: x86_32 build > > Thanks, > -Aleksey > > From ioi.lam at oracle.com Thu Mar 29 18:16:12 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Thu, 29 Mar 2018 11:16:12 -0700 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> References: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> Message-ID: Hi Alan, Thanks for the reminder of the other files. I checked all the files in the open and closed repos with 'grep -r CheckEndorsedAndExtDirs'. I found only 3 man pages in the open repo that contained references to CheckEndorsedAndExtDirs. 
No test cases or any other files refer to this option. Here's an updated webrev that added the changes to the mage pages. It also removes the dead code in arguments.cpppointed out by Mandy and David. http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/ Thanks - Ioi On 3/28/18 12:11 AM, Alan Bateman wrote: > On 28/03/2018 00:46, Ioi Lam wrote: >> Hi please review this very small change: >> >> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/ >> >> https://bugs.openjdk.java.net/browse/JDK-8183238 >> >> The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and >> all uses of it have been removed from the test cases. > This looks good. I assumed you've checked that we don't have any tests > or man pages or other references to this option. > > -Alan From coleen.phillimore at oracle.com Thu Mar 29 18:18:57 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Mar 2018 14:18:57 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: Some comments in advance of reviewing this code. On 3/29/18 9:54 AM, Erik ?sterlund wrote: > Hi Kim, > > On 2018-03-29 01:35, Kim Barrett wrote: >>> On Mar 28, 2018, at 9:40 AM, Erik ?sterlund >>> wrote: >>> >>> Hi Kim, >>> >>> I noticed that jobjects are now IN_CONCURRENT_ROOT in this patch. I >>> wonder if this is the right time to upgrade them to >>> IN_CONCURRENT_ROOT. Until there is at least one GC that actually >>> scans these concurrently, this will only impose extra overheads >>> (unnecessary G1 SATB-enqueue barriers on the store required to >>> release jobjects) with no obvious gains. >>> >>> The platform specific code needs to go along with this. I have a >>> patch out to generalize interpreter code. In there, I am treating >>> resolve jobject as a normal strong root. That would probably need to >>> change. It is also troubling that jniFastGetField shoots raw loads >>> into (hopefully) the heap, dodging all GC barriers, hoping that is >>> okay. I wonder if starting to actually scan jobjects concurrently >>> would force us to disable that optimization completely to be >>> generally useful to all collectors. For example, an >>> IN_CONCURRENT_ROOT load access for ZGC might require a slowpath. But >>> in jniFastGetField, there is no frame, and hence any code that runs >>> in there must not call anything in the runtime. Therefore, with >>> IN_CONCURRENT_ROOT, it is not generally safe to use jniFastGetField, >>> without doing... something about that code. >>> >>> I would like to hear your thoughts about this. Perhaps the intention >>> is just to take incremental steps towards being able to scan >>> jobjects concurrently, and this is just the first step? Still, I >>> would be interested to hear about what you think about the next >>> steps. If we decide to go with IN_CONCURRENT_ROOT now already, then >>> I should change my interpreter changes that are out for review to do >>> the same so that we are consistent. >>> >>> Otherwise, this looks great, and I am glad we finally have jni >>> handles accessorized. >> With this change in place I think it should be straight-forward for G1 >> to do JNI global handle marking concurrently, rather than during a >> pause. >> >> This change does come with some costs. 
>> >> (1) For G1 (and presumably Shenandoah), a SATB enqueue barrier when >> setting a global handle's value to NULL as part of releasing the >> handle. >> >> (2) For other collectors, selection between the above barrier and >> do-nothing code. >> >> (3) For ZGC, a read barrier when resolving the value of a non-weak >> handle. >> >> (4) For other collectors (when ZGC is present), selection between the >> above barrier and do-nothing code. >> >> (1) and (2) are wasted costs until G1 is changed to do that marking >> concurrently.? But the cost is pretty small. >> >> I think (3) and (4) don't apply to the jdk repo yet.? And even in the >> zgc repo the impact should be small. >> >> All of these are costs that we expect to be taking eventually anyway. >> The real costs today are that we're not getting the pause-time benefit >> from these changes yet. > > Fair enough. > >> Even those (temporary) costs could be mitigated if we weren't forced >> to use the overly generic IN_CONCURRENT_ROOT decorator, and could >> instead provide more precise information to the GC-specific backends >> (e.g. something like IN_JNI_GLOBAL_ROOT), letting each GC defer its >> extra barrier work until the changes to get the pause-time benefits >> are being made. > > Sure. But I'd like to avoid overly specific decorators that describe > the exact root being accessed, rather than its semantic properties, > unless there are very compelling reasons to do so. > >> I'd forgotten about jniFastGetField.? This was discussed when Mikael >> and I were adding the jweak tag support.? At the time it was decided >> it was acceptable (though dirty) for G1 to not ensure the base object >> was kept alive when fetching a primitive field value from it. >> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-March/026231.html >> >> >> I suspect that choice was driven by the difficulties you noted, and >> knowing that something better to solve all our problems (Access!) was >> coming soon :) Unfortunately, that (among other things here) really >> doesn't work for ZGC, even though it seems okay for all the other >> collectors, at least for now.? Any idea how important an optimization >> jniFastGetField might be?? How bad would it be to turn it off for ZGC? > > I have no idea what the cost would be of turning it off. But I feel > more compelled to fix it so that we do not have to turn it off. Should > not be impossible, but can be done outside of this RFE. I think fixing this is not worth your time.? I tried to dig up the original 1pager but couldn't find it.?? The original bug does not elaborate on the performance merits of this change: https://bugs.openjdk.java.net/browse/JDK-4989773 So ...?? my memory is that this was a speed up for JNI because there were many client applications (think Java2Demo) that had a lot of code that used JNI extensively.?? It had a nice performance boost for that. I don't believe this is the use case for the applications that we support today and especially not for applications that will benefit from ZGC.? Please don't make work for yourself and disable this! > >> For the interpreter, I think you are referring to 8199417?? I hadn't >> looked at that before (I'll try to review it tomorrow).? Yes, I think >> those should be using IN_CONCURRENT_ROOT too, so that eventually ZGC >> can do JNI global marking concurrently. > > Yeah. > >> And there are two other pre-existing uses of IN_CONCURRENT_ROOT, both >> of which seem suspicious to me. >> >> - In ClassLoaderData::remove_handle() we have >> >> ???? 
// This root is not walked in safepoints, and hence requires an >> appropriate >> ???? // decorator that e.g. maintains the SATB invariant in SATB >> collectors. >> ???? RootAccess::oop_store(ptr, oop(NULL)); >> >> But there aren't any corresponding IN_CONCURRENT_ROOT loads, nor is >> the initializing store (CLD::ChunkedHandleList::add), which seems >> inconsistent.? (To be pedantic, the initializing store should probably >> be using the new RootAccess rather than a raw >> store.)? Oh, the load is OopHandle::resolve; and I think OopHandle is >> still pending accessorizing (and probably needs the access.inline.hpp >> cleanup...). >> >> - In InstanceKlass::klass_holder_phantom() we have >> >> ?? return RootAccess> ON_PHANTOM_OOP_REF>::oop_load(addr); >> >> My understanding of it is that IN_CONCURRENT_ROOT is not correct >> here.? I think this is similar to jweaks, where I only used >> ON_PHANTOM_OOP_REF. > > Yes, that sounds like it should be fixed in separate RFEs. Whether to use IN_CONCURRENT_ROOT vs. IN_ROOT vs. an undecorated RootAccess<> is still a dark art as far as I'm concerned. Couldn't we have an undecorated RootAccess<> and depending on the GC, fill in whether the root is CONCURRENT vs not? Thanks, Coleen > > Your JNI handle changes look good. > > Thanks, > /Erik From Alan.Bateman at oracle.com Thu Mar 29 18:20:59 2018 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Thu, 29 Mar 2018 19:20:59 +0100 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: References: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> Message-ID: <4410b61f-b567-97ec-297a-aa28050fc946@oracle.com> On 29/03/2018 19:16, Ioi Lam wrote: > Hi Alan, > > Thanks for the reminder of the other files. I checked all the files in > the open and closed repos with 'grep -r CheckEndorsedAndExtDirs'. I > found only 3 man pages in the open repo that contained references to > CheckEndorsedAndExtDirs. No test cases or any other files refer to > this option. > > Here's an updated webrev that added the changes to the mage pages. It > also removes the dead code in arguments.cpppointed out by Mandy and > David. > > http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/ > This looks good to me. -Alan From robin.westberg at oracle.com Thu Mar 29 18:38:32 2018 From: robin.westberg at oracle.com (Robin Westberg) Date: Thu, 29 Mar 2018 20:38:32 +0200 Subject: RFR: 8199619: Building HotSpot on Windows should define NOMINMAX In-Reply-To: References: <7545A83F-E296-40B1-9C15-358A999D1AAF@oracle.com> <446F6608-6FC0-4962-AAD3-CC8CF36F60F7@oracle.com> Message-ID: <6AC02BE6-42E6-465A-A3BF-B800DB28155B@oracle.com> Thanks Erik! Best regards, Robin > On 28 Mar 2018, at 17:47, Erik Joelsson wrote: > > I will sponsor the change. > > /Erik > > > On 2018-03-28 06:43, Robin Westberg wrote: >> Hi Kim, >> >>> On 26 Mar 2018, at 18:34, Kim Barrett wrote: >>> >>>> On Mar 26, 2018, at 11:01 AM, Robin Westberg wrote: >>>> >>>> Hi all, >>>> >>>> Please review this small change that defines the NOMINMAX macro when building HotSpot on Windows. >>>> >>>> Issue: https://bugs.openjdk.java.net/browse/JDK-8199619 >>>> Webrev: http://cr.openjdk.java.net/~rwestberg/8199619/webrev.00/ >>>> Testing: building with/without precompiled headers, hs-tier1 >>>> >>>> Best regards, >>>> Robin >>> Looks good. >> Thanks for reviewing! >> >>> This change will have a (easy to resolve) merge conflict with your fix for JDK-8199736, right? 
>> Indeed, the flag definitions should go on a single line I think. I?ll try to get this one in first and rebase 8199736 afterwards. >> >> So, if anyone would be willing to sponsor this change, here?s an updated webrev with a proper mercurial changeset (no other changes): >> http://cr.openjdk.java.net/~rwestberg/8199619/webrev.01/ >> >> Best regards, >> Robin >> > From ioi.lam at oracle.com Thu Mar 29 18:50:42 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Thu, 29 Mar 2018 11:50:42 -0700 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: <4410b61f-b567-97ec-297a-aa28050fc946@oracle.com> References: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> <4410b61f-b567-97ec-297a-aa28050fc946@oracle.com> Message-ID: Thanks Alan! - Ioi On 3/29/18 11:20 AM, Alan Bateman wrote: > On 29/03/2018 19:16, Ioi Lam wrote: >> Hi Alan, >> >> Thanks for the reminder of the other files. I checked all the files >> in the open and closed repos with 'grep -r CheckEndorsedAndExtDirs'. >> I found only 3 man pages in the open repo that contained references >> to CheckEndorsedAndExtDirs. No test cases or any other files refer to >> this option. >> >> Here's an updated webrev that added the changes to the mage pages. It >> also removes the dead code in arguments.cpppointed out by Mandy and >> David. >> >> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/ >> > This looks good to me. > > -Alan From gerard.ziemski at oracle.com Thu Mar 29 19:01:59 2018 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 29 Mar 2018 14:01:59 -0500 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class Message-ID: Hi all, Please review this large and tedious (sorry), but simple fix that accomplishes the following: #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp #2 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag #3 cleanup globals.hpp includes originally added by the JEP-245 Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? https://bugs.openjdk.java.net/browse/JDK-8081519 http://cr.openjdk.java.net/~gziemski/8081519_rev1 Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. cheers From vladimir.kozlov at oracle.com Thu Mar 29 19:17:34 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 29 Mar 2018 12:17:34 -0700 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: References: Message-ID: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> Should we use this opportunity to move all flags related files into separate directory src/hotspot/share/jvmflags ? Thanks, Vladimir On 3/29/18 12:01 PM, Gerard Ziemski wrote: > Hi all, > > Please review this large and tedious (sorry), but simple fix that accomplishes the following: > > #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp > #2 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag > #3 cleanup globals.hpp includes originally added by the JEP-245 > > Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? 
> > https://bugs.openjdk.java.net/browse/JDK-8081519 > http://cr.openjdk.java.net/~gziemski/8081519_rev1 > > Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. > > > cheers > From coleen.phillimore at oracle.com Thu Mar 29 19:49:20 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Thu, 29 Mar 2018 15:49:20 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: On 3/29/18 9:54 AM, Erik ?sterlund wrote: > Hi Kim, > > On 2018-03-29 01:35, Kim Barrett wrote: >>> On Mar 28, 2018, at 9:40 AM, Erik ?sterlund >>> wrote: >>> >>> Hi Kim, >>> >>> I noticed that jobjects are now IN_CONCURRENT_ROOT in this patch. I >>> wonder if this is the right time to upgrade them to >>> IN_CONCURRENT_ROOT. Until there is at least one GC that actually >>> scans these concurrently, this will only impose extra overheads >>> (unnecessary G1 SATB-enqueue barriers on the store required to >>> release jobjects) with no obvious gains. >>> >>> The platform specific code needs to go along with this. I have a >>> patch out to generalize interpreter code. In there, I am treating >>> resolve jobject as a normal strong root. That would probably need to >>> change. It is also troubling that jniFastGetField shoots raw loads >>> into (hopefully) the heap, dodging all GC barriers, hoping that is >>> okay. I wonder if starting to actually scan jobjects concurrently >>> would force us to disable that optimization completely to be >>> generally useful to all collectors. For example, an >>> IN_CONCURRENT_ROOT load access for ZGC might require a slowpath. But >>> in jniFastGetField, there is no frame, and hence any code that runs >>> in there must not call anything in the runtime. Therefore, with >>> IN_CONCURRENT_ROOT, it is not generally safe to use jniFastGetField, >>> without doing... something about that code. >>> >>> I would like to hear your thoughts about this. Perhaps the intention >>> is just to take incremental steps towards being able to scan >>> jobjects concurrently, and this is just the first step? Still, I >>> would be interested to hear about what you think about the next >>> steps. If we decide to go with IN_CONCURRENT_ROOT now already, then >>> I should change my interpreter changes that are out for review to do >>> the same so that we are consistent. >>> >>> Otherwise, this looks great, and I am glad we finally have jni >>> handles accessorized. >> With this change in place I think it should be straight-forward for G1 >> to do JNI global handle marking concurrently, rather than during a >> pause. >> >> This change does come with some costs. >> >> (1) For G1 (and presumably Shenandoah), a SATB enqueue barrier when >> setting a global handle's value to NULL as part of releasing the >> handle. >> >> (2) For other collectors, selection between the above barrier and >> do-nothing code. >> >> (3) For ZGC, a read barrier when resolving the value of a non-weak >> handle. >> >> (4) For other collectors (when ZGC is present), selection between the >> above barrier and do-nothing code. >> >> (1) and (2) are wasted costs until G1 is changed to do that marking >> concurrently.? But the cost is pretty small. >> >> I think (3) and (4) don't apply to the jdk repo yet.? And even in the >> zgc repo the impact should be small. >> >> All of these are costs that we expect to be taking eventually anyway. 
>> The real costs today are that we're not getting the pause-time benefit >> from these changes yet. > > Fair enough. > >> Even those (temporary) costs could be mitigated if we weren't forced >> to use the overly generic IN_CONCURRENT_ROOT decorator, and could >> instead provide more precise information to the GC-specific backends >> (e.g. something like IN_JNI_GLOBAL_ROOT), letting each GC defer its >> extra barrier work until the changes to get the pause-time benefits >> are being made. > > Sure. But I'd like to avoid overly specific decorators that describe > the exact root being accessed, rather than its semantic properties, > unless there are very compelling reasons to do so. Since the GC has code to process the roots as distinct things (roots on stack vs roots in ClassLoaderData vs roots in JNIHandles), maybe we do need to have a decorator to say which root it is.? Then the GC can figure out how it wants to process it: concurrently, as a strong root in a safepoint, etc. This sort of decoration would make a lot more sense to me because I still haven't figured out what the GC does differently for IN_CONCURRENT_ROOT vs IN_ROOT or what RootAccess<> with nothing means. Cutting and pasting in the runtime code isn't helping anymore! Also, I tried to review this code in jni, which is technically runtime code, and have a separate thread with Kim asking how he picked these decorators. We're trying to reduce the roots/oop references from the runtime so eventually there should be a limited set, and we can remove some as we do this. >> I'd forgotten about jniFastGetField.? This was discussed when Mikael >> and I were adding the jweak tag support.? At the time it was decided >> it was acceptable (though dirty) for G1 to not ensure the base object >> was kept alive when fetching a primitive field value from it. >> >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-March/026231.html >> >> >> I suspect that choice was driven by the difficulties you noted, and >> knowing that something better to solve all our problems (Access!) was >> coming soon :) Unfortunately, that (among other things here) really >> doesn't work for ZGC, even though it seems okay for all the other >> collectors, at least for now.? Any idea how important an optimization >> jniFastGetField might be?? How bad would it be to turn it off for ZGC? > > I have no idea what the cost would be of turning it off. But I feel > more compelled to fix it so that we do not have to turn it off. Should > not be impossible, but can be done outside of this RFE. > >> For the interpreter, I think you are referring to 8199417?? I hadn't >> looked at that before (I'll try to review it tomorrow).? Yes, I think >> those should be using IN_CONCURRENT_ROOT too, so that eventually ZGC >> can do JNI global marking concurrently. > > Yeah. > >> And there are two other pre-existing uses of IN_CONCURRENT_ROOT, both >> of which seem suspicious to me. >> >> - In ClassLoaderData::remove_handle() we have >> >> ???? // This root is not walked in safepoints, and hence requires an >> appropriate >> ???? // decorator that e.g. maintains the SATB invariant in SATB >> collectors. >> ???? RootAccess::oop_store(ptr, oop(NULL)); >> >> But there aren't any corresponding IN_CONCURRENT_ROOT loads, nor is >> the initializing store (CLD::ChunkedHandleList::add), which seems >> inconsistent.? (To be pedantic, the initializing store should probably >> be using the new RootAccess rather than a raw >> store.)? 
Oh, the load is OopHandle::resolve; and I think OopHandle is >> still pending accessorizing (and probably needs the access.inline.hpp >> cleanup...). >> >> - In InstanceKlass::klass_holder_phantom() we have >> >> ?? return RootAccess> ON_PHANTOM_OOP_REF>::oop_load(addr); >> >> My understanding of it is that IN_CONCURRENT_ROOT is not correct >> here.? I think this is similar to jweaks, where I only used >> ON_PHANTOM_OOP_REF. > Both of these were cut and pasted, ie, they were guesses.?? If they were IN_LOADER_DATA_ROOT (for both of these), IN_JNI_ROOT, or IN_THREAD_ROOT? you can figure out how to apply barriers to these things. You can fix these in a separate RFE.? Also, there's a store in oop* ClassLoaderData::ChunkedHandleList::add(oop o), which I think should be the AS_DEST_NOT_INITIALIZED case (had to cut/paste but this decoration makes sense from a documentation standpoint). Kim's change looks correct as far as I can tell.? IN_CONCURRENT_ROOT is equivalent to IN_ROOT right now, isn't it? Thanks, Coleen > Yes, that sounds like it should be fixed in separate RFEs. > > Your JNI handle changes look good. > > Thanks, > /Erik From stewartd.qdt at qualcommdatacenter.com Thu Mar 29 20:35:52 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Thu, 29 Mar 2018 20:35:52 +0000 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com> References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com> Message-ID: Thanks Aleksey! I am an author with username dstewart, so I don't think I need a Contributed-by line, just someone to push for me. http://cr.openjdk.java.net/~dstewart/8200251/webrev.02/ Daniel -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Aleksey Shipilev Sent: Thursday, March 29, 2018 12:38 PM To: Andrew Haley ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag On 03/29/2018 06:28 PM, Andrew Haley wrote: > On 03/29/2018 03:48 PM, stewartd.qdt wrote: >> Could I get another review and a sponsor for this? > > You don't need another reviewer for this. It can be pushed. 
I can sponsor this, note the Contributed-by line: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag Reviewed-by: kvn, aph, shade Contributed-by: Daniel Stewart diff -r 5a757c0326c7 -r 940ab7917a49 src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java --- a/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java Thu Mar 29 17:15:26 2018 +0200 +++ b/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.aarch64/src/jdk/vm/ci/aarch64/AArch64.java Thu Mar 29 18:35:13 2018 +0200 @@ -171,6 +171,8 @@ SHA1, SHA2, CRC32, + LSE, + STXR_PREFETCH, A53MAC, DMB_ATOMICS } @@ -183,7 +185,11 @@ public enum Flag { UseBarriersForVolatile, UseCRC32, - UseNeon + UseNeon, + UseSIMDForMemoryOps, + AvoidUnalignedAccesses, + UseLSE, + UseBlockZeroing } private final EnumSet flags; Thanks, -Aleksey From gerard.ziemski at oracle.com Thu Mar 29 20:37:41 2018 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 29 Mar 2018 15:37:41 -0500 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> Message-ID: <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> That?s a good idea. I?ll work on that and come back with webrev2 shortly... cheers > On Mar 29, 2018, at 2:17 PM, Vladimir Kozlov wrote: > > Should we use this opportunity to move all flags related files into separate directory src/hotspot/share/jvmflags ? > > Thanks, > Vladimir > > On 3/29/18 12:01 PM, Gerard Ziemski wrote: >> Hi all, >> Please review this large and tedious (sorry), but simple fix that accomplishes the following: >> #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp >> #2 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag >> #3 cleanup globals.hpp includes originally added by the JEP-245 >> Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? >> https://bugs.openjdk.java.net/browse/JDK-8081519 >> http://cr.openjdk.java.net/~gziemski/8081519_rev1 >> Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. >> cheers From shade at redhat.com Thu Mar 29 21:09:33 2018 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 29 Mar 2018 23:09:33 +0200 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com> Message-ID: On 03/29/2018 10:35 PM, stewartd.qdt wrote: > Thanks Aleksey! I am an author with username dstewart, so I don't think I need a Contributed-by line, just someone to push for me. 
> > http://cr.openjdk.java.net/~dstewart/8200251/webrev.02/ There you go: http://hg.openjdk.java.net/jdk/hs/rev/17c6ab93710e -Aleksey From kim.barrett at oracle.com Thu Mar 29 21:38:35 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 29 Mar 2018 17:38:35 -0400 Subject: RFR: 8195972: Refactor oops in JNI to use the Access API In-Reply-To: References: <8BC47508-8585-44EA-8D2B-22C2144E4AF5@oracle.com> <5ABB9B30.7080808@oracle.com> Message-ID: <909A6DC7-F68D-4825-8304-DC9C088817DB@oracle.com> > On Mar 29, 2018, at 3:49 PM, coleen.phillimore at oracle.com wrote: > Kim's change looks correct as far as I can tell. IN_CONCURRENT_ROOT is equivalent to IN_ROOT right now, isn't it? No, they are not equivalent. RootAccess::oop_store(?) must generate the SATB pre-barrier. RootAccess<>::oop_store(?) should not generate the SATB pre-barrier. RootAccess<...> is just Access (Currently RootAccess is a class, in C++11 it could be a template type alias.) I feel a bit unhappy that IN_CONCURRENT_ROOT and IN_ROOT are not mutually exclusive. If I recall correctly, IN_CONCURRENT_ROOT will actually default in IN_ROOT if the latter is not present, but I might be mis-remembering. (Or alternatively, that we don?t have a distinct name for non-concurrent-root.) And I have no idea what the relation between either of those and IN_ARCHIVE_ROOT might be; the latter has no documentation (see JDK-8198381). From vladimir.kozlov at oracle.com Thu Mar 29 21:45:32 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 29 Mar 2018 14:45:32 -0700 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Message-ID: Test still failed when run with -Xcomp: STDERR: jib > java.lang.Exception: Method public static compiler.types.TestMeetIncompatibleInterfaceArrays$I1 MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 but was compiled at 4 instead! It is for case compiler.types.TestMeetIncompatibleInterfaceArrays 0 I filed JDK-8200461. Vladimir On 3/29/18 5:54 AM, Volker Simonis wrote: > On Thu, Mar 29, 2018 at 2:17 PM, Tobias Hartmann > wrote: >> Hi Volker >> >> On 29.03.2018 10:19, Volker Simonis wrote: >>> I just saw that Daniel added the "hs-tier3" tag to the bug. What does >>> this mean? Are there other (different?) problems when running the >>> "hs-tier3" suite? >> >> It just means that the problem showed up in tier3 as well. >> >>> I'll wait with the push until I get the OK from you that all internal >>> tests have passed. >> >> I've executed the test on tier1-3, no failures. Please go ahead and push! >> > > Thanks, pushed. > >> Best regards, >> Tobias From david.holmes at oracle.com Thu Mar 29 23:22:09 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 30 Mar 2018 09:22:09 +1000 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files In-Reply-To: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> Message-ID: On 30/03/2018 1:04 AM, coleen.phillimore at oracle.com wrote: > open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8200430 I can see these were added somewhat incidentally as part of another fix, but I also recall a discussion on slack about it. Why do you want to remove them from .hgignore? 
FWIW I had these in my local .hgignore anyway so can always add them back there.

David

> Tested in local repository.
>
> Thanks,
> Coleen

From david.holmes at oracle.com  Thu Mar 29 23:24:54 2018
From: david.holmes at oracle.com (David Holmes)
Date: Fri, 30 Mar 2018 09:24:54 +1000
Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext
In-Reply-To:
References: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com>
Message-ID: <6f649638-827a-837c-04c1-62274d8535a4@oracle.com>

On 30/03/2018 4:16 AM, Ioi Lam wrote:
> Hi Alan,
>
> Thanks for the reminder of the other files. I checked all the files in
> the open and closed repos with 'grep -r CheckEndorsedAndExtDirs'. I
> found only 3 man pages in the open repo that contained references to
> CheckEndorsedAndExtDirs. No test cases or any other files refer to this
> option.
>
> Here's an updated webrev that added the changes to the man pages. It
> also removes the dead code in arguments.cpp pointed out by Mandy and David.
>
> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/

Seems okay, though I thought the man page sources were "dead" anyway. Surprised to still see them there.

Thanks,
David

> > Thanks
> - Ioi
>
>
>
> On 3/28/18 12:11 AM, Alan Bateman wrote:
>> On 28/03/2018 00:46, Ioi Lam wrote:
>>> Hi please review this very small change:
>>>
>>> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v01/
>>>
>>> https://bugs.openjdk.java.net/browse/JDK-8183238
>>>
>>> The CheckEndorsedAndExtDirs flag has been deprecated since JDK 10 and
>>> all uses of it have been removed from the test cases.
>> This looks good. I assumed you've checked that we don't have any tests
>> or man pages or other references to this option.
>>
>> -Alan
>

From vladimir.kozlov at oracle.com  Thu Mar 29 23:25:05 2018
From: vladimir.kozlov at oracle.com (Vladimir Kozlov)
Date: Thu, 29 Mar 2018 16:25:05 -0700
Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files
In-Reply-To:
References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com>
Message-ID:

On 3/29/18 4:22 PM, David Holmes wrote:
> On 30/03/2018 1:04 AM, coleen.phillimore at oracle.com wrote:
>> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev
>> bug link https://bugs.openjdk.java.net/browse/JDK-8200430
>
> I can see these were added somewhat incidentally as part of another fix,
> but I also recall a discussion on slack about it. Why do you want to
> remove them from .hgignore?

Yes, why? It is useful to have them there instead of having a copy of .hgignore on each machine I run.

Vladimir

>
> FWIW I had these in my local .hgignore anyway so can always add them
> back there.
>
> David
>
>> Tested in local repository.
>> I found only 3 man pages in the open repo that contained references >> to CheckEndorsedAndExtDirs. No test cases or any other files refer to >> this option. >> >> Here's an updated webrev that added the changes to the mage pages. It >> also removes the dead code in arguments.cpppointed out by Mandy and >> David. >> >> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/ > > Looks fine. Can you file a docs issue to update the java tool reference guide [1] to take out the option? > Seems okay, though I thought the man page sources were "dead" anyway. > Surprised to still see them there. > These out-of-date man pages are for OpenJDK build and they were generated by the docs team and dropped in the repo. Jon and I have discussed to include the man pages in our source in markdown and the build can generate the man pages in proper formats and include in the docs bundle. ? Developers will update them for any change to CLI options and the man pages will always be up-to-date.? I think that's a good improvement and hope to happen some time. Mandy [1] https://docs.oracle.com/javase/10/tools/java.htm#JSWOR624 From ioi.lam at oracle.com Fri Mar 30 03:11:28 2018 From: ioi.lam at oracle.com (Ioi Lam) Date: Thu, 29 Mar 2018 20:11:28 -0700 Subject: RFR (XXS) 8183238 Obsolete CheckEndorsedAndExtDirs and remove checks for lib/endorsed and lib/ext In-Reply-To: <0433b714-e8e1-5f88-4368-3fad92650304@oracle.com> References: <1daa59e3-b001-53b4-0cef-0cb15f856a7d@oracle.com> <6f649638-827a-837c-04c1-62274d8535a4@oracle.com> <0433b714-e8e1-5f88-4368-3fad92650304@oracle.com> Message-ID: On 3/29/18 5:15 PM, mandy chung wrote: > > > On 3/30/18 7:24 AM, David Holmes wrote: >> On 30/03/2018 4:16 AM, Ioi Lam wrote: >>> Hi Alan, >>> >>> Thanks for the reminder of the other files. I checked all the files >>> in the open and closed repos with 'grep -r CheckEndorsedAndExtDirs'. >>> I found only 3 man pages in the open repo that contained references >>> to CheckEndorsedAndExtDirs. No test cases or any other files refer >>> to this option. >>> >>> Here's an updated webrev that added the changes to the mage pages. >>> It also removes the dead code in arguments.cpppointed out by Mandy >>> and David. >>> >>> http://cr.openjdk.java.net/~iklam/jdk11/8183238-obsolete-CheckEndorsedAndExtDirs.v02/ >> >> > > Looks fine. > > Can you file a docs issue to update the java tool reference guide [1] > to take out the option? Hi Mandy, I've filed https://bugs.openjdk.java.net/browse/JDK-8200476 "Update java tool reference guide to remove CheckEndorsedAndExtDirs option" Thanks - Ioi > >> Seems okay, though I thought the man page sources were "dead" anyway. >> Surprised to still see them there. >> > > These out-of-date man pages are for OpenJDK build and they were > generated by the docs team and dropped in the repo. > > Jon and I have discussed to include the man pages in our source in > markdown and the build can generate the man pages in proper formats > and include in the docs bundle. ? Developers will update them for any > change to CLI options and the man pages will always be up-to-date.? I > think that's a good improvement and hope to happen some time. 
> > Mandy > [1] https://docs.oracle.com/javase/10/tools/java.htm#JSWOR624 From volker.simonis at gmail.com Fri Mar 30 07:06:53 2018 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 30 Mar 2018 07:06:53 +0000 Subject: RFR(XS): 8200360: MeetIncompatibleInterfaceArrays fails with "MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 !" In-Reply-To: References: <23fde1a0-776b-9a3e-6133-daa6f387150e@oracle.com> Message-ID: Vladimir Kozlov schrieb am Do. 29. M?rz 2018 um 23:45: > Test still failed when run with -Xcomp: > > STDERR: > jib > java.lang.Exception: Method public static > compiler.types.TestMeetIncompatibleInterfaceArrays$I1 > MeetIncompatibleInterfaceArrays0ASM.run() must be compiled at tier 0 but > was compiled at 4 instead! > > It is for case compiler.types.TestMeetIncompatibleInterfaceArrays 0 > Sorry! I didn?t thought this small test change would cause so much hassle but you should never change a winning team :) > I filed JDK-8200461. Thanks for fixing this. I?ve just reviewed it. > > Vladimir > > On 3/29/18 5:54 AM, Volker Simonis wrote: > > On Thu, Mar 29, 2018 at 2:17 PM, Tobias Hartmann > > wrote: > >> Hi Volker > >> > >> On 29.03.2018 10:19, Volker Simonis wrote: > >>> I just saw that Daniel added the "hs-tier3" tag to the bug. What does > >>> this mean? Are there other (different?) problems when running the > >>> "hs-tier3" suite? > >> > >> It just means that the problem showed up in tier3 as well. > >> > >>> I'll wait with the push until I get the OK from you that all internal > >>> tests have passed. > >> > >> I've executed the test on tier1-3, no failures. Please go ahead and > push! > >> > > > > Thanks, pushed. > > > >> Best regards, > >> Tobias > From aph at redhat.com Fri Mar 30 09:11:20 2018 From: aph at redhat.com (Andrew Haley) Date: Fri, 30 Mar 2018 10:11:20 +0100 Subject: RFD: AOT for AArch64 In-Reply-To: <3d21e36b-9cb1-0c29-2400-656b618da386@bell-sw.com> References: <83f0e948-d002-6b71-e73f-6f65fce14e3b@redhat.com> <5ae9dbad-a1c0-083d-3aa7-bcb918ceb8b5@redhat.com> <3d21e36b-9cb1-0c29-2400-656b618da386@bell-sw.com> Message-ID: On 03/29/2018 06:49 PM, Dmitry Chuyko wrote: > It's gone now. I see almost all methods in java.base are aot'ed, > excluding the same ones as for x86. Non-tiered .so is created but for > --compile-for-tiered there's a recurring linkage error: > > Exception in thread "main" java.lang.InternalError: ava.base-coop2: In > function `java.io.CharArrayWriter.toCharArray()[C':(.text+0x6e030c): > relocation truncated to fit: R_AARCH64_CALL26 against > `plt._aot_stub_routines_arrayof_jshort_disjoint_arraycopy'java.base-coop2: > In function That's because the shared library is too big, more than 512 M. We need some kind of jaot compiler flag which is the equivalent of GCC's -fPIC. This will cause three instructions [adrp; ldr, blr] to be generated at every call site, rather than one. I don't propose to do anything about this in the JDK 11 timeframe,. People who want to use AOT for performance should not use such huge shared libraries because it'll only make performance worse. So, I'm reluctant to spend much time on the problem. -- Andrew Haley Java Platform Lead Engineer Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From thomas.stuefe at gmail.com Fri Mar 30 11:16:47 2018 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 30 Mar 2018 13:16:47 +0200 Subject: RFR: 8176717: GC log file handle leaked to child processes In-Reply-To: <796998bd-78f5-1a2a-a38e-fc5da8f10b7a@oracle.com> References: <271f07b2-2a74-c5ff-7a7b-d9805929a23c@oracle.com> <04b0987c-0de4-1e5e-52be-0c603c1fab10@oracle.com> <4c216bb1-a3bf-e916-d07a-643431faa341@oracle.com> <796998bd-78f5-1a2a-a38e-fc5da8f10b7a@oracle.com> Message-ID: Hi Leo, looks okay. I stated my reservations against adding this function and the platform specific code in its current form before, but I can certainly live with this fix. It follows what we did in os::open(), so it is consistent in that matter. Thanks for fixing this! Thomas (btw, I am not sure which of my "2" Per meant, since I accidentally miscounted my points in the earlier mail.) On Thu, Mar 29, 2018 at 5:41 PM, Leo Korinth wrote: > On 23/03/18 20:03, Leo Korinth wrote: > >> Hi! >> >> Cross-posting this to both hotspot-dev and hotspot-runtime-dev to get >> more attention. Sorry. >> >> Original mail conversation can be found here: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2018-March/030637.html >> >> I need feedback to know how to continue. >> >> Thanks, >> Leo >> > > Hi! > > Below is a new webrev with Thomas' and Per's suggested name change > from: os::fopen_retain(const char* path, const char* mode) > to: os::fopen(const char* path, const char* mode) > > Full webrev: > http://cr.openjdk.java.net/~lkorinth/8176717/02/ > > Review and/or comments please! > > Thanks, > Leo > > From coleen.phillimore at oracle.com Fri Mar 30 11:34:40 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Mar 2018 07:34:40 -0400 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files In-Reply-To: References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> Message-ID: On 3/29/18 7:25 PM, Vladimir Kozlov wrote: > On 3/29/18 4:22 PM, David Holmes wrote: >> On 30/03/2018 1:04 AM, coleen.phillimore at oracle.com wrote: >>> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8200430 >> >> I can see these were added somewhat incidentally as part of another >> fix, but I also recall a discussion on slack about it. Why do you >> want to remove them from .hgignore? > I remember our slack conversation said it was ok to remove them. > Yes, why? It is useful to have them there instead of having copy of > .hgignore? on each machine I run. Stefan has an offlist discussion with me about this too.? Why do you have them in your .hgignore?? I want to purge them when I say hg purge, but I guess I can also type hg purge -all.??? Is it for hg status to find actually new files that haven't been hg added yet? That seems useful to me also. thanks, Coleen > > Vladimir > >> >> FWIW I had these in my local .hgignore anyway so can always add them >> back there. >> >> David >> >>> Tested in local repository. 
>>> >>> Thanks, >>> Coleen From stewartd.qdt at qualcommdatacenter.com Fri Mar 30 12:17:17 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 30 Mar 2018 12:17:17 +0000 Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag In-Reply-To: References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com> <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com> <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com> <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com> Message-ID: <62d4d821916c42cdb317e8baf8038b5c@NASANEXM01E.na.qualcomm.com> Thank you! -----Original Message----- From: Aleksey Shipilev [mailto:shade at redhat.com] Sent: Thursday, March 29, 2018 5:10 PM To: stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag On 03/29/2018 10:35 PM, stewartd.qdt wrote: > Thanks Aleksey! I am an author with username dstewart, so I don't think I need a Contributed-by line, just someone to push for me. > > http://cr.openjdk.java.net/~dstewart/8200251/webrev.02/ There you go: http://hg.openjdk.java.net/jdk/hs/rev/17c6ab93710e -Aleksey From david.holmes at oracle.com Fri Mar 30 13:13:25 2018 From: david.holmes at oracle.com (David Holmes) Date: Fri, 30 Mar 2018 23:13:25 +1000 Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files In-Reply-To: References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com> Message-ID: <0a0d0272-71dc-3720-0997-5ee404841d88@oracle.com> On 30/03/2018 9:34 PM, coleen.phillimore at oracle.com wrote: > On 3/29/18 7:25 PM, Vladimir Kozlov wrote: >> On 3/29/18 4:22 PM, David Holmes wrote: >>> On 30/03/2018 1:04 AM, coleen.phillimore at oracle.com wrote: >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev >>>> bug link https://bugs.openjdk.java.net/browse/JDK-8200430 >>> >>> I can see these were added somewhat incidentally as part of another >>> fix, but I also recall a discussion on slack about it. Why do you >>> want to remove them from .hgignore? >> > > I remember our slack conversation said it was ok to remove them. I was referring to a conversation a while ago to add them. >> Yes, why? It is useful to have them there instead of having copy of >> .hgignore? on each machine I run. > > Stefan has an offlist discussion with me about this too.? Why do you > have them in your .hgignore?? I want to purge them when I say hg purge, > but I guess I can also type hg purge -all.??? Is it for hg status to > find actually new files that haven't been hg added yet? That seems > useful to me also. Yes I want hg status to ignore them so I don't have to type "hg status -mard". I have a least three "home" directories with a .hgignore that I have to copy across. :) I was not familiar with "hg purge" but it seems to me that an extension that specifically deletes untracked file should not be honouring the .hgignore rules. Though I guess if it uses hg status to find those untracked files ... Not sure what consideration weighs the most here. :) David > thanks, > Coleen > >> >> Vladimir >> >>> >>> FWIW I had these in my local .hgignore anyway so can always add them >>> back there. >>> >>> David >>> >>>> Tested in local repository. 
>>>
>>> Thanks,
>>> Coleen

From stewartd.qdt at qualcommdatacenter.com  Fri Mar 30 12:17:17 2018
From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt)
Date: Fri, 30 Mar 2018 12:17:17 +0000
Subject: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag
In-Reply-To:
References: <1b32c22bf37044cd93e62c8a1a69afe8@NASANEXM01E.na.qualcomm.com>
 <69f608989b854ba3a32a0ac5ea6e9444@NASANEXM01E.na.qualcomm.com>
 <453c58a1f5da4db4a96839dab029add2@NASANEXM01E.na.qualcomm.com>
 <680c71a0-4cae-0d34-31bc-92a05fb182c5@redhat.com>
Message-ID: <62d4d821916c42cdb317e8baf8038b5c@NASANEXM01E.na.qualcomm.com>

Thank you!

-----Original Message-----
From: Aleksey Shipilev [mailto:shade at redhat.com]
Sent: Thursday, March 29, 2018 5:10 PM
To: stewartd.qdt ; hotspot-dev at openjdk.java.net
Subject: Re: RFR: 8200251: AArch64::CPUFeature out of sync with VM_Version::Feature_Flag

On 03/29/2018 10:35 PM, stewartd.qdt wrote:
> Thanks Aleksey! I am an author with username dstewart, so I don't think I need a Contributed-by line, just someone to push for me.
>
> http://cr.openjdk.java.net/~dstewart/8200251/webrev.02/

There you go:
  http://hg.openjdk.java.net/jdk/hs/rev/17c6ab93710e

-Aleksey

From david.holmes at oracle.com  Fri Mar 30 13:13:25 2018
From: david.holmes at oracle.com (David Holmes)
Date: Fri, 30 Mar 2018 23:13:25 +1000
Subject: RFR (XS) 8200430: Remove JTwork and JTreport from the .hgignore files
In-Reply-To:
References: <40493ea3-44ef-7a1e-d326-73fb850ac137@oracle.com>
Message-ID: <0a0d0272-71dc-3720-0997-5ee404841d88@oracle.com>

On 30/03/2018 9:34 PM, coleen.phillimore at oracle.com wrote:
> On 3/29/18 7:25 PM, Vladimir Kozlov wrote:
>> On 3/29/18 4:22 PM, David Holmes wrote:
>>> On 30/03/2018 1:04 AM, coleen.phillimore at oracle.com wrote:
>>>> open webrev at http://cr.openjdk.java.net/~coleenp/8200430.01/webrev
>>>> bug link https://bugs.openjdk.java.net/browse/JDK-8200430
>>>
>>> I can see these were added somewhat incidentally as part of another
>>> fix, but I also recall a discussion on slack about it. Why do you
>>> want to remove them from .hgignore?
>>
>
> I remember our slack conversation said it was ok to remove them.

I was referring to a conversation a while ago to add them.

>> Yes, why? It is useful to have them there instead of having copy of
>> .hgignore on each machine I run.
>
> Stefan has an offlist discussion with me about this too. Why do you
> have them in your .hgignore?  I want to purge them when I say hg
> purge, but I guess I can also type hg purge -all.  Is it for hg
> status to find actually new files that haven't been hg added yet?
> That seems useful to me also.

Yes I want hg status to ignore them so I don't have to type "hg status -mard". I have at least three "home" directories with a .hgignore that I have to copy across. :)

I was not familiar with "hg purge" but it seems to me that an extension that specifically deletes untracked files should not be honouring the .hgignore rules. Though I guess if it uses hg status to find those untracked files ...

Not sure what consideration weighs the most here. :)

David

> thanks,
> Coleen
>
>>
>> Vladimir
>>
>>>
>>> FWIW I had these in my local .hgignore anyway so can always add
>>> them back there.
>>>
>>> David
>>>
>>>> Tested in local repository.
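The hg workflows being weighed in the 8200430 thread look roughly like this (the purge extension ships with Mercurial but has to be enabled in an hgrc):

```sh
# show only modified/added/removed/deleted files, hiding untracked output
hg status -mard

# with the purge extension enabled ([extensions] purge= in ~/.hgrc):
hg purge        # delete untracked files that are not ignored
hg purge --all  # also delete ignored files, e.g. jtreg JTwork/JTreport output
```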
Regards, Daniel Stewart [1] - http://cr.openjdk.java.net/~dstewart/8200524/webrev.00/ [2] - https://bugs.openjdk.java.net/browse/JDK-8200524 From vladimir.kozlov at oracle.com Fri Mar 30 16:10:03 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 30 Mar 2018 09:10:03 -0700 Subject: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI In-Reply-To: <98b24cb083e34d6fa465a17bc9b81acd@NASANEXM01E.na.qualcomm.com> References: <98b24cb083e34d6fa465a17bc9b81acd@NASANEXM01E.na.qualcomm.com> Message-ID: <7dc80da1-3a55-717c-eb90-676d191c8c08@oracle.com> Changes looks good to me. They follow the same code pattern as on other architectures. Thanks, Vladimir On 3/30/18 8:35 AM, stewartd.qdt wrote: > Please review this webrev [1] which implements the transfer of AArch64::CPUFeature flags and AArch64::Flag enums over the JVMCI interface. > > This patch sets the CPUFeature enums corresponding to which VM_Version flags are set. It also sets the Flag enums corresponding to which use flags have been set on the command line. This mirrors what is done for AMD64. > > The bug report is filed at [2]. > > I am happy to modify the patch as necessary. > > Regards, > Daniel Stewart > > [1] - http://cr.openjdk.java.net/~dstewart/8200524/webrev.00/ > [2] - https://bugs.openjdk.java.net/browse/JDK-8200524 > From stewartd.qdt at qualcommdatacenter.com Fri Mar 30 16:11:36 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 30 Mar 2018 16:11:36 +0000 Subject: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI In-Reply-To: <7dc80da1-3a55-717c-eb90-676d191c8c08@oracle.com> References: <98b24cb083e34d6fa465a17bc9b81acd@NASANEXM01E.na.qualcomm.com> <7dc80da1-3a55-717c-eb90-676d191c8c08@oracle.com> Message-ID: <09892b14336d47469db1554c69db9aa8@NASANEXM01E.na.qualcomm.com> Thanks, Vladimir. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Friday, March 30, 2018 12:10 PM To: stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI Changes looks good to me. They follow the same code pattern as on other architectures. Thanks, Vladimir On 3/30/18 8:35 AM, stewartd.qdt wrote: > Please review this webrev [1] which implements the transfer of AArch64::CPUFeature flags and AArch64::Flag enums over the JVMCI interface. > > This patch sets the CPUFeature enums corresponding to which VM_Version flags are set. It also sets the Flag enums corresponding to which use flags have been set on the command line. This mirrors what is done for AMD64. > > The bug report is filed at [2]. > > I am happy to modify the patch as necessary. 
> > Regards, > Daniel Stewart > > [1] - http://cr.openjdk.java.net/~dstewart/8200524/webrev.00/ > [2] - https://bugs.openjdk.java.net/browse/JDK-8200524 > From gerard.ziemski at oracle.com Fri Mar 30 17:27:17 2018 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 30 Mar 2018 12:27:17 -0500 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> Message-ID: <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> Hi all, Please review this large and tedious (sorry), but simple fix that accomplishes the following: #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag #4 cleanup globals.hpp includes originally added by the JEP-245 Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? https://bugs.openjdk.java.net/browse/JDK-8081519 http://cr.openjdk.java.net/~gziemski/8081519_rev2 Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. cheers From vladimir.kozlov at oracle.com Fri Mar 30 17:33:55 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 30 Mar 2018 10:33:55 -0700 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> Message-ID: <8f1611c2-ac40-2ea9-3e1a-ae5561722aae@oracle.com> Renaming and other changes looks good to me. But my suggestion about new directory was to have it on the same level as runtime/ directory and not inside it. I also thought you will move globals* files and test_globals.cpp but I am fine with leaving them as they are. Thanks, Vladimir On 3/30/18 10:27 AM, Gerard Ziemski wrote: > Hi all, > > Please review this large and tedious (sorry), but simple fix that accomplishes the following: > > #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp > #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) > #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag > #4 cleanup globals.hpp includes originally added by the JEP-245 > > Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? > > https://bugs.openjdk.java.net/browse/JDK-8081519 > http://cr.openjdk.java.net/~gziemski/8081519_rev2 > > Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. 
> > > cheers > From gerard.ziemski at oracle.com Fri Mar 30 17:47:36 2018 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 30 Mar 2018 12:47:36 -0500 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <8f1611c2-ac40-2ea9-3e1a-ae5561722aae@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> <8f1611c2-ac40-2ea9-3e1a-ae5561722aae@oracle.com> Message-ID: Thank you Vladimir for the review. > On Mar 30, 2018, at 12:33 PM, Vladimir Kozlov wrote: > > Renaming and other changes looks good to me. > > But my suggestion about new directory was to have it on the same level as runtime/ directory and not inside it. I thought about that, and decided that "share/runtime/jvmFlag" better describes the purpose of the files there than just "share/jvmFlag". We have the precedence of others that extend the hierarchy in similar matter like ?share/gc/g1, share/gc/parallel?, though ?share/gc/? itself has no files on its own. So, can we leave thing as proposed or do we need further discussion? cheers > I also thought you will move globals* files and test_globals.cpp but I am fine with leaving them as they are. > > Thanks, > Vladimir > > On 3/30/18 10:27 AM, Gerard Ziemski wrote: >> Hi all, >> Please review this large and tedious (sorry), but simple fix that accomplishes the following: >> #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp >> #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) >> #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag >> #4 cleanup globals.hpp includes originally added by the JEP-245 >> Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? >> https://bugs.openjdk.java.net/browse/JDK-8081519 >> http://cr.openjdk.java.net/~gziemski/8081519_rev2 >> Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. >> cheers From stewartd.qdt at qualcommdatacenter.com Fri Mar 30 17:48:28 2018 From: stewartd.qdt at qualcommdatacenter.com (stewartd.qdt) Date: Fri, 30 Mar 2018 17:48:28 +0000 Subject: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI In-Reply-To: <09892b14336d47469db1554c69db9aa8@NASANEXM01E.na.qualcomm.com> References: <98b24cb083e34d6fa465a17bc9b81acd@NASANEXM01E.na.qualcomm.com> <7dc80da1-3a55-717c-eb90-676d191c8c08@oracle.com> <09892b14336d47469db1554c69db9aa8@NASANEXM01E.na.qualcomm.com> Message-ID: <17dc271bed3d4cb1b2c816680aeabbaa@NASANEXM01E.na.qualcomm.com> Might I get a sponsor for this change? http://cr.openjdk.java.net/~dstewart/8200524/webrev.01/ Thank you, Daniel -----Original Message----- From: stewartd.qdt Sent: Friday, March 30, 2018 12:12 PM To: Vladimir Kozlov ; stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: RE: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI Thanks, Vladimir. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Friday, March 30, 2018 12:10 PM To: stewartd.qdt ; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8200524 - AArch64: CPUFeature and Flag enums are not passed through JVMCI Changes looks good to me. They follow the same code pattern as on other architectures. 
Thanks, Vladimir On 3/30/18 8:35 AM, stewartd.qdt wrote: > Please review this webrev [1] which implements the transfer of AArch64::CPUFeature flags and AArch64::Flag enums over the JVMCI interface. > > This patch sets the CPUFeature enums corresponding to which VM_Version flags are set. It also sets the Flag enums corresponding to which use flags have been set on the command line. This mirrors what is done for AMD64. > > The bug report is filed at [2]. > > I am happy to modify the patch as necessary. > > Regards, > Daniel Stewart > > [1] - http://cr.openjdk.java.net/~dstewart/8200524/webrev.00/ > [2] - https://bugs.openjdk.java.net/browse/JDK-8200524 > From coleen.phillimore at oracle.com Fri Mar 30 17:50:40 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Mar 2018 13:50:40 -0400 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> <8f1611c2-ac40-2ea9-3e1a-ae5561722aae@oracle.com> Message-ID: On 3/30/18 1:47 PM, Gerard Ziemski wrote: > Thank you Vladimir for the review. > >> On Mar 30, 2018, at 12:33 PM, Vladimir Kozlov wrote: >> >> Renaming and other changes looks good to me. >> >> But my suggestion about new directory was to have it on the same level as runtime/ directory and not inside it. > I thought about that, and decided that "share/runtime/jvmFlag" better describes the purpose of the files there than just "share/jvmFlag". > > We have the precedence of others that extend the hierarchy in similar matter like ?share/gc/g1, share/gc/parallel?, though ?share/gc/? itself has no files on its own. > > So, can we leave thing as proposed or do we need further discussion? I like the sub runtime level for these flags.?? I'll review the rest later. Coleen > > > cheers > >> I also thought you will move globals* files and test_globals.cpp but I am fine with leaving them as they are. >> >> Thanks, >> Vladimir >> >> On 3/30/18 10:27 AM, Gerard Ziemski wrote: >>> Hi all, >>> Please review this large and tedious (sorry), but simple fix that accomplishes the following: >>> #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp >>> #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) >>> #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag >>> #4 cleanup globals.hpp includes originally added by the JEP-245 >>> Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? >>> https://bugs.openjdk.java.net/browse/JDK-8081519 >>> http://cr.openjdk.java.net/~gziemski/8081519_rev2 >>> Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. >>> cheers From coleen.phillimore at oracle.com Fri Mar 30 17:53:02 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Mar 2018 13:53:02 -0400 Subject: RFR (M) 8198313: Wrap holder object for ClassLoaderData in a WeakHandle In-Reply-To: References: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> <9C48FEF4-59DD-4415-AF18-B95ADBDFACB4@oracle.com> Message-ID: <304110c1-f4e0-63c8-d94b-bc779b737411@oracle.com> I have an incremental and full .02 version with the changes discussed here. 
open webrev at http://cr.openjdk.java.net/~coleenp/8198313.02.incr/webrev open webrev at http://cr.openjdk.java.net/~coleenp/8198313.02/webrev These have been retested on x86, all hotspot jtreg tests. thanks, Coleen On 3/29/18 12:37 PM, coleen.phillimore at oracle.com wrote: > > Hi Kim, > Thank you for reviewing this. > > On 3/28/18 5:12 PM, Kim Barrett wrote: >>> On Mar 26, 2018, at 1:26 PM, coleen.phillimore at oracle.com wrote: >>> >>> Summary: Use WeakHandle for ClassLoaderData::_holder so that >>> is_alive closure is not needed >>> >>> The purpose of WeakHandle is to encapsulate weak oops within the >>> runtime code in the vm.? The class was initially written by >>> StefanK.?? The weak handles are pointers to OopStorage. This code is >>> a basis for future work to move direct pointers to the heap (oops) >>> from runtime structures like the StringTable, into pointers into an >>> area that the GC efficiently manages, in parallel and/or concurrently. >>> >>> Tested with mach5 tier 1-5.? Performance tested with internal >>> dev-submit performance tests, and locally. >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8198313.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8198313 >>> >>> Thanks, >>> Coleen >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/oops/weakHandle.cpp >> >> ?? 59 void WeakHandle::release() const { >> ?? 60?? Universe::vm_weak_oop_storage()->release(_obj); >> >> Is WeakHandle::release ever called with a handle that has not been >> cleared by GC?? The only caller I found is ~ClassLoaderData.? Do we >> ever construct a CLD with filled-in holder, and then decide we don't >> want the CLD after all?? I'm thinking of something like an error >> during class loading or the like, but without much knowledge. > > We call WeakHandle::release in ~ClassLoaderData only.? The oop is > always null.?? There's a race when adding the ClassLoaderData to the > class_loader oop.? If we win this race, we create the WeakHandle. See > lines 982-5 of this change. > > I wanted to avoid creating the WeakHandle and destroying it if we lose > this race, so I did not create it in the ClassLoaderData constructor. > I have a follow-on change that moves it there however. > >> >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/classfile/classLoaderData.cpp >> ?? 59 #include "gc/shared/oopStorage.hpp" >> >> Why is this include needed?? Maybe I missed something, but it looks >> like all the OopStorage usage is wrapped up in WeakHandle. > > It is not needed, removed. >> >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/oops/instanceKlass.cpp >> 1903 void InstanceKlass::clean_implementors_list(BoolObjectClosure* >> is_alive) { >> 1904?? assert(class_loader_data()->is_alive(), "this klass should be >> live"); >> ... >> 1909???????? if (!impl->is_loader_alive(is_alive)) { >> >> I'm kind of surprised we still need the is_alive closure here. But >> there are no changes in is_loader_alive.? I think I'm not >> understanding something. > > We do not need the is_alive closure here.?? I have a follow on change > in my patch queue that removes these. >> >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/classfile/classLoaderData.hpp >> ? 224?? oop _class_loader;????????? // oop used to uniquely identify >> a class loader >> ? 225?????????????????????????????? 
// class loader or a canonical >> class path >> >> [Not part of the change, but adjacent to one, so it caught my eye.] >> >> "class loader \n class loader" in the comment looks redundant? > > That was left over from the early days.? Rewrote as below.? I have a > change to remove this later too. > > ? oop _class_loader;????????? // The instance of java/lang/ClassLoader > associated with > ??????????????????????????????????????? // this ClassLoaderData > >> >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/classfile/classLoaderData.cpp >> ? 516?? assert(_holder.peek() == NULL, "never replace holders"); >> >> I think peek is the wrong test here.? Shouldn't it be _holder.is_null()? >> If not, e.g. if !_holder.is_null() can be true here, then I think >> that would be a leak when _holder is (re)assigned. >> >> Of course, this goes back to my earlier private comment that I find >> WeakHandle::is_null() a bit confusing, because I keep thinking it's >> about the value of *_handle._obj rather than _handle._obj. > > is_null() is for a holder that hasn't been set yet or is zero as for > the_null_class_loader_data(). > > This case should definitely be is_null(). >> >> ------------------------------------------------------------------------------ >> >> src/hotspot/share/classfile/classLoaderData.cpp >> ? 632?? bool alive = keep_alive()???????? // null class loader and >> incomplete anonymous klasses. >> ? 633?????? || _holder.is_null() >> ? 634?????? || (_holder.peek() != NULL);? // not cleaned by weak >> reference processing >> >> I was initially guessing that _holder.is_null() was for null class >> loader and/or anonymous classes, but that's covered by the preceeding >> keep_alive().? So I don't know why a null holder => alive. > > Yes, I believe it is redundant.?? Or did I add it for a special case > that I can't remember.? I'll verify. > > Thanks, > Coleen > >> >> ------------------------------------------------------------------------------ >> >> > From vladimir.kozlov at oracle.com Fri Mar 30 18:08:01 2018 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 30 Mar 2018 11:08:01 -0700 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> <8f1611c2-ac40-2ea9-3e1a-ae5561722aae@oracle.com> Message-ID: <07952014-8118-1ec8-d530-0cb38d2aee85@oracle.com> On 3/30/18 10:47 AM, Gerard Ziemski wrote: > Thank you Vladimir for the review. > >> On Mar 30, 2018, at 12:33 PM, Vladimir Kozlov wrote: >> >> Renaming and other changes looks good to me. >> >> But my suggestion about new directory was to have it on the same level as runtime/ directory and not inside it. > > I thought about that, and decided that "share/runtime/jvmFlag" better describes the purpose of the files there than just "share/jvmFlag". > > We have the precedence of others that extend the hierarchy in similar matter like ?share/gc/g1, share/gc/parallel?, though ?share/gc/? itself has no files on its own. Okay, I am fine with that too. Thanks, Vladimir > > So, can we leave thing as proposed or do we need further discussion? > > > cheers > >> I also thought you will move globals* files and test_globals.cpp but I am fine with leaving them as they are. 
>> >> Thanks, >> Vladimir >> >> On 3/30/18 10:27 AM, Gerard Ziemski wrote: >>> Hi all, >>> Please review this large and tedious (sorry), but simple fix that accomplishes the following: >>> #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp >>> #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) >>> #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag >>> #4 cleanup globals.hpp includes originally added by the JEP-245 >>> Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? >>> https://bugs.openjdk.java.net/browse/JDK-8081519 >>> http://cr.openjdk.java.net/~gziemski/8081519_rev2 >>> Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. >>> cheers > From coleen.phillimore at oracle.com Fri Mar 30 18:57:27 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Mar 2018 14:57:27 -0400 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> Message-ID: http://cr.openjdk.java.net/~gziemski/8081519_rev2/src/hotspot/share/services/diagnosticCommand.cpp.udiff.html How does this file get the include for JVMFlag??? I don't see jvmFlag.hpp included anywhere but one cpp file. http://cr.openjdk.java.net/~gziemski/8081519_rev2/src/hotspot/share/runtime/jvmFlag/jvmFlagConstraintsCompiler.hpp.udiff.html Shouldn't this #include jvmFlag.hpp too? I can't find anything that includes jvmFlag.hpp except jvmFlag.cpp but there are many files that have JVMFlag::Error in them. This is a nice refactoring! thanks, Coleen On 3/30/18 1:27 PM, Gerard Ziemski wrote: > Hi all, > > Please review this large and tedious (sorry), but simple fix that accomplishes the following: > > #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp > #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) > #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag > #4 cleanup globals.hpp includes originally added by the JEP-245 > > Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? > > https://bugs.openjdk.java.net/browse/JDK-8081519 > http://cr.openjdk.java.net/~gziemski/8081519_rev2 > > Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. > > > cheers From coleen.phillimore at oracle.com Fri Mar 30 19:08:31 2018 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 30 Mar 2018 15:08:31 -0400 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> Message-ID: <89767f2d-f42e-dbf0-ebb9-dd281b46592d@oracle.com> http://cr.openjdk.java.net/~gziemski/8081519_rev2/src/hotspot/share/runtime/jvmFlag/jvmFlag.hpp.html Is CounterSetting unused??? Are these FlagSetting used??? SizeTFlagSetting ? 
Can you make plain FlagSetting inherit from public StackObj.?? The same with FlagGuard (and remove comment and new/delete operators) but this class appears unused as well.? Can you remove the unused classes?? If they're ever needed, they can be added again in the model of FlagSetting, which is used. This will make jvmFlag.hpp have less in it, which is good because it needs to be included in more places. thanks, Coleen On 3/30/18 1:27 PM, Gerard Ziemski wrote: > Hi all, > > Please review this large and tedious (sorry), but simple fix that accomplishes the following: > > #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp > #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) > #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag > #4 cleanup globals.hpp includes originally added by the JEP-245 > > Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? > > https://bugs.openjdk.java.net/browse/JDK-8081519 > http://cr.openjdk.java.net/~gziemski/8081519_rev2 > > Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. > > > cheers From gerard.ziemski at oracle.com Fri Mar 30 19:23:04 2018 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 30 Mar 2018 14:23:04 -0500 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <89767f2d-f42e-dbf0-ebb9-dd281b46592d@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> <89767f2d-f42e-dbf0-ebb9-dd281b46592d@oracle.com> Message-ID: <116DE10D-32BE-414B-B279-6B29F1B78F88@oracle.com> Will do. > On Mar 30, 2018, at 2:08 PM, coleen.phillimore at oracle.com wrote: > > > > http://cr.openjdk.java.net/~gziemski/8081519_rev2/src/hotspot/share/runtime/jvmFlag/jvmFlag.hpp.html > > Is CounterSetting unused? Are these FlagSetting used? SizeTFlagSetting ? > > Can you make plain FlagSetting inherit from public StackObj. The same with FlagGuard (and remove comment and new/delete operators) but this class appears unused as well. Can you remove the unused classes? If they're ever needed, they can be added again in the model of FlagSetting, which is used. > > This will make jvmFlag.hpp have less in it, which is good because it needs to be included in more places. > > thanks, > Coleen > > On 3/30/18 1:27 PM, Gerard Ziemski wrote: >> Hi all, >> >> Please review this large and tedious (sorry), but simple fix that accomplishes the following: >> >> #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp >> #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) >> #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag >> #4 cleanup globals.hpp includes originally added by the JEP-245 >> >> Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? >> >> https://bugs.openjdk.java.net/browse/JDK-8081519 >> http://cr.openjdk.java.net/~gziemski/8081519_rev2 >> >> Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. 
>> >> >> cheers > From david.holmes at oracle.com Sat Mar 31 07:46:08 2018 From: david.holmes at oracle.com (David Holmes) Date: Sat, 31 Mar 2018 17:46:08 +1000 Subject: RFR (XL) 8081519 Split globals.hpp to factor out the Flag class In-Reply-To: <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> References: <17c47faa-4f6d-c3f3-bb71-ceba576146d0@oracle.com> <680CAA14-704C-4B8A-8CB8-C3CFA7DF5B72@oracle.com> <04FF41C3-FA1C-4570-BC35-BC986DCC7216@oracle.com> Message-ID: <4fb0eaf2-b27b-339e-7ea9-04ff3b9477ed@oracle.com> Hi Gerard, Scanning through this it seems okay. Like Coleen I'm wondering why so many "flag" using .cpp files don't include jvmFlag.hpp? I expect they transitively included globals.hpp previously but now will need an explicit include. The two new files have a slight formatting error with their copyright, you need a comma after 2018. The include guards for all the relocated header files and the new files eg: 25 #ifndef SHARE_VM_RUNTIME_JVMFLAG_HPP need updating now you added the extra directory. Bikeshed: I'm not sure about calling the directory jvmFlags. First everything under src/hotspot is "jvm" so it is somewhat redundant to state that. Second you end up with jvmFlag/jvmFlag*.* which is more redundancy. I think runtime/flags would suffice. Thanks, David On 31/03/2018 3:27 AM, Gerard Ziemski wrote: > Hi all, > > Please review this large and tedious (sorry), but simple fix that accomplishes the following: > > #1 factor out the command option flag related APIs out of globals.hpp/.cpp into its own dedicated files, i.e. jvmFlag.hpp/.cpp > #2 moved all jvmFlag* files into its own dedicated folder (i.e. src/hotspot/share/runtime/jvmFlag/) > #3 merge Flag (too generic name) and CommandLineFlag classes and rename them as JVMFlag > #4 cleanup globals.hpp includes originally added by the JEP-245 > > Note: the renamed file retain their history, but one needs to add ?follow? flag, ex. ?hg log -f file? > > https://bugs.openjdk.java.net/browse/JDK-8081519 > http://cr.openjdk.java.net/~gziemski/8081519_rev2 > > Passes Mach5 hs_tier1-tier5, jtreg/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges tests. > > > cheers > From kim.barrett at oracle.com Sat Mar 31 18:40:13 2018 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 31 Mar 2018 14:40:13 -0400 Subject: RFR (M) 8198313: Wrap holder object for ClassLoaderData in a WeakHandle In-Reply-To: <304110c1-f4e0-63c8-d94b-bc779b737411@oracle.com> References: <3fe8b4c5-3e1d-d192-07ce-0828e3982e75@oracle.com> <9C48FEF4-59DD-4415-AF18-B95ADBDFACB4@oracle.com> <304110c1-f4e0-63c8-d94b-bc779b737411@oracle.com> Message-ID: <4C4FB5D1-A0BA-4CC8-ADB7-8BED6BC2B88C@oracle.com> > On Mar 30, 2018, at 1:53 PM, coleen.phillimore at oracle.com wrote: > > > I have an incremental and full .02 version with the changes discussed here. > > open webrev at http://cr.openjdk.java.net/~coleenp/8198313.02.incr/webrev > open webrev at http://cr.openjdk.java.net/~coleenp/8198313.02/webrev > > These have been retested on x86, all hotspot jtreg tests. > thanks, > Coleen Looks good. In InstanceKlass::klass_holder_phantom, the klass_ prefix seems unnecessary. I don't need a new webrev if you decide to change the name.