From edward.nevill at linaro.org Fri May 1 12:19:17 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Fri, 1 May 2015 13:19:17 +0100 Subject: RFR: 8079203: aarch64: need to cater for different partner implementations Message-ID: Hi, The following webrev addresses issue http://bugs.openjdk.java.net/browse/JDK-8079203 http://cr.openjdk.java.net/~enevill/8079203/webrev The patch parses /proc/cpuinfo to obtain cpu, model, variant and revision information. This information is used to drive C1 & C2 code generation for minor differences in implementation. Please review and if approved I will push, Thanks, Ed. From adinn at redhat.com Fri May 1 14:05:17 2015 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 01 May 2015 15:05:17 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: References: Message-ID: <5543881D.8030205@redhat.com> On 01/05/15 13:19, Edward Nevill wrote: > The following webrev addresses issue > http://bugs.openjdk.java.net/browse/JDK-8079203 > > http://cr.openjdk.java.net/~enevill/8079203/webrev > > The patch parses /proc/cpuinfo to obtain cpu, model, variant and revision > information. > > This information is used to drive C1 & C2 code generation for minor > differences in implementation. > > Please review and if approved I will push, The patch looks ok to me -- although I have no idea why you need those extra nops before madd, msub etc when CPU_A53MAC is set. Is there an explanation available beyond 'black magic demands . . .'? 
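[Editorial note: for readers following the question above, the nops relate to a Cortex-A53 erratum affecting multiply-accumulate instructions that directly follow a memory access; treat the exact trigger conditions as an assumption here, the errata document Ed links below is authoritative. A minimal sketch of the kind of emission logic being discussed, not HotSpot's actual assembler code:]

```cpp
#include <cassert>
#include <vector>

// Illustrative only -- not HotSpot's assembler. The idea discussed in the
// thread: on affected parts, a multiply-accumulate (madd/msub) that
// directly follows a memory access can misbehave, so the code generator
// emits a nop to separate the two instructions.
enum Insn { LOAD, STORE, NOP, MADD, MSUB, OTHER };

struct Emitter {
  bool a53_mac_workaround;  // set when the CPU_A53MAC-style flag was detected
  std::vector<Insn> stream;

  void emit(Insn i) {
    if (a53_mac_workaround && (i == MADD || i == MSUB) &&
        !stream.empty() &&
        (stream.back() == LOAD || stream.back() == STORE)) {
      stream.push_back(NOP);  // break the load/store -> madd sequence
    }
    stream.push_back(i);
  }
};
```

The workaround costs one nop only in the (rare) case where the problematic sequence would otherwise be emitted, which is why it is gated on the detected CPU rather than applied unconditionally.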
regards, Andrew Dinn ----------- From edward.nevill at linaro.org Fri May 1 14:15:37 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Fri, 1 May 2015 15:15:37 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <5543881D.8030205@redhat.com> References: <5543881D.8030205@redhat.com> Message-ID: On 1 May 2015 at 15:05, Andrew Dinn wrote: > On 01/05/15 13:19, Edward Nevill wrote: The patch looks ok to me -- > although I have no idea why you need those > extra nops before madd, msub etc when CPU_A53MAC is set. Is there an > explanation available beyond 'black magic demands . . .'? > Read the A53 errata document @ http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.epm048406/index.html Page 14 is the relevant page. All the best, Ed. From mark.rutland at arm.com Fri May 1 14:23:57 2015 From: mark.rutland at arm.com (Mark Rutland) Date: Fri, 1 May 2015 15:23:57 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: References: Message-ID: <20150501142357.GD28975@leverpostej> Hi, > The following webrev addresses issue > http://bugs.openjdk.java.net/browse/JDK-8079203 > > http://cr.openjdk.java.net/~enevill/8079203/webrev > > The patch parses /proc/cpuinfo to obtain cpu, model, variant and revision > information. From a look at the proposed patch, I guess this assumes a uniform SMP system (i.e. no big.LITTLE)? A while back [1] the /proc/cpuinfo format was fixed to show information per-cpu (example from a Juno system below), and it looks like the CPU information parsed will only refer to the final CPU listed, and may not be representative of the system as a whole. That format has been backported to the various stable kernels too. Thanks, Mark. 
[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/arch/arm64/kernel/setup.c?id=44b82b7700d05a52cd983799d3ecde1a976b3bed ---->8---- $ cat /proc/cpuinfo processor : 0 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 0 processor : 1 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd07 CPU revision : 0 processor : 2 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd07 CPU revision : 0 processor : 3 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 0 processor : 4 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 0 processor : 5 Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41 CPU architecture: 8 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 0 From vladimir.kozlov at oracle.com Fri May 1 16:40:10 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 01 May 2015 09:40:10 -0700 Subject: RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: References: Message-ID: <5543AC6A.4000006@oracle.com> Wow, dependence on cpuinfo format. We are struggling with PICL and KSTAT format versions on SPARC. I hope Linux has a more stable cpuinfo format. Should you add this info to _features_str? Thanks, Vladimir On 5/1/15 5:19 AM, Edward Nevill wrote: > Hi, > > The following webrev addresses issue > http://bugs.openjdk.java.net/browse/JDK-8079203 > > http://cr.openjdk.java.net/~enevill/8079203/webrev > > The patch parses /proc/cpuinfo to obtain cpu, model, variant and revision > information. 
> > This information is used to drive C1 & C2 code generation for minor > differences in implementation. > > Please review and if approved I will push, > > Thanks, > Ed. > From staffan.larsen at oracle.com Mon May 4 06:52:26 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 4 May 2015 08:52:26 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" Message-ID: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. I will push the change below directly to jdk9/hs. Thanks, /Staffan diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk --- a/make/lib/Lib-java.management.gmk +++ b/make/lib/Lib-java.management.gmk @@ -38,11 +38,6 @@ $(LIBJAVA_HEADER_FLAGS) \ # -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate -# a binary that is compatible with windows versions older than 7/2008R2. -# See MSDN documentation for GetProcessMemoryInfo for more information. 
-BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 - LIBMANAGEMENT_OPTIMIZATION := HIGH ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) ifeq ($(ENABLE_DEBUG_SYMBOLS), true) diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk --- a/make/lib/Lib-jdk.management.gmk +++ b/make/lib/Lib-jdk.management.gmk @@ -39,6 +39,11 @@ $(LIBJAVA_HEADER_FLAGS) \ # +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate +# a binary that is compatible with windows versions older than 7/2008R2. +# See MSDN documentation for GetProcessMemoryInfo for more information. +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 + LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) ifeq ($(ENABLE_DEBUG_SYMBOLS), true) From david.holmes at oracle.com Mon May 4 07:27:16 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 04 May 2015 17:27:16 +1000 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> Message-ID: <55471F54.6090804@oracle.com> Hi Staffan, Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? Thanks, David On 4/05/2015 4:52 PM, Staffan Larsen wrote: > This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. > > I will push the change below directly to jdk9/hs. 
> > Thanks, > /Staffan > > > diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk > --- a/make/lib/Lib-java.management.gmk > +++ b/make/lib/Lib-java.management.gmk > @@ -38,11 +38,6 @@ > $(LIBJAVA_HEADER_FLAGS) \ > # > > -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate > -# a binary that is compatible with windows versions older than 7/2008R2. > -# See MSDN documentation for GetProcessMemoryInfo for more information. > -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 > - > LIBMANAGEMENT_OPTIMIZATION := HIGH > ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) > ifeq ($(ENABLE_DEBUG_SYMBOLS), true) > diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk > --- a/make/lib/Lib-jdk.management.gmk > +++ b/make/lib/Lib-jdk.management.gmk > @@ -39,6 +39,11 @@ > $(LIBJAVA_HEADER_FLAGS) \ > # > > +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate > +# a binary that is compatible with windows versions older than 7/2008R2. > +# See MSDN documentation for GetProcessMemoryInfo for more information. > +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 > + > LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH > ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) > ifeq ($(ENABLE_DEBUG_SYMBOLS), true) > From staffan.larsen at oracle.com Mon May 4 07:33:18 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 4 May 2015 09:33:18 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <55471F54.6090804@oracle.com> References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> <55471F54.6090804@oracle.com> Message-ID: <2108C368-9023-4AFA-B35C-C954A63C391A@oracle.com> > On 4 maj 2015, at 09:27, David Holmes wrote: > > Hi Staffan, > > Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? 
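[Editorial note: for what it is worth, a global variant of David's suggestion might look roughly like the fragment below. This is a hypothetical sketch only; the variable names and placement are guesses at the build system's conventions, not a tested change:]

```make
# Hypothetical sketch only: apply the define to every native library on
# Windows instead of per-library. The exact variables to use depend on
# the build system's conventions.
ifeq ($(OPENJDK_TARGET_OS), windows)
  CFLAGS_JDKLIB += -DPSAPI_VERSION=1
  CXXFLAGS_JDKLIB += -DPSAPI_VERSION=1
endif
```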
I'll let the build team comment on that. /Staffan > > Thanks, > David > > On 4/05/2015 4:52 PM, Staffan Larsen wrote: >> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. >> >> I will push the change below directly to jdk9/hs. >> >> Thanks, >> /Staffan >> >> >> diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk >> --- a/make/lib/Lib-java.management.gmk >> +++ b/make/lib/Lib-java.management.gmk >> @@ -38,11 +38,6 @@ >> $(LIBJAVA_HEADER_FLAGS) \ >> # >> >> -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >> -# a binary that is compatible with windows versions older than 7/2008R2. >> -# See MSDN documentation for GetProcessMemoryInfo for more information. >> -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 >> - >> LIBMANAGEMENT_OPTIMIZATION := HIGH >> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >> diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk >> --- a/make/lib/Lib-jdk.management.gmk >> +++ b/make/lib/Lib-jdk.management.gmk >> @@ -39,6 +39,11 @@ >> $(LIBJAVA_HEADER_FLAGS) \ >> # >> >> +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >> +# a binary that is compatible with windows versions older than 7/2008R2. >> +# See MSDN documentation for GetProcessMemoryInfo for more information. 
>> +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 >> + >> LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH >> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >> From magnus.ihse.bursie at oracle.com Mon May 4 07:54:38 2015 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 4 May 2015 09:54:38 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <55471F54.6090804@oracle.com> References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> <55471F54.6090804@oracle.com> Message-ID: > 4 maj 2015 kl. 09:27 skrev David Holmes : > > Hi Staffan, > > Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? Or maybe as a define in the source code before the include section for the specific source code that needs a legacy version of GetProcessMemoryInfo? That seems more prudent to me. /Magnus > > Thanks, > David > >> On 4/05/2015 4:52 PM, Staffan Larsen wrote: >> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. >> >> I will push the change below directly to jdk9/hs. 
>> >> Thanks, >> /Staffan >> >> >> diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk >> --- a/make/lib/Lib-java.management.gmk >> +++ b/make/lib/Lib-java.management.gmk >> @@ -38,11 +38,6 @@ >> $(LIBJAVA_HEADER_FLAGS) \ >> # >> >> -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >> -# a binary that is compatible with windows versions older than 7/2008R2. >> -# See MSDN documentation for GetProcessMemoryInfo for more information. >> -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 >> - >> LIBMANAGEMENT_OPTIMIZATION := HIGH >> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >> diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk >> --- a/make/lib/Lib-jdk.management.gmk >> +++ b/make/lib/Lib-jdk.management.gmk >> @@ -39,6 +39,11 @@ >> $(LIBJAVA_HEADER_FLAGS) \ >> # >> >> +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >> +# a binary that is compatible with windows versions older than 7/2008R2. >> +# See MSDN documentation for GetProcessMemoryInfo for more information. >> +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 >> + >> LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH >> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >> From staffan.larsen at oracle.com Mon May 4 08:25:05 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 4 May 2015 10:25:05 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> <55471F54.6090804@oracle.com> Message-ID: > On 4 maj 2015, at 09:54, Magnus Ihse Bursie wrote: > > >> 4 maj 2015 kl. 09:27 skrev David Holmes : >> >> Hi Staffan, >> >> Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? 
> > Or maybe as a define in the source code before the include section for the specific source code that needs a legacy version of GetProcessMemoryInfo? That seems more prudent to me. In this case the function is called the same and works the same in all versions of Windows (there is no "legacy version"). What differs is that it is linked against different names in different libs for different versions of Windows. So from a source code perspective there is no difference, but from a build perspective there is. /Staffan > > /Magnus > >> >> Thanks, >> David >> >>> On 4/05/2015 4:52 PM, Staffan Larsen wrote: >>> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. >>> >>> I will push the change below directly to jdk9/hs. >>> >>> Thanks, >>> /Staffan >>> >>> >>> diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk >>> --- a/make/lib/Lib-java.management.gmk >>> +++ b/make/lib/Lib-java.management.gmk >>> @@ -38,11 +38,6 @@ >>> $(LIBJAVA_HEADER_FLAGS) \ >>> # >>> >>> -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>> -# a binary that is compatible with windows versions older than 7/2008R2. >>> -# See MSDN documentation for GetProcessMemoryInfo for more information. 
>>> -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 >>> - >>> LIBMANAGEMENT_OPTIMIZATION := HIGH >>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >>> diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk >>> --- a/make/lib/Lib-jdk.management.gmk >>> +++ b/make/lib/Lib-jdk.management.gmk >>> @@ -39,6 +39,11 @@ >>> $(LIBJAVA_HEADER_FLAGS) \ >>> # >>> >>> +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>> +# a binary that is compatible with windows versions older than 7/2008R2. >>> +# See MSDN documentation for GetProcessMemoryInfo for more information. >>> +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 >>> + >>> LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH >>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >>> From per.liden at oracle.com Mon May 4 08:35:56 2015 From: per.liden at oracle.com (Per Liden) Date: Mon, 04 May 2015 10:35:56 +0200 Subject: RFR(s): 8013171: G1: C1 x86_64 barriers use 32-bit accesses to 64-bit PtrQueue::_index In-Reply-To: <5540C7CC.2080209@oracle.com> References: <5538B2C5.3040404@oracle.com> <1429787817.3334.3.camel@oracle.com> <5540C7CC.2080209@oracle.com> Message-ID: <55472F6C.30504@oracle.com> On 2015-04-29 14:00, Per Liden wrote: > Hi, > > On 2015-04-27 18:20, Christian Thalinger wrote: >> >>> On Apr 23, 2015, at 9:40 AM, Per Liden wrote: >>> >>> Hi Thomas, >>> >>>> On 23 Apr 2015, at 13:16, Thomas Schatzl >>>> wrote: >>>> >>>> Hi, >>>> >>>> On Thu, 2015-04-23 at 10:52 +0200, Per Liden wrote: >>>>> Hi, >>>>> >>>>> (This change affects G1, but it's touching code in C1 so I'd like >>>>> to ask >>>>> someone from the compiler team to also reviewed this) >>>>> >>>>> Summary: The G1 barriers loads and updates the PrtQueue::_index field. >>>>> This field is a size_t but the C1 version of these barriers aren't >>>>> 64-bit clean. The bug has more details. 
>>>>> >>>>> In addition I've massaged the code a little bit, so that the 32-bit >>>>> and >>>>> 64-bit sections look more similar (and as a bonus I think we avoid an >>>>> extra memory load on 32-bit). >>>>> >>>>> Webrev: http://cr.openjdk.java.net/~pliden/8013171/webrev.0/ >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8013171 >>>>> >>>>> Testing: >>>>> * gc-test-suite on both 32 and 64-bit builds (with -XX:+UseG1GC >>>>> -XX:+TieredCompilation -XX:TieredStopAtLevel=3 -XX:+VerifyAfterGC) >>>>> * Passes jprt >>>> >>>> Looks good, with the following caveats which should be decided by >>>> somebody else if they are important as they are micro-opts: >>>> >>>> - instead of using cmp to compare against zero in a register, it would >>>> be better to use the test instruction (e.g. __ testX(tmp, tmp)) as >>>> it saves >>>> a byte of encoding per instruction with the same effect. >> >> Tighter code is always better. For barriers it might be important in >> tight loops to better fit in the cache. > > I'll make it a testptr(). > >> >>>> >>>> - post barrier stub: I would prefer if the 64 bit code did not >>>> push/pop the rdx register to free tmp. There are explicit rscratch1/2 >>>> registers for temporaries available on that platform. At least >>>> rscratch1 >>>> (=r8) seems to be used without save/restore in the original code >>>> already. >>>> This would also remove the need for 64 bit code to push/pop any >>>> register it >>>> seems to me. >> >> Sounds like a good suggestion if it doesn't complicate the code too much. > > I'd like to avoid reintroducing different code paths for 32 and 64-bit, > which I think complicates the code. However, I can defer the pushing of > tmp until it's actually needed, which essentially gets us to the same > situation as before this change in terms of register usage for 64-bit. > > Updated webrev: http://cr.openjdk.java.net/~pliden/8013171/webrev.2/ Thomas/Roland, are you ok with the latest version? 
/Per From goetz.lindenmaier at sap.com Mon May 4 08:40:47 2015 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 4 May 2015 08:40:47 +0000 Subject: RFR(S): 8078593: [TESTBUG] ppc: Enable jtreg tests for new features Message-ID: <4295855A5C1DE049A61835A1887419CC2CFCC677@DEWDFEMB12A.global.corp.sap> Hi, this patch enables the tests to deal with rtm and mathexact intrinsics on ppc. These were implemented on ppc in "8077838: Recent developments for ppc." Please review this change. I please need a sponsor. http://cr.openjdk.java.net/~goetz/webrevs/8078593-ppcJtreg/webrev.01/ Best regards, Goetz. From erik.joelsson at oracle.com Mon May 4 09:11:12 2015 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Mon, 04 May 2015 11:11:12 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> <55471F54.6090804@oracle.com> Message-ID: <554737B0.7030507@oracle.com> On 2015-05-04 10:25, Staffan Larsen wrote: >> On 4 maj 2015, at 09:54, Magnus Ihse Bursie wrote: >> >> >>> 4 maj 2015 kl. 09:27 skrev David Holmes : >>> >>> Hi Staffan, >>> >>> Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? >> Or maybe as a define in the source code before the include section for the specific source code that needs a legacy version of GetProcessMemoryInfo? That seems more prudent to me. > In this case the function is called the same and works the same in all versions of Windows (there is no ?legacy version?). What differs is that it is linked against different names in different libs for different versions of Windows. So from a source code perspective there is no difference, but from a build perspective there is. I think the placement suggested here is fine. It could be moved to configure for global application, but I doubt it will ever be needed. 
As soon as we drop support for Windows versions older than 7, we can remove this option. /Erik > /Staffan > >> /Magnus >> >>> Thanks, >>> David >>> >>>> On 4/05/2015 4:52 PM, Staffan Larsen wrote: >>>> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. >>>> >>>> I will push the change below directly to jdk9/hs. >>>> >>>> Thanks, >>>> /Staffan >>>> >>>> >>>> diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk >>>> --- a/make/lib/Lib-java.management.gmk >>>> +++ b/make/lib/Lib-java.management.gmk >>>> @@ -38,11 +38,6 @@ >>>> $(LIBJAVA_HEADER_FLAGS) \ >>>> # >>>> >>>> -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>>> -# a binary that is compatible with windows versions older than 7/2008R2. >>>> -# See MSDN documentation for GetProcessMemoryInfo for more information. >>>> -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 >>>> - >>>> LIBMANAGEMENT_OPTIMIZATION := HIGH >>>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >>>> diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk >>>> --- a/make/lib/Lib-jdk.management.gmk >>>> +++ b/make/lib/Lib-jdk.management.gmk >>>> @@ -39,6 +39,11 @@ >>>> $(LIBJAVA_HEADER_FLAGS) \ >>>> # >>>> >>>> +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>>> +# a binary that is compatible with windows versions older than 7/2008R2. >>>> +# See MSDN documentation for GetProcessMemoryInfo for more information. 
>>>> +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 >>>> + >>>> LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH >>>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >>>> From roland.westrelin at oracle.com Mon May 4 09:12:26 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 4 May 2015 11:12:26 +0200 Subject: RFR(s): 8013171: G1: C1 x86_64 barriers use 32-bit accesses to 64-bit PtrQueue::_index In-Reply-To: <55472F6C.30504@oracle.com> References: <5538B2C5.3040404@oracle.com> <1429787817.3334.3.camel@oracle.com> <5540C7CC.2080209@oracle.com> <55472F6C.30504@oracle.com> Message-ID: <163ACD65-678F-4D0C-8A4E-922DD85C42BF@oracle.com> > Thomas/Roland, are you ok with the latest version? Yes, that looks good to me. Roland. From staffan.larsen at oracle.com Mon May 4 09:43:42 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 4 May 2015 11:43:42 +0200 Subject: RFR: JDK-8079248 JDK fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <554737B0.7030507@oracle.com> References: <480D0346-4F6B-492F-B243-6A045FC40193@oracle.com> <55471F54.6090804@oracle.com> <554737B0.7030507@oracle.com> Message-ID: <1B30D60F-7C14-4BB3-875D-181DC47B85BE@oracle.com> Thanks for the reviews - the fix is now in the JPRT queue. /Staffan > On 4 maj 2015, at 11:11, Erik Joelsson wrote: > > > On 2015-05-04 10:25, Staffan Larsen wrote: >>> On 4 maj 2015, at 09:54, Magnus Ihse Bursie wrote: >>> >>> >>>> 4 maj 2015 kl. 09:27 skrev David Holmes : >>>> >>>> Hi Staffan, >>>> >>>> Seems fine as a spot fix but I'm wondering if this shouldn't be a common option for all the dlls now we are building with VS2013? >>> Or maybe as a define in the source code before the include section for the specific source code that needs a legacy version of GetProcessMemoryInfo? That seems more prudent to me. 
>> In this case the function is called the same and works the same in all versions of Windows (there is no ?legacy version?). What differs is that it is linked against different names in different libs for different versions of Windows. So from a source code perspective there is no difference, but from a build perspective there is. > I think the placement suggested here is fine. It could be moved to configure for global application, but I doubt it will ever be needed. As soon as we drop support for Windows versions older than 7, we can remove this option. > > /Erik >> /Staffan >> >>> /Magnus >>> >>>> Thanks, >>>> David >>>> >>>>> On 4/05/2015 4:52 PM, Staffan Larsen wrote: >>>>> This is a P1 bug that surfaced when changes from jdk9/dev and jdk9/hs-rt met in jdk9/hs. In this case the windows compiler upgrades in jdk9/dev met changes in jdk9/hs-rt that moved a call to GetProcessMemoryInfo from management.dll to management_ext.dll. With the compiler upgrades PSAPI_VERSION=1 is needed when compiling the library calling GetProcessMemoryInfo. This fix simply moves that patch from make/lib/Lib-java.management.gmk to make/lib/Lib-jdk.management.gmk. The patch was introduced in JDK-8076557. >>>>> >>>>> I will push the change below directly to jdk9/hs. >>>>> >>>>> Thanks, >>>>> /Staffan >>>>> >>>>> >>>>> diff --git a/make/lib/Lib-java.management.gmk b/make/lib/Lib-java.management.gmk >>>>> --- a/make/lib/Lib-java.management.gmk >>>>> +++ b/make/lib/Lib-java.management.gmk >>>>> @@ -38,11 +38,6 @@ >>>>> $(LIBJAVA_HEADER_FLAGS) \ >>>>> # >>>>> >>>>> -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>>>> -# a binary that is compatible with windows versions older than 7/2008R2. >>>>> -# See MSDN documentation for GetProcessMemoryInfo for more information. 
>>>>> -BUILD_LIBMANAGEMENT_CFLAGS += -DPSAPI_VERSION=1 >>>>> - >>>>> LIBMANAGEMENT_OPTIMIZATION := HIGH >>>>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>>>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) >>>>> diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk >>>>> --- a/make/lib/Lib-jdk.management.gmk >>>>> +++ b/make/lib/Lib-jdk.management.gmk >>>>> @@ -39,6 +39,11 @@ >>>>> $(LIBJAVA_HEADER_FLAGS) \ >>>>> # >>>>> >>>>> +# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate >>>>> +# a binary that is compatible with windows versions older than 7/2008R2. >>>>> +# See MSDN documentation for GetProcessMemoryInfo for more information. >>>>> +BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 >>>>> + >>>>> LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH >>>>> ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) >>>>> ifeq ($(ENABLE_DEBUG_SYMBOLS), true) From volker.simonis at gmail.com Mon May 4 15:45:46 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 4 May 2015 17:45:46 +0200 Subject: RFR(XXS): 8079280: Fix format warning/error in vm_version_ppc.cpp Message-ID: Hi, could you please review the following tiny change: http://cr.openjdk.java.net/~simonis/webrevs/2015/8079280/ https://bugs.openjdk.java.net/browse/JDK-8079280 With newer GCCs we currently get the following error: /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp: In static member function 'static void VM_Version::config_dscr()': /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp:632:98: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'uint32_t* {aka unsigned int*}' 
[-Werror=format=] tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT " before execution:", code); The fix is trivial - just use the "p2i()" helper function to cast the pointers to the appropriate type: tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT " before execution:", p2i(code)); Thank you and best regards, Volker From stefan.karlsson at oracle.com Mon May 4 15:53:33 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 04 May 2015 17:53:33 +0200 Subject: RFR(XXS): 8079280: Fix format warning/error in vm_version_ppc.cpp In-Reply-To: References: Message-ID: <554795FD.8070308@oracle.com> On 2015-05-04 17:45, Volker Simonis wrote: > Hi, > > could you please review the following tiny change: > > http://cr.openjdk.java.net/~simonis/webrevs/2015/8079280/ > https://bugs.openjdk.java.net/browse/JDK-8079280 > > With newer GCCs we currently get the following error: > > /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp: > In static member function ?static void VM_Version::config_dscr()?: > /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp:632:98: > error: format ?%lx? expects argument of type ?long unsigned int?, but > argument 3 has type ?uint32_t* {aka unsigned int*}? [-Werror=format=] > tty->print_cr("Decoding dscr configuration stub at " > INTPTR_FORMAT " before execution:", code); > > The fix is trivial - just use the "p2i()" helper function to cast the > pointers to the appropriate type: > > tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT > " before execution:", p2i(code)); Looks good. 
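[Editorial note: the issue generalizes beyond HotSpot: printf-style format macros like INTPTR_FORMAT expand to a pointer-sized integer conversion, so passing a raw pointer is a type mismatch. Casting through an integer of pointer width, which is essentially all p2i() does, satisfies the format portably. A standalone sketch using the standard <cinttypes> macros; the function names here are illustrative, not HotSpot's:]

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <string>

// Sketch of the pattern: never hand a raw pointer to a %lx-style
// conversion; cast it to an integer of pointer width first and format
// with the matching macro. PRIxPTR plays the role of INTPTR_FORMAT here.
static std::string format_stub_addr(const void* code) {
  char buf[64];
  std::snprintf(buf, sizeof(buf), "stub at 0x%" PRIxPTR,
                (std::uintptr_t)code);  // the p2i()-style cast
  return std::string(buf);
}
```

The cast also makes the call correct on both 32-bit and 64-bit targets, which is exactly the class of warning newer GCCs promote to an error under -Werror=format.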
StefanK > > Thank you and best regards, > Volker From goetz.lindenmaier at sap.com Mon May 4 20:22:40 2015 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Mon, 4 May 2015 20:22:40 +0000 Subject: RFR(XXS): 8079280: Fix format warning/error in vm_version_ppc.cpp In-Reply-To: References: Message-ID: <4295855A5C1DE049A61835A1887419CC2CFCF8EF@DEWDFEMB12A.global.corp.sap> Hi Volker, looks good! Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Volker Simonis Sent: Monday, May 04, 2015 5:46 PM To: HotSpot Open Source Developers Subject: RFR(XXS): 8079280: Fix format warning/error in vm_version_ppc.cpp Hi, could you please review the following tiny change: http://cr.openjdk.java.net/~simonis/webrevs/2015/8079280/ https://bugs.openjdk.java.net/browse/JDK-8079280 With newer GCCs we currently get the following error: /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp: In static member function ?static void VM_Version::config_dscr()?: /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp:632:98: error: format ?%lx? expects argument of type ?long unsigned int?, but argument 3 has type ?uint32_t* {aka unsigned int*}? 
[-Werror=format=] tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT " before execution:", code); The fix is trivial - just use the "p2i()" helper function to cast the pointers to the appropriate type: tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT " before execution:", p2i(code)); Thank you and best regards, Volker From thomas.schatzl at oracle.com Tue May 5 00:16:48 2015 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 04 May 2015 20:16:48 -0400 Subject: RFR(s): 8013171: G1: C1 x86_64 barriers use 32-bit accesses to 64-bit PtrQueue::_index In-Reply-To: <55472F6C.30504@oracle.com> References: <5538B2C5.3040404@oracle.com> <1429787817.3334.3.camel@oracle.com> <5540C7CC.2080209@oracle.com> <55472F6C.30504@oracle.com> Message-ID: <1430785008.4971.0.camel@dhcp-whq-twvpn-1-vpnpool-10-159-155-250.vpn.oracle.com> Hi Per, On Mon, 2015-05-04 at 10:35 +0200, Per Liden wrote: > On 2015-04-29 14:00, Per Liden wrote: > > Hi, > > > > On 2015-04-27 18:20, Christian Thalinger wrote: > >> > >>> On Apr 23, 2015, at 9:40 AM, Per Liden wrote: > >>> [...] > >>>> - post barrier stub: I would prefer if the 64 bit code did not > >>>> push/pop the rdx register to free tmp. There are explicit rscratch1/2 > >>>> registers for temporaries available on that platform. At least > >>>> rscratch1 > >>>> (=r8) seems to be used without save/restore in the original code > >>>> already. > >>>> This would also remove the need for 64 bit code to push/pop any > >>>> register it > >>>> seems to me. > >> > >> Sounds like a good suggestion if it doesn?t complicate the code too much. > > > > I'd like to avoid reintroducing different code paths for 32 and 64-bit, > > which I think complicates the code. However, I can defer the pushing of > > tmp until it's actually needed, which essentially gets us to the same > > situation as before this change in terms of register usage for 64-bit. 
> > > > Updated webrev: http://cr.openjdk.java.net/~pliden/8013171/webrev.2/ > > Thomas/Roland, are you ok with the latest version? > I am okay with it. Thanks, Thomas From vladimir.kozlov at oracle.com Tue May 5 01:57:56 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 04 May 2015 18:57:56 -0700 Subject: RFR(S): 8078593: [TESTBUG] ppc: Enable jtreg tests for new features In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CFCC677@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CFCC677@DEWDFEMB12A.global.corp.sap> Message-ID: <554823A4.3060803@oracle.com> Looks good. I will push it into hs-comp. Thanks, Vladimir On 5/4/15 1:40 AM, Lindenmaier, Goetz wrote: > Hi, > > this patch enables the tests to deal with rtm and mathexact intrinsics on ppc. > These were implemented on ppc in "8077838: Recent developments for ppc." > > Please review this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8078593-ppcJtreg/webrev.01/ > > Best regards, > Goetz. > From goetz.lindenmaier at sap.com Tue May 5 05:35:27 2015 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Tue, 5 May 2015 05:35:27 +0000 Subject: RFR(S): 8078593: [TESTBUG] ppc: Enable jtreg tests for new features In-Reply-To: <554823A4.3060803@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CFCC677@DEWDFEMB12A.global.corp.sap> <554823A4.3060803@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CFD4223@DEWDFEMB12A.global.corp.sap> Hi Vladimir, Thanks a lot! Best regards, Goetz. -----Original Message----- From: Vladimir Kozlov [mailto:vladimir.kozlov at oracle.com] Sent: Tuesday, May 05, 2015 3:58 AM To: Lindenmaier, Goetz; hotspot-dev at openjdk.java.net Subject: Re: RFR(S): 8078593: [TESTBUG] ppc: Enable jtreg tests for new features Looks good. I will push it into hs-comp. Thanks, Vladimir On 5/4/15 1:40 AM, Lindenmaier, Goetz wrote: > Hi, > > this patch enables the tests to deal with rtm and mathexact intrinsics on ppc. 
> These were implemented on ppc in "8077838: Recent developments for ppc." > > Please review this change. I please need a sponsor. > http://cr.openjdk.java.net/~goetz/webrevs/8078593-ppcJtreg/webrev.01/ > > Best regards, > Goetz. > From per.liden at oracle.com Tue May 5 09:34:41 2015 From: per.liden at oracle.com (Per Liden) Date: Tue, 05 May 2015 11:34:41 +0200 Subject: RFR(s): 8013171: G1: C1 x86_64 barriers use 32-bit accesses to 64-bit PtrQueue::_index In-Reply-To: <163ACD65-678F-4D0C-8A4E-922DD85C42BF@oracle.com> References: <5538B2C5.3040404@oracle.com> <1429787817.3334.3.camel@oracle.com> <5540C7CC.2080209@oracle.com> <55472F6C.30504@oracle.com> <163ACD65-678F-4D0C-8A4E-922DD85C42BF@oracle.com> Message-ID: <55488EB1.70301@oracle.com> Thanks for reviewing Roland. /Per On 2015-05-04 11:12, Roland Westrelin wrote: >> Thomas/Roland, are you ok with the latest version? > > Yes, that looks good to me. > > Roland. > From per.liden at oracle.com Tue May 5 09:35:20 2015 From: per.liden at oracle.com (Per Liden) Date: Tue, 05 May 2015 11:35:20 +0200 Subject: RFR(s): 8013171: G1: C1 x86_64 barriers use 32-bit accesses to 64-bit PtrQueue::_index In-Reply-To: <1430785008.4971.0.camel@dhcp-whq-twvpn-1-vpnpool-10-159-155-250.vpn.oracle.com> References: <5538B2C5.3040404@oracle.com> <1429787817.3334.3.camel@oracle.com> <5540C7CC.2080209@oracle.com> <55472F6C.30504@oracle.com> <1430785008.4971.0.camel@dhcp-whq-twvpn-1-vpnpool-10-159-155-250.vpn.oracle.com> Message-ID: <55488ED8.4040704@oracle.com> On 2015-05-05 02:16, Thomas Schatzl wrote: > Hi Per, > > On Mon, 2015-05-04 at 10:35 +0200, Per Liden wrote: >> On 2015-04-29 14:00, Per Liden wrote: >>> Hi, >>> >>> On 2015-04-27 18:20, Christian Thalinger wrote: >>>> >>>>> On Apr 23, 2015, at 9:40 AM, Per Liden wrote: >>>>> > [...] >>>>>> - post barrier stub: I would prefer if the 64 bit code did not >>>>>> push/pop the rdx register to free tmp. 
There are explicit rscratch1/2 >>>>>> registers for temporaries available on that platform. At least >>>>>> rscratch1 >>>>>> (=r8) seems to be used without save/restore in the original code >>>>>> already. >>>>>> This would also remove the need for 64 bit code to push/pop any >>>>>> register it >>>>>> seems to me. >>>> >>>> Sounds like a good suggestion if it doesn't complicate the code too much. >>> >>> I'd like to avoid reintroducing different code paths for 32 and 64-bit, >>> which I think complicates the code. However, I can defer the pushing of >>> tmp until it's actually needed, which essentially gets us to the same >>> situation as before this change in terms of register usage for 64-bit. >>> >>> Updated webrev: http://cr.openjdk.java.net/~pliden/8013171/webrev.2/ >> >> Thomas/Roland, are you ok with the latest version? >> > > I am okay with it. Thanks for reviewing Thomas. /Per From edward.nevill at linaro.org Tue May 5 14:23:41 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Tue, 05 May 2015 15:23:41 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <20150501142357.GD28975@leverpostej> References: <20150501142357.GD28975@leverpostej> Message-ID: <1430835821.14347.20.camel@mylittlepony.linaroharston> On Fri, 2015-05-01 at 15:23 +0100, Mark Rutland wrote: > From a look at the proposed patch, I guess this assumes a uniform SMP > system (i.e. no big.LITTLE)? Not quite: The assumption is that any CPU implementer == ARM could be big little and therefore could contain A53 and therefore the A53 feature must be enabled.
> Nonetheless, we must still cater for olde style /proc/cpuinfo. I have modified the webrev below so that if it is a new style /proc/cpuinfo, it will only enable the A53 feature if it can positively identify an A53 core. Otherwise, if it is an old style /proc/cpuinfo it will assume the A53 feature needs enabling if it finds an A57 (or, of course if it is identified as an A53). http://cr.openjdk.java.net/~enevill/8079203/webrev.01 > Thanks, > Mark. You're welcome. It would be much easier if the kernel exported this information in a machine readable form. I hate having to grub around in /proc/cpuinfo to find this information, and I know that others in the OpenJDK community hate this also. It's fragile and non-portable. Why can the kernel not make MIDR readable at EL0 and provide a HWCAP_BIGLITTLE in auxv? All the best, Ed. From edward.nevill at linaro.org Tue May 5 14:29:38 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Tue, 05 May 2015 15:29:38 +0100 Subject: RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <5543AC6A.4000006@oracle.com> References: <5543AC6A.4000006@oracle.com> Message-ID: <1430836178.14347.25.camel@mylittlepony.linaroharston> On Fri, 2015-05-01 at 09:40 -0700, Vladimir Kozlov wrote: > Wow, dependence on cpuinfo format. We are struggling with PICL and KSTAT format versions on SPARC. I hope Linux have > more fixed cpuinfo format. > > Should you add this info to _features_str? > Good idea. This will be very useful for debugging those hs_err logs. I have added this in the latest webrev http://cr.openjdk.java.net/~enevill/8079203/webrev.01 The cpuinfo is just dumped in the features string as a sequence of hex values. Eg 0x41:0x0:0xd03:0(0xd07), simd, crc, aes, sha1, sha256 Where the fields are Implementer:Variant:Part:Revision(Part2) Part2 is only printed in the case of a big little (heterogeneous) system. All the best, Ed.
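[Editor's note: since the thread keeps returning to how fragile /proc/cpuinfo parsing is, here is a minimal standalone sketch of what such a parser has to do. This is illustrative only, not the code in the webrev; the function name and matching rules are assumptions.]

```cpp
#include <cstdlib>
#include <sstream>
#include <string>

// Illustrative sketch only -- not the webrev code. New-style /proc/cpuinfo
// repeats "CPU implementer"/"CPU part" once per core, so we remember the
// implementer seen most recently and check every part line against it.
// Cortex-A53 is implementer 0x41 (ARM), part 0xd03.
bool contains_a53(const std::string& cpuinfo) {
  std::istringstream in(cpuinfo);
  std::string line;
  long implementer = -1;
  bool a53 = false;
  while (std::getline(in, line)) {
    size_t colon = line.find(':');
    if (colon == std::string::npos) continue;
    // The key is everything before the first tab or colon.
    std::string key = line.substr(0, line.find_first_of("\t:"));
    long value = strtol(line.c_str() + colon + 1, NULL, 0); // base 0 accepts 0x..
    if (key == "CPU implementer") implementer = value;
    if (key == "CPU part" && implementer == 0x41 && value == 0xd03) a53 = true;
  }
  return a53;
}
```

On an old-style kernel the same scan sees only a single implementer/part pair, which is why the webrev additionally falls back to enabling the A53 workaround when it finds an A57.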
From aph at redhat.com Tue May 5 14:52:03 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 05 May 2015 15:52:03 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning Message-ID: <5548D913.1030507@redhat.com> I've added StoreLoad barriers everywhere they're needed. http://cr.openjdk.java.net/~aph/8079315/ This patch depends on the patch for 8078438, which is still not committed. http://cr.openjdk.java.net/~shade/8078438/webrev.02/ This is x86 only, will do AArch64 parts in a separate patch once we've agreed on this. Andrew. From aph at redhat.com Tue May 5 14:57:17 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 05 May 2015 15:57:17 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <20150505145254.GA23758@leverpostej> References: <20150501142357.GD28975@leverpostej> <1430835821.14347.20.camel@mylittlepony.linaroharston> <20150505145254.GA23758@leverpostej> Message-ID: <5548DA4D.2090305@redhat.com> On 05/05/2015 03:52 PM, Mark Rutland wrote: > We're looking into exposing such information to userspace in a more > structured manner at the moment, but we don't have an RFC just yet. I > take it that you would be interested when that appears? You're not kidding. This is something we need desperately, some time last year. JIT compilers really are different to all other userspace programs in this regard: we really do need to know exact models, pipelines, and so on, in order to generate best code. In some cases we even need to know minor CPU versions to work around bugs. We really don't want to have to parse /proc/cpuinfo. Andrew.
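[Editor's note: for feature bits, as opposed to the model/MIDR information being asked for above, the kernel already exposes a structured interface via the ELF auxiliary vector. A small sketch follows; the AArch64 HWCAP bit values are copied in so it compiles anywhere, but on aarch64 they come from <asm/hwcap.h>.]

```cpp
#include <sys/auxv.h>  // getauxval, AT_HWCAP (glibc/Linux)

// AArch64 HWCAP bits as published by the kernel ABI; duplicated here so the
// sketch is self-contained rather than relying on <asm/hwcap.h>.
const unsigned long HWCAP_AES_BIT   = 1UL << 3;
const unsigned long HWCAP_SHA1_BIT  = 1UL << 5;
const unsigned long HWCAP_SHA2_BIT  = 1UL << 6;
const unsigned long HWCAP_CRC32_BIT = 1UL << 7;

// Test a capability word; pass getauxval(AT_HWCAP) for the live value.
bool has_cap(unsigned long hwcaps, unsigned long bit) {
  return (hwcaps & bit) != 0;
}
```

This is exactly the shape a hypothetical HWCAP_BIGLITTLE bit would take -- but auxv carries no implementer/part/variant data, which is why the JIT is stuck parsing /proc/cpuinfo for that.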
From vitalyd at gmail.com Tue May 5 15:02:51 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 5 May 2015 11:02:51 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548D913.1030507@redhat.com> References: <5548D913.1030507@redhat.com> Message-ID: Andrew, I realize this is a correctness fix, but isn't this going to possibly defeat any perf gain from using conditional card marking in the first place (for CMS)? Didn't someone suggest a different approach that allows store-store to be used still? On Tue, May 5, 2015 at 10:52 AM, Andrew Haley wrote: > I've added StoreLoad barriers everywhere they're needed. > > http://cr.openjdk.java.net/~aph/8079315/ > > This patch depends on the patch for 8078438, which is still not > committed. http://cr.openjdk.java.net/~shade/8078438/webrev.02/ > > This is x86 only, will do AArch64 parts in a separate patch once > we've agreed on this. > > Andrew. > From aph at redhat.com Tue May 5 15:18:51 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 05 May 2015 16:18:51 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> Message-ID: <5548DF5B.7000404@redhat.com> Hi, On 05/05/2015 04:02 PM, Vitaly Davidovich wrote: > I realize this is a correctness fix, but isn't this going to possibly > defeat any perf gain from using conditional card marking in the first place > (for CMS)? It shouldn't do. The idea of the UseCondCardMark AIUI is to reduce thrashing of card table cache lines across cores. StoreLoad, while possibly slow, should still be a local operation. So it all depends on what we're trying to optimize: it's still the right thing to use on a many-core machine, perhaps not for something smaller. > Didn't someone suggest a different approach that allows > store-store to be used still? 
Mikael wrote: > For UseCondCardMark to be correct with CMS with precleaning it would > require a StoreLoad between the field write and the card table load. > > Another approach to solve this partially would be to change the > condition for the conditional card mark from an equality test against 0x0 > (dirty_card_val) > to a negated equality test of 0xff (clean_card_val) > > In that case we would slightly reduce the number of false-sharing > inducing card writes and still survive the precleaning phase since > precleaning sets the card to 0x1 (precleaned_card_val). But I'm at a loss to understand how that helps with the problem of reading an old value for the card mark. There still has to be some sort of happens-before relationship between something and the read of the card mark. What good would a store-store do here? Andrew. From vladimir.kozlov at oracle.com Tue May 5 15:19:07 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 05 May 2015 08:19:07 -0700 Subject: RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <1430836178.14347.25.camel@mylittlepony.linaroharston> References: <5543AC6A.4000006@oracle.com> <1430836178.14347.25.camel@mylittlepony.linaroharston> Message-ID: <5548DF6B.7000805@oracle.com> Looks good. Thanks, Vladimir On 5/5/15 7:29 AM, Edward Nevill wrote: > On Fri, 2015-05-01 at 09:40 -0700, Vladimir Kozlov wrote: >> Wow, dependence on cpuinfo format. We are struggling with PICL and KSTAT format versions on SPARC. I hope Linux have >> more fixed cpuinfo format. >> >> Should you add this info to _features_str? >> > > Good idea. This will be very useful for debugging those hs_err logs. > > I have added this in the latest webrev > > http://cr.openjdk.java.net/~enevill/8079203/webrev.01 > > The cpuinfo is just dumped in the features string as a sequence of hex > values.
Eg > > 0x41:0x0:0xd03:0(0xd07), simd, crc, aes, sha1, sha256 > > Where the fields are > > Implementer:Variant:Part:Revision(Part2) > > Part2 is only printed in the case of a big little (heterogeneous) system. > > All the best, > Ed. > From vitalyd at gmail.com Tue May 5 15:36:50 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 5 May 2015 11:36:50 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548DF5B.7000404@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> Message-ID: > > But I'm at a loss to understand how that helps with the problem of > reading an old value for the card mark. There still has to be some > sort of happens-before relationship between something and the read of > the card mark. What good would a store-store do here? My understanding was that the card can be in 1 of 3 states: clean, dirty, precleaned. Right now if mutator's read of the card floats above the store and it sees a stale "dirty" (when the card has actually been set to "precleaned"), it will not mark the card as dirty and the write is lost. I think Mikael's idea was that by changing the mutator to check for "!= clean", it will automatically cover either "dirty" or "precleaned", and the 3rd state ("clean") can't happen since CMS preclean thread does not set it to such. So although you're right in the general sense of there needing to be a happens-before relationship, I think this is a targeted suggestion given the protocol involved. Or, of course, I misunderstood Mikael's rationale/suggestion. As for performance, I don't know the impact, hence "possibly defeat" in my email. Intuitively, putting a store-load after every ref write (well, ones not elided) doesn't sound cheap, even if it's a core-local operation.
On Tue, May 5, 2015 at 11:18 AM, Andrew Haley wrote: > Hi, > > On 05/05/2015 04:02 PM, Vitaly Davidovich wrote: > > > I realize this is a correctness fix, but isn't this going to possibly > > defeat any perf gain from using conditional card marking in the first > place > > (for CMS)? > > It shouldn't do. The idea of the UseCondCardMark AIUI is to reduce > thrashing of card table cache lines across cores. StoreLoad, while > possibly slow, should still be a local operation. So it all depends > on what we're trying to optimize: it's still the right thing to use > on a many-core machine, perhaps not for something smaller. > > > Didn't someone suggest a different approach that allows > > store-store to be used still? > > Mikael wrote: > > > For UseCondCardMark to be correct with CMS with precleaning it would > > require a StoreLoad between the field write and the card table load. > > > > Another approach to solve this partially would be to change the > > condition for the conditional card mark from an equality test against 0x0 > > (dirty_card_val) > > to a negated equality test of 0xff (clean_card_val) > > > > In that case we would slightly reduce the number of false-sharing > > inducing card writes and still survive the precleaning phase since > > precleaning sets the card to 0x1 (precleaned_card_val). > > But I'm at a loss to understand how that helps with the problem of > reading an old value for the card mark. There still has to be some > sort of happens-before relationship between something and the read of > the card mark. What good would a store-store do here? > > Andrew.
> From aph at redhat.com Tue May 5 16:20:57 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 05 May 2015 17:20:57 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> Message-ID: <5548EDE9.8090803@redhat.com> On 05/05/2015 04:36 PM, Vitaly Davidovich wrote: >> >> But I'm at a loss to understand how that helps with the problem of >> reading an old value for the card mark. There still has to be some >> sort of happens-before relationship between something and the read of >> the card mark. What good would a store-store do here? > > My understanding was that the card can be in 1 of 3 states: clean, > dirty, precleaned. Right now if mutator's read of the card floats > above the store and it sees "dirty" (e.g. it read "precleaned"), it > will not mark the card as dirty and the write is lost. I think > Mikael's idea was that by changing the mutator to check for "!= > clean", it will automatically cover either "dirty" or "precleaned", > and the 3rd state ("clean") can't happen since CMS preclean thread > does not set it to such. So although you're right in the general > sense of there needing to be a happens-before relationship, I think > this is a targeted suggestion given the protocol involved. Or, of > course, I misunderstood Mikael's rationale/suggestion. Well, yeah, alright. I'd be happy to submit a patch which does that if I had any idea that it was correct. Mikael's language about a "partial solution" worries me. Andrew. 
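[Editor's note: for readers following along, the three variants under discussion can be sketched as plain C++. This is illustrative pseudo-barrier code, not HotSpot's generated barriers; the constants mirror the 0x0/0x1/0xff values quoted above, and whether leaving a precleaned card untouched is actually safe is exactly what the thread goes on to debate.]

```cpp
#include <atomic>

// Card values quoted in the thread: dirty 0x0, precleaned 0x1, clean 0xff.
enum CardVal : unsigned char {
  dirty_card      = 0x00,
  precleaned_card = 0x01,
  clean_card      = 0xff
};

// Current conditional barrier: skip the store iff the card already reads
// dirty. A stale "dirty" read loses the mark if the collector has since
// precleaned the card -- the bug at hand.
void post_barrier_current(unsigned char* card) {
  if (*card != dirty_card) *card = dirty_card;
}

// Andrew's fix: a StoreLoad fence between the reference store (not shown)
// and the card load, so the load cannot float above the store.
void post_barrier_storeload(unsigned char* card) {
  std::atomic_thread_fence(std::memory_order_seq_cst); // StoreLoad
  if (*card != dirty_card) *card = dirty_card;
}

// Mikael's alternative: skip whenever the card reads non-clean, i.e. dirty
// the card only when it reads clean. A precleaned card is left for the
// collector's rescan, so no StoreLoad is needed -- the "partial" caveat
// being debated.
void post_barrier_negated(unsigned char* card) {
  if (*card == clean_card) *card = dirty_card;
}
```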
From aleksey.shipilev at oracle.com Tue May 5 16:53:49 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 05 May 2015 19:53:49 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548EDE9.8090803@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> Message-ID: <5548F59D.6090008@oracle.com> On 05.05.2015 19:20, Andrew Haley wrote: > On 05/05/2015 04:36 PM, Vitaly Davidovich wrote: >>> >>> But I'm at a loss to understand how that helps with the problem of >>> reading an old value for the card mark. There still has to be some >>> sort of happens-before relationship between something and the read of >>> the card mark. What good would a store-store do here? >> >> My understanding was that the card can be in 1 of 3 states: clean, >> dirty, precleaned. Right now if mutator's read of the card floats >> above the store and it sees "dirty" (e.g. it read "precleaned"), it >> will not mark the card as dirty and the write is lost. I think >> Mikael's idea was that by changing the mutator to check for "!= >> clean", it will automatically cover either "dirty" or "precleaned", >> and the 3rd state ("clean") can't happen since CMS preclean thread >> does not set it to such. So although you're right in the general >> sense of there needing to be a happens-before relationship, I think >> this is a targeted suggestion given the protocol involved. Or, of >> course, I misunderstood Mikael's rationale/suggestion. > > Well, yeah, alright. I'd be happy to submit a patch which does that > if I had any idea that it was correct. Mikael's language about a > "partial solution" worries me. I think it is misnomer to talk about happens-before at this level. Just checking: are we assuming that collector recovers from mutator blindly flipping the card to "dirty", even though collector's precleaning updates the card to "precleaned"? 
That seems to be the case, since otherwise everything is broken even without UseCondCardMark. Mikael's suggestion seems correct to fix the issue at hand: we need to maintain the order of (store, cardmark-dirty) as detected by collector -- this requires StoreStore. Plus, mutator should care about "precleaned" state, thus changing the tested predicate to "!clean". But I am suspicious about the whole interaction between mutator and collector. Cautiously speaking, every time I see a conditional update in concurrent code, I expect some form of atomic CAS that provides a global order w.r.t. the particular memory location. What exactly happens when collector transitions the card to "clean"? How does the mutator see the proper "clean" value for the card mark? Can it miss the card mark update with UseCondCardMark because it saw the "outdated" card mark value? Thanks, -Aleksey From aph at redhat.com Tue May 5 19:34:23 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 05 May 2015 20:34:23 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548F59D.6090008@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> Message-ID: <55491B3F.6000501@redhat.com> On 05/05/2015 05:53 PM, Aleksey Shipilev wrote: > > But I am suspicious about the whole interaction between mutator and collector. Cautiously speaking, every time I see a conditional update in concurrent code, I expect some form of atomic CAS that provides a global order w.r.t. the particular memory location. > > What exactly happens when collector transitions the card to "clean"? How does the mutator see the proper "clean" value for the card mark? Can it miss the card mark update with UseCondCardMark because it saw the "outdated" card mark value?
But my attitude is coloured by some of the hardware I've been using recently, where some stale values in cache I've seen can only be explained by missing updates that happened not microseconds nor even milliseconds, but actual seconds ago. And so I know this: whatever is not forbidden will happen, and more often than I expect! Andrew. From vitalyd at gmail.com Tue May 5 19:51:47 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 5 May 2015 15:51:47 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548F59D.6090008@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> Message-ID: If mutator doesn't see "clean" due to staleness, won't it just mark it dirty "unnecessarily" using Mikael's suggestion? Current code, however, is less clear. Is the load of the card in the mutator an acquiring load or at least C++ volatile? sent from my phone On May 5, 2015 12:53 PM, "Aleksey Shipilev" wrote: > On 05.05.2015 19:20, Andrew Haley wrote: > > On 05/05/2015 04:36 PM, Vitaly Davidovich wrote: > >>> > >>> But I'm at a loss to understand how that helps with the problem of > >>> reading an old value for the card mark. There still has to be some > >>> sort of happens-before relationship between something and the read of > >>> the card mark. What good would a store-store do here? > >> > >> My understanding was that the card can be in 1 of 3 states: clean, > >> dirty, precleaned. Right now if mutator's read of the card floats > >> above the store and it sees "dirty" (e.g. it read "precleaned"), it > >> will not mark the card as dirty and the write is lost. I think > >> Mikael's idea was that by changing the mutator to check for "!= > >> clean", it will automatically cover either "dirty" or "precleaned", > >> and the 3rd state ("clean") can't happen since CMS preclean thread > >> does not set it to such. 
So although you're right in the general > >> sense of there needing to be a happens-before relationship, I think > >> this is a targeted suggestion given the protocol involved. Or, of > >> course, I misunderstood Mikael's rationale/suggestion. > > > > Well, yeah, alright. I'd be happy to submit a patch which does that > > if I had any idea that it was correct. Mikael's language about a > > "partial solution" worries me. > > I think it is misnomer to talk about happens-before at this level. > > Just checking: are we assuming that collector recovers from mutator > blindly flipping the card to "dirty", even though collector's > precleaning updates the card to "precleaned"? That seems to be the case, > since otherwise everything is broken even without UseCondCardMark. > > Mikael suggestion seems correct to fix the issue at hand: we need to > maintain the order of (store, cardmark-dirty) as detected by collector > -- this requires StoreStore. Plus, mutator should care about > "precleaned" state, thus changing the tested predicate to "!clean". > > But I am suspicious about the whole interaction between mutator and > collector. Cautiously speaking, every time I see a conditional update in > a concurrent code, I expect some form of atomic CAS that provides a > global order w.r.t. the particular memory location. > > What exactly happens when collector transits card to "clean"? How does > the mutator sees the proper "clean" value for the card mark? Can it miss > the card mark update with UseCondCardMark because it saw the "outdated" > card mark value? 
> > Thanks, > -Aleksey > > From vitalyd at gmail.com Tue May 5 19:54:15 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 5 May 2015 15:54:15 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55491B3F.6000501@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <55491B3F.6000501@redhat.com> Message-ID: I'm intrigued - what hardware delays writes in the seconds? sent from my phone On May 5, 2015 3:34 PM, "Andrew Haley" wrote: > On 05/05/2015 05:53 PM, Aleksey Shipilev wrote: > > > > But I am suspicious about the whole interaction between mutator and > collector. Cautiously speaking, every time I see a conditional update in a > concurrent code, I expect some form of atomic CAS that provides a global > order w.r.t. the particular memory location. > > > > What exactly happens when collector transits card to "clean"? How does > the mutator sees the proper "clean" value for the card mark? Can it miss > the card mark update with UseCondCardMark because it saw the "outdated" > card mark value? > > That's what worries me too. But my attitude is coloured by some of > the hardware I've been using recently, where some stale values in > cache I've seen can only be explained by missing updates that happened > not microseconds nor even milliseconds, but actual seconds ago. And > so I know this: whatever is not forbidden will happen, and more often > than I expect! > > Andrew. 
> From mark.rutland at arm.com Tue May 5 14:52:55 2015 From: mark.rutland at arm.com (Mark Rutland) Date: Tue, 5 May 2015 15:52:55 +0100 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <1430835821.14347.20.camel@mylittlepony.linaroharston> References: <20150501142357.GD28975@leverpostej> <1430835821.14347.20.camel@mylittlepony.linaroharston> Message-ID: <20150505145254.GA23758@leverpostej> On Tue, May 05, 2015 at 03:23:41PM +0100, Edward Nevill wrote: > On Fri, 2015-05-01 at 15:23 +0100, Mark Rutland wrote: > > > From a look at the proposed patch, I guess this assumes a uniform SMP > > system (i.e. no big.LITTLE)? > > Not quite: The assumption is that any CPU implementer == ARM could be > big little and therefore could contain A53 and therefore the A53 > feature must be enabled. Ah, I see. > > A while back [1] the /proc/cpuinfo format was fixed to show information per-cpu > > (example from a Juno system below), and it looks like the CPU information > > parsed will only refer to the final CPU listed, and may not be representative > > of the system as a whole. > > > > That format has been backported to the various stable kernels too. > > > > Nonetheless, we must still cater for olde style /proc/cpuinfo. I have > modified the webrev below so that if it is a new style /proc/cpuinfo, > it will only enable the A53 feature if it can positively identify an > A53 core. Great! > Otherwise, if it is an old style /proc/cpuinfo it will assume the A53 > feature needs enabling if it finds an A57 (or, of course if it is > identified as an A53). Sure. The old format is something that will hopefully die off eventually, but unfortunately we're stuck with it for the time being. > http://cr.openjdk.java.net/~enevill/8079203/webrev.01 > > > Thanks, > > Mark. > > You're welcome. It would be much easier if the kernel exported this > information in a machine readable form.
I hate having to grub around > in /proc/cpuinfo to find this information, and I know that others in > the OpenJDK community hate this also. It's fragile and non-portable. > > Why can the kernel not make MIDR readable at EL0 and provide a > HWCAP_BIGLITTLE in auxv? We're looking into exposing such information to userspace in a more structured manner at the moment, but we don't have an RFC just yet. I take it that you would be interested when that appears? Is there anything in particular other than MIDR that you'd like to see exposed? Thanks, Mark. From kim.barrett at oracle.com Wed May 6 01:25:04 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 5 May 2015 21:25:04 -0400 Subject: RFR(XXS): 8079280: Fix format warning/error in vm_version_ppc.cpp In-Reply-To: References: Message-ID: <7121C96C-715E-4106-8231-CDE56CF5DA88@oracle.com> On May 4, 2015, at 11:45 AM, Volker Simonis wrote: > > Hi, > > could you please review the following tiny change: > > http://cr.openjdk.java.net/~simonis/webrevs/2015/8079280/ > https://bugs.openjdk.java.net/browse/JDK-8079280 > > With newer GCCs we currently get the following error: > > /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp: > In static member function 'static void VM_Version::config_dscr()': > /usr/work/d046063/OpenJDK/jdk9-hs-comp/hotspot/src/cpu/ppc/vm/vm_version_ppc.cpp:632:98: > error: format '%lx' expects argument of type 'long unsigned int', but > argument 3 has type 'uint32_t* {aka unsigned int*}' [-Werror=format=] > tty->print_cr("Decoding dscr configuration stub at " > INTPTR_FORMAT " before execution:", code); > > The fix is trivial - just use the "p2i()" helper function to cast the > pointers to the appropriate type: > > tty->print_cr("Decoding dscr configuration stub at " INTPTR_FORMAT > " before execution:", p2i(code)); > > Thank you and best regards, > Volker Looks good. 
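Returning to the /proc/cpuinfo parsing discussed earlier in the thread: the kind of scan the webrev performs can be sketched as below. This is a hypothetical standalone Java version, not the actual vm_version_aarch64 code; the implementer value 0x41 (ARM) and part numbers 0xd03 (Cortex-A53) / 0xd07 (Cortex-A57) come from ARM's MIDR documentation, and the conservative A57-implies-possible-A53 handling mirrors the old-format behaviour described above:

```java
import java.util.List;

public class CpuInfoScan {
    static final int IMPLEMENTER_ARM = 0x41;
    static final int PART_A53 = 0xd03;
    static final int PART_A57 = 0xd07;

    // Returns true if any listed core is an A53, or an A57 that may be paired
    // with A53s in a big.LITTLE system (the conservative old-format assumption).
    static boolean needsA53Workarounds(List<String> cpuinfoLines) {
        int implementer = -1;
        for (String line : cpuinfoLines) {
            String[] kv = line.split(":", 2);
            if (kv.length != 2) continue;
            String key = kv[0].trim();
            String val = kv[1].trim();
            if (key.equals("CPU implementer")) {
                implementer = Integer.decode(val); // e.g. "0x41"
            } else if (key.equals("CPU part") && implementer == IMPLEMENTER_ARM) {
                int part = Integer.decode(val);    // e.g. "0xd03"
                if (part == PART_A53 || part == PART_A57) return true;
            }
        }
        return false;
    }
}
```

Note how fragile this is — exactly the complaint above: the key names and hex formats are kernel conventions, not a stable ABI.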
From jcm at redhat.com Wed May 6 03:45:48 2015 From: jcm at redhat.com (Jon Masters) Date: Tue, 05 May 2015 23:45:48 -0400 Subject: [aarch64-port-dev ] RFR: 8079203: aarch64: need to cater for different partner implementations In-Reply-To: <5548DA4D.2090305@redhat.com> References: <20150501142357.GD28975@leverpostej> <1430835821.14347.20.camel@mylittlepony.linaroharston> <20150505145254.GA23758@leverpostej> <5548DA4D.2090305@redhat.com> Message-ID: <55498E6C.2080400@redhat.com> On 05/05/2015 10:57 AM, Andrew Haley wrote: > On 05/05/2015 03:52 PM, Mark Rutland wrote: >> We're looking into exposing such information to userspace in a more >> structured manner at the moment, but we don't have an RFC just yet. I >> take it that you would be interested when that appears? > > You're not kidding. This is something we need desperately, some time > last year. JIT compilers really are different to all other userspace > programs in this regard: we really do need to know exact models, > pipelines, and so on, in order to generate best code. In some cases > we even need to know minor CPU versions to work around bugs. We really > don't want to have to parse /proc/cpuinfo. I've got an action to get a Linaro card (development request) setup to provide this info as an AUXVEC into userspace in some sane way. I'll ping Mark and see what he has in mind before proceeding. Jon. From aph at redhat.com Wed May 6 08:53:21 2015 From: aph at redhat.com (Andrew Haley) Date: Wed, 06 May 2015 09:53:21 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> Message-ID: <5549D681.3000806@redhat.com> On 05/05/15 20:51, Vitaly Davidovich wrote: > If mutator doesn't see "clean" due to staleness, won't it just mark it > dirty "unnecessarily" using Mikael's suggestion? No. The mutator may see a stale "dirty" and not write anything. 
At least I haven't seen anything which certainly will prevent that from happening. Andrew. From mikael.gerdin at oracle.com Wed May 6 09:18:55 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 06 May 2015 11:18:55 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5549D681.3000806@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> Message-ID: <5549DC7F.9030903@oracle.com> On 2015-05-06 10:53, Andrew Haley wrote: > On 05/05/15 20:51, Vitaly Davidovich wrote: >> If mutator doesn't see "clean" due to staleness, won't it just mark it >> dirty "unnecessarily" using Mikael's suggestion? > > No. The mutator may see a stale "dirty" and not write anything. At least > I haven't seen anything which certainly will prevent that from happening. I think you are correct. My suggestion does not solve the problem. /Mikael > > Andrew. > > From staffan.larsen at oracle.com Wed May 6 09:24:58 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 6 May 2015 11:24:58 +0200 Subject: RFR: 8079345: After 8079248 fixed JDK still fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" Message-ID: My fix for 8079248 was broken, so here is a new attempt. I intend to push this directly to jdk9/hs since that is where 8079248 was pushed. bug: https://bugs.openjdk.java.net/browse/JDK-8079345#comment-13638237 webrev: http://cr.openjdk.java.net/~sla/8079345/webrev.00 Thanks, /Staffan From erik.joelsson at oracle.com Wed May 6 09:39:34 2015 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Wed, 06 May 2015 11:39:34 +0200 Subject: RFR: 8079345: After 8079248 fixed JDK still fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: References: Message-ID: <5549E156.2020600@oracle.com> This one looks better. 
Sorry for not spotting the problem in the previous review. /Erik On 2015-05-06 11:24, Staffan Larsen wrote: > My fix for 8079248 was broken, so here is a new attempt. I intend to push this directly to jdk9/hs since that is where 8079248 was pushed. > > bug: https://bugs.openjdk.java.net/browse/JDK-8079345#comment-13638237 > webrev: http://cr.openjdk.java.net/~sla/8079345/webrev.00 > > Thanks, > /Staffan From magnus.ihse.bursie at oracle.com Wed May 6 09:46:58 2015 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Wed, 06 May 2015 11:46:58 +0200 Subject: RFR: 8079345: After 8079248 fixed JDK still fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <5549E156.2020600@oracle.com> References: <5549E156.2020600@oracle.com> Message-ID: <5549E312.3050703@oracle.com> On 2015-05-06 11:39, Erik Joelsson wrote: > This one looks better. Sorry for not spotting the problem in the > previous review. > > /Erik > > On 2015-05-06 11:24, Staffan Larsen wrote: >> My fix for 8079248 was broken, so here is a new attempt. I intend to >> push this directly to jdk9/hs since that is where 8079248 was pushed. >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8079345#comment-13638237 >> webrev: http://cr.openjdk.java.net/~sla/8079345/webrev.00 >> Looks good to me. If you care, maybe you could move (and properly indent) the comment about the flag to inside the "if windows" clause? You don't need to respin the webrev if you do that. 
/Magnus From adinn at redhat.com Wed May 6 11:04:26 2015 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 06 May 2015 12:04:26 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548F59D.6090008@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> Message-ID: <5549F53A.6040407@redhat.com> On 05/05/15 17:53, Aleksey Shipilev wrote: > On 05.05.2015 19:20, Andrew Haley wrote: But I am suspicious about > the whole interaction between mutator and collector. Cautiously > speaking, every time I see a conditional update in a concurrent > code, I expect some form of atomic CAS that provides a global order > w.r.t. the particular memory location. Yeah, me too. Even on x86 this ought to be an issue. I suspect we don't ever see it because in order for an error to manifest we require a basket-case program. I am imagining code which continuously writes an object location with successive objects that are not referenced from anywhere else after each write (so as to make the GC lose a valid object reference) yet does this without performing any (well, not very many) operations that involve a memory sync. It's hard to think of how you might implement a test case. 
regards, Andrew Dinn ----------- From adinn at redhat.com Wed May 6 11:04:55 2015 From: adinn at redhat.com (Andrew Dinn) Date: Wed, 06 May 2015 12:04:55 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5549DC7F.9030903@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <5549DC7F.9030903@oracle.com> Message-ID: <5549F557.8070305@redhat.com> On 06/05/15 10:18, Mikael Gerdin wrote: > On 2015-05-06 10:53, Andrew Haley wrote: >> On 05/05/15 20:51, Vitaly Davidovich wrote: >>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>> dirty "unnecessarily" using Mikael's suggestion? >> >> No. The mutator may see a stale "dirty" and not write anything. At >> least >> I haven't seen anything which certainly will prevent that from happening. > > I think you are correct. My suggestion does not solve the problem. Yes, I agree with Andrew Haley that this is a possible outcome. regards, Andrew Dinn ----------- From staffan.larsen at oracle.com Wed May 6 11:29:13 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 6 May 2015 13:29:13 +0200 Subject: RFR: 8079345: After 8079248 fixed JDK still fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: <5549E312.3050703@oracle.com> References: <5549E156.2020600@oracle.com> <5549E312.3050703@oracle.com> Message-ID: > On 6 maj 2015, at 11:46, Magnus Ihse Bursie wrote: > > On 2015-05-06 11:39, Erik Joelsson wrote: >> This one looks better. Sorry for not spotting the problem in the previous review. >> >> /Erik >> >> On 2015-05-06 11:24, Staffan Larsen wrote: >>> My fix for 8079248 was broken, so here is a new attempt. I intend to push this directly to jdk9/hs since that is where 8079248 was pushed. 
>>> >>> bug: https://bugs.openjdk.java.net/browse/JDK-8079345#comment-13638237 >>> webrev: http://cr.openjdk.java.net/~sla/8079345/webrev.00 > > Looks good to me. If you care, maybe you could move (and properly indent) the comment about the flag to inside the "if windows" clause? You don't need to respin the webrev if you do that. Done: diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk --- a/make/lib/Lib-jdk.management.gmk +++ b/make/lib/Lib-jdk.management.gmk @@ -39,10 +39,12 @@ $(LIBJAVA_HEADER_FLAGS) \ # -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate -# a binary that is compatible with windows versions older than 7/2008R2. -# See MSDN documentation for GetProcessMemoryInfo for more information. -BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 +ifeq ($(OPENJDK_TARGET_OS), windows) + # In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate + # a binary that is compatible with windows versions older than 7/2008R2. + # See MSDN documentation for GetProcessMemoryInfo for more information. + LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 +endif LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) From serguei.spitsyn at oracle.com Wed May 6 11:55:23 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 06 May 2015 04:55:23 -0700 Subject: RFR: 8079345: After 8079248 fixed JDK still fails with "jdk\\bin\\management_ext.dll: The specified procedure could not be found" In-Reply-To: References: <5549E156.2020600@oracle.com> <5549E312.3050703@oracle.com> Message-ID: <554A012B.9040106@oracle.com> This looks good. It is better with the moved comment. Thanks, Serguei On 5/6/15 4:29 AM, Staffan Larsen wrote: >> On 6 maj 2015, at 11:46, Magnus Ihse Bursie wrote: >> >> On 2015-05-06 11:39, Erik Joelsson wrote: >>> This one looks better. Sorry for not spotting the problem in the previous review. 
>>> >>> /Erik >>> >>> On 2015-05-06 11:24, Staffan Larsen wrote: >>>> My fix for 8079248 was broken, so here is a new attempt. I intend to push this directly to jdk9/hs since that is where 8079248 was pushed. >>>> >>>> bug: https://bugs.openjdk.java.net/browse/JDK-8079345#comment-13638237 >>>> webrev: http://cr.openjdk.java.net/~sla/8079345/webrev.00 >> Looks good to me. If you care, maybe you could move (and properly indent) the comment about the flag to inside the "if windows" clause? You don't need to respin the webrev if you do that. > Done: > > diff --git a/make/lib/Lib-jdk.management.gmk b/make/lib/Lib-jdk.management.gmk > --- a/make/lib/Lib-jdk.management.gmk > +++ b/make/lib/Lib-jdk.management.gmk > @@ -39,10 +39,12 @@ > $(LIBJAVA_HEADER_FLAGS) \ > # > > -# In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate > -# a binary that is compatible with windows versions older than 7/2008R2. > -# See MSDN documentation for GetProcessMemoryInfo for more information. > -BUILD_LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 > +ifeq ($(OPENJDK_TARGET_OS), windows) > + # In (at least) VS2013 and later, -DPSAPI_VERSION=1 is needed to generate > + # a binary that is compatible with windows versions older than 7/2008R2. > + # See MSDN documentation for GetProcessMemoryInfo for more information. 
> + LIBMANAGEMENT_EXT_CFLAGS += -DPSAPI_VERSION=1 > +endif > > LIBMANAGEMENT_EXT_OPTIMIZATION := HIGH > ifneq ($(findstring $(OPENJDK_TARGET_OS), solaris linux), ) > From vitalyd at gmail.com Wed May 6 12:41:13 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 6 May 2015 08:41:13 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5549D681.3000806@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> Message-ID: Mikael's suggestion was to make mutator check for !clean and then mark dirty. If it sees stale dirty, it will write dirty again no? Today's code would have this problem because it's checking for !dirty, but I thought the suggested change would prevent that. sent from my phone On May 6, 2015 4:53 AM, "Andrew Haley" wrote: > On 05/05/15 20:51, Vitaly Davidovich wrote: > > If mutator doesn't see "clean" due to staleness, won't it just mark it > > dirty "unnecessarily" using Mikael's suggestion? > > No. The mutator may see a stale "dirty" and not write anything. At least > I haven't seen anything which certainly will prevent that from happening. > > Andrew. > > > From mikael.gerdin at oracle.com Wed May 6 13:52:32 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 06 May 2015 15:52:32 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> Message-ID: <554A1CA0.6070801@oracle.com> Hi Vitaly, On 2015-05-06 14:41, Vitaly Davidovich wrote: > Mikael's suggestion was to make mutator check for !clean and then mark > dirty. If it sees stale dirty, it will write dirty again no? 
Today's code > would have this problem because it's checking for !dirty, but I thought the > suggested change would prevent that. Unfortunately I don't think my suggestion would solve anything. If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem. The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. /Mikael > > sent from my phone > On May 6, 2015 4:53 AM, "Andrew Haley" wrote: > >> On 05/05/15 20:51, Vitaly Davidovich wrote: >>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>> dirty "unnecessarily" using Mikael's suggestion? >> >> No. The mutator may see a stale "dirty" and not write anything. At least >> I haven't seen anything which certainly will prevent that from happening. >> >> Andrew. >> >> >> From vitalyd at gmail.com Wed May 6 14:10:16 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 6 May 2015 10:10:16 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <554A1CA0.6070801@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> Message-ID: Hi Mikael, The duplicate store of dirty would only happen if cache coherence hasn't caught up yet, which is of course timing dependent but the window is fairly small on x86 (which is what this CR targets) IME. It's also not necessarily the case that the involved cacheline will falsely share with any other thread. So, this seems like a benign data race. Also, isn't this already the case with the current code? A mutator may not observe another mutator already marking the card dirty and will smash a "duplicate" in there. 
Having said that, you guys are in better position to make the call. If nothing else, I think this change should be highlighted in whichever release(s) it appears in as people should be on the lookout for possible perf regressions associated with this. sent from my phone On May 6, 2015 9:52 AM, "Mikael Gerdin" wrote: > Hi Vitaly, > > On 2015-05-06 14:41, Vitaly Davidovich wrote: > >> Mikael's suggestion was to make mutator check for !clean and then mark >> dirty. If it sees stale dirty, it will write dirty again no? Today's >> code >> would have this problem because it's checking for !dirty, but I thought >> the >> suggested change would prevent that. >> > > Unfortunately I don't think my suggestion would solve anything. > > If the conditional card mark would write dirty again if it sees a stale > dirty it's not really solving the false sharing problem. > > The problem is not the value that the precleaner writes to the card entry, > it's that the mutator may see the old "dirty" value which was overwritten > as part of precleaning but not necessarily visible to the mutator thread. > > /Mikael > > > >> sent from my phone >> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >> >> On 05/05/15 20:51, Vitaly Davidovich wrote: >>> >>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>> dirty "unnecessarily" using Mikael's suggestion? >>>> >>> >>> No. The mutator may see a stale "dirty" and not write anything. At >>> least >>> I haven't seen anything which certainly will prevent that from happening. >>> >>> Andrew. 
>>> >>> >>> >>> From mikael.gerdin at oracle.com Wed May 6 14:18:03 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 06 May 2015 16:18:03 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5548D913.1030507@redhat.com> References: <5548D913.1030507@redhat.com> Message-ID: <554A229B.2000303@oracle.com> Hi Andrew, On 2015-05-05 16:52, Andrew Haley wrote: > I've added StoreLoad barriers everywhere they're needed. > > http://cr.openjdk.java.net/~aph/8079315/ > > This patch depends on the patch for 8078438, which is still not > committed. http://cr.openjdk.java.net/~shade/8078438/webrev.02/ > > This is x86 only, will do AArch64 parts in a separate patch once > we've agreed on this. Taking this discussion down a slightly different route in a new thread, we could potentially change FinalMarking to look at all nonclean cards instead of only dirty cards if UseCondCardMark is enabled. I suspect that there is a fair bit of performance work needed to determine which of these approaches is the most beneficial. /Mikael > > Andrew. > From erik.osterlund at lnu.se Wed May 6 15:01:43 2015 From: erik.osterlund at lnu.se (Erik Österlund) Date: Wed, 6 May 2015 15:01:43 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <554A1CA0.6070801@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> Message-ID: <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> Hi everyone, I just read through the discussion and thought I'd share a potential solution that I believe would solve the problem. Previously I implemented something that struck me as very similar for G1 to get rid of its storeload fence in the barrier that suffered from similar symptoms. 
The idea is to process cards in batches instead of one by one and issue a global store serialization event (e.g. using mprotect to a dummy page) when cleaning. It worked pretty well but after Thomas Schatzl ran some benchmarks we decided the gain wasn't worth the trouble for G1 since it fences only rarely when encountering interregional pointers (premature optimization). But maybe here it happens more often and is more worth the trouble to get rid of the fence? Here is a proposed new algorithm candidate (small change to algorithm in bug description):

mutator (exactly as before):

  x.a = something
  StoreStore
  if (card[@x.a] != dirty) {
    card[@x.a] = dirty
  }

preclean:

  for card in batched_cards {
    if (card[@x.a] == dirty) {
      card[@x.a] = precleaned
    }
  }
  global_store_fence()
  for card in batched_cards {
    read x.a
  }

The global fence will incur some local overhead (quite ouchy) and some global overhead fencing on all remote CPUs the process is scheduled to run on (not necessarily all) using cross calls in the kernel to invalidate remote TLB buffers in the L1 cache (not so ouchy) and by batching the cards, this "global" cost is amortized arbitrarily so that even on systems with a ridiculous amount of CPUs, it's probably still a good idea. It is also possible to let multiple precleaning CPUs share the same global store fence using timestamps since it is in fact global. This guarantees scalability on many-core systems but is a bit less straightforward to implement. If you are interested in this and think it's a good idea, I could try to patch a solution for this, but I would need some help benchmarking this in your systems so we can verify it performs the way I hope. Thanks, /Erik > On 06 May 2015, at 14:52, Mikael Gerdin wrote: > > Hi Vitaly, > > On 2015-05-06 14:41, Vitaly Davidovich wrote: >> Mikael's suggestion was to make mutator check for !clean and then mark >> dirty. If it sees stale dirty, it will write dirty again no? 
Today's code >> would have this problem because it's checking for !dirty, but I thought the >> suggested change would prevent that. > > Unfortunately I don't think my suggestion would solve anything. > > If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem. > > The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. > > /Mikael > > >> >> sent from my phone >> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >> >>> On 05/05/15 20:51, Vitaly Davidovich wrote: >>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>> dirty "unnecessarily" using Mikael's suggestion? >>> >>> No. The mutator may see a stale "dirty" and not write anything. At least >>> I haven't seen anything which certainly will prevent that from happening. >>> >>> Andrew. >>> >>> >>> From edward.nevill at linaro.org Wed May 6 15:45:48 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Wed, 06 May 2015 16:45:48 +0100 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp Message-ID: <1430927148.3592.16.camel@mylittlepony.linaroharston> Hi, After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. JIRA report here https://bugs.openjdk.java.net/browse/JDK-8079507 webrev here http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ Please review, Ed. PS: Should I keep sending these build failures to both hotspot-dev and aarch64-port-dev, or is it sufficient to send aarch64 specific build failures to aarch64-port-dev only? 
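Erik's batched precleaning proposal quoted in the sub-thread above can be made concrete with a small, single-threaded Java sketch of the card state machine. This only illustrates the state transitions — the global_store_fence() is reduced to a no-op placeholder, standing in for the cross-CPU serialization (e.g. via mprotect on a dummy page) that the real proposal relies on:

```java
public class BatchedPreclean {
    static final byte CLEAN = 0, DIRTY = 1, PRECLEANED = 2;

    // Mutator side, unchanged from the proposal: conditional dirtying.
    static void mutatorCardMark(byte[] cards, int i) {
        if (cards[i] != DIRTY) {
            cards[i] = DIRTY;
        }
    }

    // Precleaner side: flip a whole batch dirty -> precleaned, then issue ONE
    // global store serialization event for the entire batch, and only
    // afterwards re-scan the objects the batch covers.
    static void precleanBatch(byte[] cards, int[] batch) {
        for (int i : batch) {
            if (cards[i] == DIRTY) {
                cards[i] = PRECLEANED;
            }
        }
        globalStoreFence(); // one fence amortized over the batch
        // ... re-scan objects covered by the batch here ...
    }

    static void globalStoreFence() {
        // Placeholder only. The proposal uses a global store serialization
        // event so every CPU observes the precleaned values before re-scan.
    }
}
```

The point of the batching is visible in precleanBatch: the expensive fence is paid once per batch rather than once per card.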
From vladimir.kozlov at oracle.com Wed May 6 16:06:20 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 06 May 2015 09:06:20 -0700 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp In-Reply-To: <1430927148.3592.16.camel@mylittlepony.linaroharston> References: <1430927148.3592.16.camel@mylittlepony.linaroharston> Message-ID: <554A3BFC.2040802@oracle.com> On 5/6/15 8:45 AM, Edward Nevill wrote: > Hi, > > After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. > > JIRA report here > > https://bugs.openjdk.java.net/browse/JDK-8079507 > > webrev here > > http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ Instead of repeated (!is_static && rc == may_rewrite) checks can you add and use one boolean calculated before the code? > > Please review, > Ed. > > PS: Should I keep sending these build failures to both hotspot-dev and aarch64-port-dev, or is it sufficient to send aarch64 specific build failures to aarch64-port-dev only? > Not all *Reviewers* are subscribed to aarch64-port-dev. You need at least one Reviewer to review changes. See 12.: http://openjdk.java.net/guide/changePlanning.html#bug Regards, Vladimir From christian.tornqvist at oracle.com Wed May 6 16:12:21 2015 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Wed, 6 May 2015 12:12:21 -0400 Subject: RFR(XS): 8075966 - Update ProjectCreator to create projects using Visual Studio 2013 toolset Message-ID: <003c01d08817$6efa09d0$4cee1d70$@oracle.com> Hi everyone, Please review this small change that modifies ProjectCreator to generate project files using the Visual Studio 2013 toolset. 
Webrev: http://cr.openjdk.java.net/~ctornqvi/webrev/8075966/webrev.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8075966 Thanks, Christian From volker.simonis at gmail.com Wed May 6 16:31:16 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 6 May 2015 18:31:16 +0200 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp In-Reply-To: <1430927148.3592.16.camel@mylittlepony.linaroharston> References: <1430927148.3592.16.camel@mylittlepony.linaroharston> Message-ID: On Wed, May 6, 2015 at 5:45 PM, Edward Nevill wrote: > Hi, > > After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. > > JIRA report here > > https://bugs.openjdk.java.net/browse/JDK-8079507 > > webrev here > > http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ > > Please review, > Ed. > > PS: Should I keep sending these build failures to both hotspot-dev and aarch64-port-dev, or is it sufficient to send aarch64 specific build failures to aarch64-port-dev only? We have the same problem with our PowerPC port. In reality most (probably all) aarch64 developers are subscribed to hotspot-dev while usually only very few hotspot developers are subscribed to the special *-port lists. So posting to hotspot-dev should be enough to get a review. For our port I nevertheless encourage people to cc the corresponding *-port alias because I like to have a clean reference of all the port relevant changes in a single place. Regards, Volker > > From lois.foltan at oracle.com Wed May 6 16:35:14 2015 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 06 May 2015 12:35:14 -0400 Subject: RFR(XS): 8075966 - Update ProjectCreator to create projects using Visual Studio 2013 toolset In-Reply-To: <003c01d08817$6efa09d0$4cee1d70$@oracle.com> References: <003c01d08817$6efa09d0$4cee1d70$@oracle.com> Message-ID: <554A42C2.3010906@oracle.com> Looks good Christian. 
Lois On 5/6/2015 12:12 PM, Christian Tornqvist wrote: > Hi everyone, > > > > Please review this small change that modifies ProjectCreator to generate > project files using the Visual Studio 2013 toolset. > > > > Webrev: > > http://cr.openjdk.java.net/~ctornqvi/webrev/8075966/webrev.00/ > > > > Bug: > > https://bugs.openjdk.java.net/browse/JDK-8075966 > > > > Thanks, > > Christian > From dean.long at oracle.com Wed May 6 18:53:57 2015 From: dean.long at oracle.com (Dean Long) Date: Wed, 06 May 2015 11:53:57 -0700 Subject: PING Re: RFR: 8078521: AARCH64: Add AArch64 SA support In-Reply-To: <554A26D5.4040003@redhat.com> References: <55391C34.3070502@redhat.com> <553954D8.2070506@oracle.com> <553A012C.1070008@redhat.com> <553A8B20.9050109@oracle.com> <55408B69.8050108@redhat.com> <554A26D5.4040003@redhat.com> Message-ID: <554A6345.4040004@oracle.com> I added hotspot-dev at openjdk.java.net again. It looks reasonable to me, but I'm not a Reviewer. dl On 5/6/2015 7:36 AM, Andrew Haley wrote: > On 04/29/2015 08:42 AM, Andrew Haley wrote: >> http://cr.openjdk.java.net/~aph/8078521-2/ > Any news on this? It shouldn't be controversial at this point. > > Thanks, > Andrew. > > From staffan.larsen at oracle.com Wed May 6 18:55:02 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 6 May 2015 20:55:02 +0200 Subject: RFR(XS): 8075966 - Update ProjectCreator to create projects using Visual Studio 2013 toolset In-Reply-To: <003c01d08817$6efa09d0$4cee1d70$@oracle.com> References: <003c01d08817$6efa09d0$4cee1d70$@oracle.com> Message-ID: <35192FFE-FE84-4FC8-B1CB-B60F843A7F32@oracle.com> Looks good! Thanks, /Staffan > On 6 maj 2015, at 18:12, Christian Tornqvist wrote: > > Hi everyone, > > > > Please review this small change that modifies ProjectCreator to generate > project files using the Visual Studio 2013 toolset. 
> > > > Webrev: > > http://cr.openjdk.java.net/~ctornqvi/webrev/8075966/webrev.00/ > > > > Bug: > > https://bugs.openjdk.java.net/browse/JDK-8075966 > > > > Thanks, > > Christian > From mikael.vidstedt at oracle.com Thu May 7 00:23:58 2015 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 06 May 2015 17:23:58 -0700 Subject: RFR(XS): 8079545: [TESTBUG] hotspot_basicvmtest doesn't fail even if VM crashes Message-ID: <554AB09E.5010802@oracle.com> Please review this small fix which fixes a problem introduced by 8078017[1]. Bug: https://bugs.openjdk.java.net/browse/JDK-8079545 Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8079545/webrev.00/webrev/ The problem is that the exit value from the inner Makefile invocation is not being checked, so even if one of the tests fails the outer make will not signal an error. The exit code needs to be explicitly checked. Cheers, Mikael [1] https://bugs.openjdk.java.net/browse/JDK-8078017 From christian.tornqvist at oracle.com Thu May 7 00:30:57 2015 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Wed, 6 May 2015 20:30:57 -0400 Subject: RFR(XS): 8079545: [TESTBUG] hotspot_basicvmtest doesn't fail even if VM crashes In-Reply-To: <554AB09E.5010802@oracle.com> References: <554AB09E.5010802@oracle.com> Message-ID: <013701d0885d$16707830$43516890$@oracle.com> Hi Mikael, Looks good, thanks for fixing this. Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Mikael Vidstedt Sent: Wednesday, May 6, 2015 8:24 PM To: hotspot-dev at openjdk.java.net Subject: RFR(XS): 8079545: [TESTBUG] hotspot_basicvmtest doesn't fail even if VM crashes Please review this small fix which fixes a problem introduced by 8078017[1]. 
Bug: https://bugs.openjdk.java.net/browse/JDK-8079545 Webrev: http://cr.openjdk.java.net/~mikael/webrevs/8079545/webrev.00/webrev/ The problem is that the exit value from the inner Makefile invocation is not being checked, so even if one of the tests fail the outer make will not signal en error. The exit code needs to be explicitly checked. Cheers, Mikael [1] https://bugs.openjdk.java.net/browse/JDK-8078017 From david.holmes at oracle.com Thu May 7 00:41:34 2015 From: david.holmes at oracle.com (David Holmes) Date: Thu, 07 May 2015 10:41:34 +1000 Subject: RFR(XS): 8079545: [TESTBUG] hotspot_basicvmtest doesn't fail even if VM crashes In-Reply-To: <554AB09E.5010802@oracle.com> References: <554AB09E.5010802@oracle.com> Message-ID: <554AB4BE.6080206@oracle.com> Looks good but needs to fixed in hs-comp otherwise you'll fail because of 8079357 Thanks, David On 7/05/2015 10:23 AM, Mikael Vidstedt wrote: > > Please review this small fix which fixes a problem introduced by > 8078017[1]. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8079545 > Webrev: > http://cr.openjdk.java.net/~mikael/webrevs/8079545/webrev.00/webrev/ > > The problem is that the exit value from the inner Makefile invocation is > not being checked, so even if one of the tests fail the outer make will > not signal en error. The exit code needs to be explicitly checked. > > Cheers, > Mikael > > [1] https://bugs.openjdk.java.net/browse/JDK-8078017 > From per.liden at oracle.com Thu May 7 08:23:26 2015 From: per.liden at oracle.com (Per Liden) Date: Thu, 07 May 2015 10:23:26 +0200 Subject: Heads Up! GC directory structure cleanup Message-ID: <554B20FE.5070607@oracle.com> Hi all, This is a heads up to let everyone know that the GC team is planning to do a cleanup of the directory structure for GC code. This change will affect people working on changes which touch GC-related code, and could mean that such patches need to be updated before applying cleanly to the new directory structure. 
Background ---------- In the continuous work to address technical debt, the time has come to make some changes to how the GC code is organized. Over time the GC code has spread out across a number of directories, and currently looks like this: - There are three "top-level" directories which contain GC-related code: src/share/vm/gc_interface/ src/share/vm/gc_implementation/ src/share/vm/memory/ - Our collectors are roughly spread out like this: src/share/vm/gc_implementation/parallelScavenge/ (ParallelGC) src/share/vm/gc_implementation/g1/ (G1) src/share/vm/gc_implementation/concurrentMarkSweep/ (CMS) src/share/vm/gc_implementation/parNew/ (ParNewGC) src/share/vm/gc_implementation/shared/ (MarkSweep) src/share/vm/memory/ (DefNew) - We have common/shared code in the following places: src/share/vm/gc_interface/ (CollectedHeap, etc) src/share/vm/gc_implementation/shared/ (counters, utilities, etc) src/share/vm/memory/ (BarrierSet, GenCollectedHeap, etc) New Structure ------------- The plan is for the new structure to look like this: - A single "top-level" directory for GC code: src/share/vm/gc/ - One sub-directory per GC: src/share/vm/gc/cms/ src/share/vm/gc/g1/ src/share/vm/gc/parallel/ src/share/vm/gc/serial/ - A single directory for common/shared GC code: src/share/gc/shared/ FAQ --- Q: How will this affect me? A: Moving files around could mean that the patch you are working on will fail to apply cleanly. hg does a fairly good job of tracking moves/renames, but if you're using other tools (like patch, mq, etc) you might need to update/merge your patch manually. Q: When will this happen? A: A patch for this is currently being worked on and tested. A review request will be sent to hotspot-dev in the near future. Q: Why do this now? A: All major back-porting work to 8u has been completed. If we want to do this type of cleanup in jdk9, then now is a good time. The next opportunity to do this will be in jdk10, after all major back-porting work to 9u has been completed. 
We would prefer to do it now. regards, The GC Team From yekaterina.kantserova at oracle.com Thu May 7 08:54:10 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Thu, 07 May 2015 10:54:10 +0200 Subject: 8079200: Fix heapdump tests to validate heapdump after jhat is removed Message-ID: <554B2832.1030400@oracle.com> Hi, Could I please have a review of this fix. bug: https://bugs.openjdk.java.net/browse/JDK-8079200 webrev: http://cr.openjdk.java.net/~ykantser/8079200/webrev.00 The fix makes sure the HprofParser is available for all types of test frameworks, not only JTreg. It will be a part of test-lib.jar. Thanks, Katja From aph at redhat.com Thu May 7 08:58:38 2015 From: aph at redhat.com (Andrew Haley) Date: Thu, 07 May 2015 09:58:38 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> Message-ID: <554B293E.30203@redhat.com> On 06/05/15 15:10, Vitaly Davidovich wrote: > > Having said that, you guys are in better position to make the call. If > nothing else, I think this change should be highlighted in whichever > release(s) it appears in as people should be on the lookout for possible > perf regressions associated with this. CondCardMark is for highly-parallel programs running on many cores. In such situations surely you make the choice to sacrifice some local performance for much better global scaling. Given that, barriers are not such a big deal, IMO, but correctness surely is. Andrew. 
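Andrew's scaling argument can be made concrete by counting card-table stores: the conditional mark skips the store when the card is already dirty, and those skipped stores are exactly the repeated writes to shared card-table cache lines that hurt on many-core machines. A toy single-threaded model follows; it is illustrative only, not HotSpot code, though the 512-byte card size matches HotSpot's default:

```python
# Toy model: count card-table stores for a sequence of heap writes,
# with and without UseCondCardMark-style conditional marking.
CARD_SIZE = 512  # bytes covered per card in HotSpot's card table

def card_table_stores(write_addrs, conditional):
    cards = {}
    stores = 0
    for addr in write_addrs:
        card = addr // CARD_SIZE
        if conditional and cards.get(card) == "dirty":
            continue  # card already dirty: skip the store entirely
        cards[card] = "dirty"
        stores += 1
    return stores

# Four writes landing on two cards: the conditional barrier issues
# one store per card instead of one per write.
writes = [0, 8, 16, 600]
print(card_table_stores(writes, conditional=False))  # 4
print(card_table_stores(writes, conditional=True))   # 2
```

The model also shows where the bug under discussion lives: the conditional load can observe a stale card value when another thread (the precleaner) rewrites cards concurrently, which a single-threaded sketch cannot exhibit.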
From staffan.larsen at oracle.com Thu May 7 09:01:02 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Thu, 7 May 2015 11:01:02 +0200 Subject: 8079200: Fix heapdump tests to validate heapdump after jhat is removed In-Reply-To: <554B2832.1030400@oracle.com> References: <554B2832.1030400@oracle.com> Message-ID: Looks good! Thanks, /Staffan > On 7 maj 2015, at 10:54, Yekaterina Kantserova wrote: > > Hi, > > Could I please have a review of this fix. > > bug: https://bugs.openjdk.java.net/browse/JDK-8079200 > webrev: http://cr.openjdk.java.net/~ykantser/8079200/webrev.00 > > The fix makes sure the HprofParser is available for all types of test frameworks, not only JTreg. It will be a part of test-lib.jar. > > Thanks, > Katja > > From jesper.wilhelmsson at oracle.com Thu May 7 09:08:23 2015 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 07 May 2015 11:08:23 +0200 Subject: Hold any pushes to jdk9/hs-gc Message-ID: <554B2B87.2070305@oracle.com> Hi, There are two regressions in hs-gc that needs to be fixed today before we can push hs-gc to main. There are important fixes in hs-gc that needs to get in to main asap. Therefore, please hold any pushes to jdk9/hs-gc until these two regressions have been fixed. https://bugs.openjdk.java.net/browse/JDK-8078904 https://bugs.openjdk.java.net/browse/JDK-8079409 I'll let you know when the fixes are in place. 
Thanks, /Jesper From mikael.gerdin at oracle.com Thu May 7 10:15:58 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 07 May 2015 12:15:58 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> Message-ID: <554B3B5E.8000705@oracle.com> Hi Erik, On 2015-05-06 17:01, Erik Österlund wrote: > Hi everyone, > > I just read through the discussion and thought I'd share a potential solution that I believe would solve the problem. > > Previously I implemented something that struck me as very similar for G1 to get rid of its storeload fence in the barrier that suffered from similar symptoms. > The idea is to process cards in batches instead of one by one and issue a global store serialization event (e.g. using mprotect to a dummy page) when cleaning. It worked pretty well but after Thomas Schatzel ran some benchmarks we decided the gain wasn't worth the trouble for G1 since it fences only rarely when encountering interregional pointers (premature optimization). But maybe here it happens more often and is more worth the trouble to get rid of the fence?
>
> Here is a proposed new algorithm candidate (small change to algorithm in bug description):
>
> mutator (exactly as before):
>
>   x.a = something
>   StoreStore
>   if (card[@x.a] != dirty) {
>     card[@x.a] = dirty
>   }
>
> preclean:
>
>   for card in batched_cards {
>     if (card[@x.a] == dirty) {
>       card[@x.a] = precleaned
>     }
>   }
>
>   global_store_fence()
>
>   for card in batched_cards {
>     read x.a
>   }
>
> The global fence will incur some local overhead (quite ouchy) and some global overhead fencing on all remote CPUs the process is scheduled to run on (not necessarily all) using cross calls in the kernel to invalidate remote TLB buffers in the L1 cache (not so ouchy) and by batching the cards, this "global" cost is amortized arbitrarily so that even on systems with a ridiculous amount of CPUs, it's probably still a good idea. It is also possible to let multiple precleaning CPUs share the same global store fence using timestamps since it is in fact global. This guarantees scalability on many-core systems but is a bit less straightforward to implement.
>
> If you are interested in this and think it's a good idea, I could try to patch a solution for this, but I would need some help benchmarking this in your systems so we can verify it performs the way I hope.

I think this is a good idea. The problem is asymmetric in that the CMS thread should be fine with taking a larger local overhead, batching the setting of cards to precleaned and then scanning the cards later. Do you know how the global_store_fence() would look on different cpu architectures? The VM already uses this sort of synchronization for the thread state transitions, see references to UseMemBar, os::serialize_thread_states, os::serialize_memory. Perhaps that code can be reused somehow?
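Erik's batched scheme above can also be sketched as runnable, single-threaded code. This is purely illustrative (the real interplay between mutator and precleaner is concurrent, and the global store fence is modeled as a comment):

```python
# Illustrative model of the proposed batched precleaning.  Card states
# are strings here; in the VM they are byte values in the card table.
CLEAN, DIRTY, PRECLEANED = "clean", "dirty", "precleaned"

class CardTable:
    def __init__(self, ncards):
        self.cards = [CLEAN] * ncards

    def mutator_mark(self, card):
        # A StoreStore barrier precedes this check on a real CPU,
        # as in the mutator pseudocode in the proposal.
        if self.cards[card] != DIRTY:
            self.cards[card] = DIRTY

    def preclean_batch(self, batch):
        # Pass 1: flip every dirty card in the batch to precleaned.
        flipped = [c for c in batch if self.cards[c] == DIRTY]
        for c in flipped:
            self.cards[c] = PRECLEANED
        # global_store_fence() would go here: one global store
        # serialization event amortized over the whole batch, instead
        # of a StoreLoad fence per card.
        # Pass 2: only now re-read the objects on the flipped cards.
        return flipped

ct = CardTable(8)
ct.mutator_mark(3)
ct.mutator_mark(5)
print(ct.preclean_batch(range(8)))  # [3, 5]
```

The point of the batching is visible in the structure: the fence cost sits between the two passes once per batch, not inside the per-card loop.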
/Mikael > > Thanks, > /Erik > > >> On 06 May 2015, at 14:52, Mikael Gerdin wrote: >> >> Hi Vitaly, >> >> On 2015-05-06 14:41, Vitaly Davidovich wrote: >>> Mikael's suggestion was to make mutator check for !clean and then mark >>> dirty. If it sees stale dirty, it will write dirty again no? Today's code >>> would have this problem because it's checking for !dirty, but I thought the >>> suggested change would prevent that. >> >> Unfortunately I don't think my suggestion would solve anything. >> >> If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem. >> >> The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. >> >> /Mikael >> >> >>> >>> sent from my phone >>> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >>> >>>> On 05/05/15 20:51, Vitaly Davidovich wrote: >>>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>>> dirty "unnecessarily" using Mikael's suggestion? >>>> >>>> No. The mutator may see a stale "dirty" and not write anything. At least >>>> I haven't seen anything which certainly will prevent that from happening. >>>> >>>> Andrew. >>>> >>>> >>>> > From mattis.castegren at oracle.com Thu May 7 10:45:54 2015 From: mattis.castegren at oracle.com (Mattis Castegren) Date: Thu, 7 May 2015 03:45:54 -0700 (PDT) Subject: Heads Up! GC directory structure cleanup In-Reply-To: <554B20FE.5070607@oracle.com> References: <554B20FE.5070607@oracle.com> Message-ID: Hi Will this just be a change in directory names, or will the code be changed as well? 
If it is just a change in directory names, would it make sense to add this to the unshuffle script, http://cr.openjdk.java.net/~chegar/docs/portingScript.html This script currently unshuffles the directory name changes for the jigsaw project to allow backporting of fixes between 9 and 8. We in Sustaining will still backport bug fixes and security fixes to JDK 8 and below, so it should be good if we did the same for GC changes. Do you think that would be possible? Kind Regards /Mattis -----Original Message----- From: Per Liden Sent: den 7 maj 2015 10:23 To: hotspot-dev at openjdk.java.net Subject: Heads Up! GC directory structure cleanup Hi all, This is a heads up to let everyone know that the GC team is planning to do a cleanup of the directory structure for GC code. This change will affect people working on changes which touch GC-related code, and could mean that such patches need to be updated before applying cleanly to the new directory structure. Background ---------- In the continuous work to address technical debt, the time has come to make some changes to how the GC code is organized. 
Over time the GC code has spread out across a number of directories, and currently looks like this: - There are three "top-level" directories which contain GC-related code: src/share/vm/gc_interface/ src/share/vm/gc_implementation/ src/share/vm/memory/ - Our collectors are roughly spread out like this: src/share/vm/gc_implementation/parallelScavenge/ (ParallelGC) src/share/vm/gc_implementation/g1/ (G1) src/share/vm/gc_implementation/concurrentMarkSweep/ (CMS) src/share/vm/gc_implementation/parNew/ (ParNewGC) src/share/vm/gc_implementation/shared/ (MarkSweep) src/share/vm/memory/ (DefNew) - We have common/shared code in the following places: src/share/vm/gc_interface/ (CollectedHeap, etc) src/share/vm/gc_implementation/shared/ (counters, utilities, etc) src/share/vm/memory/ (BarrierSet, GenCollectedHeap, etc) New Structure ------------- The plan is for the new structure to look like this: - A single "top-level" directory for GC code: src/share/vm/gc/ - One sub-directory per GC: src/share/vm/gc/cms/ src/share/vm/gc/g1/ src/share/vm/gc/parallel/ src/share/vm/gc/serial/ - A single directory for common/shared GC code: src/share/gc/shared/ FAQ --- Q: How will this affect me? A: Moving files around could mean that the patch you are working on will fail to apply cleanly. hg does a fairly good job of tracking moves/renames, but if you're using other tools (like patch, mq, etc) you might need to update/merge your patch manually. Q: When will this happen? A: A patch for this is currently being worked on and tested. A review request will be sent to hotspot-dev in the near future. Q: Why do this now? A: All major back-porting work to 8u has been completed. If we want to do this type of cleanup in jdk9, then now is a good time. The next opportunity to do this will be in jdk10, after all major back-porting work to 9u has been completed. We would prefer to do it now. 
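The old-to-new mapping above is mechanical, so pending patches can in principle be retargeted with a small throwaway helper. The sketch below is hypothetical (not part of the actual change), and the destination of the gc_interface and gc_implementation/shared code is an assumption based on the single-shared-directory plan:

```python
# Hypothetical helper: remap pre-restructure GC paths (relative to
# src/share/vm/) to the planned layout, e.g. for fixing #include lines.
GC_DIR_MAP = {
    "gc_implementation/parallelScavenge/": "gc/parallel/",
    "gc_implementation/g1/": "gc/g1/",
    "gc_implementation/concurrentMarkSweep/": "gc/cms/",
    # Assumed: common code from these two directories lands in the
    # single shared directory.
    "gc_implementation/shared/": "gc/shared/",
    "gc_interface/": "gc/shared/",
}

def remap(path):
    """Return the post-restructure path, or the input if unaffected."""
    for old, new in GC_DIR_MAP.items():
        if path.startswith(old):
            return new + path[len(old):]
    return path

print(remap("gc_implementation/g1/g1CollectedHeap.hpp"))
# gc/g1/g1CollectedHeap.hpp
```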
regards, The GC Team From per.liden at oracle.com Thu May 7 11:42:25 2015 From: per.liden at oracle.com (Per Liden) Date: Thu, 07 May 2015 13:42:25 +0200 Subject: Heads Up! GC directory structure cleanup In-Reply-To: References: <554B20FE.5070607@oracle.com> Message-ID: <554B4FA1.7040607@oracle.com> Hi Mattis, On 2015-05-07 12:45, Mattis Castegren wrote: > Hi > > Will this just be a change in directory names, or will the code be changed as well? This is mainly a change of directory names, but this means that a number of #include "gc_impl..." and #ifndef SHARE_VM_GC_IMPL..., etc will also need to change. Other than that there will be no changes to any C++ code. Btw, same goes for the SA, where some package and import lines will be updated to reflect the new paths. > > If it is just a change in directory names, would it make sense to add this to the unshuffle script, http://cr.openjdk.java.net/~chegar/docs/portingScript.html > > This script currently unshuffles the directory name changes for the jigsaw project to allow backporting of fixes between 9 and 8. We in Sustaining will still backport bug fixes and security fixes to JDK 8 and below, so it should be good if we did the same for GC changes. > > Do you think that would be possible? That sounds like a good idea. I will look into it. cheers, /Per > > Kind Regards > /Mattis > > -----Original Message----- > From: Per Liden > Sent: den 7 maj 2015 10:23 > To: hotspot-dev at openjdk.java.net > Subject: Heads Up! GC directory structure cleanup > > Hi all, > > This is a heads up to let everyone know that the GC team is planning to > do a cleanup of the directory structure for GC code. This change will > affect people working on changes which touch GC-related code, and could > mean that such patches need to be updated before applying cleanly to the > new directory structure. 
> > > Background > ---------- > In the continuous work to address technical debt, the time has come to > make some changes to how the GC code is organized. Over time the GC code > has spread out across a number of directories, and currently looks like > this: > > - There are three "top-level" directories which contain GC-related code: > src/share/vm/gc_interface/ > src/share/vm/gc_implementation/ > src/share/vm/memory/ > > - Our collectors are roughly spread out like this: > src/share/vm/gc_implementation/parallelScavenge/ (ParallelGC) > src/share/vm/gc_implementation/g1/ (G1) > src/share/vm/gc_implementation/concurrentMarkSweep/ (CMS) > src/share/vm/gc_implementation/parNew/ (ParNewGC) > src/share/vm/gc_implementation/shared/ (MarkSweep) > src/share/vm/memory/ (DefNew) > > - We have common/shared code in the following places: > src/share/vm/gc_interface/ (CollectedHeap, etc) > src/share/vm/gc_implementation/shared/ (counters, utilities, etc) > src/share/vm/memory/ (BarrierSet, GenCollectedHeap, etc) > > > New Structure > ------------- > The plan is for the new structure to look like this: > > - A single "top-level" directory for GC code: > src/share/vm/gc/ > > - One sub-directory per GC: > src/share/vm/gc/cms/ > src/share/vm/gc/g1/ > src/share/vm/gc/parallel/ > src/share/vm/gc/serial/ > > - A single directory for common/shared GC code: > src/share/gc/shared/ > > > FAQ > --- > Q: How will this affect me? > A: Moving files around could mean that the patch you are working on will > fail to apply cleanly. hg does a fairly good job of tracking > moves/renames, but if you're using other tools (like patch, mq, etc) you > might need to update/merge your patch manually. > > Q: When will this happen? > A: A patch for this is currently being worked on and tested. A review > request will be sent to hotspot-dev in the near future. > > Q: Why do this now? > A: All major back-porting work to 8u has been completed. 
If we want to > do this type of cleanup in jdk9, then now is a good time. The next > opportunity to do this will be in jdk10, after all major back-porting > work to 9u has been completed. We would prefer to do it now. > > regards, > The GC Team > From jesper.wilhelmsson at oracle.com Thu May 7 13:18:47 2015 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Thu, 07 May 2015 15:18:47 +0200 Subject: Hold any pushes to jdk9/hs-gc In-Reply-To: <554B2B87.2070305@oracle.com> References: <554B2B87.2070305@oracle.com> Message-ID: <554B6637.7030707@oracle.com> jdk9/hs-gc is now open again. It turned out to be the same change that caused both failures, and that change has now been backed out. Thanks for your patience! /Jesper Jesper Wilhelmsson skrev den 7/5/15 11:08: > Hi, > > There are two regressions in hs-gc that needs to be fixed today before we can > push hs-gc to main. There are important fixes in hs-gc that needs to get in to > main asap. > > Therefore, please hold any pushes to jdk9/hs-gc until these two regressions have > been fixed. > > https://bugs.openjdk.java.net/browse/JDK-8078904 > https://bugs.openjdk.java.net/browse/JDK-8079409 > > I'll let you know when the fixes are in place. > > Thanks, > /Jesper From edward.nevill at linaro.org Thu May 7 14:03:02 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Thu, 07 May 2015 15:03:02 +0100 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp In-Reply-To: <554A3BFC.2040802@oracle.com> References: <1430927148.3592.16.camel@mylittlepony.linaroharston> <554A3BFC.2040802@oracle.com> Message-ID: <1431007382.18342.19.camel@mylittlepony.linaroharston> On Wed, 2015-05-06 at 09:06 -0700, Vladimir Kozlov wrote: > On 5/6/15 8:45 AM, Edward Nevill wrote: > > Hi, > > > > After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. 
> > > > JIRA report here > > > > https://bugs.openjdk.java.net/browse/JDK-8079507 > > > > webrev here > > > > http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ > > Instead of repeated (!is_static && rc == may_rewrite) checks can you add > and use one boolean calculated before the code? New webrev at http://cr.openjdk.java.net/~enevill/8079507/webrev.02/ Rather than create a new bool I set the RewriteControl rc to may_not_rewrite like so // Dont rewrite getstatic, only getfield if (is_static) rc = may_not_rewrite; Then the condition for each case becomes if (rc == may_rewrite) { ... } I think this is more in the spirit of the RewriteControl enum having values may_rewrite and may_not_rewrite than creating a separate bool. Vladimir: I have marked you as reviewer. If you are happy with this change please let me know and I will go ahead and push. All the best, Ed. From edward.nevill at linaro.org Thu May 7 14:23:44 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Thu, 07 May 2015 15:23:44 +0100 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer Message-ID: <1431008624.18342.29.camel@mylittlepony.linaroharston> Hi, The following webrev adds support for the -XX:+PreserveFramePointer option. http://cr.openjdk.java.net/~enevill/8079564/webrev.00/ Support for proper frame pointers for use in debug/profile was added to x86 in the following changeset http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/382e9e4b3b71 This webrev mirrors that changeset to add the support to aarch64. Associated JIRA issue is https://bugs.openjdk.java.net/browse/JDK-8079564 Tested with JTreg hotspot and langtools. Without frame pointer (-XX:-PreserverFramePointer) which is the default Hotspot: passed: 805; failed: 34; error: 3 Langtools: passed: 3,267; error: 9 With frame pointer (-XX:+PreserveFramePointer) Hotspot: passed: 805; failed: 34; error: 3 Langtools: passed: 3,266; error: 10 Thanks for the review! Ed. 
From zoltan.majo at oracle.com Thu May 7 14:33:05 2015 From: zoltan.majo at oracle.com (Zoltán Majó) Date: Thu, 07 May 2015 16:33:05 +0200 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <1431008624.18342.29.camel@mylittlepony.linaroharston> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> Message-ID: <554B77A1.6010405@oracle.com> Hi Ed, this change looks good to me (I'm not a *R*eviewer). I was wondering, though, which is the langtools test that passes with -XX:-PreserveFramePointer disabled and fails with -XX:+PreserveFramePointer. I would like to see if the same problem appears on x86. Could you maybe also send me the error message that you get? Thanks a lot in advance! Best regards, Zoltán On 05/07/2015 04:23 PM, Edward Nevill wrote: > Hi, > > The following webrev adds support for the -XX:+PreserveFramePointer option. > > http://cr.openjdk.java.net/~enevill/8079564/webrev.00/ > > Support for proper frame pointers for use in debug/profile was added to x86 in the following changeset > > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/382e9e4b3b71 > > This webrev mirrors that changeset to add the support to aarch64. > > Associated JIRA issue is > > https://bugs.openjdk.java.net/browse/JDK-8079564 > > Tested with JTreg hotspot and langtools. > > Without frame pointer (-XX:-PreserverFramePointer) which is the default > > Hotspot: passed: 805; failed: 34; error: 3 > Langtools: passed: 3,267; error: 9 > > With frame pointer (-XX:+PreserveFramePointer) > > Hotspot: passed: 805; failed: 34; error: 3 > Langtools: passed: 3,266; error: 10 > > Thanks for the review! > Ed.
> > From aph at redhat.com Thu May 7 14:39:00 2015 From: aph at redhat.com (Andrew Haley) Date: Thu, 07 May 2015 15:39:00 +0100 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <1431008624.18342.29.camel@mylittlepony.linaroharston> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> Message-ID: <554B7904.2000806@redhat.com> Please explain the changes to method handle calls. Andrew. From edward.nevill at linaro.org Thu May 7 15:02:45 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Thu, 07 May 2015 16:02:45 +0100 Subject: [aarch64-port-dev ] RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <554B77A1.6010405@oracle.com> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> <554B77A1.6010405@oracle.com> Message-ID: <1431010965.18342.36.camel@mylittlepony.linaroharston> On Thu, 2015-05-07 at 16:33 +0200, Zoltán Majó wrote: > Hi Ed, > > > this change looks good to me (I'm not a *R*eviewer). > > I was wondering, though, which is the langtools test that passes with > -XX:-PreserveFramePointer disabled and fails with > -XX:+PreserveFramePointer. I would like to see if the same problem > appears on x86. Could you maybe also send me the error message that you get? > > Thanks a lot in advance! > > Best regards, > The test in question is > >> Error: tools/javac/failover/CheckAttributedTree.java It's a timeout. From the .jtr test result: Error. Program `/home/ed/images/jdk-release-r29/bin/java' timed out! I have put the .jtr @ http://cr.openjdk.java.net/~enevill/8079564/CheckAttributedTree.jtr Rerunning the test in isolation passes with both -XX:+/-PreserveFramePointer All the best, Ed.
From dmitry.samersoff at oracle.com Thu May 7 15:20:09 2015 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Thu, 07 May 2015 18:20:09 +0300 Subject: 8079200: Fix heapdump tests to validate heapdump after jhat is removed In-Reply-To: <554B2832.1030400@oracle.com> References: <554B2832.1030400@oracle.com> Message-ID: <554B82A9.9090702@oracle.com> Katja, Looks good for me. -Dmitry On 2015-05-07 11:54, Yekaterina Kantserova wrote: > Hi, > > Could I please have a review of this fix. > > bug: https://bugs.openjdk.java.net/browse/JDK-8079200 > webrev: http://cr.openjdk.java.net/~ykantser/8079200/webrev.00 > > The fix makes sure the HprofParser is available for all types of test > frameworks, not only JTreg. It will be a part of test-lib.jar. > > Thanks, > Katja > > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From vladimir.kozlov at oracle.com Thu May 7 15:33:19 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 07 May 2015 08:33:19 -0700 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp In-Reply-To: <1431007382.18342.19.camel@mylittlepony.linaroharston> References: <1430927148.3592.16.camel@mylittlepony.linaroharston> <554A3BFC.2040802@oracle.com> <1431007382.18342.19.camel@mylittlepony.linaroharston> Message-ID: <554B85BF.10205@oracle.com> This looks good. Thanks, Vladimir On 5/7/15 7:03 AM, Edward Nevill wrote: > On Wed, 2015-05-06 at 09:06 -0700, Vladimir Kozlov wrote: >> On 5/6/15 8:45 AM, Edward Nevill wrote: >>> Hi, >>> >>> After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. >>> >>> JIRA report here >>> >>> https://bugs.openjdk.java.net/browse/JDK-8079507 >>> >>> webrev here >>> >>> http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ >> >> Instead of repeated (!is_static && rc == may_rewrite) checks can you add >> and use one boolean calculated before the code? 
> > New webrev at > > http://cr.openjdk.java.net/~enevill/8079507/webrev.02/ > > Rather than create a new bool I set the RewriteControl rc to may_not_rewrite like so > > // Dont rewrite getstatic, only getfield > if (is_static) rc = may_not_rewrite; > > Then the condition for each case becomes > > if (rc == may_rewrite) { > ... > } > > I think this is more in the spirit of the RewriteControl enum having values may_rewrite and may_not_rewrite than creating a separate bool. > > Vladimir: I have marked you as reviewer. If you are happy with this change please let me know and I will go ahead and push. > > All the best, > Ed. > > From coleen.phillimore at oracle.com Thu May 7 15:37:02 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 07 May 2015 11:37:02 -0400 Subject: RFR: 8079507: jdk9: aarch64: fails to build after merge from hs-comp In-Reply-To: <1431007382.18342.19.camel@mylittlepony.linaroharston> References: <1430927148.3592.16.camel@mylittlepony.linaroharston> <554A3BFC.2040802@oracle.com> <1431007382.18342.19.camel@mylittlepony.linaroharston> Message-ID: <554B869E.3090405@oracle.com> Hi, I had a look at this. This seems fine although it's arbitrarily different than the other platforms. I guess it doesn't matter. thanks, Coleen On 5/7/15, 10:03 AM, Edward Nevill wrote: > On Wed, 2015-05-06 at 09:06 -0700, Vladimir Kozlov wrote: >> On 5/6/15 8:45 AM, Edward Nevill wrote: >>> Hi, >>> >>> After the latest merge from hs-comp, jdk9/dev fails to build for aarch64. >>> >>> JIRA report here >>> >>> https://bugs.openjdk.java.net/browse/JDK-8079507 >>> >>> webrev here >>> >>> http://cr.openjdk.java.net/~enevill/8079507/webrev.00/ >> Instead of repeated (!is_static && rc == may_rewrite) checks can you add >> and use one boolean calculated before the code? 
> New webrev at > > http://cr.openjdk.java.net/~enevill/8079507/webrev.02/ > > Rather than create a new bool I set the RewriteControl rc to may_not_rewrite like so > > // Dont rewrite getstatic, only getfield > if (is_static) rc = may_not_rewrite; > > Then the condition for each case becomes > > if (rc == may_rewrite) { > ... > } > > I think this is more in the spirit of the RewriteControl enum having values may_rewrite and may_not_rewrite than creating a separate bool. > > Vladimir: I have marked you as reviewer. If you are happy with this change please let me know and I will go ahead and push. > > All the best, > Ed. > > From vladimir.kozlov at oracle.com Thu May 7 17:10:57 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 07 May 2015 10:10:57 -0700 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <1431008624.18342.29.camel@mylittlepony.linaroharston> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> Message-ID: <554B9CA1.8040809@oracle.com> Looks good. Thanks, Vladimir On 5/7/15 7:23 AM, Edward Nevill wrote: > Hi, > > The following webrev adds support for the -XX:+PreserveFramePointer option. > > http://cr.openjdk.java.net/~enevill/8079564/webrev.00/ > > Support for proper frame pointers for use in debug/profile was added to x86 in the following changeset > > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/382e9e4b3b71 > > This webrev mirrors that changeset to add the support to aarch64. > > Associated JIRA issue is > > https://bugs.openjdk.java.net/browse/JDK-8079564 > > Tested with JTreg hotspot and langtools. > > Without frame pointer (-XX:-PreserverFramePointer) which is the default > > Hotspot: passed: 805; failed: 34; error: 3 > Langtools: passed: 3,267; error: 9 > > With frame pointer (-XX:+PreserveFramePointer) > > Hotspot: passed: 805; failed: 34; error: 3 > Langtools: passed: 3,266; error: 10 > > Thanks for the review! > Ed. 
> > From zoltan.majo at oracle.com Fri May 8 06:15:47 2015 From: zoltan.majo at oracle.com (Zoltán Majó) Date: Fri, 08 May 2015 08:15:47 +0200 Subject: [aarch64-port-dev ] RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <1431010965.18342.36.camel@mylittlepony.linaroharston> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> <554B77A1.6010405@oracle.com> <1431010965.18342.36.camel@mylittlepony.linaroharston> Message-ID: <554C5493.7060900@oracle.com> On 05/07/2015 05:02 PM, Edward Nevill wrote: > On Thu, 2015-05-07 at 16:33 +0200, Zoltán Majó wrote: >> Hi Ed, >> >> >> this change looks good to me (I'm not a *R*eviewer). >> >> I was wondering, though, which is the langtools test that passes with >> -XX:-PreserveFramePointer disabled and fails with >> -XX:+PreserveFramePointer. I would like to see if the same problem >> appears on x86. Could you maybe also send me the error message that you get? >> >> Thanks a lot in advance! >> >> Best regards, >> > The test in question is > >> Error: tools/javac/failover/CheckAttributedTree.java > It's a timeout. From the .jtr > > test result: Error. Program `/home/ed/images/jdk-release-r29/bin/java' timed out! > > I have put the .jtr @ > > http://cr.openjdk.java.net/~enevill/8079564/CheckAttributedTree.jtr > > Rerunning the test in isolation passes with both -XX:+/-PreserveFramePointer Thank you, Ed! Best regards, Zoltán > > All the best, > Ed. > > From per.liden at oracle.com Fri May 8 10:38:22 2015 From: per.liden at oracle.com (Per Liden) Date: Fri, 08 May 2015 12:38:22 +0200 Subject: Heads Up!
GC directory structure cleanup In-Reply-To: <554B4FA1.7040607@oracle.com> References: <554B20FE.5070607@oracle.com> <554B4FA1.7040607@oracle.com> Message-ID: <554C921E.10300@oracle.com> On 2015-05-07 13:42, Per Liden wrote: > Hi Mattis, > > On 2015-05-07 12:45, Mattis Castegren wrote: >> Hi >> >> Will this just be a change in directory names, or will the code be >> changed as well? > > This is mainly a change of directory names, but this means that a number > of #include "gc_impl..." and #ifndef SHARE_VM_GC_IMPL..., etc will also > need to change. Other than that there will be no changes to any C++ code. > > Btw, same goes for the SA, where some package and import lines will be > updated to reflect the new paths. > >> >> If it is just a change in directory names, would it make sense to add >> this to the unshuffle script, >> http://cr.openjdk.java.net/~chegar/docs/portingScript.html >> >> This script currently unshuffles the directory name changes for the >> jigsaw project to allow backporting of fixes between 9 and 8. We in >> Sustaining will still backport bug fixes and security fixes to JDK 8 >> and below, so it would be good if we did the same for GC changes. >> >> Do you think that would be possible? > > That sounds like a good idea. I will look into it. I had a look at the unshuffle script and talked to Mattis about their needs. The conclusion is that with some adjustments to the unshuffle_patch.sh script it can do what we want. However, the sustaining org is almost always interested in back porting, forward porting is very rare. For back porting, it is fairly easy to query the hg repo to figure out if/how a file has moved around. This also allows you to backport a patch to any given revision (not just post -> pre the GC directory restructure) and works for any file in any repo (not just hotspot and the GC files). Attaching an example of what such a script could look like.
cheers, /Per > > cheers, > /Per > >> >> Kind Regards >> /Mattis >> >> -----Original Message----- >> From: Per Liden >> Sent: den 7 maj 2015 10:23 >> To: hotspot-dev at openjdk.java.net >> Subject: Heads Up! GC directory structure cleanup >> >> Hi all, >> >> This is a heads up to let everyone know that the GC team is planning to >> do a cleanup of the directory structure for GC code. This change will >> affect people working on changes which touch GC-related code, and could >> mean that such patches need to be updated before applying cleanly to the >> new directory structure. >> >> >> Background >> ---------- >> In the continuous work to address technical debt, the time has come to >> make some changes to how the GC code is organized. Over time the GC code >> has spread out across a number of directories, and currently looks like >> this: >> >> - There are three "top-level" directories which contain GC-related code: >> src/share/vm/gc_interface/ >> src/share/vm/gc_implementation/ >> src/share/vm/memory/ >> >> - Our collectors are roughly spread out like this: >> src/share/vm/gc_implementation/parallelScavenge/ (ParallelGC) >> src/share/vm/gc_implementation/g1/ (G1) >> src/share/vm/gc_implementation/concurrentMarkSweep/ (CMS) >> src/share/vm/gc_implementation/parNew/ (ParNewGC) >> src/share/vm/gc_implementation/shared/ (MarkSweep) >> src/share/vm/memory/ (DefNew) >> >> - We have common/shared code in the following places: >> src/share/vm/gc_interface/ (CollectedHeap, etc) >> src/share/vm/gc_implementation/shared/ (counters, utilities, etc) >> src/share/vm/memory/ (BarrierSet, GenCollectedHeap, >> etc) >> >> >> New Structure >> ------------- >> The plan is for the new structure to look like this: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> 
src/share/gc/shared/ >> >> >> FAQ >> --- >> Q: How will this affect me? >> A: Moving files around could mean that the patch you are working on will >> fail to apply cleanly. hg does a fairly good job of tracking >> moves/renames, but if you're using other tools (like patch, mq, etc) you >> might need to update/merge your patch manually. >> >> Q: When will this happen? >> A: A patch for this is currently being worked on and tested. A review >> request will be sent to hotspot-dev in the near future. >> >> Q: Why do this now? >> A: All major back-porting work to 8u has been completed. If we want to >> do this type of cleanup in jdk9, then now is a good time. The next >> opportunity to do this will be in jdk10, after all major back-porting >> work to 9u has been completed. We would prefer to do it now. >> >> regards, >> The GC Team >> From per.liden at oracle.com Fri May 8 10:44:32 2015 From: per.liden at oracle.com (Per Liden) Date: Fri, 08 May 2015 12:44:32 +0200 Subject: Heads Up! GC directory structure cleanup In-Reply-To: <554C921E.10300@oracle.com> References: <554B20FE.5070607@oracle.com> <554B4FA1.7040607@oracle.com> <554C921E.10300@oracle.com> Message-ID: <554C9390.9050001@oracle.com> On 2015-05-08 12:38, Per Liden wrote: > On 2015-05-07 13:42, Per Liden wrote: >> Hi Mattis, >> >> On 2015-05-07 12:45, Mattis Castegren wrote: >>> Hi >>> >>> Will this just be a change in directory names, or will the code be >>> changed as well? >> >> This is mainly a change of directory names, but this means that a number >> of #include "gc_impl..." and #ifndef SHARE_VM_GC_IMPL..., etc will also >> need to change. Other than that there will be no changes to any C++ code. >> >> Btw, same goes for the SA, where some package and import lines will be >> updated to reflect the new paths. 
>> >>> If it is just a change in directory names, would it make sense to add >>> this to the unshuffle script, >>> http://cr.openjdk.java.net/~chegar/docs/portingScript.html >>> >>> This script currently unshuffles the directory name changes for the >>> jigsaw project to allow backporting of fixes between 9 and 8. We in >>> Sustaining will still backport bug fixes and security fixes to JDK 8 >>> and below, so it would be good if we did the same for GC changes. >>> >>> Do you think that would be possible? >> >> That sounds like a good idea. I will look into it. > > I had a look at the unshuffle script and talked to Mattis about their > needs. The conclusion is that with some adjustments to the > unshuffle_patch.sh script it can do what we want. However, the > sustaining org is almost always interested in back porting, forward > porting is very rare. For back porting, it is fairly easy to query the > hg repo to figure out if/how a file has moved around. This also allows > you to backport a patch to any given revision (not just post -> pre the > GC directory restructure) and works for any file in any repo (not just > hotspot and the GC files). > > Attaching an example of what such a script could look like. Ok, the mail server/list ate the patch, here it is inline.

#!/bin/bash
#
# backport_patch: Adjust patch to match old file/path names
#
if [ $# != 3 ]; then
    echo "usage: backport_patch <rev> <patch> <new_patch>"
    echo ""
    echo "  <rev>        Revision to backport to"
    echo "  <patch>      Patch file to backport"
    echo "  <new_patch>  Output result in new patch file"
    exit 1
fi

REV=$1
PATCH=$2
NEW_PATCH=$3

if [ ! -f "$PATCH" ]; then
    echo "File not found: $PATCH"
    exit 1
fi

if [ -f "$NEW_PATCH" ]; then
    echo "File already exists: $NEW_PATCH"
    exit 1
fi

if ! \
hg log -r $REV &> /dev/null; then
    echo "Unknown revision \"$REV\""
    exit 1
fi

get_old_name() {
    hg log -fpg -r $REV..tip $1 | awk '
        BEGIN { from=""; }
        /^(copy|rename) from / { from=$3; }
        END { print from; }'
}

echo "Backporting to revision $REV"

cat $PATCH > $NEW_PATCH

FILES=$(grep '^diff --git ' $PATCH | sed 's|diff --git a/\([^ ]*\) b/.*$|\1|')
for NEW_FILE in $FILES; do
    OLD_FILE=$(get_old_name $NEW_FILE)
    if [ "$OLD_FILE" ]; then
        echo " * $NEW_FILE -> $OLD_FILE"
        sed -i "s|\( [ab]/\)$NEW_FILE|\1$OLD_FILE|g" $NEW_PATCH
    else
        echo " * $NEW_FILE -> $NEW_FILE (Same)"
    fi
done
echo "Done"
# End of file

cheers, /Per > > cheers, > /Per > >> >> cheers, >> /Per >> >>> >>> Kind Regards >>> /Mattis >>> >>> -----Original Message----- >>> From: Per Liden >>> Sent: den 7 maj 2015 10:23 >>> To: hotspot-dev at openjdk.java.net >>> Subject: Heads Up! GC directory structure cleanup >>> >>> Hi all, >>> >>> This is a heads up to let everyone know that the GC team is planning to >>> do a cleanup of the directory structure for GC code. This change will >>> affect people working on changes which touch GC-related code, and could >>> mean that such patches need to be updated before applying cleanly to the >>> new directory structure. >>> >>> >>> Background >>> ---------- >>> In the continuous work to address technical debt, the time has come to >>> make some changes to how the GC code is organized.
Over time the GC code >>> has spread out across a number of directories, and currently looks like >>> this: >>> >>> - There are three "top-level" directories which contain GC-related code: >>> src/share/vm/gc_interface/ >>> src/share/vm/gc_implementation/ >>> src/share/vm/memory/ >>> >>> - Our collectors are roughly spread out like this: >>> src/share/vm/gc_implementation/parallelScavenge/ (ParallelGC) >>> src/share/vm/gc_implementation/g1/ (G1) >>> src/share/vm/gc_implementation/concurrentMarkSweep/ (CMS) >>> src/share/vm/gc_implementation/parNew/ (ParNewGC) >>> src/share/vm/gc_implementation/shared/ (MarkSweep) >>> src/share/vm/memory/ (DefNew) >>> >>> - We have common/shared code in the following places: >>> src/share/vm/gc_interface/ (CollectedHeap, etc) >>> src/share/vm/gc_implementation/shared/ (counters, utilities, etc) >>> src/share/vm/memory/ (BarrierSet, GenCollectedHeap, >>> etc) >>> >>> >>> New Structure >>> ------------- >>> The plan is for the new structure to look like this: >>> >>> - A single "top-level" directory for GC code: >>> src/share/vm/gc/ >>> >>> - One sub-directory per GC: >>> src/share/vm/gc/cms/ >>> src/share/vm/gc/g1/ >>> src/share/vm/gc/parallel/ >>> src/share/vm/gc/serial/ >>> >>> - A single directory for common/shared GC code: >>> src/share/gc/shared/ >>> >>> >>> FAQ >>> --- >>> Q: How will this affect me? >>> A: Moving files around could mean that the patch you are working on will >>> fail to apply cleanly. hg does a fairly good job of tracking >>> moves/renames, but if you're using other tools (like patch, mq, etc) you >>> might need to update/merge your patch manually. >>> >>> Q: When will this happen? >>> A: A patch for this is currently being worked on and tested. A review >>> request will be sent to hotspot-dev in the near future. >>> >>> Q: Why do this now? >>> A: All major back-porting work to 8u has been completed. If we want to >>> do this type of cleanup in jdk9, then now is a good time. 
The next >>> opportunity to do this will be in jdk10, after all major back-porting >>> work to 9u has been completed. We would prefer to do it now. >>> >>> regards, >>> The GC Team >>> From volker.simonis at gmail.com Fri May 8 14:42:48 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 8 May 2015 16:42:48 +0200 Subject: We REALLY nead a NON-PCH build in JPRT NOW! In-Reply-To: <553831F0.70201@oracle.com> References: <480AE697-2A40-433A-A728-51C64C6DEA57@oracle.com> <551EF8EE.8090707@oracle.com> <5526322D.1070008@oracle.com> <55263408.3040200@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFBF607@DEWDFEMB12A.global.corp.sap> <5530DEF3.5070301@oracle.com> <4295855A5C1DE049A61835A1887419CC2CFBF65B@DEWDFEMB12A.global.corp.sap> <553568C5.5000202@oracle.com> <5535A85D.101@oracle.com> <5536922F.3030204@oracle.com> <55370107.6010207@oracle.com> <55375477.7020804@redhat.com> <553831F0.70201@oracle.com> Message-ID: On Thu, Apr 23, 2015 at 1:42 AM, Coleen Phillimore wrote: > > On 4/22/15, 3:57 AM, Andrew Haley wrote: >> >> On 22/04/15 03:01, Daniel D. Daugherty wrote: >>> >>> Personally, I like the idea of not adding any more new JPRT targets >>> and reconfiguring to have fastdebug and/or debug builds run as non-PCH... >>> It's a much simpler policy to explain: >>> >>> If we support PCH builds with a particular toolset then product >>> builds default to PCH and non-product builds default to no-PCH. >> >> But the debug builds are used in development all the time. It's >> these that really benefit from PCH. > > > I feel like PCH makes development slower. I change a header file and all > the files in the system are recompiled because it happens to be in the > precompiled file. I wouldn't miss it for debug mode. > While working on the new HotSpot build (in the build-infra project) I've just realized the we are currently using the option '-fpch-deps'. 
Citing from the gcc man-page: "When using precompiled headers, this flag will cause the dependency-output flags to also list the files from the precompiled header's dependencies. If not specified only the precompiled header would be listed and not the files that were used to create it because those files are not consulted when a precompiled header is used." This actually means that a .cpp file which uses PCH will depend on all the headers in the PCH-file while it should actually only depend on the header files it explicitly includes itself. So this is probably the reason why with PCH the change of a single header unnecessarily triggers the recompilation of most of the .cpp files. That said, we of course still have certain headers which are included in all compilation units, so there's nothing we can do once such a file is changed. > Coleen > >> >> Andrew. >> > From stefan.karlsson at oracle.com Fri May 8 15:01:00 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 08 May 2015 17:01:00 +0200 Subject: We REALLY nead a NON-PCH build in JPRT NOW! In-Reply-To: References: <480AE697-2A40-433A-A728-51C64C6DEA57@oracle.com> <551EF8EE.8090707@oracle.com> <5526322D.1070008@oracle.com> <55263408.3040200@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFBF607@DEWDFEMB12A.global.corp.sap> <5530DEF3.5070301@oracle.com> <4295855A5C1DE049A61835A1887419CC2CFBF65B@DEWDFEMB12A.global.corp.sap> <553568C5.5000202@oracle.com> <5535A85D.101@oracle.com> <5536922F.3030204@oracle.com> <55370107.6010207@oracle.com> <55375477.7020804@redhat.com> <553831F0.70201@oracle.com> <554CCFAC.7040407@oracle.com> Message-ID: <554CCFAC.7040407@oracle.com> Volker, On 2015-05-08 16:42, Volker Simonis wrote: > On Thu, Apr 23, 2015 at 1:42 AM, Coleen Phillimore > wrote: >> On 4/22/15, 3:57 AM, Andrew Haley wrote: >>> On 22/04/15 03:01, Daniel D. Daugherty wrote: >>>> Personally, I like the idea of not adding any more new JPRT targets >>>> and reconfiguring to have fastdebug and/or debug builds run as non-PCH...
>>>> It's a much simpler policy to explain: >>>> >>>> If we support PCH builds with a particular toolset then product >>>> builds default to PCH and non-product builds default to no-PCH. >>> But the debug builds are used in development all the time. It's >>> these that really benefit from PCH. >> >> I feel like PCH makes development slower. I change a header file and all >> the files in the system are recompiled because it happens to be in the >> precompiled file. I wouldn't miss it for debug mode. >> > While working on the new HotSpot build (in the build-infra project) > I've just realized the we are currently using the option '-fpch-deps'. > > Citing from the gcc man-page: "When using precompiled headers, this > flag will cause the dependency-output flags to also list the files > from the precompiled header's dependencies. If not specified only the > precompiled header would be listed and not the files that were used to > create it because those files are not consulted when a precompiled > header is used." > > This actually means that a .cpp file which uses PCH will depend on all > the headers in the PCH-file while while it should actually only depend > an the header files it explicitly includes itself. So this is probably > the reason why with PCH the change of a single header unnecessarily > triggers the recompilation of most of the > .cpp files. That said we of course still have certain header which are > included in all compilation units so there's nothing we can make once > such a file is changed. Yes, I think you found that running without -fpch-deps was broken: http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-June/006016.html :) StefanK > > >> Coleen >> >>> Andrew. >>> From volker.simonis at gmail.com Fri May 8 17:04:07 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 8 May 2015 19:04:07 +0200 Subject: We REALLY nead a NON-PCH build in JPRT NOW! 
In-Reply-To: <55392850.5090508@oracle.com> References: <480AE697-2A40-433A-A728-51C64C6DEA57@oracle.com> <551EF8EE.8090707@oracle.com> <5526322D.1070008@oracle.com> <55263408.3040200@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFBF607@DEWDFEMB12A.global.corp.sap> <5530DEF3.5070301@oracle.com> <4295855A5C1DE049A61835A1887419CC2CFBF65B@DEWDFEMB12A.global.corp.sap> <553568C5.5000202@oracle.com> <5535A85D.101@oracle.com> <5536922F.3030204@oracle.com> <55370107.6010207@oracle.com> <55375477.7020804@redhat.com> <553831F0.70201@oracle.com> <5538A27F.5010705@redhat.com> <55390D8B.9090405@oracle.com> <55392850.5090508@oracle.com> Message-ID: On Thu, Apr 23, 2015 at 7:13 PM, Dmitry Samersoff wrote: > Everyone, > > FYI, > > Did small benchmark for entire JDK clean build on my laptop > - linux-xfs, core I5, 8G, SSD, idle. > > As expected, PCH doesn't really affects a clean build. > > Numbers below: > > ** ccache, no pch, no javac > > Finished building Java(TM) for target 'default' > 12079.395u 320.784s 58:20.54 354.2% 0+0k 0+0io 56pf+0w > > ** no ccache, pch, no javac > > Finished building Java(TM) for target 'default' > 12847.053u 340.212s 59:07.99 371.6% 0+0k 0+0io 0pf+0w > > ** no ccache, no pch, no javac > > Finished building Java(TM) for target 'default' > 12493.502u 298.802s 54:36.23 390.4% 0+0k 0+0io 17pf+0w > Hi Dmitry, from my experience, PCH helps most if you have a slow file system. You have measured with a SSD which reduces the gains of PCH. Also, because currently only the HotSpot build supports PCH, comparing build numbers of a complete JDK builds is a little misleading because the HotSpot build is only a small part of the complete build. 
Doing a 'make hotspot-only JOBS=8' build on Linux/ppc64 with the sources on a NFS share gives the following results: with PCH: real 3m6.580s user 6m44.423s sys 2m5.348s without PCH: real 3m48.296s user 12m20.448s sys 1m41.423s As you can see, the non-PCH build consumes nearly twice as much user time as the PCH build, so the PCH build can still be useful in certain environments. Regards, Volker > -Dmitry > > > > On 2015-04-23 18:19, Coleen Phillimore wrote: >> >> On 4/23/15, 3:42 AM, Andrew Haley wrote: >>> On 23/04/15 00:42, Coleen Phillimore wrote: >>>> On 4/22/15, 3:57 AM, Andrew Haley wrote: >>>>> On 22/04/15 03:01, Daniel D. Daugherty wrote: >>>>>> Personally, I like the idea of not adding any more new JPRT targets >>>>>> and reconfiguring to have fastdebug and/or debug builds run as >>>>>> non-PCH... >>>>>> It's a much simpler policy to explain: >>>>>> >>>>>> If we support PCH builds with a particular toolset then product >>>>>> builds default to PCH and non-product builds default to no-PCH. >>>>> But the debug builds are used in development all the time. It's >>>>> these that really benefit from PCH. >>>> I feel like PCH makes development slower. I change a header file and >>>> all the files in the system are recompiled because it happens to be in >>>> the precompiled file. >>> But that'll happen anyway if the change you made is to one of the common >>> headers. I can't see that it makes any difference. >>> >>> Maybe it depends on what you're working on? >> >> linkResolver.hpp - maybe it's one that doesn't belong in precompiled.hpp >> but I suspect a lot of files fall into that category. If PCH doesn't >> really make builds that much faster, why have it? I'm planning to >> change my script that calls configure to not use precompiled headers. >> >> Thanks, >> Coleen >> >>> >>> Andrew. >>> >> > > > -- > Dmitry Samersoff > Oracle Java development team, Saint Petersburg, Russia > * I would love to change the world, but they won't give me the sources. 
From volker.simonis at gmail.com Fri May 8 17:07:54 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 8 May 2015 19:07:54 +0200 Subject: We REALLY nead a NON-PCH build in JPRT NOW! In-Reply-To: <554CCFAC.7040407@oracle.com> References: <480AE697-2A40-433A-A728-51C64C6DEA57@oracle.com> <551EF8EE.8090707@oracle.com> <5526322D.1070008@oracle.com> <55263408.3040200@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFBF607@DEWDFEMB12A.global.corp.sap> <5530DEF3.5070301@oracle.com> <4295855A5C1DE049A61835A1887419CC2CFBF65B@DEWDFEMB12A.global.corp.sap> <553568C5.5000202@oracle.com> <5535A85D.101@oracle.com> <5536922F.3030204@oracle.com> <55370107.6010207@oracle.com> <55375477.7020804@redhat.com> <553831F0.70201@oracle.com> <554CCFAC.7040407@oracle.com> Message-ID: On Fri, May 8, 2015 at 5:01 PM, Stefan Karlsson wrote: > Volker, > > > On 2015-05-08 16:42, Volker Simonis wrote: >> >> On Thu, Apr 23, 2015 at 1:42 AM, Coleen Phillimore >> wrote: >>> >>> On 4/22/15, 3:57 AM, Andrew Haley wrote: >>>> >>>> On 22/04/15 03:01, Daniel D. Daugherty wrote: >>>>> >>>>> Personally, I like the idea of not adding any more new JPRT targets >>>>> and reconfiguring to have fastdebug and/or debug builds run as >>>>> non-PCH... >>>>> It's a much simpler policy to explain: >>>>> >>>>> If we support PCH builds with a particular toolset then product >>>>> builds default to PCH and non-product builds default to no-PCH. >>>> >>>> But the debug builds are used in development all the time. It's >>>> these that really benefit from PCH. >>> >>> >>> I feel like PCH makes development slower. I change a header file and all >>> the files in the system are recompiled because it happens to be in the >>> precompiled file. I wouldn't miss it for debug mode. >>> >> While working on the new HotSpot build (in the build-infra project) >> I've just realized the we are currently using the option '-fpch-deps'. 
>> >> Citing from the gcc man-page: "When using precompiled headers, this >> flag will cause the dependency-output flags to also list the files >> from the precompiled header's dependencies. If not specified only the >> precompiled header would be listed and not the files that were used to >> create it because those files are not consulted when a precompiled >> header is used." >> >> This actually means that a .cpp file which uses PCH will depend on all >> the headers in the PCH-file while it should actually only depend >> on the header files it explicitly includes itself. So this is probably >> the reason why with PCH the change of a single header unnecessarily >> triggers the recompilation of most of the >> .cpp files. That said we of course still have certain headers which are >> included in all compilation units so there's nothing we can do once >> such a file is changed. > > Yes, I think you found that running without -fpch-deps was broken: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-June/006016.html > > :) > Oh my god - I completely forgot about this change! I thought about doing a 'hg annotate' before posting to see who introduced it but then I forgot about it :) So we should really check for the mentioned compiler bug before dropping '-fpch-deps' in the new HotSpot build. Thanks for the link, Volker > StefanK > >> >> >>> Coleen >>> >>>> Andrew. >>>> > From dmitry.samersoff at oracle.com Fri May 8 22:35:12 2015 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Sat, 09 May 2015 01:35:12 +0300 Subject: We REALLY nead a NON-PCH build in JPRT NOW!
In-Reply-To: References: <480AE697-2A40-433A-A728-51C64C6DEA57@oracle.com> <551EF8EE.8090707@oracle.com> <5526322D.1070008@oracle.com> <55263408.3040200@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFBF607@DEWDFEMB12A.global.corp.sap> <5530DEF3.5070301@oracle.com> <4295855A5C1DE049A61835A1887419CC2CFBF65B@DEWDFEMB12A.global.corp.sap> <553568C5.5000202@oracle.com> <5535A85D.101@oracle.com> <5536922F.3030204@oracle.com> <55370107.6010207@oracle.com> <55375477.7020804@redhat.com> <553831F0.70201@oracle.com> <5538A27F.5010705@redhat.com> <55390D8B.9090405@oracle.com> <55392850.5090508@oracle.com> Message-ID: <554D3A20.4030209@oracle.com> Volker, Thank you for the numbers. I had a goal to get my own opinion about PCH and JPRT. And I think we can safely turn PCH off for all JPRT builds. -Dmitry On 2015-05-08 20:04, Volker Simonis wrote: > On Thu, Apr 23, 2015 at 7:13 PM, Dmitry Samersoff > wrote: >> Everyone, >> >> FYI, >> >> Did small benchmark for entire JDK clean build on my laptop >> - linux-xfs, core I5, 8G, SSD, idle. >> >> As expected, PCH doesn't really affects a clean build. >> >> Numbers below: >> >> ** ccache, no pch, no javac >> >> Finished building Java(TM) for target 'default' >> 12079.395u 320.784s 58:20.54 354.2% 0+0k 0+0io 56pf+0w >> >> ** no ccache, pch, no javac >> >> Finished building Java(TM) for target 'default' >> 12847.053u 340.212s 59:07.99 371.6% 0+0k 0+0io 0pf+0w >> >> ** no ccache, no pch, no javac >> >> Finished building Java(TM) for target 'default' >> 12493.502u 298.802s 54:36.23 390.4% 0+0k 0+0io 17pf+0w >> > > Hi Dmitry, > > from my experience, PCH helps most if you have a slow file system. You > have measured with a SSD which reduces the gains of PCH. > > Also, because currently only the HotSpot build supports PCH, comparing > build numbers of a complete JDK builds is a little misleading because > the HotSpot build is only a small part of the complete build. 
> > Doing a 'make hotspot-only JOBS=8' build on Linux/ppc64 with the > sources on a NFS share gives the following results: > > with PCH: > real 3m6.580s > user 6m44.423s > sys 2m5.348s > > without PCH: > real 3m48.296s > user 12m20.448s > sys 1m41.423s > > As you can see, the non-PCH build consumes nearly twice as much user > time as the PCH build, so the PCH build can still be useful in certain > environments. > > Regards, > Volker > >> -Dmitry >> >> >> >> On 2015-04-23 18:19, Coleen Phillimore wrote: >>> >>> On 4/23/15, 3:42 AM, Andrew Haley wrote: >>>> On 23/04/15 00:42, Coleen Phillimore wrote: >>>>> On 4/22/15, 3:57 AM, Andrew Haley wrote: >>>>>> On 22/04/15 03:01, Daniel D. Daugherty wrote: >>>>>>> Personally, I like the idea of not adding any more new JPRT targets >>>>>>> and reconfiguring to have fastdebug and/or debug builds run as >>>>>>> non-PCH... >>>>>>> It's a much simpler policy to explain: >>>>>>> >>>>>>> If we support PCH builds with a particular toolset then product >>>>>>> builds default to PCH and non-product builds default to no-PCH. >>>>>> But the debug builds are used in development all the time. It's >>>>>> these that really benefit from PCH. >>>>> I feel like PCH makes development slower. I change a header file and >>>>> all the files in the system are recompiled because it happens to be in >>>>> the precompiled file. >>>> But that'll happen anyway if the change you made is to one of the common >>>> headers. I can't see that it makes any difference. >>>> >>>> Maybe it depends on what you're working on? >>> >>> linkResolver.hpp - maybe it's one that doesn't belong in precompiled.hpp >>> but I suspect a lot of files fall into that category. If PCH doesn't >>> really make builds that much faster, why have it? I'm planning to >>> change my script that calls configure to not use precompiled headers. >>> >>> Thanks, >>> Coleen >>> >>>> >>>> Andrew. 
>>>> >>> >> >> >> -- >> Dmitry Samersoff >> Oracle Java development team, Saint Petersburg, Russia >> * I would love to change the world, but they won't give me the sources. -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From erik.osterlund at lnu.se Sat May 9 11:38:22 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Sat, 9 May 2015 11:38:22 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <554B3B5E.8000705@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> Message-ID: <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> Hi Mikael, > On 07 May 2015, at 11:15, Mikael Gerdin wrote: > > Hi Erik, > > On 2015-05-06 17:01, Erik ?sterlund wrote: >> Hi everyone, >> >> I just read through the discussion and thought I?d share a potential solution that I believe would solve the problem. >> >> Previously I implemented something that struck me as very similar for G1 to get rid of its storeload fence in the barrier that suffered from similar symptoms. >> The idea is to process cards in batches instead of one by one and issue a global store serialization event (e.g. using mprotect to a dummy page) when cleaning. It worked pretty well but after Thomas Schatzel ran some benchmarks we decided the gain wasn?t worth the trouble for G1 since it fences only rarely when encountering interregional pointers (premature optimization). But maybe here it happens more often and is more worth the trouble to get rid of the fence? 
>>
>> Here is a proposed new algorithm candidate (small change to the algorithm in the bug description):
>>
>> mutator (exactly as before):
>>
>> x.a = something
>> StoreStore
>> if (card[@x.a] != dirty) {
>>   card[@x.a] = dirty
>> }
>>
>> preclean:
>>
>> for card in batched_cards {
>>   if (card[@x.a] == dirty) {
>>     card[@x.a] = precleaned
>>   }
>> }
>>
>> global_store_fence()
>>
>> for card in batched_cards {
>>   read x.a
>> }
>>
>> The global fence will incur some local overhead (quite ouchy) and some global overhead fencing on all remote CPUs the process is scheduled to run on (not necessarily all) using cross calls in the kernel to invalidate remote TLB buffers in the L1 cache (not so ouchy), and by batching the cards this "global" cost is amortized arbitrarily, so that even on systems with a ridiculous number of CPUs it's probably still a good idea. It is also possible to let multiple precleaning CPUs share the same global store fence using timestamps since it is in fact global. This guarantees scalability on many-core systems but is a bit less straightforward to implement.
>>
>> If you are interested in this and think it's a good idea, I could try to patch a solution for this, but I would need some help benchmarking this in your systems so we can verify it performs the way I hope.

> I think this is a good idea. The problem is asymmetric in that the CMS thread should be fine with taking a larger local overhead, batching the setting of cards to precleaned and then scanning the cards later.

I'm glad you like the solution. :)

> Do you know how the global_store_fence() would look on different cpu architectures?

The way I envision it is quite portable: one dummy page per thread (lazily initialized) which is first write protected and then unprotected. Unprotecting can be implemented lazily by the kernel but protecting can not, so we are guaranteed this will trigger a global store serialization event and flush remote store buffers.
It would go something like this:

*thread->dummy_page = 0; // make sure page is in memory
write_protect(thread->dummy_page); // serialize writes on remote CPUs
write_unprotect(thread->dummy_page);

If the page is offloaded to disk after the first write, then that offloading will globally serialize stores too (this is paranoia to avoid potential optimizations that avoid remote store flushing if the physical page isn't loaded into memory). I do remember there were concerns raised that this technique might not work for RMO machines, but I see no reason why it would not in this case, since we would still emit StoreStore as normal between the reference store and the dirty value write, and that second store is dependent on the card value read sharing the same address. If anyone thinks this would not work for RMO machines I'm happy to discuss that. Then it's possible to make potentially better platform-specific versions. Windows, for example, already has a system call that does just that: issue a global store serializing fence using IPIs, and nothing else. > The VM already uses this sort of synchronization for the thread state transitions, see references to UseMemBar, os::serialize_thread_states, os::serialize_memory. Perhaps that code can be reused somehow? Yeah, I had a look at that before, but it's used in a slightly different way. As far as I could see it deviated from my version in two ways: 1) It used 1 global dummy page instead of one per thread. This means either only one thread serializes stores or some kind of timestamping + locking mechanism is needed for the one store serializing page. I imagined 1 page per thread instead, but I can probably have 1 virtual page per thread sharing the same underlying physical memory if we are worried about the memory footprint of the technique. 2) It enqueued dummy stores to this dummy page when doing such thread transitions instead of fencing, which I believe is too conservative and unnecessary for us here.
I suspect the idea behind it came from some kind of discussion of what a portable guarantee for the OS to serialize stores might be, and that the conclusion of the discussion must have been something like this:

"stores issued to the dummy page happen-before the write protection, in total order" <- obvious

...and therefore a store to the dummy page was squeezed in, in place of a fence, just because it was uncertain whether remote stores would be flushed if there were no remote stores targeting the memory-serializing page, am I right?

Short story:
This is a bit too conservative (we don't want an extra store that could trap in the reference write barrier, do we). I argue the extra store is unnecessary; the OS/hardware is not aware of whether there are remote CPU stores pending to the specific page, and therefore flushes all remote stores, wherever they may be directed. Instead I would give this guarantee:

"all stores issued in the same process affinity before the write protection happen-before the write protection is observable, in total order"

Longer story:

I could verify this guarantee by reading kernel sources.

Linux:
Since the source is open, I checked the implementation for the architectures we support (arm, aarch64, x86, x86_64, ppc, sparc) in the Linux kernel sources, and it will always flush /all/ remote store buffers, regardless of whether there are pending remote stores to that page, as long as there is a change to be made to the permissions of the page (and hence the TLB), which we guarantee by having one dummy page per thread that we flip permissions on.

BSD:
I also checked the XNU kernel sources (BSD) and it's the same story here: cross calls using IPI/APIC, where the APIC message itself acts as a fence when received, regardless of the code to be run remotely.
Windows:
Windows, I do not know what it does since I can't browse the source code, but it has a system call, FlushProcessWriteBuffers, that flushes remote CPU stores, which we can use instead on this platform; it is probably best suited to do the job on Windows anyway. However, for x86/x86_64 it's AFAIK impossible for kernel implementors to avoid that cross call using APIC (which by itself flushes the store buffers, invariant of the TLB flushing procedure).

General:
I can't imagine any fancy magic OS/hardware solution that would know if remote store flushing is unnecessary because there are no latent remote CPU stores to the specific page being purged. The closest architecture to do something like this I came across was Itanium (which we don't need to support?), which has a specific instruction, ptc.ga, to purge remote TLB entries with no apparent need for a cross call. But according to the developer manual, "Global TLB purge instructions (ptc.g and ptc.ga) follow release semantics both on the local and the remote processors", obviously meaning all stores are still flushed. This is true for WC and UC write buffers too, according to the manuals. If fancy hardware would go to such extremes, there are solutions for that too, but no need to cross that bridge unless such hardware and OS appears, right?

Thanks,
/Erik

> /Mikael
>
>>
>> Thanks,
>> /Erik
>>
>>
>>> On 06 May 2015, at 14:52, Mikael Gerdin wrote:
>>>
>>> Hi Vitaly,
>>>
>>> On 2015-05-06 14:41, Vitaly Davidovich wrote:
>>>> Mikael's suggestion was to make mutator check for !clean and then mark
>>>> dirty. If it sees stale dirty, it will write dirty again no? Today's code
>>>> would have this problem because it's checking for !dirty, but I thought the
>>>> suggested change would prevent that.
>>>
>>> Unfortunately I don't think my suggestion would solve anything.
>>>
>>> If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem.
>>> >>> The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. >>> >>> /Mikael >>> >>> >>>> >>>> sent from my phone >>>> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >>>> >>>>> On 05/05/15 20:51, Vitaly Davidovich wrote: >>>>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>>>> dirty "unnecessarily" using Mikael's suggestion? >>>>> >>>>> No. The mutator may see a stale "dirty" and not write anything. At least >>>>> I haven't seen anything which certainly will prevent that from happening. >>>>> >>>>> Andrew. >>>>> >>>>> >>>>> >> > From david.holmes at oracle.com Sun May 10 20:58:23 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 May 2015 06:58:23 +1000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> Message-ID: <554FC66F.20307@oracle.com> Hi Erik, For background on memory serialization page see Dave Dice's blog entries: https://blogs.oracle.com/dave/entry/qpi_quiescence https://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization-140215.txt Cheers, David H. On 9/05/2015 9:38 PM, Erik ?sterlund wrote: > Hi Mikael, > >> On 07 May 2015, at 11:15, Mikael Gerdin wrote: >> >> Hi Erik, >> >> On 2015-05-06 17:01, Erik ?sterlund wrote: >>> Hi everyone, >>> >>> I just read through the discussion and thought I?d share a potential solution that I believe would solve the problem. 
>>> >>> Previously I implemented something that struck me as very similar for G1 to get rid of its storeload fence in the barrier that suffered from similar symptoms. >>> The idea is to process cards in batches instead of one by one and issue a global store serialization event (e.g. using mprotect to a dummy page) when cleaning. It worked pretty well but after Thomas Schatzel ran some benchmarks we decided the gain wasn?t worth the trouble for G1 since it fences only rarely when encountering interregional pointers (premature optimization). But maybe here it happens more often and is more worth the trouble to get rid of the fence? >>> >>> Here is a proposed new algorithm candidate (small change to algorithm in bug description): >>> >>> mutator (exactly as before): >>> >>> x.a = something >>> StoreStore >>> if (card[@x.a] != dirty) { >>> card[@x.a] = dirty >>> } >>> >>> preclean: >>> >>> for card in batched_cards { >>> if (card[@x.a] == dirty) { >>> card[@x.a] = precleaned >>> } >>> } >>> >>> global_store_fence() >>> >>> for card in batched_cards { >>> read x.a >>> } >>> >>> The global fence will incur some local overhead (quite ouchy) and some global overhead fencing on all remote CPUs the process is scheduled to run on (not necessarily all) using cross calls in the kernel to invalidate remote TLB buffers in the L1 cache (not so ouchy) and by batching the cards, this ?global" cost is amortized arbitrarily so that even on systems with a ridiculous amount of CPUs, it?s probably still a good idea. It is also possible to let multiple precleaning CPUs share the same global store fence using timestamps since it is in fact global. This guarantees scalability on many-core systems but is a bit less straightforward to implement. >>> >>> If you are interested in this and think it?s a good idea, I could try to patch a solution for this, but I would need some help benchmarking this in your systems so we can verify it performs the way I hope. >> >> I think this is a good idea. 
The problem is asymmetric in that the CMS thread should be fine with taking a larger local overhead, batching the setting of cards to precleaned and then scanning the cards later. > > I?m glad you like the solution. :) > >> Do you know how the global_store_fence() would look on different cpu architectures? > > The way I envision it is quite portable: One dummy page per thread (lazily initialized) which is first write protected and then unprotected. Unprotecting can be implemented lazily by the kernel but protecting can not, so we are guaranteed this will trigger a global store serialization event and flush remote store buffers. It would go something like this: > > *thread->dummy_page = 0; // make sure page is in memory. If it?s offloaded to disk after the write, then that offloading will globally serialize stores too (this is paranoia to avoid potential optimizations that avoid remote store flushing if the physical page isn?t loaded to memory) > write_protect(thread->dummy_page); // serialize writes on remote CPUs > write_unprotect(thread->dummy_page); > > I do remember there were concerns raised that this technique might not work for RMO machines, but I see no reason why it would not in this case since we would still emit StoreStore as normal between reference store and dirty value write, and that second store is dependent on the card value read sharing the same address. If anyone thinks this would not work for RMO machines I?m happy to discuss that. > > Then it?s possible to make potentially better platform specific versions. Like windows already has a system call that does just that - issue a global store serializing fence using ipi, and nothing else. > >> The VM already uses this sort of synchronization for the thread state transitions, see references to UseMemBar, os::serialize_thread_states, os::serialize_memory. Perhaps that code can be reused somehow? > > Yeah I had a look at that before, but it?s used in a slightly different way. 
As far as I could see it deviated from my version in two ways: > > 1) It used 1 global dummy page instead of one per thread. This means either only one thread serializes stores or some kind of timestamping + locking mechanism is needed for the one store serializing page. I imagined 1 page per thread instead, but I can probably have 1 virtual page per thread sharing the same underlying physical memory if we are worried about the memory footprint of the technique. > > 2) It enqueued dummy stores to this dummy page when doing such thread transitions instead of fencing, which I believe is too conservative and unnecessary for us here. I suspect the idea behind it came from some kind of discussion of what a portable guarantee for the OS to serialize stores might be and that the conclusion of the discussion must have been something like this: > > ?stores issued to the dummy page happen-before the write protection, in total order? <- obvious > > ?and therefore a store to the dummy page was squeezed in, in place of a fence, just because it was uncertain whether remote stores would be flushed if there were no remote stores targeting the memory serializing page, am I right? > > Short story: > This is a bit too conservative (we don?t want an extra store that could trap in the reference write barrier do we). I argue the extra store is unnecessary; OS/hardware is not aware of whether there are remote CPU stores pending to the specific page and therefore flush all remote stores, wherever they may be stored to. Instead I would give this guarantee: > > ?all stores issued in the same process affinity before the write protection happen-before the write protection is observable, in total order" > > Longer story: > > I could verify this guarantee by reading kernel sources. 
> > Linux: > Since the source is open, I checked the implementation for the architectures we support (arm, aarch64, x86, x86_64, ppc, sparc) in the linux kernel sources and it will always flush /all/ remote store buffers regardless if there may not be pending remote stores to that page or not, as long as there is a change to be made to the permissions of the page (and hence TLB) which we guarantee by having 1 dummy page per thread that we flip permissions on. > > BSD: > I also checked the XNU kernel sources (BSD) and it?s the same story here: cross calls using IPI/APIC, where the APIC message itself acts as a fence when received, regardless of the code to be run remotely. > > Windows: > Windows, I do not know what it does since I can?t browse the source code, but it has a system call, FlushProcessWriteBuffers, that flushes remote CPU stores that we can use instead on this platform, which is probably best suited to do the job on windows anyway. However, for x86/x86_64 it?s AFAIK impossible for kernel implementors to avoid that cross call using APIC (which by itself flushes the store buffers invariant of TLB flushing procedure). > > General: > I can?t imagine any fancy magic OS/hardware solution that would know if remote store flushing is unnecessary because there are no latent remote CPU stores to the specific page being purged. The closest architecture to do something like this I came across was itanium (which we don?t need to support?) which has a specific instruction ptc.ga to purge remote TLB entries with no apparent need for a cross call. But according to the developer manual, ?Global TLB purge instructions (ptc.g and ptc.ga) follow release semantics both on the local and the remote processors?, obviously meaning all stores are still flushed. This is true for WC and UC write buffers too according to the manuals. 
If fancy hardware would go to such extremes, there are solutions for that too, but no need to cross that bridge unless such hardware and OS appears, right? > > Thanks, > /Erik > >> /Mikael >> >>> >>> Thanks, >>> /Erik >>> >>> >>>> On 06 May 2015, at 14:52, Mikael Gerdin wrote: >>>> >>>> Hi Vitaly, >>>> >>>> On 2015-05-06 14:41, Vitaly Davidovich wrote: >>>>> Mikael's suggestion was to make mutator check for !clean and then mark >>>>> dirty. If it sees stale dirty, it will write dirty again no? Today's code >>>>> would have this problem because it's checking for !dirty, but I thought the >>>>> suggested change would prevent that. >>>> >>>> Unfortunately I don't think my suggestion would solve anything. >>>> >>>> If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem. >>>> >>>> The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. >>>> >>>> /Mikael >>>> >>>> >>>>> >>>>> sent from my phone >>>>> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >>>>> >>>>>> On 05/05/15 20:51, Vitaly Davidovich wrote: >>>>>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>>>>> dirty "unnecessarily" using Mikael's suggestion? >>>>>> >>>>>> No. The mutator may see a stale "dirty" and not write anything. At least >>>>>> I haven't seen anything which certainly will prevent that from happening. >>>>>> >>>>>> Andrew. 
>>>>>> >>>>>> >>>>>> >>> >> > From david.holmes at oracle.com Mon May 11 00:48:31 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 May 2015 10:48:31 +1000 Subject: [9] Backport approval request: 8078470: [Linux] Replace syscall use in os::fork_and_exec with glibc fork() and execve() Message-ID: <554FFC5F.6000606@oracle.com> This is the "backport" to 9 as the original change had to go into 8u first for logistical/scheduling reasons. Bug: https://bugs.openjdk.java.net/browse/JDK-8078470 8u changeset: http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/915ca3e9d15e Original review thread: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018171.html The changeset mostly applied cleanly with some manual tweaking in one spot as the 9 code refers to AARCH64 where the 8u code does not. Thanks, David From david.holmes at oracle.com Mon May 11 03:15:43 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 May 2015 13:15:43 +1000 Subject: [9] Backport approval request: 8078470: [Linux] Replace syscall use in os::fork_and_exec with glibc fork() and execve() In-Reply-To: <554FFC5F.6000606@oracle.com> References: <554FFC5F.6000606@oracle.com> Message-ID: <55501EDF.4080403@oracle.com> On 11/05/2015 10:48 AM, David Holmes wrote: > This is the "backport" to 9 as the original change had to go into 8u > first for logistical/scheduling reasons. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8078470 > > 8u changeset: > http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/915ca3e9d15e > > Original review thread: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018171.html > > > The changeset mostly applied cleanly with some manual tweaking in one > spot as the 9 code refers to AARCH64 where the 8u code does not. 
I also had to tweak the tests due to:

8067013: Rename the com.oracle.java.testlibary package

For good measure here's a webrev:

http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/

Thanks,
David

> Thanks,
> David

From aph at redhat.com Mon May 11 09:52:36 2015
From: aph at redhat.com (Andrew Haley)
Date: Mon, 11 May 2015 10:52:36 +0100
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <554FC66F.20307@oracle.com>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com>
Message-ID: <55507BE4.3050605@redhat.com>

On 05/10/2015 09:58 PM, David Holmes wrote:

> For background on memory serialization page see Dave Dice's blog entries:
>
> https://blogs.oracle.com/dave/entry/qpi_quiescence
>
> https://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization-140215.txt

Note that this approach isn't safe except on TSO. We'll still need some kind of barriers for everyone else.

Andrew.

From erik.osterlund at lnu.se Mon May 11 10:10:55 2015
From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=)
Date: Mon, 11 May 2015 10:10:55 +0000
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <554FC66F.20307@oracle.com>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com>
Message-ID:

Hi David,

Thank you for the information! I suspected this and have already read these.
:) However, in the approach Dave describes (as well as the implementation in OpenJDK) a store is put on the "fast side" of the synchronization, targeting the potentially protected page (where the fence would have been). I'm arguing this is unnecessary, since the OS and hardware don't know whether there are latent stores to the specific page being TLB-purged, and therefore issue a "global" store fence regardless of whether such a problematic store exists. ;)

Thanks,
/Erik

> On 10 May 2015, at 21:58, David Holmes wrote:
>
> Hi Erik,
>
> For background on memory serialization page see Dave Dice's blog entries:
>
> https://blogs.oracle.com/dave/entry/qpi_quiescence
>
> https://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization-140215.txt
>
> Cheers,
> David H.
>
> On 9/05/2015 9:38 PM, Erik Österlund wrote:
>> Hi Mikael,
>>
>>> On 07 May 2015, at 11:15, Mikael Gerdin wrote:
>>>
>>> Hi Erik,
>>>
>>> On 2015-05-06 17:01, Erik Österlund wrote:
>>>> Hi everyone,
>>>>
>>>> I just read through the discussion and thought I'd share a potential solution that I believe would solve the problem.
>>>>
>>>> Previously I implemented something that struck me as very similar for G1 to get rid of its storeload fence in the barrier that suffered from similar symptoms.
>>>> The idea is to process cards in batches instead of one by one and issue a global store serialization event (e.g. using mprotect to a dummy page) when cleaning. It worked pretty well but after Thomas Schatzl ran some benchmarks we decided the gain wasn't worth the trouble for G1 since it fences only rarely when encountering interregional pointers (premature optimization). But maybe here it happens more often and is more worth the trouble to get rid of the fence?
>>>> >>>> Here is a proposed new algorithm candidate (small change to algorithm in bug description): >>>> >>>> mutator (exactly as before): >>>> >>>> x.a = something >>>> StoreStore >>>> if (card[@x.a] != dirty) { >>>> card[@x.a] = dirty >>>> } >>>> >>>> preclean: >>>> >>>> for card in batched_cards { >>>> if (card[@x.a] == dirty) { >>>> card[@x.a] = precleaned >>>> } >>>> } >>>> >>>> global_store_fence() >>>> >>>> for card in batched_cards { >>>> read x.a >>>> } >>>> >>>> The global fence will incur some local overhead (quite ouchy) and some global overhead fencing on all remote CPUs the process is scheduled to run on (not necessarily all) using cross calls in the kernel to invalidate remote TLB buffers in the L1 cache (not so ouchy) and by batching the cards, this ?global" cost is amortized arbitrarily so that even on systems with a ridiculous amount of CPUs, it?s probably still a good idea. It is also possible to let multiple precleaning CPUs share the same global store fence using timestamps since it is in fact global. This guarantees scalability on many-core systems but is a bit less straightforward to implement. >>>> >>>> If you are interested in this and think it?s a good idea, I could try to patch a solution for this, but I would need some help benchmarking this in your systems so we can verify it performs the way I hope. >>> >>> I think this is a good idea. The problem is asymmetric in that the CMS thread should be fine with taking a larger local overhead, batching the setting of cards to precleaned and then scanning the cards later. >> >> I?m glad you like the solution. :) >> >>> Do you know how the global_store_fence() would look on different cpu architectures? >> >> The way I envision it is quite portable: One dummy page per thread (lazily initialized) which is first write protected and then unprotected. 
Unprotecting can be implemented lazily by the kernel but protecting can not, so we are guaranteed this will trigger a global store serialization event and flush remote store buffers. It would go something like this: >> >> *thread->dummy_page = 0; // make sure page is in memory. If it?s offloaded to disk after the write, then that offloading will globally serialize stores too (this is paranoia to avoid potential optimizations that avoid remote store flushing if the physical page isn?t loaded to memory) >> write_protect(thread->dummy_page); // serialize writes on remote CPUs >> write_unprotect(thread->dummy_page); >> >> I do remember there were concerns raised that this technique might not work for RMO machines, but I see no reason why it would not in this case since we would still emit StoreStore as normal between reference store and dirty value write, and that second store is dependent on the card value read sharing the same address. If anyone thinks this would not work for RMO machines I?m happy to discuss that. >> >> Then it?s possible to make potentially better platform specific versions. Like windows already has a system call that does just that - issue a global store serializing fence using ipi, and nothing else. >> >>> The VM already uses this sort of synchronization for the thread state transitions, see references to UseMemBar, os::serialize_thread_states, os::serialize_memory. Perhaps that code can be reused somehow? >> >> Yeah I had a look at that before, but it?s used in a slightly different way. As far as I could see it deviated from my version in two ways: >> >> 1) It used 1 global dummy page instead of one per thread. This means either only one thread serializes stores or some kind of timestamping + locking mechanism is needed for the one store serializing page. 
I imagined 1 page per thread instead, but I can probably have 1 virtual page per thread sharing the same underlying physical memory if we are worried about the memory footprint of the technique. >> >> 2) It enqueued dummy stores to this dummy page when doing such thread transitions instead of fencing, which I believe is too conservative and unnecessary for us here. I suspect the idea behind it came from some kind of discussion of what a portable guarantee for the OS to serialize stores might be and that the conclusion of the discussion must have been something like this: >> >> ?stores issued to the dummy page happen-before the write protection, in total order? <- obvious >> >> ?and therefore a store to the dummy page was squeezed in, in place of a fence, just because it was uncertain whether remote stores would be flushed if there were no remote stores targeting the memory serializing page, am I right? >> >> Short story: >> This is a bit too conservative (we don?t want an extra store that could trap in the reference write barrier do we). I argue the extra store is unnecessary; OS/hardware is not aware of whether there are remote CPU stores pending to the specific page and therefore flush all remote stores, wherever they may be stored to. Instead I would give this guarantee: >> >> ?all stores issued in the same process affinity before the write protection happen-before the write protection is observable, in total order" >> >> Longer story: >> >> I could verify this guarantee by reading kernel sources. >> >> Linux: >> Since the source is open, I checked the implementation for the architectures we support (arm, aarch64, x86, x86_64, ppc, sparc) in the linux kernel sources and it will always flush /all/ remote store buffers regardless if there may not be pending remote stores to that page or not, as long as there is a change to be made to the permissions of the page (and hence TLB) which we guarantee by having 1 dummy page per thread that we flip permissions on. 
>> >> BSD: >> I also checked the XNU kernel sources (BSD) and it?s the same story here: cross calls using IPI/APIC, where the APIC message itself acts as a fence when received, regardless of the code to be run remotely. >> >> Windows: >> Windows, I do not know what it does since I can?t browse the source code, but it has a system call, FlushProcessWriteBuffers, that flushes remote CPU stores that we can use instead on this platform, which is probably best suited to do the job on windows anyway. However, for x86/x86_64 it?s AFAIK impossible for kernel implementors to avoid that cross call using APIC (which by itself flushes the store buffers invariant of TLB flushing procedure). >> >> General: >> I can?t imagine any fancy magic OS/hardware solution that would know if remote store flushing is unnecessary because there are no latent remote CPU stores to the specific page being purged. The closest architecture to do something like this I came across was itanium (which we don?t need to support?) which has a specific instruction ptc.ga to purge remote TLB entries with no apparent need for a cross call. But according to the developer manual, ?Global TLB purge instructions (ptc.g and ptc.ga) follow release semantics both on the local and the remote processors?, obviously meaning all stores are still flushed. This is true for WC and UC write buffers too according to the manuals. If fancy hardware would go to such extremes, there are solutions for that too, but no need to cross that bridge unless such hardware and OS appears, right? >> >> Thanks, >> /Erik >> >>> /Mikael >>> >>>> >>>> Thanks, >>>> /Erik >>>> >>>> >>>>> On 06 May 2015, at 14:52, Mikael Gerdin wrote: >>>>> >>>>> Hi Vitaly, >>>>> >>>>> On 2015-05-06 14:41, Vitaly Davidovich wrote: >>>>>> Mikael's suggestion was to make mutator check for !clean and then mark >>>>>> dirty. If it sees stale dirty, it will write dirty again no? 
Today's code >>>>>> would have this problem because it's checking for !dirty, but I thought the >>>>>> suggested change would prevent that. >>>>> >>>>> Unfortunately I don't think my suggestion would solve anything. >>>>> >>>>> If the conditional card mark would write dirty again if it sees a stale dirty it's not really solving the false sharing problem. >>>>> >>>>> The problem is not the value that the precleaner writes to the card entry, it's that the mutator may see the old "dirty" value which was overwritten as part of precleaning but not necessarily visible to the mutator thread. >>>>> >>>>> /Mikael >>>>> >>>>> >>>>>> >>>>>> sent from my phone >>>>>> On May 6, 2015 4:53 AM, "Andrew Haley" wrote: >>>>>> >>>>>>> On 05/05/15 20:51, Vitaly Davidovich wrote: >>>>>>>> If mutator doesn't see "clean" due to staleness, won't it just mark it >>>>>>>> dirty "unnecessarily" using Mikael's suggestion? >>>>>>> >>>>>>> No. The mutator may see a stale "dirty" and not write anything. At least >>>>>>> I haven't seen anything which certainly will prevent that from happening. >>>>>>> >>>>>>> Andrew. >>>>>>> >>>>>>> >>>>>>> >>>> >>> >> From erik.osterlund at lnu.se Mon May 11 10:40:49 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Mon, 11 May 2015 10:40:49 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55507BE4.3050605@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> Message-ID: <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> Hi Andrew, I have heard statements like this that such mechanism would not work on RMO, but never got an explanation why it would work only on TSO. 
Could you please elaborate? I studied some kernel sources for a bunch of architectures and kernels, and as far as I can see it is all good for RMO too. Even PPC does the following on each relevant CPU when flushing TLBs from mprotect: sync; purge_tlbs; isync; Therefore it is not just serializing stores but flushing loads too, making it pretty similar on both TSO and RMO.

I don't see any problematic reordering that would make this technique unreliable on RMO machines. Can you?

1: x.a = something
2: StoreStore
3: if (card[@x.a] != dirty) {
4:   card[@x.a] = dirty
5: }

In the example above (the problem at hand), which RMO reordering could harm us? The one relevant reordering I can see happening (and it is what this whole issue is about) is that the store on line 1 is delayed and reorders with the conditional mark check on line 3. The global fence technique would guarantee that in the event of such a reordering, the store (line 1) and the conditional mark check (line 3) will both be observed as if they did not reorder, because they will both be serialized and globally observed before any consistency is violated. To be more exact, the violation would be that the cleaning thread, after cleaning, reads a stale reference (from line 1) that was deferred in the store buffer while the card was being precleaned, while the mutator assumes there is no need to dirty the card again. If every processor has to issue a full fence, as the kernel sources imply (e.g. sync on PPC), then this consistency violation is equally impossible even on RMO, as far as I can see.

Thanks,
/Erik

> On 11 May 2015, at 10:52, Andrew Haley wrote:
>
> On 05/10/2015 09:58 PM, David Holmes wrote:
>
>> For background on memory serialization page see Dave Dice's blog entries:
>>
>> https://blogs.oracle.com/dave/entry/qpi_quiescence
>>
>> https://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization-140215.txt
>
> Note that this approach isn't safe except on TSO.
> We'll still
> need some kind of barriers for everyone else.
>
> Andrew.
>

From aph at redhat.com Mon May 11 10:58:40 2015
From: aph at redhat.com (Andrew Haley)
Date: Mon, 11 May 2015 11:58:40 +0100
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se>
Message-ID: <55508B60.1000200@redhat.com>

On 05/11/2015 11:40 AM, Erik Österlund wrote:

> I have heard statements like this that such mechanism would not work
> on RMO, but never got an explanation why it would work only on
> TSO. Could you please elaborate? I studied some kernel sources for
> a bunch of architectures and kernels, and it seems as far as I can
> see all good for RMO too.

Dave Dice himself told me that the algorithm is not in general safe for non-TSO. Perhaps, though, it is safe in this particular case. Of course, I may be misunderstanding him. I'm not sure of his reasoning but perhaps we should include him in this discussion.

From my point of view, I can't see a strong argument for doing this on AArch64. StoreLoad barriers are not fantastically expensive there, so it may not be worth going to such extremes. The cost of a StoreLoad barrier doesn't seem to be so much more than the StoreStore that we have to have anyway.

Andrew.
From erik.osterlund at lnu.se Mon May 11 11:33:38 2015
From: erik.osterlund at lnu.se (Erik Österlund)
Date: Mon, 11 May 2015 11:33:38 +0000
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <55508B60.1000200@redhat.com>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com>
Message-ID: <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>

Hi Andrew,

> On 11 May 2015, at 11:58, Andrew Haley wrote:
>
> On 05/11/2015 11:40 AM, Erik Österlund wrote:
>
>> I have heard statements like this that such mechanism would not work
>> on RMO, but never got an explanation why it would work only on
>> TSO. Could you please elaborate? I studied some kernel sources for
>> a bunch of architectures and kernels, and it seems as far as I can
>> see all good for RMO too.
>
> Dave Dice himself told me that the algorithm is not in general safe
> for non-TSO. Perhaps, though, it is safe in this particular case. Of
> course, I may be misunderstanding him. I'm not sure of his reasoning
> but perhaps we should include him in this discussion.

I see. It would be interesting to hear his reasoning, because it is not clear to me.

> From my point of view, I can't see a strong argument for doing this on
> AArch64. StoreLoad barriers are not fantastically expensive there so
> it may not be worth going to such extremes. The cost of a StoreLoad
> barrier doesn't seem to be so much more than the StoreStore that we
> have to have anyway.

Yeah, about performance I'm not sure when it's worth removing these fences and on what hardware.

In this case though, if it makes us any happier, I think we could probably get rid of the storestore barrier too:

The latent reference store is forced to serialize anyway after the dirty card value write is observable and about to be cleaned. So the potential consistency violation, that the card looks dirty and then the cleaning thread reads a stale reference value, could not happen with my approach even without storestore hardware protection. I didn't give it too much thought, but off the top of my head I can't see any problems. If we want to get rid of storestore too I can give it some more thought.

But you know much better than me if these fences are problematic or not. :)

Thanks,
/Erik

>
> Andrew.

From aph at redhat.com Mon May 11 13:41:22 2015
From: aph at redhat.com (Andrew Haley)
Date: Mon, 11 May 2015 14:41:22 +0100
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>
Message-ID: <5550B182.7090009@redhat.com>

On 05/11/2015 12:33 PM, Erik Österlund wrote:
> Hi Andrew,
>> On 11 May 2015, at 11:58, Andrew Haley wrote:
>>
>> On 05/11/2015 11:40 AM, Erik Österlund wrote:
>>
>>> I have heard statements like this that such mechanism would not work
>>> on RMO, but never got an explanation why it would work only on
>>> TSO. Could you please elaborate? I studied some kernel sources for
>>> a bunch of architectures and kernels, and it seems as far as I can
>>> see all good for RMO too.
>>
>> Dave Dice himself told me that the algorithm is not in general safe
>> for non-TSO. Perhaps, though, it is safe in this particular case. Of
>> course, I may be misunderstanding him. I'm not sure of his reasoning
>> but perhaps we should include him in this discussion.
>
> I see. It would be interesting to hear his reasoning, because it is
> not clear to me.
>
>> From my point of view, I can't see a strong argument for doing this on
>> AArch64. StoreLoad barriers are not fantastically expensive there so
>> it may not be worth going to such extremes. The cost of a StoreLoad
>> barrier doesn't seem to be so much more than the StoreStore that we
>> have to have anyway.
>
> Yeah, about performance I'm not sure when it's worth removing these
> fences and on what hardware.

Your algorithm (as I understand it) trades a moderately expensive (but purely local) operation for a very expensive global operation, albeit with much lower frequency. It's not clear to me how much we value continuous operation versus faster operation with occasional global stalls. I suppose it must be application-dependent.

> In this case though, if it makes us any happier, I think we could
> probably get rid of the storestore barrier too:
>
> The latent reference store is forced to serialize anyway after the
> dirty card value write is observable and about to be cleaned. So the
> potential consistency violation that the card looks dirty and then
> the cleaning thread reads a stale reference value could not happen with
> my approach even without storestore hardware protection. I didn't
> give it too much thought, but off the top of my head I can't see any
> problems. If we want to get rid of storestore too I can give it some
> more thought.

That is very interesting.

> But you know much better than me if these fences are problematic or
> not. :)

Not really. AArch64 is an architecture, not an implementation, and is designed to be implemented using a wide range of techniques. Instead of having very complex cores, some designers seem to have decided it makes sense to have many of them on a die. It may well be, though, that some implementers will adopt an x86-like highly superscalar architecture with a great deal of speculative execution. I can only predict the past... My approach with this project has been to do things in the most straightforward way rather than trying to optimize for whatever implementations I happen to have available.

Andrew.

From vitalyd at gmail.com Mon May 11 13:49:58 2015
From: vitalyd at gmail.com (Vitaly Davidovich)
Date: Mon, 11 May 2015 09:49:58 -0400
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>
Message-ID: 

Erik,

What would prevent compiler-based reordering in your suggestions?

sent from my phone

On May 11, 2015 7:33 AM, "Erik Österlund" wrote:
> Hi Andrew,
>
> > On 11 May 2015, at 11:58, Andrew Haley wrote:
> >
> > On 05/11/2015 11:40 AM, Erik Österlund wrote:
> >
> >> I have heard statements like this that such mechanism would not work
> >> on RMO, but never got an explanation why it would work only on
> >> TSO. Could you please elaborate? I studied some kernel sources for
> >> a bunch of architectures and kernels, and it seems as far as I can
> >> see all good for RMO too.
> >
> > Dave Dice himself told me that the algorithm is not in general safe
> > for non-TSO. Perhaps, though, it is safe in this particular case.
Of > > course, I may be misunderstanding him. I'm not sure of his reasoning > > but perhaps we should include him in this discussion. > > I see. It would be interesting to hear his reasoning, because it is not > clear to me. > > > From my point of view, I can't see a strong argument for doing this on > > AArch64. StoreLoad barriers are not fantastically expensive there so > > it may not be worth going to such extremes. The cost of a StoreLoad > > barrier doesn't seem to be so much more than the StoreStore that we > > have to have anyway. > > Yeah about performance I?m not sure when it?s worth removing these fences > and on what hardware. > > In this case though, if it makes us any happier, I think we could probably > get rid of the storestore barrier too: > > The latent reference store is forced to serialize anyway after the dirty > card value write is observable and about to be cleaned. So the potential > consistency violation that the card looks dirty and then cleaning thread > reads a stale reference value could not happen with my approach even > without storestore hardware protection. I didn?t give it too much thought > but on the top of my mind I can?t see any problems. If we want to get rid > of storestore too I can give it some more thought. > > But you know much better than me if these fences are problematic or not. :) > > Thanks, > /Erik > > > > > Andrew. 
> > From erik.osterlund at lnu.se Mon May 11 15:59:37 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Mon, 11 May 2015 15:59:37 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5550B182.7090009@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> Message-ID: Hi Andrew, On 11 May 2015, at 14:41, Andrew Haley > wrote: On 05/11/2015 12:33 PM, Erik ?sterlund wrote: Hi Andrew, On 11 May 2015, at 11:58, Andrew Haley > wrote: On 05/11/2015 11:40 AM, Erik ?sterlund wrote: I have heard statements like this that such mechanism would not work on RMO, but never got an explanation why it would work only on TSO. Could you please elaborate? I studied some kernel sources for a bunch of architectures and kernels, and it seems as far as I can see all good for RMO too. Dave Dice himself told me that the algorithm is not in general safe for non-TSO. Perhaps, though, it is safe in this particular case. Of course, I may be misunderstanding him. I'm not sure of his reasoning but perhaps we should include him in this discussion. I see. It would be interesting to hear his reasoning, because it is not clear to me. From my point of view, I can't see a strong argument for doing this on AArch64. StoreLoad barriers are not fantastically expensive there so it may not be worth going to such extremes. The cost of a StoreLoad barrier doesn't seem to be so much more than the StoreStore that we have to have anyway. 
Yeah, about performance I'm not sure when it's worth removing these fences and on what hardware.

Your algorithm (as I understand it) trades a moderately expensive (but purely local) operation for a very expensive global operation, albeit with much lower frequency. It's not clear to me how much we value continuous operation versus faster operation with occasional global stalls. I suppose it must be application-dependent.

From my perspective the idea is to move the synchronization overhead from a place where it cannot be amortized away (memory access) to a code path where it can be pretty much arbitrarily amortized away (batched cleaning). We couldn't fence every n memory accesses, but we certainly can global fence every n cards (batched), where we can pick a suitable n where the related synchronization overheads seem to vanish.

Also the global operation is not purely, but "mostly", locally expensive for the thread performing the global fence. The cost on global CPUs is pretty much simply a normal fence (roughly). Of course there is always gonna be that one guy with 4000 CPUs, which might be a bit awkward. But even then, with high enough n, shared, timestamped global fences etc., even such ridiculous scalability should be within reach.

BTW, do we normally have some kind of reasonable scalability window we optimize for, and don't care as much about optimizing for that potential one guy? ;)

In this case though, if it makes us any happier, I think we could probably get rid of the storestore barrier too:

The latent reference store is forced to serialize anyway after the dirty card value write is observable and about to be cleaned. So the potential consistency violation that the card looks dirty and then the cleaning thread reads a stale reference value could not happen with my approach even without storestore hardware protection. I didn't give it too much thought, but off the top of my head I can't see any problems. If we want to get rid of storestore too I can give it some more thought.

That is very interesting.

Indeed! :)

But you know much better than me if these fences are problematic or not. :)

Not really. AArch64 is an architecture, not an implementation, and is designed to be implemented using a wide range of techniques. Instead of having very complex cores, some designers seem to have decided it makes sense to have many of them on a die. It may well be, though, that some implementers will adopt an x86-like highly superscalar architecture with a great deal of speculative execution. I can only predict the past... My approach with this project has been to do things in the most straightforward way rather than trying to optimize for whatever implementations I happen to have available.

I see your point of view: you don't want to be that dependent on the hardware and elected to go with a straightforward synchronization solution for this reason. This makes sense. But I think since we are dealing with an optimization feature here (UseCondCardMark), I believe a less straightforward solution makes us less dependent on such hardware details. Because it is an optimization, the highest possible performance is probably expected and even important, which suddenly becomes very tightly dependent on the cost of fencing, which probably varies a lot across different hardware vendors.

Conversely, the possibly less straightforward synchronization solution dodges this bullet by simply not fencing, and arbitrarily amortizing away the related synchronization costs until they vanish. :)

Thanks,
/Erik

Andrew.
From erik.osterlund at lnu.se Mon May 11 16:05:48 2015
From: erik.osterlund at lnu.se (Erik Österlund)
Date: Mon, 11 May 2015 16:05:48 +0000
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: 
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se>
Message-ID: <6FC60F4E-6897-4835-A3A8-D9A89C54A6A2@lnu.se>

Hi Vitaly,

A compiler barrier I guess. ;)

In my original proposal, the storestore would still be there, so nothing would really change regarding compiler reordering. The card value load and write are dependent, so the compiler wouldn't change them, and the storestore would already contain the necessary compiler barrier.

But yeah, even that storestore can probably be removed too and replaced by a compiler barrier if that makes us happier. :)

Thanks,
/Erik

On 11 May 2015, at 14:49, Vitaly Davidovich wrote:

Erik,

What would prevent compiler-based reordering in your suggestions?

sent from my phone
On May 11, 2015 7:33 AM, "Erik Österlund" wrote:

Hi Andrew,

> On 11 May 2015, at 11:58, Andrew Haley wrote:
>
> On 05/11/2015 11:40 AM, Erik Österlund wrote:
>
>> I have heard statements like this that such mechanism would not work
>> on RMO, but never got an explanation why it would work only on
>> TSO. Could you please elaborate? I studied some kernel sources for
>> a bunch of architectures and kernels, and it seems as far as I can
>> see all good for RMO too.
>
> Dave Dice himself told me that the algorithm is not in general safe
> for non-TSO. Perhaps, though, it is safe in this particular case. Of
> course, I may be misunderstanding him. I'm not sure of his reasoning
> but perhaps we should include him in this discussion.

I see. It would be interesting to hear his reasoning, because it is not clear to me.

> From my point of view, I can't see a strong argument for doing this on
> AArch64. StoreLoad barriers are not fantastically expensive there so
> it may not be worth going to such extremes. The cost of a StoreLoad
> barrier doesn't seem to be so much more than the StoreStore that we
> have to have anyway.

Yeah, about performance I'm not sure when it's worth removing these fences and on what hardware.

In this case though, if it makes us any happier, I think we could probably get rid of the storestore barrier too:

The latent reference store is forced to serialize anyway after the dirty card value write is observable and about to be cleaned. So the potential consistency violation that the card looks dirty and then the cleaning thread reads a stale reference value could not happen with my approach even without storestore hardware protection. I didn't give it too much thought, but off the top of my head I can't see any problems. If we want to get rid of storestore too I can give it some more thought.

But you know much better than me if these fences are problematic or not. :)

Thanks,
/Erik

>
> Andrew.
From vitalyd at gmail.com Mon May 11 16:06:48 2015
From: vitalyd at gmail.com (Vitaly Davidovich)
Date: Mon, 11 May 2015 12:06:48 -0400
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: 
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com>
Message-ID: 

Erik,

> Also the global operation is not purely, but "mostly", locally expensive
> for the thread performing the global fence. The cost on global CPUs is
> pretty much simply a normal fence (roughly). Of course there is always
> gonna be that one guy with 4000 CPUs which might be a bit awkward. But even
> then, with high enough n, shared, timestamped global fences etc, even such
> ridiculous scalability should be within reach.

Is it roughly like a normal fence for remote CPUs? You mentioned TLB being invalidated on remote CPUs, which seems a bit more involved than a normal fence.

I think it's an interesting approach, although I wonder if it's worth the trouble given that G1 is aiming to replace CMS in the not-too-distant future?

On Mon, May 11, 2015 at 11:59 AM, Erik Österlund wrote:

> Hi Andrew,
>
> On 11 May 2015, at 14:41, Andrew Haley wrote:
>
> On 05/11/2015 12:33 PM, Erik Österlund wrote:
>
> Hi Andrew,
>
> On 11 May 2015, at 11:58, Andrew Haley wrote:
>
> On 05/11/2015 11:40 AM, Erik Österlund wrote:
>
> I have heard statements like this that such mechanism would not work
> on RMO, but never got an explanation why it would work only on
> TSO. Could you please elaborate? I studied some kernel sources for
> a bunch of architectures and kernels, and it seems as far as I can
> see all good for RMO too.
>
> Dave Dice himself told me that the algorithm is not in general safe
> for non-TSO. Perhaps, though, it is safe in this particular case. Of
> course, I may be misunderstanding him. I'm not sure of his reasoning
> but perhaps we should include him in this discussion.
>
> I see. It would be interesting to hear his reasoning, because it is
> not clear to me.
>
> From my point of view, I can't see a strong argument for doing this on
> AArch64. StoreLoad barriers are not fantastically expensive there so
> it may not be worth going to such extremes. The cost of a StoreLoad
> barrier doesn't seem to be so much more than the StoreStore that we
> have to have anyway.
>
> Yeah, about performance I'm not sure when it's worth removing these
> fences and on what hardware.
>
> Your algorithm (as I understand it) trades a moderately expensive (but
> purely local) operation for a very expensive global operation, albeit
> with much lower frequency. It's not clear to me how much we value
> continuous operation versus faster operation with occasional global
> stalls. I suppose it must be application-dependent.
>
> From my perspective the idea is to move the synchronization overhead
> from a place where it cannot be amortized away (memory access) to a code
> path where it can be pretty much arbitrarily amortized away (batched
> cleaning). We couldn't fence every n memory accesses, but we certainly can
> global fence every n cards (batched), where we can pick a suitable n where
> the related synchronization overheads seem to vanish.
>
> Also the global operation is not purely, but "mostly", locally expensive
> for the thread performing the global fence. The cost on global CPUs is
> pretty much simply a normal fence (roughly). Of course there is always
> gonna be that one guy with 4000 CPUs which might be a bit awkward. But even
> then, with high enough n, shared, timestamped global fences etc, even such
> ridiculous scalability should be within reach.
>
> BTW, do we normally have some kind of reasonable scalability window we
> optimize for, and don't care as much about optimizing for that potential
> one guy? ;)
>
> In this case though, if it makes us any happier, I think we could
> probably get rid of the storestore barrier too:
>
> The latent reference store is forced to serialize anyway after the
> dirty card value write is observable and about to be cleaned. So the
> potential consistency violation that the card looks dirty and then
> the cleaning thread reads a stale reference value could not happen with
> my approach even without storestore hardware protection. I didn't
> give it too much thought, but off the top of my head I can't see any
> problems. If we want to get rid of storestore too I can give it some
> more thought.
>
> That is very interesting.
>
> Indeed! :)
>
> But you know much better than me if these fences are problematic or
> not. :)
>
> Not really. AArch64 is an architecture, not an implementation, and is
> designed to be implemented using a wide range of techniques. Instead
> of having very complex cores, some designers seem to have decided it
> makes sense to have many of them on a die. It may well be, though,
> that some implementers will adopt an x86-like highly superscalar
> architecture with a great deal of speculative execution. I can only
> predict the past... My approach with this project has been to do
> things in the most straightforward way rather than trying to optimize
> for whatever implementations I happen to have available.
>
> I see your point of view: you don't want to be that dependent on the
> hardware and elected to go with a straightforward synchronization solution
> for this reason. This makes sense.
> But I think since we are dealing with an
> optimization feature here (UseCondCardMark), I believe a less
> straightforward solution makes us less dependent on such hardware details. Because
> it is an optimization, the highest possible performance is probably
> expected and even important, which suddenly becomes very tightly dependent
> on the cost of fencing, which probably varies a lot across different hardware
> vendors.
>
> Conversely, the possibly less straightforward synchronization solution
> dodges this bullet by simply not fencing, and arbitrarily amortizing away
> the related synchronization costs until they vanish. :)
>
> Thanks,
> /Erik
>
> Andrew.

From vitalyd at gmail.com Mon May 11 16:08:13 2015
From: vitalyd at gmail.com (Vitaly Davidovich)
Date: Mon, 11 May 2015 12:08:13 -0400
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: <6FC60F4E-6897-4835-A3A8-D9A89C54A6A2@lnu.se>
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <6FC60F4E-6897-4835-A3A8-D9A89C54A6A2@lnu.se>
Message-ID: 

Yeah, I was curious about your removing-storestore suggestion. On Intel, it's just a compiler barrier anyway, so I was wondering what you meant exactly (I see now though), since there's nothing really to remove there :).

On Mon, May 11, 2015 at 12:05 PM, Erik Österlund wrote:

> Hi Vitaly,
>
> A compiler barrier I guess. ;)
>
> In my original proposal, the storestore would still be there, so nothing
> would really change regarding compiler reordering.
> The card value load and write are dependent, so the compiler wouldn't
> change them, and the storestore would already contain the necessary compiler
> barrier.
>
> But yeah, even that storestore can probably be removed too and replaced
> by a compiler barrier if that makes us happier. :)
>
> Thanks,
> /Erik
>
> On 11 May 2015, at 14:49, Vitaly Davidovich wrote:
>
> Erik,
>
> What would prevent compiler-based reordering in your suggestions?
>
> sent from my phone
> On May 11, 2015 7:33 AM, "Erik Österlund" wrote:
>
>> Hi Andrew,
>>
>> > On 11 May 2015, at 11:58, Andrew Haley wrote:
>> >
>> > On 05/11/2015 11:40 AM, Erik Österlund wrote:
>> >
>> >> I have heard statements like this that such mechanism would not work
>> >> on RMO, but never got an explanation why it would work only on
>> >> TSO. Could you please elaborate? I studied some kernel sources for
>> >> a bunch of architectures and kernels, and it seems as far as I can
>> >> see all good for RMO too.
>> >
>> > Dave Dice himself told me that the algorithm is not in general safe
>> > for non-TSO. Perhaps, though, it is safe in this particular case. Of
>> > course, I may be misunderstanding him. I'm not sure of his reasoning
>> > but perhaps we should include him in this discussion.
>>
>> I see. It would be interesting to hear his reasoning, because it is not
>> clear to me.
>>
>> > From my point of view, I can't see a strong argument for doing this on
>> > AArch64. StoreLoad barriers are not fantastically expensive there so
>> > it may not be worth going to such extremes. The cost of a StoreLoad
>> > barrier doesn't seem to be so much more than the StoreStore that we
>> > have to have anyway.
>>
>> Yeah, about performance I'm not sure when it's worth removing these fences
>> and on what hardware.
>>
>> In this case though, if it makes us any happier, I think we could
>> probably get rid of the storestore barrier too:
>>
>> The latent reference store is forced to serialize anyway after the dirty
>> card value write is observable and about to be cleaned. So the potential
>> consistency violation that the card looks dirty and then the cleaning thread
>> reads a stale reference value could not happen with my approach even
>> without storestore hardware protection. I didn't give it too much thought,
>> but off the top of my head I can't see any problems. If we want to get rid
>> of storestore too I can give it some more thought.
>>
>> But you know much better than me if these fences are problematic or not.
>> :)
>>
>> Thanks,
>> /Erik
>>
>> >
>> > Andrew.

From aph at redhat.com Mon May 11 16:21:21 2015
From: aph at redhat.com (Andrew Haley)
Date: Mon, 11 May 2015 17:21:21 +0100
Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning
In-Reply-To: 
References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com>
Message-ID: <5550D701.8070507@redhat.com>

On 05/11/2015 05:06 PM, Vitaly Davidovich wrote:
>> Also the global operation is not purely, but "mostly", locally expensive
>> for the thread performing the global fence. The cost on global CPUs is
>> pretty much simply a normal fence (roughly). Of course there is always
>> gonna be that one guy with 4000 CPUs which might be a bit awkward.

Well yes, but that guy with 4000 CPUs is precisely the target for UseCondCardMark.
>> But even then, with high enough n, shared, timestamped global >> fences etc, even such ridiculous scalability should be within >> reach. > > Is it roughly like a normal fence for remote CPUs? I would not think so. Surely you'd have to interrupt every core in the process and do a bunch of flushes. A TLB flush is expensive, as is interrupting the core itself. I'm fairly sure there's no way to flush a remote core's TLB without interrupting it. Andrew. From erik.osterlund at lnu.se Mon May 11 16:27:50 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Mon, 11 May 2015 16:27:50 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> Message-ID: <9AA602B4-E60F-444C-9664-A22ED327D6AA@lnu.se> Hi Vitaly, It will purge the relevant TLB entries. In this case for one page that the remote CPUs won?t even have in their TLB entries (e.g. L1 cache) since each thread would have its own memory serializing page (at least virtual page). In other words it won?t do anything really concerning TLBs remotely more than note that all is good already, yet it still fences before finding out about that. If it?s worth the trouble though I do not know. :) Thanks, /Erik On 11 May 2015, at 17:06, Vitaly Davidovich > wrote: Erik, Also the global operation is not purely, but ?mostly" locally expensive for the thread performing the global fence. 
The cost on global CPUs is pretty much simply a normal fence (roughly). Of course there is always gonna be that one guy with 4000 CPUs which might be a bit awkward. But even then, with high enough n, shared, timestamped global fences etc, even such ridiculous scalability should be within reach. Is it roughly like a normal fence for remote CPUs? You mentioned TLB being invalidated on remote CPUs, which seems a bit more involved than a normal fence. I think it's an interesting approach, although I wonder if it's worth the trouble given that G1 is aiming to replace CMS in the not-too-distant future? On Mon, May 11, 2015 at 11:59 AM, Erik ?sterlund > wrote: Hi Andrew, On 11 May 2015, at 14:41, Andrew Haley > wrote: On 05/11/2015 12:33 PM, Erik ?sterlund wrote: Hi Andrew, On 11 May 2015, at 11:58, Andrew Haley > wrote: On 05/11/2015 11:40 AM, Erik ?sterlund wrote: I have heard statements like this that such mechanism would not work on RMO, but never got an explanation why it would work only on TSO. Could you please elaborate? I studied some kernel sources for a bunch of architectures and kernels, and it seems as far as I can see all good for RMO too. Dave Dice himself told me that the algorithm is not in general safe for non-TSO. Perhaps, though, it is safe in this particular case. Of course, I may be misunderstanding him. I'm not sure of his reasoning but perhaps we should include him in this discussion. I see. It would be interesting to hear his reasoning, because it is not clear to me. From my point of view, I can't see a strong argument for doing this on AArch64. StoreLoad barriers are not fantastically expensive there so it may not be worth going to such extremes. The cost of a StoreLoad barrier doesn't seem to be so much more than the StoreStore that we have to have anyway. Yeah about performance I?m not sure when it?s worth removing these fences and on what hardware. 
Your algorithm (as I understand it) trades a moderately expensive (but purely local) operation for a very expensive global operation, albeit with much lower frequency. It's not clear to me how much we value continuous operation versus faster operation with occasional global stalls. I suppose it must be application-dependent. From my perspective the idea is to move the synchronization overhead from a place where it cannot be amortized away (memory access) to a code path where it can be pretty much arbitrarily amortized away (batched cleaning). We couldn't fence every n memory accesses, but we certainly can global fence every n cards (batched), where we can pick a suitable n where the related synchronization overheads seem to vanish. Also the global operation is not purely, but "mostly" locally expensive for the thread performing the global fence. The cost on global CPUs is pretty much simply a normal fence (roughly). Of course there is always gonna be that one guy with 4000 CPUs which might be a bit awkward. But even then, with high enough n, shared, timestamped global fences etc, even such ridiculous scalability should be within reach. BTW do we normally have some kind of reasonable scalability window we optimize for, and don't care as much about optimizing for that potential one guy? ;) In this case though, if it makes us any happier, I think we could probably get rid of the storestore barrier too: The latent reference store is forced to serialize anyway after the dirty card value write is observable and about to be cleaned. So the potential consistency violation that the card looks dirty and then cleaning thread reads a stale reference value could not happen with my approach even without storestore hardware protection. I didn't give it too much thought but on the top of my mind I can't see any problems. If we want to get rid of storestore too I can give it some more thought. That is very interesting. Indeed!
:) But you know much better than me if these fences are problematic or not. :) Not really. AArch64 is an architecture not an implementation, and is designed to be implemented using a wide range of techniques. Instead of having very complex cores, some designers seem to have decided it makes sense to have many of them on a die. It may well be, though, that some implementers will adopt an x86-like highly-superscalar architecture with a great deal of speculative execution. I can only predict the past... My approach with this project has been to do things in the most straightforward way rather than trying to optimize for whatever implementations I happen to have available. I see your point of view: you don't want to be that dependent on the hardware and elected to go with a straightforward synchronization solution for this reason. This makes sense. But I think since we are dealing with an optimization feature here (UseCondCardMark), I believe a less straightforward solution makes us less dependent on such hardware details. Because it is an optimization, the highest possible performance is probably expected and even important, which suddenly becomes very tightly dependent on the cost of fencing which probably varies a lot from different hardware vendors. Conversely, the possibly less straightforward synchronization solution dodges this bullet by simply not fencing and arbitrarily amortizing away the related synchronization costs until they vanish. :) Thanks, /Erik Andrew.
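The UseCondCardMark trade-off the thread keeps returning to can be sketched in code. The following is an illustrative C++ model, not the actual HotSpot barrier code: the card values, table size, and function names are invented, and the seq_cst fence stands in for the StoreLoad barrier under debate.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <cstddef>

enum : uint8_t { kClean = 0, kDirty = 1 };  // illustrative card values

static std::atomic<uint8_t> card_table[1024];

// Plain (unconditional) card mark: always store "dirty" after a
// reference store. Cheap per store, but every store dirties the
// card's cache line, even when it is already dirty.
inline void post_barrier_unconditional(std::size_t card) {
  card_table[card].store(kDirty, std::memory_order_release);
}

// UseCondCardMark variant: test the card first and skip the store when
// it is already dirty, avoiding cache-line ping-pong between threads.
// The StoreLoad fence is the contested part: without it, the preceding
// reference store may still sit in the store buffer when a concurrent
// precleaning thread resets the card to "clean", so the mutator could
// observe "dirty", skip re-dirtying, and the mark would be lost.
inline void post_barrier_conditional(std::size_t card) {
  std::atomic_thread_fence(std::memory_order_seq_cst);  // StoreLoad
  if (card_table[card].load(std::memory_order_relaxed) != kDirty) {
    card_table[card].store(kDirty, std::memory_order_relaxed);
  }
}
```

Erik's proposal, as described above, keeps only the cheap load-test-store on the mutator side and replaces the per-store fence with a rare global serialization paid by the cleaning thread.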
From erik.osterlund at lnu.se Mon May 11 16:51:06 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Mon, 11 May 2015 16:51:06 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5550D701.8070507@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5550D701.8070507@redhat.com> Message-ID: Hi Andrew, > On 11 May 2015, at 17:21, Andrew Haley wrote: > > On 05/11/2015 05:06 PM, Vitaly Davidovich wrote: > >>> Also the global operation is not purely, but "mostly" locally expensive >>> for the thread performing the global fence. The cost on global CPUs is >>> pretty much simply a normal fence (roughly). Of course there is always >>> gonna be that one guy with 4000 CPUs which might be a bit awkward. > > Well yes, but that guy with 4000 CPUs is precisely the target for > UseCondCardMark. Okay. That should be fine still as I described, but a bit expensive to benchmark it and fine tune I guess. I don't have access to any such machines. :( If somebody does we could find out. > >>> But even then, with high enough n, shared, timestamped global >>> fences etc, even such ridiculous scalability should be within >>> reach. >> >> Is it roughly like a normal fence for remote CPUs? > > I would not think so. Surely you'd have to interrupt every core in > the process and do a bunch of flushes. A TLB flush is expensive, as > is interrupting the core itself.
I'm fairly sure there's no way to > flush a remote core's TLB without interrupting it. > Yes but in a round robin fashion using e.g. APIC on x86, not necessarily all globally at the same time. It's like message passing. And the TLBs will only be purged for the range of the memory protection; this is a single page that those remote CPUs don't even have in their TLB caches, and therefore no remote TLB caches will be changed. For e.g. x86_64, the APIC message itself will fence and then it will run the code to find out that no TLB entries need changing and that's pretty much it. This is not a scalability bottleneck at all and the constant costs I already know are not problematic because I use this technique quite a lot myself and Thomas Schatzl was kind enough to thoroughly benchmark such a card cleaning solution for me on G1 around new year on a number of benchmarks and machines. The conclusion for G1 was that it didn't matter performance wise. Also that constant cost is amortized away arbitrarily by regulating its frequency. Thanks, /Erik > Andrew. From vitalyd at gmail.com Tue May 12 00:54:12 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Mon, 11 May 2015 20:54:12 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5550D701.8070507@redhat.com> Message-ID: Erik, Thanks for the explanation - this is a clever trick!
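The "serialization page" mechanism Erik describes above can be sketched as follows. This is a hedged illustration assuming a POSIX system: the names are invented, the page size is hard-coded for simplicity, and the real HotSpot counterpart of this idea is the memory-serialize page used by the safepoint protocol rather than this exact code.

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// One dedicated page. Mutators touch it with a plain store; only the
// cleaning thread pays for the global serialization.
static void* g_serialize_page = nullptr;
static const std::size_t kPageSize = 4096;

bool init_serialize_page() {
  g_serialize_page = mmap(nullptr, kPageSize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return g_serialize_page != MAP_FAILED;
}

// Mutator side: a plain store, no fence. (In the real scheme this store
// would be placed so that it orders with the card-mark sequence.)
void mutator_touch() {
  *static_cast<volatile int*>(g_serialize_page) = 1;
}

// Cleaning side: toggling the page protection makes the kernel send the
// cross-CPU messages for the TLB shootdown, which serialize outstanding
// stores on every CPU running a thread of this process -- the "global
// fence". As Erik notes, a remote CPU that never cached a translation
// for this page pays roughly the cost of the interrupt and a fence.
bool global_serialize() {
  if (mprotect(g_serialize_page, kPageSize, PROT_READ) != 0) return false;
  return mprotect(g_serialize_page, kPageSize,
                  PROT_READ | PROT_WRITE) == 0;
}
```

The design point is the asymmetry: the frequent path (every reference store) does no fencing at all, while the rare path (batched card cleaning) absorbs the whole synchronization cost.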
:) Out of curiosity, was there an explanation/theory why this didn't matter for G1? Are most write barriers there eliminated via some other means? sent from my phone On May 11, 2015 12:51 PM, "Erik Österlund" wrote: > Hi Andrew, > > > On 11 May 2015, at 17:21, Andrew Haley wrote: > > > > On 05/11/2015 05:06 PM, Vitaly Davidovich wrote: > > > >>> Also the global operation is not purely, but "mostly" locally expensive > >>> for the thread performing the global fence. The cost on global CPUs is > >>> pretty much simply a normal fence (roughly). Of course there is always > >>> gonna be that one guy with 4000 CPUs which might be a bit awkward. > > > > Well yes, but that guy with 4000 CPUs is precisely the target for > > UseCondCardMark. > > Okay. That should be fine still as I described, but a bit expensive to > benchmark it and fine tune I guess. I don't have access to any such > machines. :( If somebody does we could find out. > > > > >>> But even then, with high enough n, shared, timestamped global > >>> fences etc, even such ridiculous scalability should be within > >>> reach. > >> > >> Is it roughly like a normal fence for remote CPUs? > > > > I would not think so. Surely you'd have to interrupt every core in > > the process and do a bunch of flushes. A TLB flush is expensive, as > > is interrupting the core itself. I'm fairly sure there's no way to > > flush a remote core's TLB without interrupting it. > > > > Yes but in a round robin fashion using e.g. APIC on x86, not necessarily > all globally at the same time. It's like message passing. And the TLBs will > only be purged for the range of the memory protection; this is a single > page that those remote CPUs don't even have in their TLB caches, and > therefore no remote TLB caches will be changed. > > For e.g. x86_64, the APIC message itself will fence and then it will run > the code to find out that no TLB entries need changing and that's pretty > much it.
> > This is not a scalability bottleneck at all and the constant costs I > already know are not problematic because I use this technique quite a lot > myself and Thomas Schatzl was kind enough to thoroughly benchmark such a > card cleaning solution for me on G1 around new year on a number of > benchmarks and machines. The conclusion for G1 was that it didn't matter > performance wise. Also that constant cost is amortized away arbitrarily by > regulating its frequency. > > Thanks, > /Erik > > > Andrew. > > From mikael.gerdin at oracle.com Tue May 12 07:59:01 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 12 May 2015 09:59:01 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5550D701.8070507@redhat.com> Message-ID: <5551B2C5.50000@oracle.com> Vitaly, On 2015-05-12 02:54, Vitaly Davidovich wrote: > Erik, > > Thanks for the explanation - this is a clever trick! :) > > Out of curiosity, was there an explanation/theory why this didn't matter > for G1? Are most write barriers there eliminated via some other means? The G1 write barrier has a conditional check for writes to objects in young regions and elides the storeload barrier for those.
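A rough sketch of the filtering Mikael mentions. The region arithmetic and the boolean flag below are simplified stand-ins for the real G1 post-write barrier (which checks a same-region XOR and the card table's "young" card value); the point is the ordering of cheap inline filters that let most stores skip the StoreLoad-plus-enqueue slow path entirely.

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>

static const std::size_t kRegionShift = 20;  // illustrative 1 MiB regions

// Returns true only when the store actually needs remembered-set work.
// The real G1 barrier performs checks of this kind inline, before ever
// reaching the StoreLoad fence and the dirty-card enqueue.
bool g1_needs_card_mark(uintptr_t field_addr, uintptr_t new_val,
                        bool card_is_young) {
  if ((field_addr >> kRegionShift) ==
      (new_val >> kRegionShift)) return false;  // same-region store: filtered
  if (new_val == 0)              return false;  // null store: nothing to track
  if (card_is_young)             return false;  // young card: filtered, and
                                                // the StoreLoad is elided
  return true;  // slow path: fence, test the card, enqueue for refinement
}
```

Backporting this shape of filtering to CMS is exactly what the thread discusses below: the filters add conditionals to a barrier that is today just a single store.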
/Mikael > > sent from my phone > On May 11, 2015 12:51 PM, "Erik Österlund" wrote: > >> Hi Andrew, >> >>> On 11 May 2015, at 17:21, Andrew Haley wrote: >>> >>> On 05/11/2015 05:06 PM, Vitaly Davidovich wrote: >>> >>>>> Also the global operation is not purely, but "mostly" locally expensive >>>>> for the thread performing the global fence. The cost on global CPUs is >>>>> pretty much simply a normal fence (roughly). Of course there is always >>>>> gonna be that one guy with 4000 CPUs which might be a bit awkward. >>> >>> Well yes, but that guy with 4000 CPUs is precisely the target for >>> UseCondCardMark. >> >> Okay. That should be fine still as I described, but a bit expensive to >> benchmark it and fine tune I guess. I don't have access to any such >> machines. :( If somebody does we could find out. >> >>> >>>>> But even then, with high enough n, shared, timestamped global >>>>> fences etc, even such ridiculous scalability should be within >>>>> reach. >>>> >>>> Is it roughly like a normal fence for remote CPUs? >>> >>> I would not think so. Surely you'd have to interrupt every core in >>> the process and do a bunch of flushes. A TLB flush is expensive, as >>> is interrupting the core itself. I'm fairly sure there's no way to >>> flush a remote core's TLB without interrupting it. >>> >> >> Yes but in a round robin fashion using e.g. APIC on x86, not necessarily >> all globally at the same time. It's like message passing. And the TLBs will >> only be purged for the range of the memory protection; this is a single >> page that those remote CPUs don't even have in their TLB caches, and >> therefore no remote TLB caches will be changed. >> >> For e.g. x86_64, the APIC message itself will fence and then it will run >> the code to find out that no TLB entries need changing and that's pretty >> much it.
>> >> This is not a scalability bottleneck at all and the constant costs I >> already know are not problematic because I use this technique quite a lot >> myself and Thomas Schatzl was kind enough to thoroughly benchmark such a >> card cleaning solution for me on G1 around new year on a number of >> benchmarks and machines. The conclusion for G1 was that it didn't matter >> performance wise. Also that constant cost is amortized away arbitrarily by >> regulating its frequency. >> >> Thanks, >> /Erik >> >>> Andrew. >> >> From yekaterina.kantserova at oracle.com Tue May 12 08:09:02 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Tue, 12 May 2015 10:09:02 +0200 Subject: RFR(XXS): 8080100: compiler/rtm/* tests fail due to Compilation failed Message-ID: <5551B51E.8000406@oracle.com> Hi, Could I please have a review of this very small fix. bug: https://bugs.openjdk.java.net/browse/JDK-8080100 webrev: http://cr.openjdk.java.net/~ykantser/8080100/webrev.00/ Thanks, Katja From staffan.larsen at oracle.com Tue May 12 08:20:10 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 12 May 2015 10:20:10 +0200 Subject: RFR(XXS): 8080100: compiler/rtm/* tests fail due to Compilation failed In-Reply-To: <5551B51E.8000406@oracle.com> References: <5551B51E.8000406@oracle.com> Message-ID: <3B0470A5-2FC8-42A7-A85C-33537AB12963@oracle.com> Looks good! Thanks, /Staffan > On 12 maj 2015, at 10:09, Yekaterina Kantserova wrote: > > Hi, > > Could I please have a review of this very small fix.
> > bug: https://bugs.openjdk.java.net/browse/JDK-8080100 > webrev: http://cr.openjdk.java.net/~ykantser/8080100/webrev.00/ > > Thanks, > Katja From yekaterina.kantserova at oracle.com Tue May 12 08:27:50 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Tue, 12 May 2015 10:27:50 +0200 Subject: RFR(XXS): 8080100: compiler/rtm/* tests fail due to Compilation failed In-Reply-To: <3B0470A5-2FC8-42A7-A85C-33537AB12963@oracle.com> References: <5551B51E.8000406@oracle.com> <3B0470A5-2FC8-42A7-A85C-33537AB12963@oracle.com> Message-ID: <5551B986.102@oracle.com> Staffan, thanks! On 05/12/2015 10:20 AM, Staffan Larsen wrote: > Looks good! > > Thanks, > /Staffan > > >> On 12 maj 2015, at 10:09, Yekaterina Kantserova wrote: >> >> Hi, >> >> Could I please have a review of this very small fix. >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8080100 >> webrev: http://cr.openjdk.java.net/~ykantser/8080100/webrev.00/ >> >> Thanks, >> Katja From serguei.spitsyn at oracle.com Tue May 12 10:09:44 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 12 May 2015 03:09:44 -0700 Subject: RFR(XXS): 8080100: compiler/rtm/* tests fail due to Compilation failed In-Reply-To: <5551B51E.8000406@oracle.com> References: <5551B51E.8000406@oracle.com> Message-ID: <5551D168.20501@oracle.com> Looks good. Thanks, Serguei On 5/12/15 1:09 AM, Yekaterina Kantserova wrote: > Hi, > > Could I please have a review of this very small fix. > > bug: https://bugs.openjdk.java.net/browse/JDK-8080100 > webrev: http://cr.openjdk.java.net/~ykantser/8080100/webrev.00/ > > Thanks, > Katja From per.liden at oracle.com Tue May 12 10:16:50 2015 From: per.liden at oracle.com (Per Liden) Date: Tue, 12 May 2015 12:16:50 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup Message-ID: <5551D312.7000401@oracle.com> Hi, As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. 
Here's the patch for that cleanup. First, a recap of the new directory structure: - A single "top-level" directory for GC code: src/share/vm/gc/ - One sub-directory per GC: src/share/vm/gc/cms/ src/share/vm/gc/g1/ src/share/vm/gc/parallel/ src/share/vm/gc/serial/ - A single directory for common/shared GC code: src/share/vm/gc/shared/ A number of GC files previously located in memory and utilities have been moved in under the gc directory (mostly into gc/shared), these are: memory/barrierSet.* memory/blockOffsetTable.* memory/cardGeneration.* memory/cardTableModRefBS.* memory/cardTableRS.* memory/collectorPolicy.* memory/gcLocker.* memory/genCollectedHeap.* memory/generation.* memory/generationSpec.* memory/genOopClosures.* memory/genMarkSweep.* memory/genRemSet.* memory/modRefBarrierSet.* memory/referencePolicy.* memory/referenceProcessor.* memory/referenceProcessorStats.* memory/space.* memory/specialized_oop_closures.* memory/strongRootsScope.* memory/tenuredGeneration.* memory/threadLocalAllocBuffer.* memory/watermark.* utilities/workgroup.* utilities/taskqueue.* The patch is very large because it touches a lot of files, but the individual changes are trivial. The main bulk of the changes consists of adjustments to #includes "gc_implementation/... and #ifndef SHARE_VM_GC_IMPL... The rest (minor part) of the patch includes adjustments to some makefiles, SA and jtreg tests.
Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ Here's the same webrev split into the following pieces: - Change to cpp/hpp files http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ - Changes to makefiles http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ - Changes to SA http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ - Changes to jtreg tests http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 Testing: JPRT, Aurora adhoc GC nightly, bigapps cheers, /Per [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html From david.holmes at oracle.com Tue May 12 10:22:44 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 12 May 2015 20:22:44 +1000 Subject: [9] Backport approval request: 8078470: [Linux] Replace syscall use in os::fork_and_exec with glibc fork() and execve() In-Reply-To: <55501EDF.4080403@oracle.com> References: <554FFC5F.6000606@oracle.com> <55501EDF.4080403@oracle.com> Message-ID: <5551D474.8000309@oracle.com> Ping! Please. David On 11/05/2015 1:15 PM, David Holmes wrote: > On 11/05/2015 10:48 AM, David Holmes wrote: >> This is the "backport" to 9 as the original change had to go into 8u >> first for logistical/scheduling reasons. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8078470 >> >> 8u changeset: >> http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/915ca3e9d15e >> >> Original review thread: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018171.html >> >> >> The changeset mostly applied cleanly with some manual tweaking in one >> spot as the 9 code refers to AARCH64 where the 8u code does not. 
> > I also had to tweak the tests due to: > > 8067013: Rename the com.oracle.java.testlibary package > > For good measure here's a webrev: > > http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/ > > Thanks, > David > > >> Thanks, >> David From yekaterina.kantserova at oracle.com Tue May 12 11:38:32 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Tue, 12 May 2015 13:38:32 +0200 Subject: RFR(XXS): 8080100: compiler/rtm/* tests fail due to Compilation failed In-Reply-To: <5551D168.20501@oracle.com> References: <5551B51E.8000406@oracle.com> <5551D168.20501@oracle.com> Message-ID: <5551E638.8090006@oracle.com> Serguei, thanks! // Katja On 05/12/2015 12:09 PM, serguei.spitsyn at oracle.com wrote: > Looks good. > > Thanks, > Serguei > > On 5/12/15 1:09 AM, Yekaterina Kantserova wrote: >> Hi, >> >> Could I please have a review of this very small fix. >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8080100 >> webrev: http://cr.openjdk.java.net/~ykantser/8080100/webrev.00/ >> >> Thanks, >> Katja > From erik.osterlund at lnu.se Tue May 12 12:08:42 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Tue, 12 May 2015 12:08:42 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5551B2C5.50000@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5550D701.8070507@redhat.com> <5551B2C5.50000@oracle.com> Message-ID: Hi Mikael and Vitaly, Yeah G1 skips storeload for young regions, and also pointers 
to the same region (which are probably pretty common). Just to clear things up - it seems like my approach might be interesting here. Would anyone volunteer to help out and do some benchmarking if I send a patch? Cheers, /Erik > On 12 May 2015, at 08:59, Mikael Gerdin wrote: > > Vitaly, > > On 2015-05-12 02:54, Vitaly Davidovich wrote: >> Erik, >> >> Thanks for the explanation - this is a clever trick! :) >> >> Out of curiosity, was there an explanation/theory why this didn't matter >> for G1? Are most write barriers there eliminated via some other means? > > The G1 write barrier has a conditional check for writes to objects in young regions and elides the storeload barrier for those. > > /Mikael > >> >> sent from my phone >> On May 11, 2015 12:51 PM, "Erik Österlund" wrote: >> >>> Hi Andrew, >>> >>>> On 11 May 2015, at 17:21, Andrew Haley wrote: >>>> >>>> On 05/11/2015 05:06 PM, Vitaly Davidovich wrote: >>>> >>>>>> Also the global operation is not purely, but "mostly" locally expensive >>>>>> for the thread performing the global fence. The cost on global CPUs is >>>>>> pretty much simply a normal fence (roughly). Of course there is always >>>>>> gonna be that one guy with 4000 CPUs which might be a bit awkward. >>>> >>>> Well yes, but that guy with 4000 CPUs is precisely the target for >>>> UseCondCardMark. >>> >>> Okay. That should be fine still as I described, but a bit expensive to >>> benchmark it and fine tune I guess. I don't have access to any such >>> machines. :( If somebody does we could find out. >>> >>>> >>>>>> But even then, with high enough n, shared, timestamped global >>>>>> fences etc, even such ridiculous scalability should be within >>>>>> reach. >>>>> >>>>> Is it roughly like a normal fence for remote CPUs? >>>> >>>> I would not think so. Surely you'd have to interrupt every core in >>>> the process and do a bunch of flushes. A TLB flush is expensive, as >>>> is interrupting the core itself.
I'm fairly sure there's no way to >>>> flush a remote core's TLB without interrupting it. >>>> >>> >>> Yes but in a round robin fashion using e.g. APIC on x86, not necessarily >>> all globally at the same time. It's like message passing. And the TLBs will >>> only be purged for the range of the memory protection; this is a single >>> page that those remote CPUs don't even have in their TLB caches, and >>> therefore no remote TLB caches will be changed. >>> >>> For e.g. x86_64, the APIC message itself will fence and then it will run >>> the code to find out that no TLB entries need changing and that's pretty >>> much it. >>> >>> This is not a scalability bottleneck at all and the constant costs I >>> already know are not problematic because I use this technique quite a lot >>> myself and Thomas Schatzl was kind enough to thoroughly benchmark such a >>> card cleaning solution for me on G1 around new year on a number of >>> benchmarks and machines. The conclusion for G1 was that it didn't matter >>> performance wise. Also that constant cost is amortized away arbitrarily by >>> regulating its frequency. >>> >>> Thanks, >>> /Erik >>> >>>> Andrew.
>>> >>> From aleksey.shipilev at oracle.com Tue May 12 13:05:27 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 12 May 2015 16:05:27 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5550B182.7090009@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> Message-ID: <5551FA97.9090502@oracle.com> On 11.05.2015 16:41, Andrew Haley wrote: > On 05/11/2015 12:33 PM, Erik Österlund wrote: >> Hi Andrew, >> >>> On 11 May 2015, at 11:58, Andrew Haley wrote: >>> >>> On 05/11/2015 11:40 AM, Erik Österlund wrote: >>> >>>> I have heard statements like this that such mechanism would not work >>>> on RMO, but never got an explanation why it would work only on >>>> TSO. Could you please elaborate? I studied some kernel sources for >>>> a bunch of architectures and kernels, and it seems as far as I can >>>> see all good for RMO too. >>> >>> Dave Dice himself told me that the algorithm is not in general safe >>> for non-TSO. Perhaps, though, it is safe in this particular case. Of >>> course, I may be misunderstanding him. I'm not sure of his reasoning >>> but perhaps we should include him in this discussion. >> >> I see. It would be interesting to hear his reasoning, because it is >> not clear to me. >> >>> From my point of view, I can't see a strong argument for doing this on >>> AArch64. StoreLoad barriers are not fantastically expensive there so >>> it may not be worth going to such extremes.
The cost of a StoreLoad >>> barrier doesn't seem to be so much more than the StoreStore that we >>> have to have anyway. >> >> Yeah about performance I'm not sure when it's worth removing these >> fences and on what hardware. > > Your algorithm (as I understand it) trades a moderately expensive (but > purely local) operation for a very expensive global operation, albeit > with much lower frequency. It's not clear to me how much we value > continuous operation versus faster operation with occasional global > stalls. I suppose it must be application-dependent. Okay, Dice's asymmetric trick is nice. In fact, that is arguably what Parallel is using already: it serializes the mutator stores by stopping the mutator at safepoint. Using mprotect and TLB tricks as the serialization actions is cute and dandy. However, I have doubts that employing the system-wide synchronization mechanism for concurrent collector is a good thing, when we can't predict and control the long-term performance of it. For example, we are basically coming at the mercy of underlying OS performance with mprotect calls. There are industrial GCs that rely on OS performance (*cough* *cough*), you can see what those require to guarantee performance. Also, given the problem is specific to CMS that arguably goes away in favor of G1, I would think introducing special-case-for-CMS barriers in mutator code is a sane interim solution. Especially if we can backport the G1-like barrier "filtering" in CMS case? If I read this thread right, Erik and Thomas concluded there is no clear benefit of introducing the mprotect-like mechanics with G1, which probably means the overheads are bearable with appropriate mutator-side changes.
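The trade-off Andrew and Aleksey weigh here -- continuous per-store fencing versus rare global stalls -- can be put into a back-of-envelope model. All constants below are invented purely for illustration; real numbers depend entirely on the hardware.

```cpp
#include <cassert>
#include <cstddef>

// Total cost of paying a fence on every reference store.
double per_store_fence_cost(double stores, double fence_ns) {
  return stores * fence_ns;
}

// Total cost when one global fence is paid per batch of n cleaned cards.
// The amortized share per card shrinks as n grows, which is Erik's point
// that the synchronization overhead can be regulated toward zero by
// choosing a large enough batch size.
double batched_fence_cost(double cards_cleaned, std::size_t n,
                          double global_fence_ns) {
  return (cards_cleaned / static_cast<double>(n)) * global_fence_ns;
}
```

With made-up numbers (a million stores at 20 ns per fence versus 100k cleaned cards with a 100 µs global fence per batch of 1000), the batched scheme is cheaper in total, and arbitrarily so as n grows -- at the cost of the occasional global stall the thread worries about.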
Thanks, -Aleksey From bengt.rutisson at oracle.com Tue May 12 13:13:36 2015 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Tue, 12 May 2015 15:13:36 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <5551D312.7000401@oracle.com> References: <5551D312.7000401@oracle.com> Message-ID: <5551FC80.2020507@oracle.com> Hi Per, Thanks for doing this refactoring! Looks good! Bengt On 2015-05-12 12:16, Per Liden wrote: > Hi, > > As previously mentioned [1], the GC team is doing a cleanup of the > directory structure for the GC code. Here's the patch for that cleanup. > > First, a recap of the new directory structure: > > - A single "top-level" directory for GC code: > src/share/vm/gc/ > > - One sub-directory per GC: > src/share/vm/gc/cms/ > src/share/vm/gc/g1/ > src/share/vm/gc/parallel/ > src/share/vm/gc/serial/ > > - A single directory for common/shared GC code: > src/share/vm/gc/shared/ > > > A number of GC files previously located in memory and utilities have > been moved in under the gc directory (mostly into gc/shared), these are: > > memory/barrierSet.* > memory/blockOffsetTable.* > memory/cardGeneration.* > memory/cardTableModRefBS.* > memory/cardTableRS.* > memory/collectorPolicy.* > memory/gcLocker.* > memory/genCollectedHeap.* > memory/generation.* > memory/generationSpec.* > memory/genOopClosures.* > memory/genMarkSweep.* > memory/genRemSet.* > memory/modRefBarrierSet.* > memory/referencePolicy.* > memory/referenceProcessor.* > memory/referenceProcessorStats.* > memory/space.* > memory/specialized_oop_closures.* > memory/strongRootsScope.* > memory/tenuredGeneration.* > memory/threadLocalAllocBuffer.* > memory/watermark.* > utilities/workgroup.* > utilities/taskqueue.* > > > The patch is very large because it touches a lot of files, but the > individual changes are trivial. The main bulk of the changes consists > of adjustments to #includes "gc_implementation/... and #ifndef > SHARE_VM_GC_IMPL...
The rest (minor part) of the patch includes > adjustments to some makefiles, SA and jtreg tests. > > > Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ > > Here's the same webrev split into the following pieces: > > - Change to cpp/hpp files > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ > > - Changes to makefiles > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ > > - Changes to SA > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ > > - Changes to jtreg tests > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 > > Testing: JPRT, Aurora adhoc GC nightly, bigapps > > cheers, > /Per > > [1] > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html From mikael.gerdin at oracle.com Tue May 12 13:17:38 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 12 May 2015 15:17:38 +0200 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5551FA97.9090502@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> Message-ID: <5551FD72.1020504@oracle.com> On 2015-05-12 15:05, Aleksey Shipilev wrote: > On 11.05.2015 16:41, Andrew Haley wrote: >> On 05/11/2015 12:33 PM, Erik Österlund wrote: >>> Hi Andrew, >>> >>>> On 11 May 2015, at 11:58, Andrew Haley wrote: >>>> >>>> On 05/11/2015 11:40 AM, Erik Österlund wrote: >>>> >>>>> I have heard statements like this that such mechanism would not work >>>>>
on RMO, but never got an explanation why it would work only on >>>>> TSO. Could you please elaborate? I studied some kernel sources for >>>>> a bunch of architectures and kernels, and it seems as far as I can >>>>> see all good for RMO too. >>>> >>>> Dave Dice himself told me that the algorithm is not in general safe >>>> for non-TSO. Perhaps, though, it is safe in this particular case. Of >>>> course, I may be misunderstanding him. I'm not sure of his reasoning >>>> but perhaps we should include him in this discussion. >>> >>> I see. It would be interesting to hear his reasoning, because it is >>> not clear to me. >>> >>>> From my point of view, I can't see a strong argument for doing this on >>>> AArch64. StoreLoad barriers are not fantastically expensive there so >>>> it may not be worth going to such extremes. The cost of a StoreLoad >>>> barrier doesn't seem to be so much more than the StoreStore that we >>>> have to have anyway. >>> >>> Yeah about performance I'm not sure when it's worth removing these >>> fences and on what hardware. >> >> Your algorithm (as I understand it) trades a moderately expensive (but >> purely local) operation for a very expensive global operation, albeit >> with much lower frequency. It's not clear to me how much we value >> continuous operation versus faster operation with occasional global >> stalls. I suppose it must be application-dependent. > > Okay, Dice's asymmetric trick is nice. In fact, that is arguably what > Parallel is using already: it serializes the mutator stores by stopping > the mutator at safepoint. Using mprotect and TLB tricks as the > serialization actions is cute and dandy. > > However, I have doubts that employing the system-wide synchronization > mechanism for concurrent collector is a good thing, when we can't > predict and control the long-term performance of it. For example, we are > basically coming at the mercy of underlying OS performance with mprotect > calls.
There are industrial GCs that rely on OS performance (*cough* > *cough*), you can see what do those require to guarantee performance. Just to be clear, this type of synchronization is in fact already implemented in the JVM to synchronize thread states for the safepoint protocol, so it's not exactly new and unexplained territory. However it's not clear to me that the code complexity involved with using that type of synchronization for conditional card marking in CMS is worth it. > > Also, given the problem is specific to CMS that arguably goes away in > favor of G1, I would think introducing special-case-for-CMS barriers in > mutator code is a sane interim solution. I agree. > > Especially if we can backport the G1-like barrier "filtering" in CMS > case? If I read this thread right, Erik and Thomas concluded there is no > clear benefit of introducing the mprotect-like mechanics with G1, which > probably means the overheads are bearable with appropriate mutator-side > changes. I don't think it would be easy to implement barrier "filtering" in CMS. Keep in mind that even before the storeload was added to G1's barriers they were fairly heavy-weight. CMS' barriers are not, if we start to add conditionals and storeload barriers to them the runtime overhead may increase more than what it did when we added the storeload to G1. /Mikael > > Thanks, > -Aleksey > From per.liden at oracle.com Tue May 12 13:20:06 2015 From: per.liden at oracle.com (Per Liden) Date: Tue, 12 May 2015 15:20:06 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <5551FC80.2020507@oracle.com> References: <5551D312.7000401@oracle.com> <5551FC80.2020507@oracle.com> Message-ID: <5551FE06.1070600@oracle.com> Thanks for reviewing Bengt! /Per On 2015-05-12 15:13, Bengt Rutisson wrote: > > Hi Per, > > Thanks for doing this refactoring! > > Looks good! 
> > Bengt > > On 2015-05-12 12:16, Per Liden wrote: >> Hi, >> >> As previously mentioned [1], the GC team is doing a cleanup of the >> directory structure for the GC code. Here's the patch for that cleanup. >> >> First, a recap of the new directory structure: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> src/share/gc/shared/ >> >> >> A number of GC files previously located in memory and utilities have >> been moved in under the gc directory (mostly into gc/shared), these are: >> >> memory/barrierSet.* >> memory/blockOffsetTable.* >> memory/cardGeneration.* >> memory/cardTableModRefBS.* >> memory/cardTableRS.* >> memory/collectorPolicy.* >> memory/gcLocker.* >> memory/genCollectedHeap.* >> memory/generation.* >> memory/generationSpec.* >> memory/genOopClosures.* >> memory/genMarkSweep.* >> memory/genRemSet.* >> memory/modRefBarrierSet.* >> memory/referencePolicy.* >> memory/referenceProcessor.* >> memory/referenceProcessorStats.* >> memory/space.* >> memory/specialized_oop_closures.* >> memory/strongRootsScope.* >> memory/tenuredGeneration.* >> memory/threadLocalAllocBuffer.* >> memory/watermark.* >> utilities/workgroup.* >> utilities/taskqueue.* >> >> >> The patch is very large because it touches a lot of files, but the >> individual changes are trivial. The main bulk of the changes consists >> of adjustments to #includes "gc_implementation/... and #ifndef >> SHARE_VM_GC_IMPL... The rest (minor part) of the patch include >> adjustments to some makefiles, SA and jtreg tests. 
>> >> >> Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ >> >> Here's the same webrev split into the following pieces: >> >> - Change to cpp/hpp files >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >> >> - Changes to makefiles >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >> >> - Changes to SA >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >> >> - Changes to jtreg tests >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >> >> Testing: JPRT, Aurora adhoc GC nightly, bigapps >> >> cheers, >> /Per >> >> [1] >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html > From daniel.daugherty at oracle.com Tue May 12 14:24:20 2015 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Tue, 12 May 2015 08:24:20 -0600 Subject: [9] Backport approval request: 8078470: [Linux] Replace syscall use in os::fork_and_exec with glibc fork() and execve() In-Reply-To: <55501EDF.4080403@oracle.com> References: <554FFC5F.6000606@oracle.com> <55501EDF.4080403@oracle.com> Message-ID: <55520D14.6020909@oracle.com> > http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/ src/os/linux/vm/os_linux.cpp No comments. src/share/vm/utilities/vmError.cpp No comments. test/runtime/ErrorHandling/TestOnError.java No comments. test/runtime/ErrorHandling/TestOnOutOfMemoryError.java No comments. Thumbs up! Dan On 5/10/15 9:15 PM, David Holmes wrote: > On 11/05/2015 10:48 AM, David Holmes wrote: >> This is the "backport" to 9 as the original change had to go into 8u >> first for logistical/scheduling reasons. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8078470 >> >> 8u changeset: >> http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/915ca3e9d15e >> >> Original review thread: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018171.html >> >> >> >> The changeset mostly applied cleanly with some manual tweaking in one >> spot as the 9 code refers to AARCH64 where the 8u code does not. > > I also had to tweak the tests due to: > > 8067013: Rename the com.oracle.java.testlibary package > > For good measure here's a webrev: > > http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/ > > Thanks, > David > > >> Thanks, >> David > > > From stefan.karlsson at oracle.com Tue May 12 15:24:28 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 12 May 2015 17:24:28 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <5551D312.7000401@oracle.com> References: <5551D312.7000401@oracle.com> Message-ID: <55521B2C.8050902@oracle.com> Hi Per, On 2015-05-12 12:16, Per Liden wrote: > Hi, > > As previously mentioned [1], the GC team is doing a cleanup of the > directory structure for the GC code. Here's the patch for that cleanup. 
> > First, a recap of the new directory structure: > > - A single "top-level" directory for GC code: > src/share/vm/gc/ > > - One sub-directory per GC: > src/share/vm/gc/cms/ > src/share/vm/gc/g1/ > src/share/vm/gc/parallel/ > src/share/vm/gc/serial/ > > - A single directory for common/shared GC code: > src/share/gc/shared/ > > > A number of GC files previously located in memory and utilities have > been moved in under the gc directory (mostly into gc/shared), these are: > > memory/barrierSet.* > memory/blockOffsetTable.* > memory/cardGeneration.* > memory/cardTableModRefBS.* > memory/cardTableRS.* > memory/collectorPolicy.* > memory/gcLocker.* > memory/genCollectedHeap.* > memory/generation.* > memory/generationSpec.* > memory/genOopClosures.* > memory/genMarkSweep.* > memory/genRemSet.* > memory/modRefBarrierSet.* > memory/referencePolicy.* > memory/referenceProcessor.* > memory/referenceProcessorStats.* > memory/space.* > memory/specialized_oop_closures.* > memory/strongRootsScope.* > memory/tenuredGeneration.* > memory/threadLocalAllocBuffer.* > memory/watermark.* > utilities/workgroup.* > utilities/taskqueue.* > > > The patch is very large because it touches a lot of files, but the > individual changes are trivial. The main bulk of the changes consists > of adjustments to #includes "gc_implementation/... and #ifndef > SHARE_VM_GC_IMPL... The rest (minor part) of the patch includes > adjustments to some makefiles, SA and jtreg tests. > > > Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ This is a huge patch and I haven't looked at all the files yet, but most of it looks good. I won't be able to review every single updated include/guard line, but I don't think that's too important as long as the patch builds. http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html The gc_shared_keep variable was changed to include almost all files in gc/shared, but there are a few files in gc/shared that are not listed.
Most of them should probably be moved to GC specific directories. With those files moved, we might want to consider removing the gc_shared_keep variable entirely. thanks, StefanK > > Here's the same webrev split into the following pieces: > > - Change to cpp/hpp files > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ > > - Changes to makefiles > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ > > - Changes to SA > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ > > - Changes to jtreg tests > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 > > Testing: JPRT, Aurora adhoc GC nightly, bigapps > > cheers, > /Per > > [1] > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html From edward.nevill at linaro.org Tue May 12 15:41:28 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Tue, 12 May 2015 16:41:28 +0100 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <554B7904.2000806@redhat.com> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> <554B7904.2000806@redhat.com> Message-ID: <1431445288.3135.13.camel@mylittlepony.linaroharston> On Thu, 2015-05-07 at 15:39 +0100, Andrew Haley wrote: > Please explain the changes to method handle calls. > Hi, The changes are based on the premise that SP is preserved across methodhandle calls. This is certainly the case for any compiled code and also for any native code. The only case in question is the c2i_adapter. In this case SP is saved in the 'senderSP' (R13). Here is the code that drops the stack in the c2i_adapter // Since all args are passed on the stack, total_args_passed * // Interpreter::stackElementSize is the space we need. int extraspace = total_args_passed * Interpreter::stackElementSize; __ mov(r13, sp); // stack is aligned, keep it that way extraspace = round_to(extraspace, 2*wordSize); if (extraspace) __ sub(sp, sp, extraspace); .... 
It then jumps to the interpreter with a dropped stack. ... __ mov(esp, sp); // Interp expects args on caller's expression stack __ ldr(rscratch1, Address(rmethod, in_bytes(Method::interpreter_entry_offset()))); __ br(rscratch1); This would create an unbalanced stack if the interpreter returned directly. But the interpreter restores SP from senderSP. I have tested this by changing the above to __ mov(r13, -1024) and running it in gdb, and I do indeed get a SEGV with SP having the value -1024. So, all the changes to the method handle calls are to remove the special case code where it had to save/restore from rfp if it was a method handle call. Does this answer your question? All the best, Ed. From erik.osterlund at lnu.se Tue May 12 17:23:26 2015 From: erik.osterlund at lnu.se (Erik Österlund) Date: Tue, 12 May 2015 17:23:26 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5551FD72.1020504@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> Message-ID: <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> Hi Mikael and Andrew, Unless I missed something, I don't think we introduce that much code complexity. Of course I agree that G1 will make fixes in CMS a bit wasted in the long run. However, until then it would be good if CMS still works.
And a few lines of shared code (a handful for the actual GC) seem, to me, both less painful from an engineering point of view and more performant than going through all mutator code paths that need changing (interpreter, c1, c2, for potentially many architectures). Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ Fortunately it looks like CMS is already batching cards pretty well for me so the change turned out to be very small. I logged to see how often this global fence is triggered and it's very rare so I feel quite convinced it won't impact performance negatively even on "that guy's" machine and with a terrible OS implementation. I benchmarked it using DaCapo benchmarks locally on my computer (macbook x86_64 BSD) and there were no traces of any performance artefacts/regression. If anyone happens to have a larger machine than my macbook, it would be interesting to take it for a spin. ;) Disclaimer: I haven't poked around a lot in CMS in the past, so I hope I didn't miss any important card value transitions! Thanks, /Erik On 12 May 2015, at 14:17, Mikael Gerdin wrote: On 2015-05-12 15:05, Aleksey Shipilev wrote: On 11.05.2015 16:41, Andrew Haley wrote: On 05/11/2015 12:33 PM, Erik Österlund wrote: Hi Andrew, On 11 May 2015, at 11:58, Andrew Haley wrote: On 05/11/2015 11:40 AM, Erik Österlund wrote: I have heard statements like this that such mechanism would not work on RMO, but never got an explanation why it would work only on TSO. Could you please elaborate? I studied some kernel sources for a bunch of architectures and kernels, and it seems as far as I can see all good for RMO too. Dave Dice himself told me that the algorithm is not in general safe for non-TSO. Perhaps, though, it is safe in this particular case. Of course, I may be misunderstanding him. I'm not sure of his reasoning but perhaps we should include him in this discussion. I see.
It would be interesting to hear his reasoning, because it is not clear to me. From my point of view, I can't see a strong argument for doing this on AArch64. StoreLoad barriers are not fantastically expensive there so it may not be worth going to such extremes. The cost of a StoreLoad barrier doesn't seem to be so much more than the StoreStore that we have to have anyway. Yeah about performance I?m not sure when it?s worth removing these fences and on what hardware. Your algorithm (as I understand it) trades a moderately expensive (but purely local) operation for a very expensive global operation, albeit with much lower frequency. It's not clear to me how much we value continuous operation versus faster operation with occasional global stalls. I suppose it must be application-dependent. Okay, Dice's asymmetric trick is nice. In fact, that is arguably what Parallel is using already: it serializes the mutator stores by stopping the mutator at safepoint. Using mprotect and TLB tricks as the serialization actions is cute and dandy. However, I have doubts that employing the system-wide synchronization mechanism for concurrent collector is a good thing, when we can't predict and control the long-term performance of it. For example, we are basically coming at the mercy of underlying OS performance with mprotect calls. There are industrial GCs that rely on OS performance (*cough* *cough*), you can see what do those require to guarantee performance. Just to be clear, this type of synchronization is in fact already implemented in the JVM to synchronize thread states for the safepoint protocol, so it's not exactly new and unexplained territory. However it's not clear to me that the code complexity involved with using that type of synchronization for conditional card marking in CMS is worth it. Also, given the problem is specific to CMS that arguably goes away in favor of G1, I would think introducing special-case-for-CMS barriers in mutator code is a sane interim solution. 
I agree. Especially if we can backport the G1-like barrier "filtering" in CMS case? If I read this thread right, Erik and Thomas concluded there is no clear benefit of introducing the mprotect-like mechanics with G1, which probably means the overheads are bearable with appropriate mutator-side changes. I don't think it would be easy to implement barrier "filtering" in CMS. Keep in mind that even before the storeload was added to G1's barriers they were fairly heavy-weight. CMS' barriers are not, if we start to add conditionals and storeload barriers to them the runtime overhead may increase more than what it did when we added the storeload to G1. /Mikael Thanks, -Aleksey From aph at redhat.com Tue May 12 17:29:07 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 May 2015 18:29:07 +0100 Subject: RFR: 8079564: aarch64: Use FP register as proper frame pointer In-Reply-To: <1431445288.3135.13.camel@mylittlepony.linaroharston> References: <1431008624.18342.29.camel@mylittlepony.linaroharston> <554B7904.2000806@redhat.com> <1431445288.3135.13.camel@mylittlepony.linaroharston> Message-ID: <55523863.7090506@redhat.com> On 05/12/2015 04:41 PM, Edward Nevill wrote: > So, all the changes to the method handle calls are to remove the special case code where it had to save/restore from rfp if it was a method handle call. > > Does this answer your question? I think so. I only wish I knew what the original problem the saving of SP into FP was supposed to achieve. I guess it might have been that the old design of method handle adapters pushed some junk onto the stack, but that's gone now. Andrew. 
From vitalyd at gmail.com Tue May 12 17:58:43 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 12 May 2015 13:58:43 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> Message-ID: Erik, I tend to agree with you that this seems like a good solution to the current problem at hand, irrespective of when/if G1 fully supplants CMS. Given that similar mechanism is used for safepointing, I don't think this introduces some completely new construct that nobody has yet seen in Hotspot. However, this is obviously not my decision to make :). Given that you have DaCapo benchmarks set up, have you tried benching Andrew's storeload proposal? Would be interesting to see if anything's revealed there. On Tue, May 12, 2015 at 1:23 PM, Erik ?sterlund wrote: > Hi Mikael and Andrew, > > Unless I missed something, I don?t think we introduce that much code > complexity. > Of course I agree that G1 will make fixes in CMS a bit wasted in the long > run. > However, until then it would be good if CMS still works. 
And a few lines > shared code (handful for the actual GC) seems, to me, both less painful > from an engineering point of view and better performant than going through > all mutator code paths that need changing (interpreter, c1, c2, for > potentially many architectures). > > Out of curiosity I patched the thing and my fix can be found here: > http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ > > Fortunately it looks like CMS is already batching cards pretty well for > me so the change turned out to be very small. I logged to see how often > this global fence is triggered and it?s very rare so I feel quite convinced > it won?t impact performance negatively even on ?that guy?s? machine and > with a terrible OS implementation. > > I benchmarked it using DaCapo benchmarks locally on my computer (macbook > x86_64 BSD) and there were no traces of any performance > artefacts/regression. > > If anyone happens to have a larger machine than my macbook, it would be > interesting to take it for a spin. ;) > > Disclaimer: I haven?t poked around a lot in CMS in the past, so I hope I > didn?t miss any important card value transitions! > > Thanks, > /Erik > > On 12 May 2015, at 14:17, Mikael Gerdin wrote: > > > > On 2015-05-12 15:05, Aleksey Shipilev wrote: > > On 11.05.2015 16:41, Andrew Haley wrote: > > On 05/11/2015 12:33 PM, Erik ?sterlund wrote: > > Hi Andrew, > > On 11 May 2015, at 11:58, Andrew Haley wrote: > > On 05/11/2015 11:40 AM, Erik ?sterlund wrote: > > I have heard statements like this that such mechanism would not work > on RMO, but never got an explanation why it would work only on > TSO. Could you please elaborate? I studied some kernel sources for > a bunch of architectures and kernels, and it seems as far as I can > see all good for RMO too. > > > Dave Dice himself told me that the algorithm is not in general safe > for non-TSO. Perhaps, though, it is safe in this particular case. Of > course, I may be misunderstanding him. 
I'm not sure of his reasoning > but perhaps we should include him in this discussion. > > > I see. It would be interesting to hear his reasoning, because it is > not clear to me. > > From my point of view, I can't see a strong argument for doing this on > AArch64. StoreLoad barriers are not fantastically expensive there so > it may not be worth going to such extremes. The cost of a StoreLoad > barrier doesn't seem to be so much more than the StoreStore that we > have to have anyway. > > > Yeah about performance I?m not sure when it?s worth removing these > fences and on what hardware. > > > Your algorithm (as I understand it) trades a moderately expensive (but > purely local) operation for a very expensive global operation, albeit > with much lower frequency. It's not clear to me how much we value > continuous operation versus faster operation with occasional global > stalls. I suppose it must be application-dependent. > > > Okay, Dice's asymmetric trick is nice. In fact, that is arguably what > Parallel is using already: it serializes the mutator stores by stopping > the mutator at safepoint. Using mprotect and TLB tricks as the > serialization actions is cute and dandy. > > However, I have doubts that employing the system-wide synchronization > mechanism for concurrent collector is a good thing, when we can't > predict and control the long-term performance of it. For example, we are > basically coming at the mercy of underlying OS performance with mprotect > calls. There are industrial GCs that rely on OS performance (*cough* > *cough*), you can see what do those require to guarantee performance. > > > Just to be clear, this type of synchronization is in fact already > implemented in the JVM to synchronize thread states for the safepoint > protocol, so it's not exactly new and unexplained territory. > > However it's not clear to me that the code complexity involved with using > that type of synchronization for conditional card marking in CMS is worth > it. 
> > > Also, given the problem is specific to CMS that arguably goes away in > favor of G1, I would think introducing special-case-for-CMS barriers in > mutator code is a sane interim solution. > > > I agree. > > > Especially if we can backport the G1-like barrier "filtering" in CMS > case? If I read this thread right, Erik and Thomas concluded there is no > clear benefit of introducing the mprotect-like mechanics with G1, which > probably means the overheads are bearable with appropriate mutator-side > changes. > > > I don't think it would be easy to implement barrier "filtering" in CMS. > Keep in mind that even before the storeload was added to G1's barriers > they were fairly heavy-weight. CMS' barriers are not, if we start to add > conditionals and storeload barriers to them the runtime overhead may > increase more than what it did when we added the storeload to G1. > > /Mikael > > > Thanks, > -Aleksey > > > From derek.white at oracle.com Tue May 12 18:08:40 2015 From: derek.white at oracle.com (Derek White) Date: Tue, 12 May 2015 14:08:40 -0400 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <5551D312.7000401@oracle.com> References: <5551D312.7000401@oracle.com> Message-ID: <555241A8.8060906@oracle.com> On 5/12/15 6:16 AM, Per Liden wrote: > Hi, > > As previously mentioned [1], the GC team is doing a cleanup of the > directory structure for the GC code. Here's the patch for that cleanup. > > First, a recap of the new directory structure: > > - A single "top-level" directory for GC code: > src/share/vm/gc/ > > - One sub-directory per GC: > src/share/vm/gc/cms/ > src/share/vm/gc/g1/ > src/share/vm/gc/parallel/ > src/share/vm/gc/serial/ > > - A single directory for common/shared GC code: > src/share/gc/shared/ Typo? I think you meant: src/share/vm/gc/shared/ That's what the webrev looks like. 
Looking forward to the simpler structure (luckily my fingers don't have the old structure memorized :-) - Derek From coleen.phillimore at oracle.com Tue May 12 18:25:12 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 12 May 2015 14:25:12 -0400 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <555241A8.8060906@oracle.com> References: <5551D312.7000401@oracle.com> <555241A8.8060906@oracle.com> Message-ID: <55524588.9040409@oracle.com> On 5/12/15, 2:08 PM, Derek White wrote: > On 5/12/15 6:16 AM, Per Liden wrote: >> Hi, >> >> As previously mentioned [1], the GC team is doing a cleanup of the >> directory structure for the GC code. Here's the patch for that cleanup. >> >> First, a recap of the new directory structure: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> src/share/gc/shared/ > > Typo? I think you meant: > src/share/vm/gc/shared/ > > That's what the webrev looks like. > > Looking forward to the simpler structure (luckily my fingers don't > have the old structure memorized :-) My fingers had the old structure somewhat memorized since it never made mental sense to me, but even so, I'm very pleased to see this change! I can't tell from the giant webrev, but did you reorganize the SA duplicated directory structure? It seems like some files moved but not all of them. If not with this change, can you do a follow-on RFE to fix it also?
Thanks, Coleen > > - Derek From aleksey.shipilev at oracle.com Tue May 12 18:54:16 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 12 May 2015 21:54:16 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> Message-ID: <55524C58.4030904@oracle.com> On 12.05.2015 20:23, Erik ?sterlund wrote: > Out of curiosity I patched the thing and my fix can be found > here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ > Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. If that is so, you *need* to do the work in mutator code, as well as more collector and general VM work. > Fortunately it looks like CMS is already batching cards pretty well for > me so the change turned out to be very small. I logged to see how often > this global fence is triggered and it?s very rare so I feel quite > convinced it won?t impact performance negatively even on ?that guy?s? > machine and with a terrible OS implementation. Okay, if those events are rare, I can buy the mprotect scheme. 
Thanks, -Aleksey From aph at redhat.com Tue May 12 19:03:40 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 May 2015 20:03:40 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55524C58.4030904@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> Message-ID: <55524E8C.6080709@redhat.com> On 05/12/2015 07:54 PM, Aleksey Shipilev wrote: > On 12.05.2015 20:23, Erik ?sterlund wrote: >> Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ > > Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. I think it is, because the kernel has to interrupt every CPU in that process to flush its write buffer and TLB. I can't think of any other way of making munmap() work. Andrew. 
From aleksey.shipilev at oracle.com Tue May 12 19:25:11 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 12 May 2015 22:25:11 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55524E8C.6080709@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> Message-ID: <55525397.2070803@oracle.com> On 12.05.2015 22:03, Andrew Haley wrote: > On 05/12/2015 07:54 PM, Aleksey Shipilev wrote: >> On 12.05.2015 20:23, Erik ?sterlund wrote: >>> Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ >> >> Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. > > I think it is, because the kernel has to interrupt every CPU in that > process to flush its write buffer and TLB. I can't think of any other > way of making munmap() work. If you have a platform with a software-filled TLB, can't you accurately track what CPUs should perform the TLB shootdowns? Or if there is some other way for a CPU to communicate the TLB contents back to OS. 
If that is in place, then mprotect can only affect the cores that actually accessed the serialization page. I feel that relying on the premise that page mapping changes are globally synchronized is dangerous. Thanks, -Aleksey From aleksey.shipilev at oracle.com Tue May 12 19:28:59 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 12 May 2015 22:28:59 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55525397.2070803@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55525397.2070803@oracl! e.com> Message-ID: <5552547B.3020003@oracle.com> On 12.05.2015 22:25, Aleksey Shipilev wrote: > On 12.05.2015 22:03, Andrew Haley wrote: >> On 05/12/2015 07:54 PM, Aleksey Shipilev wrote: >>> On 12.05.2015 20:23, Erik ?sterlund wrote: >>>> Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ >>> >>> Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. >> >> I think it is, because the kernel has to interrupt every CPU in that >> process to flush its write buffer and TLB. 
I can't think of any other >> way of making munmap() work. > > If you have a platform with a software-filled TLB, can't you accurately > track what CPUs should perform the TLB shootdowns? Or if there is some > other way for a CPU to communicate the TLB contents back to OS. If that > is in place, then mprotect can only affect the cores that actually > accessed the serialization page. I feel that relying on the premise that > page mapping changes are globally synchronized is dangerous. Err... They *are* globally synchronized in the sense that all CPUs have a consistent view on the mappings. Should be: "relying on page mapping changes acting as the global serialization events is dangerous". -Aleksey. From aph at redhat.com Tue May 12 20:19:13 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 May 2015 21:19:13 +0100 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55525397.2070803@oracle.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55525397.2070803@oracl! 
e.com> Message-ID: <55526041.1030709@redhat.com> On 05/12/2015 08:25 PM, Aleksey Shipilev wrote: > On 12.05.2015 22:03, Andrew Haley wrote: >> On 05/12/2015 07:54 PM, Aleksey Shipilev wrote: >>> On 12.05.2015 20:23, Erik ?sterlund wrote: >>>> Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ >>> >>> Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. >> >> I think it is, because the kernel has to interrupt every CPU in that >> process to flush its write buffer and TLB. I can't think of any other >> way of making munmap() work. > > If you have a platform with a software-filled TLB, can't you accurately > track what CPUs should perform the TLB shootdowns? Or if there is some > other way for a CPU to communicate the TLB contents back to OS. There isn't AFAIK: the TLB is always a CPU-local structure. > If that is in place, then mprotect can only affect the cores that > actually accessed the serialization page. I feel that relying on the > premise that page mapping changes are globally synchronized is > dangerous. You're probably right about that. I suppose it would be possible in theory for there to be a broadcast TLB invalidate event as part of the cache coherency protocol. I wonder if the kernel people would be receptive to the idea of a "execute a memory fence on every CPU" system call. It would not be hard to do and it would be very useful. Andrew. 
From erik.osterlund at lnu.se Tue May 12 20:44:14 2015 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Tue, 12 May 2015 20:44:14 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <55526041.1030709@redhat.com> References: <5548D913.1030507@redhat.com> <5548DF5B.7000404@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55525397.2070803@oracl! e.com> <55526041.1030709@redhat.com> Message-ID: <7C4CBC57-D7B2-4779-9353-DEACCB626370@lnu.se> Hi Andrew and Aleksey, It is of course true that a hypothetical OS + hardware combination could /in theory/ be smart enough to not send the TLB purge message to certain CPUs and hence not flush latent stores there. But in practice none does that which I know of and certainly none that we support. As I said in the original email where I proposed the solution, I already had a look at our architectures (and a few more) in the linux kernel and XNU/BSD - and it?s safe everywhere. And as I said earlier, the closest match I found to out-smart the barrier is itanium that broadcasts the TLB purge with a special instruction rather than IPI: ptc.ga. It takes an address range and purges the corresponding TLB entries. However, according to Intel?s own documentation, even such fancy solution still flushes all latent memory accesses on remote CPUs regardless. 
I don't know what Windows does because it's not open source, but we only have x86 there and its hardware has no support for doing it any other way than with IPI messages, which is all we need. And if we feel that scared, Windows has a system call that does exactly what we want, and with the architecture I propose it's trivial to specialize the barrier for Windows to use this instead. If there was to suddenly pop up a magical fancy OS + hardware solution that is too clever for this optimization (seems unlikely to me) then there are other ways of issuing such a global fence. But I don't see the point in doing that now when there is no such problem in sight. Thanks, /Erik On 12 May 2015, at 21:19, Andrew Haley > wrote: On 05/12/2015 08:25 PM, Aleksey Shipilev wrote: On 12.05.2015 22:03, Andrew Haley wrote: On 05/12/2015 07:54 PM, Aleksey Shipilev wrote: On 12.05.2015 20:23, Erik Österlund wrote: Out of curiosity I patched the thing and my fix can be found here: http://cr.openjdk.java.net/~eosterlund/8079315/webrev.v1/ Wait, how does it work? I presumed you need to poll the serialization page (and then handle the possible trap) in mutator, between storing the reference and reading the card mark. Just mprotect-ing a page does not smell like a serializing event, if you don't actually access the page. I think it is, because the kernel has to interrupt every CPU in that process to flush its write buffer and TLB. I can't think of any other way of making munmap() work. If you have a platform with a software-filled TLB, can't you accurately track what CPUs should perform the TLB shootdowns? Or if there is some other way for a CPU to communicate the TLB contents back to OS. There isn't AFAIK: the TLB is always a CPU-local structure. If that is in place, then mprotect can only affect the cores that actually accessed the serialization page. I feel that relying on the premise that page mapping changes are globally synchronized is dangerous. You're probably right about that. 
I suppose it would be possible in theory for there to be a broadcast TLB invalidate event as part of the cache coherency protocol. I wonder if the kernel people would be receptive to the idea of a "execute a memory fence on every CPU" system call. It would not be hard to do and it would be very useful. Andrew. From aleksey.shipilev at oracle.com Tue May 12 22:49:47 2015 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 13 May 2015 01:49:47 +0300 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <7C4CBC57-D7B2-4779-9353-DEACCB626370@lnu.se> References: <5548D913.1030507@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55525397.2070803@oracl! e.com> <55526041.1030709@redhat.com> <7C4CBC57-D7B2-4779-9353-DEACCB626370@lnu.se> Message-ID: <5552838B.3020305@oracle.com> On 12.05.2015 23:44, Erik ?sterlund wrote: > It is of course true that a hypothetical OS + hardware combination could > /in theory/ be smart enough to not send the TLB purge message to certain > CPUs and hence not flush latent stores there. But in practice none does > that which I know of and certainly none that we support. Famous last words :) > As I said in > the original email where I proposed the solution, I already had a look > at our architectures (and a few more) in the linux kernel and XNU/BSD - > and it?s safe everywhere. 
And as I said earlier, the closest match I > found to out-smart the barrier is itanium that broadcasts the TLB purge > with a special instruction rather than IPI: ptc.ga. It takes an address > range and purges the corresponding TLB entries. However, according to > Intel?s own documentation, even such fancy solution still flushes all > latent memory accesses on remote CPUs regardless. Ah, apologies, I must have missed that note. It's here: http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2015-May/013308.html > I don?t know what windows does because it?s open source but we only have > x86 there and its hardware has no support for doing it any other way > than with IPI messages which is all we need. And if we feel that scared, > windows has a system call that does exactly what we want and with the > architecture I propose it?s trivial to specialize the barrier for > windows do use this instead. I think I get what you tell, but I am not convinced. The thing about reading stuff in the mutator is to align the actions in collector with the actions in mutator. So what if you push the IPI to all processors. Some lucky processor will get that interrupt *after* (e.g. too late!) both the reference store and (reordered/stale) card mark read => same problem, right? In other words, asking a mutator to do a fence-like op after an already missed card mark update solves what? Even Dice's article on asymmetric Dekker idioms that is very brave in suggesting arcane tricks, AFAIU, doesn't cover the case of "blind" mprotect in "slow thread" without reading the protected page in the "fast thread". The point of Dice's mprotect construction, AFAIU, is to resolve the ordering conundrum by reading the mprotected page in "fast thread", so to coordinate "fast thread" with "slow thread". > If there was to suddenly pop up a magical fancy OS + hardware solution > that is too clever for this optimization (seems unlikely to me) then > there are other ways of issuing such a global fence. 
But I don?t see the > point in doing that now when there is no such problem in sight. When you are dealing with a platform that has a billion of installations, millions of developers, countless different hardware and OS flavors, it does not seem very sane to lock in the correctness guarantees on an undocumented implementation detail and/or guesses. (Aside: doing that for performance is totally fine, we do that all the time) Thanks, -Aleksey From vitalyd at gmail.com Tue May 12 23:43:47 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 12 May 2015 19:43:47 -0400 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5552838B.3020305@oracle.com> References: <5548D913.1030507@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55526041.1030709@redhat.com> <7C4CBC57-D7B2-4779-9353-DEACCB626370@lnu.se> <5552838B.3020305@oracle.com> Message-ID: So let's see, mutator stores a reference, reads a stale card value, and skips the card dirtying - the reference has been stored though. The way this could happen is if GC thread flipped the card value to precleaned but mutator hadn't seen it yet. However, the card must've been dirty right before (hence mutator skipped dirtying it again), which means GC thread is going to scan it for references and expect to find the right ref value. 
The global fence is issued after GC flips the card values but before it processes the cards. So any pending stores in remote write buffers will flush, be made globally visible, and GC will see them by the time it goes to process the cards. sent from my phone On May 12, 2015 6:50 PM, "Aleksey Shipilev" wrote: > On 12.05.2015 23:44, Erik ?sterlund wrote: > > It is of course true that a hypothetical OS + hardware combination could > > /in theory/ be smart enough to not send the TLB purge message to certain > > CPUs and hence not flush latent stores there. But in practice none does > > that which I know of and certainly none that we support. > > Famous last words :) > > > As I said in > > the original email where I proposed the solution, I already had a look > > at our architectures (and a few more) in the linux kernel and XNU/BSD - > > and it?s safe everywhere. And as I said earlier, the closest match I > > found to out-smart the barrier is itanium that broadcasts the TLB purge > > with a special instruction rather than IPI: ptc.ga. It takes an address > > range and purges the corresponding TLB entries. However, according to > > Intel?s own documentation, even such fancy solution still flushes all > > latent memory accesses on remote CPUs regardless. > > Ah, apologies, I must have missed that note. It's here: > > http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2015-May/013308.html > > > > I don?t know what windows does because it?s open source but we only have > > x86 there and its hardware has no support for doing it any other way > > than with IPI messages which is all we need. And if we feel that scared, > > windows has a system call that does exactly what we want and with the > > architecture I propose it?s trivial to specialize the barrier for > > windows do use this instead. > > I think I get what you tell, but I am not convinced. The thing about > reading stuff in the mutator is to align the actions in collector with > the actions in mutator. 
So what if you push the IPI to all processors. > Some lucky processor will get that interrupt *after* (e.g. too late!) > both the reference store and (reordered/stale) card mark read => same > problem, right? In other words, asking a mutator to do a fence-like op > after an already missed card mark update solves what? > > Even Dice's article on asymmetric Dekker idioms that is very brave in > suggesting arcane tricks, AFAIU, doesn't cover the case of "blind" > mprotect in "slow thread" without reading the protected page in the > "fast thread". The point of Dice's mprotect construction, AFAIU, is to > resolve the ordering conundrum by reading the mprotected page in "fast > thread", so to coordinate "fast thread" with "slow thread". > > > > If there was to suddenly pop up a magical fancy OS + hardware solution > > that is too clever for this optimization (seems unlikely to me) then > > there are other ways of issuing such a global fence. But I don?t see the > > point in doing that now when there is no such problem in sight. > > When you are dealing with a platform that has a billion of > installations, millions of developers, countless different hardware and > OS flavors, it does not seem very sane to lock in the correctness > guarantees on an undocumented implementation detail and/or guesses. > (Aside: doing that for performance is totally fine, we do that all the > time) > > > Thanks, > -Aleksey > > From david.holmes at oracle.com Wed May 13 02:07:20 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 13 May 2015 12:07:20 +1000 Subject: [9] Backport approval request: 8078470: [Linux] Replace syscall use in os::fork_and_exec with glibc fork() and execve() In-Reply-To: <55520D14.6020909@oracle.com> References: <554FFC5F.6000606@oracle.com> <55501EDF.4080403@oracle.com> <55520D14.6020909@oracle.com> Message-ID: <5552B1D8.30002@oracle.com> Thanks Dan! David On 13/05/2015 12:24 AM, Daniel D. 
Daugherty wrote: > > http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/ > > src/os/linux/vm/os_linux.cpp > No comments. > > src/share/vm/utilities/vmError.cpp > No comments. > > test/runtime/ErrorHandling/TestOnError.java > No comments. > > test/runtime/ErrorHandling/TestOnOutOfMemoryError.java > No comments. > > Thumbs up! > > Dan > > > On 5/10/15 9:15 PM, David Holmes wrote: >> On 11/05/2015 10:48 AM, David Holmes wrote: >>> This is the "backport" to 9 as the original change had to go into 8u >>> first for logistical/scheduling reasons. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8078470 >>> >>> 8u changeset: >>> http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/915ca3e9d15e >>> >>> Original review thread: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-April/018171.html >>> >>> >>> >>> The changeset mostly applied cleanly with some manual tweaking in one >>> spot as the 9 code refers to AARCH64 where the 8u code does not. >> >> I also had to tweak the tests due to: >> >> 8067013: Rename the com.oracle.java.testlibary package >> >> For good measure here's a webrev: >> >> http://cr.openjdk.java.net/~dholmes/8078470/webrev.jdk9/ >> >> Thanks, >> David >> >> >>> Thanks, >>> David >> >> >> > From tobias.hartmann at oracle.com Wed May 13 06:06:41 2015 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 13 May 2015 08:06:41 +0200 Subject: [8u60] Bulk backport request: 8079343, 8078497 Message-ID: <5552E9F1.6060508@oracle.com> Hi, please review the following backports: (1) 8079343: Crash in PhaseIdealLoop with "assert(!had_error) failed: bad dominance" https://bugs.openjdk.java.net/browse/JDK-8079343 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/86687b76907d (2) 8078497: C2's superword optimization causes unaligned memory accesses https://bugs.openjdk.java.net/browse/JDK-8078497 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/aec198eb37bc The changes were pushed to JDK 9 on Monday and nightly testing showed no 
problems. (2) does not apply cleanly because JDK-8076523 [1] was not backported. Here is the updated webrev: http://cr.openjdk.java.net/~thartmann/8078497_8u/webrev.00/ Thanks, Tobias [1] https://bugs.openjdk.java.net/browse/JDK-8076523 From vladimir.kozlov at oracle.com Wed May 13 06:44:30 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 12 May 2015 23:44:30 -0700 Subject: [8u60] Bulk backport request: 8079343, 8078497 In-Reply-To: <5552E9F1.6060508@oracle.com> References: <5552E9F1.6060508@oracle.com> Message-ID: <5552F2CE.1070004@oracle.com> Looks good. But, please, backport 8076523 too. It is still a problem in 8u60 even though it is not triggered yet. Also code will be more consistent. Thanks, Vladimir On 5/12/15 11:06 PM, Tobias Hartmann wrote: > Hi, > > please review the following backports: > > (1) 8079343: Crash in PhaseIdealLoop with "assert(!had_error) failed: bad dominance" > https://bugs.openjdk.java.net/browse/JDK-8079343 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/86687b76907d > > (2) 8078497: C2's superword optimization causes unaligned memory accesses > https://bugs.openjdk.java.net/browse/JDK-8078497 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/aec198eb37bc > > The changes were pushed to JDK 9 on Monday and nightly testing showed no problems. > > (2) does not apply cleanly because JDK-8076523 [1] was not backported. Here is the updated webrev: > http://cr.openjdk.java.net/~thartmann/8078497_8u/webrev.00/ > > Thanks, > Tobias > > [1] https://bugs.openjdk.java.net/browse/JDK-8076523 > From tobias.hartmann at oracle.com Wed May 13 06:51:33 2015 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 13 May 2015 08:51:33 +0200 Subject: [8u60] Bulk backport request: 8079343, 8078497 In-Reply-To: <5552F2CE.1070004@oracle.com> References: <5552E9F1.6060508@oracle.com> <5552F2CE.1070004@oracle.com> Message-ID: <5552F475.6000108@oracle.com> Hi Vladimir, On 13.05.2015 08:44, Vladimir Kozlov wrote: > Looks good. 
But, please, backport 8076523 too. It is still a problem in 8u60 even though it is not triggered yet. Also code will be more consistent. Okay, I'll do so. With 8076523 backported, 8079343 and 8078497 apply cleanly. Best, Tobias > > Thanks, > Vladimir > > On 5/12/15 11:06 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following backports: >> >> (1) 8079343: Crash in PhaseIdealLoop with "assert(!had_error) failed: bad dominance" >> https://bugs.openjdk.java.net/browse/JDK-8079343 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/86687b76907d >> >> (2) 8078497: C2's superword optimization causes unaligned memory accesses >> https://bugs.openjdk.java.net/browse/JDK-8078497 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/aec198eb37bc >> >> The changes were pushed to JDK 9 on Monday and nightly testing showed no problems. >> >> (2) does not apply cleanly because JDK-8076523 [1] was not backported. Here is the updated webrev: >> http://cr.openjdk.java.net/~thartmann/8078497_8u/webrev.00/ >> >> Thanks, >> Tobias >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8076523 >> From david.lindholm at oracle.com Wed May 13 08:14:17 2015 From: david.lindholm at oracle.com (David Lindholm) Date: Wed, 13 May 2015 10:14:17 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <5551D312.7000401@oracle.com> References: <5551D312.7000401@oracle.com> Message-ID: <555307D9.9080906@oracle.com> Per, Changes look good! Great cleanup! /David On 2015-05-12 12:16, Per Liden wrote: > Hi, > > As previously mentioned [1], the GC team is doing a cleanup of the > directory structure for the GC code. Here's the patch for that cleanup. 
> > First, a recap of the new directory structure: > > - A single "top-level" directory for GC code: > src/share/vm/gc/ > > - One sub-directory per GC: > src/share/vm/gc/cms/ > src/share/vm/gc/g1/ > src/share/vm/gc/parallel/ > src/share/vm/gc/serial/ > > - A single directory for common/shared GC code: > src/share/gc/shared/ > > > A number of GC files previously located in memory and utilities have > been moved in under the gc directory (mostly into gc/shared), these are: > > memory/barrierSet.* > memory/blockOffsetTable.* > memory/cardGeneration.* > memory/cardTableModRefBS.* > memory/cardTableRS.* > memory/collectorPolicy.* > memory/gcLocker.* > memory/genCollectedHeap.* > memory/generation.* > memory/generationSpec.* > memory/genOopClosures.* > memory/genMarkSweep.* > memory/genRemSet.* > memory/modRefBarrierSet.* > memory/referencePolicy.* > memory/referenceProcessor.* > memory/referenceProcessorStats.* > memory/space.* > memory/specialized_oop_closures.* > memory/strongRootsScope.* > memory/tenuredGeneration.* > memory/threadLocalAllocBuffer.* > memory/watermark.* > utilities/workgroup.* > utilities/taskqueue.* > > > The patch is very large because it touches a lot of files, but the > individual changes are trivial. The main bulk of the changes consists > of adjustments to #includes "gc_implementation/... and #ifndef > SHARE_VM_GC_IMPL... The rest (minor part) of the patch include > adjustments to some makefiles, SA and jtreg tests. 
> > > Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ > > Here's the same webrev split into the following pieces: > > - Change to cpp/hpp files > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ > > - Changes to makefiles > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ > > - Changes to SA > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ > > - Changes to jtreg tests > http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 > > Testing: JPRT, Aurora adhoc GC nightly, bigapps > > cheers, > /Per > > [1] > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html From per.liden at oracle.com Wed May 13 08:29:01 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 10:29:01 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <55521B2C.8050902@oracle.com> References: <5551D312.7000401@oracle.com> <55521B2C.8050902@oracle.com> Message-ID: <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> > On 12 May 2015, at 17:24, Stefan Karlsson wrote: > > Hi Per, > > On 2015-05-12 12:16, Per Liden wrote: >> Hi, >> >> As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. Here's the patch for that cleanup. 
>> >> First, a recap of the new directory structure: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> src/share/gc/shared/ >> >> >> A number of GC files previously located in memory and utilities have been moved in under the gc directory (mostly into gc/shared), these are: >> >> memory/barrierSet.* >> memory/blockOffsetTable.* >> memory/cardGeneration.* >> memory/cardTableModRefBS.* >> memory/cardTableRS.* >> memory/collectorPolicy.* >> memory/gcLocker.* >> memory/genCollectedHeap.* >> memory/generation.* >> memory/generationSpec.* >> memory/genOopClosures.* >> memory/genMarkSweep.* >> memory/genRemSet.* >> memory/modRefBarrierSet.* >> memory/referencePolicy.* >> memory/referenceProcessor.* >> memory/referenceProcessorStats.* >> memory/space.* >> memory/specialized_oop_closures.* >> memory/strongRootsScope.* >> memory/tenuredGeneration.* >> memory/threadLocalAllocBuffer.* >> memory/watermark.* >> utilities/workgroup.* >> utilities/taskqueue.* >> >> >> The patch is very large because it touches a lot of files, but the individual changes are trivial. The main bulk of the changes consists of adjustments to #includes "gc_implementation/... and #ifndef SHARE_VM_GC_IMPL... The rest (minor part) of the patch include adjustments to some makefiles, SA and jtreg tests. >> >> >> Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ > > This is a huge patch and I haven't looked at all the files yet, but most of it looks good. I won't be able to review every single updated include/guard line, but I don't think that's too important as long as the patch builds. Thanks for reviewing Stefan. 
> > http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html > > The gc_shared_keep variable was changed to include almost all files in gc/shared, but there a few files in gc/shared that are not listed. Most of them should probably be moved to GC specific directories. > > With those files moved, we might want to consider removing the gc_shared_keep variable entirely. Agree, I?ll remove gc_shared_keep and move the [hgc]SpaceCounters.* to their respective directory (I had missed that those are actually GC-specific). /Per > > thanks, > StefanK > >> >> Here's the same webrev split into the following pieces: >> >> - Change to cpp/hpp files >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >> >> - Changes to makefiles >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >> >> - Changes to SA >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >> >> - Changes to jtreg tests >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >> >> Testing: JPRT, Aurora adhoc GC nightly, bigapps >> >> cheers, >> /Per >> >> [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html > From per.liden at oracle.com Wed May 13 08:29:31 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 10:29:31 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <555307D9.9080906@oracle.com> References: <5551D312.7000401@oracle.com> <555307D9.9080906@oracle.com> Message-ID: <588132B7-BCF8-4C71-8CD0-D6D2FC21F784@oracle.com> Thanks for reviewing David. /Per > On 13 May 2015, at 10:14, David Lindholm wrote: > > Per, > > Changes looks good! Great cleanup! > > > /David > > On 2015-05-12 12:16, Per Liden wrote: >> Hi, >> >> As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. Here's the patch for that cleanup. 
>> >> First, a recap of the new directory structure: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> src/share/gc/shared/ >> >> >> A number of GC files previously located in memory and utilities have been moved in under the gc directory (mostly into gc/shared), these are: >> >> memory/barrierSet.* >> memory/blockOffsetTable.* >> memory/cardGeneration.* >> memory/cardTableModRefBS.* >> memory/cardTableRS.* >> memory/collectorPolicy.* >> memory/gcLocker.* >> memory/genCollectedHeap.* >> memory/generation.* >> memory/generationSpec.* >> memory/genOopClosures.* >> memory/genMarkSweep.* >> memory/genRemSet.* >> memory/modRefBarrierSet.* >> memory/referencePolicy.* >> memory/referenceProcessor.* >> memory/referenceProcessorStats.* >> memory/space.* >> memory/specialized_oop_closures.* >> memory/strongRootsScope.* >> memory/tenuredGeneration.* >> memory/threadLocalAllocBuffer.* >> memory/watermark.* >> utilities/workgroup.* >> utilities/taskqueue.* >> >> >> The patch is very large because it touches a lot of files, but the individual changes are trivial. The main bulk of the changes consists of adjustments to #includes "gc_implementation/... and #ifndef SHARE_VM_GC_IMPL... The rest (minor part) of the patch include adjustments to some makefiles, SA and jtreg tests. 
>> >> >> Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ >> >> Here's the same webrev split into the following pieces: >> >> - Change to cpp/hpp files >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >> >> - Changes to makefiles >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >> >> - Changes to SA >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >> >> - Changes to jtreg tests >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >> >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >> >> Testing: JPRT, Aurora adhoc GC nightly, bigapps >> >> cheers, >> /Per >> >> [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html > From per.liden at oracle.com Wed May 13 08:32:45 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 10:32:45 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <555241A8.8060906@oracle.com> References: <5551D312.7000401@oracle.com> <555241A8.8060906@oracle.com> Message-ID: Hi Derek, > On 12 May 2015, at 20:08, Derek White wrote: > > On 5/12/15 6:16 AM, Per Liden wrote: >> Hi, >> >> As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. Here's the patch for that cleanup. >> >> First, a recap of the new directory structure: >> >> - A single "top-level" directory for GC code: >> src/share/vm/gc/ >> >> - One sub-directory per GC: >> src/share/vm/gc/cms/ >> src/share/vm/gc/g1/ >> src/share/vm/gc/parallel/ >> src/share/vm/gc/serial/ >> >> - A single directory for common/shared GC code: >> src/share/gc/shared/ > > Typo? I think you meant: > src/share/vm/gc/shared/ > > That's what the webrev looks like. Ah yes, that's a typo.
cheers, /Per > > Looking forward to the simpler structure (luckily my fingers don't have the old structure memorized :-) > > - Derek From per.liden at oracle.com Wed May 13 08:44:40 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 10:44:40 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <55524588.9040409@oracle.com> References: <5551D312.7000401@oracle.com> <555241A8.8060906@oracle.com> <55524588.9040409@oracle.com> Message-ID: Hi Coleen, > On 12 May 2015, at 20:25, Coleen Phillimore wrote: > > > On 5/12/15, 2:08 PM, Derek White wrote: >> On 5/12/15 6:16 AM, Per Liden wrote: >>> Hi, >>> >>> As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. Here's the patch for that cleanup. >>> >>> First, a recap of the new directory structure: >>> >>> - A single "top-level" directory for GC code: >>> src/share/vm/gc/ >>> >>> - One sub-directory per GC: >>> src/share/vm/gc/cms/ >>> src/share/vm/gc/g1/ >>> src/share/vm/gc/parallel/ >>> src/share/vm/gc/serial/ >>> >>> - A single directory for common/shared GC code: >>> src/share/gc/shared/ >> >> Typo? I think you meant: >> src/share/vm/gc/shared/ >> >> That's what the webrev looks like. >> >> Looking forward to the simpler structure (luckily my fingers don't have the old structure memorized :-) > > My fingers had the old structure somewhat memorized since it never made mental sense to me, but even so, I'm very pleased to see this change! I'm happy to hear that. > > I can't tell from the giant webrev, but did you reorganize the SA duplicated directory structure? It seems like some files moved but not all of them. If not with this change, can you do a follow-on RFE to fix it also? The GCs which had their own directory in the SA were moved/renamed according to the new structure. However, the SA had a slightly broken structure to begin with (like CMS-files live in the memory directory), and those are left as is at the moment.
I plan to file a separate bug to fix that. cheers, /Per > > Thanks, > Coleen > >> >> - Derek > From david.holmes at oracle.com Wed May 13 09:03:44 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 13 May 2015 19:03:44 +1000 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> References: <5551D312.7000401@oracle.com> <55521B2C.8050902@oracle.com> <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> Message-ID: <55531370.4040000@oracle.com> Hi Per, On 13/05/2015 6:29 PM, Per Liden wrote: >> On 12 May 2015, at 17:24, Stefan Karlsson wrote: >> On 2015-05-12 12:16, Per Liden wrote: >> >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html >> >> The gc_shared_keep variable was changed to include almost all files in gc/shared, but there are a few files in gc/shared that are not listed. Most of them should probably be moved to GC specific directories. >> >> With those files moved, we might want to consider removing the gc_shared_keep variable entirely. > > > Agree, I'll remove gc_shared_keep and move the [hgc]SpaceCounters.* to their respective directory (I had missed that those are actually GC-specific). Just as a sanity check can you do a local build that includes the minimal VM and verify that there are no unexpected gc related .o files produced.
Thanks, David > /Per > >> >> thanks, >> StefanK >> >>> >>> Here's the same webrev split into the following pieces: >>> >>> - Change to cpp/hpp files >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >>> >>> - Changes to makefiles >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >>> >>> - Changes to SA >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >>> >>> - Changes to jtreg tests >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >>> >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >>> >>> Testing: JPRT, Aurora adhoc GC nightly, bigapps >>> >>> cheers, >>> /Per >>> >>> [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html >> > From per.liden at oracle.com Wed May 13 11:22:15 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 13:22:15 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <55531370.4040000@oracle.com> References: <5551D312.7000401@oracle.com> <55521B2C.8050902@oracle.com> <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> <55531370.4040000@oracle.com> Message-ID: <555333E7.1050809@oracle.com> Hi David, On 2015-05-13 11:03, David Holmes wrote: > Hi Per, > > On 13/05/2015 6:29 PM, Per Liden wrote: >>> On 12 May 2015, at 17:24, Stefan Karlsson >>> wrote: >>> On 2015-05-12 12:16, Per Liden wrote: >>> >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html >>> >>> >>> The gc_shared_keep variable was changed to include almost all files >>> in gc/shared, but there are a few files in gc/shared that are not listed. >>> Most of them should probably be moved to GC specific directories. >>> >>> With those files moved, we might want to consider removing the >>> gc_shared_keep variable entirely. >> >> >> Agree, I'll remove gc_shared_keep and move the [hgc]SpaceCounters.* to >> their respective directory (I had missed that those are actually >> GC-specific).
> > Just as a sanity check can you do a local build that includes the > minimal VM and verify that there are no unexpected gc related .o files > produced. There are two files (concurrentGCThread.o and plab.o) we compile for minimal, which aren't needed and symbols in there are never referenced. But since they still make libjvm.so grow a little bit I'll append those two to the exclude list. So instead of listing all but two files in a keep list, we just list the two in the exclude list. cheers, /Per > > Thanks, > David > >> /Per >> >>> >>> thanks, >>> StefanK >>> >>>> >>>> Here's the same webrev split into the following pieces: >>>> >>>> - Change to cpp/hpp files >>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >>>> >>>> - Changes to makefiles >>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >>>> >>>> - Changes to SA >>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >>>> >>>> - Changes to jtreg tests >>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >>>> >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >>>> >>>> Testing: JPRT, Aurora adhoc GC nightly, bigapps >>>> >>>> cheers, >>>> /Per >>>> >>>> [1] >>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html >>> >> From david.holmes at oracle.com Wed May 13 11:28:22 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 13 May 2015 21:28:22 +1000 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <555333E7.1050809@oracle.com> References: <5551D312.7000401@oracle.com> <55521B2C.8050902@oracle.com> <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> <55531370.4040000@oracle.com> <555333E7.1050809@oracle.com> Message-ID: <55533556.90206@oracle.com> On 13/05/2015 9:22 PM, Per Liden wrote: > Hi David, > > On 2015-05-13 11:03, David Holmes wrote: >> Hi Per, >> >> On 13/05/2015 6:29 PM, Per Liden wrote: >>>> On 12 May 2015, at 17:24, Stefan Karlsson >>>> wrote: >>>> On 2015-05-12 12:16, Per Liden wrote: >>>> 
>>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html >>>> >>>> >>>> >>>> The gc_shared_keep variable was changed to include almost all files >>>> in gc/shared, but there are a few files in gc/shared that are not listed. >>>> Most of them should probably be moved to GC specific directories. >>>> >>>> With those files moved, we might want to consider removing the >>>> gc_shared_keep variable entirely. >>> >>> >>> Agree, I'll remove gc_shared_keep and move the [hgc]SpaceCounters.* to >>> their respective directory (I had missed that those are actually >>> GC-specific). >> >> Just as a sanity check can you do a local build that includes the >> minimal VM and verify that there are no unexpected gc related .o files >> produced. > > There are two files (concurrentGCThread.o and plab.o) we compile for > minimal, which aren't needed and symbols in there are never referenced. > But since they still make libjvm.so grow a little bit I'll append those > two to the exclude list. So instead of listing all but two files in a > keep list, we just list the two in the exclude list. Thanks!
David > cheers, > /Per > >> >> Thanks, >> David >> >>> /Per >>> >>>> >>>> thanks, >>>> StefanK >>>> >>>>> >>>>> Here's the same webrev split into the following pieces: >>>>> >>>>> - Change to cpp/hpp files >>>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >>>>> >>>>> - Changes to makefiles >>>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >>>>> >>>>> - Changes to SA >>>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >>>>> >>>>> - Changes to jtreg tests >>>>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >>>>> >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >>>>> >>>>> Testing: JPRT, Aurora adhoc GC nightly, bigapps >>>>> >>>>> cheers, >>>>> /Per >>>>> >>>>> [1] >>>>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html >>>>> >>>> >>> From per.liden at oracle.com Wed May 13 12:19:44 2015 From: per.liden at oracle.com (Per Liden) Date: Wed, 13 May 2015 14:19:44 +0200 Subject: RFR(XXL): 8079792: GC directory structure cleanup In-Reply-To: <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> References: <5551D312.7000401@oracle.com> <55521B2C.8050902@oracle.com> <6BF90DC6-2BF4-4389-9012-BB7E014A64B9@oracle.com> Message-ID: <55534160.7020603@oracle.com> Hi, Here's an updated webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.1/ Changes compared to previous webrev: * gc/shared/allocationStats.* -> gc/cms/ * gc/shared/gSpaceCounters.* -> gc/cms/ * gc/shared/hSpaceCounters.* -> gc/g1/ * gc/shared/spaceCounters.* -> gc/parallel/ * gc/shared/mutableSpace.* -> gc/parallel/ * gc/shared/immutableSpace.* -> gc/parallel/ * gc/shared/gcAdaptivePolicyCounters.* -> gc/parallel/ * gc/shared/cSpaceCounters.* -> gc/serial/ * make/excludeSrc.make: gc_shared_all/gc_shared_keep removed, instead concurrentGCThread.cpp and plab.cpp were appended to Src_Files_EXCLUDE cheers, /Per On 2015-05-13 10:29, Per Liden wrote: > >> On 12 May 2015, at 17:24, Stefan Karlsson wrote: >> >> Hi Per, >> >> On 
2015-05-12 12:16, Per Liden wrote: >>> Hi, >>> >>> As previously mentioned [1], the GC team is doing a cleanup of the directory structure for the GC code. Here's the patch for that cleanup. >>> >>> First, a recap of the new directory structure: >>> >>> - A single "top-level" directory for GC code: >>> src/share/vm/gc/ >>> >>> - One sub-directory per GC: >>> src/share/vm/gc/cms/ >>> src/share/vm/gc/g1/ >>> src/share/vm/gc/parallel/ >>> src/share/vm/gc/serial/ >>> >>> - A single directory for common/shared GC code: >>> src/share/gc/shared/ >>> >>> >>> A number of GC files previously located in memory and utilities have been moved in under the gc directory (mostly into gc/shared), these are: >>> >>> memory/barrierSet.* >>> memory/blockOffsetTable.* >>> memory/cardGeneration.* >>> memory/cardTableModRefBS.* >>> memory/cardTableRS.* >>> memory/collectorPolicy.* >>> memory/gcLocker.* >>> memory/genCollectedHeap.* >>> memory/generation.* >>> memory/generationSpec.* >>> memory/genOopClosures.* >>> memory/genMarkSweep.* >>> memory/genRemSet.* >>> memory/modRefBarrierSet.* >>> memory/referencePolicy.* >>> memory/referenceProcessor.* >>> memory/referenceProcessorStats.* >>> memory/space.* >>> memory/specialized_oop_closures.* >>> memory/strongRootsScope.* >>> memory/tenuredGeneration.* >>> memory/threadLocalAllocBuffer.* >>> memory/watermark.* >>> utilities/workgroup.* >>> utilities/taskqueue.* >>> >>> >>> The patch is very large because it touches a lot of files, but the individual changes are trivial. The main bulk of the changes consists of adjustments to #includes "gc_implementation/... and #ifndef SHARE_VM_GC_IMPL... The rest (minor part) of the patch include adjustments to some makefiles, SA and jtreg tests. >>> >>> >>> Webrev: http://cr.openjdk.java.net/~pliden/8079792/webrev.0/ >> >> This is a huge patch and I haven't looked at all the files yet, but most of it looks good. 
I won't be able to review every single updated include/guard line, but I don't think that's too important as long as the patch builds. > > > Thanks for reviewing, Stefan. > > >> >> http://cr.openjdk.java.net/~pliden/8079792/webrev.0/make/excludeSrc.make.udiff.html >> >> The gc_shared_keep variable was changed to include almost all files in gc/shared, but there are a few files in gc/shared that are not listed. Most of them should probably be moved to GC specific directories. >> >> With those files moved, we might want to consider removing the gc_shared_keep variable entirely. > > > Agree, I'll remove gc_shared_keep and move the [hgc]SpaceCounters.* to their respective directory (I had missed that those are actually GC-specific). > > /Per > >> >> thanks, >> StefanK >> >>> >>> Here's the same webrev split into the following pieces: >>> >>> - Change to cpp/hpp files >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-hotspot/ >>> >>> - Changes to makefiles >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-make/ >>> >>> - Changes to SA >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-sa/ >>> >>> - Changes to jtreg tests >>> http://cr.openjdk.java.net/~pliden/8079792/webrev.0-test/ >>> >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8079792 >>> >>> Testing: JPRT, Aurora adhoc GC nightly, bigapps >>> >>> cheers, >>> /Per >>> >>> [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018439.html >> > From alexander.kulyakhtin at oracle.com Wed May 13 14:52:20 2015 From: alexander.kulyakhtin at oracle.com (Alexander Kulyakhtin) Date: Wed, 13 May 2015 07:52:20 -0700 (PDT) Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository Message-ID: <1b2aa136-3d8b-4c3c-ab80-405346898578@default> Hi, Could you please review the following tests-only changes to the hs-rt/jdk and hs-rt/test repositories.
These changes are a part of the changes for "JDK-8075327: Merge jdk and hotspot test libraries" The changes are as follows: http://cr.openjdk.java.net/~akulyakh/8075327/jdk_patch/webrev/ http://cr.openjdk.java.net/~akulyakh/8075327/test_patch/webrev/ 1) Renaming jdk.testlibrary package to jdk.test.lib in the hs-rt/jdk repo, so it has the same name as the jdk.test.lib package in the hotspot repo. 2) Several files from the jdk/testlibrary have duplicates in the hotspot/testlibrary. We are moving those files from jdk/testlibrary to the upper-level hs-rt/test so they can be shared by jdk and hotspot (also updating @library directives to reflect that). If these proposed changes are acceptable then we'll merge the duplicates between the hs-rt/hotspot and hs-rt/test/lib files into hs-rt/test/lib and prepare a full review. Thank you very much for the review. Best regards, Alexander From chris.hegarty at oracle.com Wed May 13 15:17:32 2015 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Wed, 13 May 2015 16:17:32 +0100 Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository In-Reply-To: <1b2aa136-3d8b-4c3c-ab80-405346898578@default> References: <1b2aa136-3d8b-4c3c-ab80-405346898578@default> Message-ID: <55536B0C.1070604@oracle.com> Hi Alexander, On 13/05/15 15:52, Alexander Kulyakhtin wrote: > Hi, > > Could you please review the following tests-only changes to the hs-rt/jdk and hs-rt/test repositories. These changes are a part of the changes for "JDK-8075327: Merge jdk and hotspot test libraries" I suspect that these changes are best going directly into jdk9/dev, as opposed to a downstream forest. > The changes are as follows: > > http://cr.openjdk.java.net/~akulyakh/8075327/jdk_patch/webrev/ In many places '@library /lib/testlibrary ...' remains. Is this redundant, in many tests? If so, it should be removed.
> http://cr.openjdk.java.net/~akulyakh/8075327/test_patch/webrev/ > > 1) Renaming jdk.testlibrary package to jdk.test.lib in the hs-rt/jdk repo, so it has the same name as the jdk.test.lib package in the hotspot repo. > > 2) Several files from the jdk/testlibrary have duplicates in the hotspot/testlibrary. We are moving those files from jdk/testlibrary to the upper-level hs-rt/test The changes are to the 'test' directory in the "top" repo? You are not proposing to add a new repo, right? test/lib/testlibrary/jdk/testlibrary/RandomFactory.java was updated recently in jdk9/dev. The version in your webrev is a little out of date. Is there any special update needed to jtreg to support this? > so they can be shared by jdk and hotspot (also updating @library directives to reflect that). > > If these proposed changes are acceptable then we'll merge the duplicates between the hs-rt/hotspot and hs-rt/test/lib files into hs-rt/test/lib and prepare a full review. > > Thank you very much for the review. > > Best regards, > Alexander -Chris. From alexander.kulyakhtin at oracle.com Wed May 13 15:46:04 2015 From: alexander.kulyakhtin at oracle.com (Alexander Kulyakhtin) Date: Wed, 13 May 2015 08:46:04 -0700 (PDT) Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository Message-ID: <5e28a49a-4ff3-43b4-8e02-7bfe73f0bfd6@default> Hi Chris, > I suspect that these changes are best going directly into jdk9/dev, as > opposed to a downstream forest. Yes, they are going directly to jdk9/dev, I forgot to add the group, adding now. > In many places '@library /lib/testlibrary ...' remains. Is this > redundant, in many tests? If so, it should be removed. No, this is mostly not redundant, as the jdk/testlibrary contains many files specific for jdk only. Only a relatively small number of files are duplicated between hotspot and jdk. >The changes are to the 'test' directory in the "top" repo? You are not >proposing to add a new repo, right?
No, we are not proposing a new repo. The changes are in the 'top' repo. >test/lib/testlibrary/jdk/testlibrary/RandomFactory.java was updated >recently in jdk9/dev. The version in your webrev is a little out of date. We are going to upmerge the changes, then >Is there any special update needed to jtreg to support this? For these changes no jtreg update is required. However, there are plans for the jtreg to prohibit referencing any @libraries above the repo root unless such libraries are directly specified in TEST.properties If jtreg thus prohibits our using /../../test/lib (because it's above the root) we will have to additionally make a change to jdk and hotspot TEST.properties Best regards, Alexander From alexander.kulyakhtin at oracle.com Wed May 13 15:48:48 2015 From: alexander.kulyakhtin at oracle.com (Alexander Kulyakhtin) Date: Wed, 13 May 2015 08:48:48 -0700 (PDT) Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository Message-ID: <15629659-f6ca-4ad4-8458-9174f8ce473f@default> Adding jdk9-dev mailing list ----- Original Message ----- From: alexander.kulyakhtin at oracle.com To: core-libs-dev at openjdk.java.net, awt-dev at openjdk.java.net, hotspot-dev at openjdk.java.net Cc: stefan.sarne at oracle.com, yekaterina.kantserova at oracle.com, alan.bateman at oracle.com Sent: Wednesday, May 13, 2015 5:52:20 PM GMT +03:00 Iraq Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository Hi, Could you please review the following tests-only changes to the hs-rt/jdk and hs-rt/test repositories.
These changes are a part of the changes for "JDK-8075327: Merge jdk and hotspot test libraries" The changes are as follows: http://cr.openjdk.java.net/~akulyakh/8075327/jdk_patch/webrev/ http://cr.openjdk.java.net/~akulyakh/8075327/test_patch/webrev/ 1) Renaming jdk.testlibrary package to jdk.test.lib in the hs-rt/jdk repo, so it has the same name as the jdk.test.lib package in the hotspot repo. 2) Several files from the jdk/testlibrary have duplicates in the hotspot/testlibrary. We are moving those files from jdk/testlibrary to the upper-level hs-rt/test so they can be shared by jdk and hotspot (also updating @library directives to reflect that). If these proposed changes are acceptable then we'll merge the duplicates between the hs-rt/hotspot and hs-rt/test/lib files into hs-rt/test/lib and prepare a full review. Thank you very much for the review. Best regards, Alexander From chris.hegarty at oracle.com Wed May 13 16:48:02 2015 From: chris.hegarty at oracle.com (Chris Hegarty) Date: Wed, 13 May 2015 17:48:02 +0100 Subject: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository In-Reply-To: <5e28a49a-4ff3-43b4-8e02-7bfe73f0bfd6@default> References: <5e28a49a-4ff3-43b4-8e02-7bfe73f0bfd6@default> Message-ID: <34348D14-9BBC-40E6-A1DB-923D88AFCD90@oracle.com> Alexander, > On 13 May 2015, at 16:46, Alexander Kulyakhtin wrote: > > Hi Chris, > > >> I suspect that these changes are best going directly into jdk9/dev, as >> opposed to a downstream forest. > Yes, they are going directly to jdk9/dev, I forgot to add the group, adding now. > >> In many places '@library /lib/testlibrary ...' remains. Is this >> redundant, in many tests? If so, it should be removed. > No, this is mostly not redundant, as the jdk/testlibrary contains many files specific for jdk only. Only a relatively small number of files are duplicated between hotspot and jdk. Right, but mostly is not all.
For example, test/com/sun/net/httpserver/Test1.java should no longer include @library /lib/testlibrary. It depends only on SimpleSSLContext which is being moved. It is easy to see this by looking at the imports. Whatever script you have for updating these tests should also remove redundant values in the @library tag. I suspect there are a significant number of these (a lot of tests just use one library class). Just on that, is it possible for a library class to indicate that it depends on a class from another library? I believe this is the case for some library utilities. How can a test know this? It needs to be a library class that indicates this dependency. I see nothing in the webrev for this (or maybe I missed it). -Chris. > >> The changes are to the 'test' directory in the "top" repo? You are not >> proposing to add a new repo, right? > No, we are not proposing a new repo.
> However, there are plans for the jtreg to prohibit referencing any @libraries above the repo root unless such libraries are directly specified in TEST.properties > If jtreg thus prohibits our using /../../test/lib (because it's above the root) we will have to additionally make a change to jdk and hotspot TEST.properties > > Best regards, > Alexander From alexander.kulyakhtin at oracle.com Wed May 13 17:08:29 2015 From: alexander.kulyakhtin at oracle.com (Alexander Kulyakhtin) Date: Wed, 13 May 2015 10:08:29 -0700 (PDT) Subject: Changes for JDK-8075327: moving jdk testlibraty files duplicated in hotspot to the common test repository Message-ID: Chris, Following the feedback from Jon Gibbons, we have changed our current plan to the following: 1) We are waiting till the new jtreg is promoted. The new jtreg allows for specifying in the repository's TEST.properties file one or more paths to search for libraries 2) After the jtreg is out we are adding both path to jdk/testlibrary and path to toplevel/test/lib/share library to the jdk and hotspot TEST.properties files Under those testlibrary roots we are rearranging the directories to have the same jdk/test/lib structure (to provide for the jdk.test.lib package) 3) Same as with the current changes we are changing import package jdk.testlib.* to import package jdk.test.lib.* in the jdk and hotspot tests. This way, if the jtreg does not find a required class from the jdk.test.lib in the jdk/testlibrary it will look for it in the toplevel/test/lib/share test library. The same will be done for the hotspot tests. 
Best regards, Alexander ----- Original Message ----- From: chris.hegarty at oracle.com To: alexander.kulyakhtin at oracle.com Cc: core-libs-dev at openjdk.java.net, awt-dev at openjdk.java.net, stefan.sarne at oracle.com, yekaterina.kantserova at oracle.com, hotspot-dev at openjdk.java.net, jdk9-dev at openjdk.java.net Sent: Wednesday, May 13, 2015 7:48:06 PM GMT +03:00 Iraq Subject: Re: Changes for JDK-8075327: moving jdk testlibrary files duplicated in hotspot to the common test repository Alexander, > On 13 May 2015, at 16:46, Alexander Kulyakhtin wrote: > > Hi Chris, > > >> I suspect that these changes are best going directly into jdk9/dev, as >> opposed to a downstream forest. > Yes, they are going directly to jdk9/dev, I forgot to add the group, adding now. > >> In many places '@library /lib/testlibrary ...' remains. Is this >> redundant, in many tests? If so, it should be removed. > No, this is mostly not redundant, as the jdk/testlibrary contains many files specific for jdk only. Only a relatively small number of files are duplicated between hotspot and jdk. Right, but mostly is not all. For example, test/com/sun/net/httpserver/Test1.java should no longer include @library /lib/testlibrary. It depends only on SimpleSSLContext which is being moved. It is easy to see this by looking at the imports. Whatever script you have for updating these tests should also remove redundant values in the @library tag. I suspect there are a significant number of these (a lot of tests just use one library class). Just on that, is it possible for a library class to indicate that it depends on a class from another library? I believe this is the case for some library utilities. How can a test know this? It needs to be a library class that indicates this dependency. I see nothing in the webrev for this (or maybe I missed it). -Chris. > >> The changes are to the 'test' directory in the "top" repo? You are not >> proposing to add a new repo, right? > No, we are not proposing a new repo.
The changes are in the 'top' repo. > >> test/lib/testlibrary/jdk/testlibrary/RandomFactory.java was updated >> recently in jdk9/dev. The version in your webrev is a little out of date. > We are going to upmerge the changes, then > >> Is there any special update needed to jtreg to support this? > For these changes no jtreg update is required. > However, there are plans for the jtreg to prohibit referencing any @libraries above the repo root unless such libraries are directly specified in TEST.properties > If jtreg thus prohibits our using /../../test/lib (because it's above the root) we will have to additionally make a change to jdk and hotspot TEST.properties > > Best regards, > Alexander From vladimir.kozlov at oracle.com Wed May 13 20:48:37 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 13 May 2015 13:48:37 -0700 Subject: [8u60] backport RFR(XS): 8078113: 8011102 changes may cause incorrect results Message-ID: <5553B8A5.8040606@oracle.com> 8u60 backport request. Changes were pushed into jdk9 last month, no problems were found since then. Changes are applied to 8u cleanly.
> > https://bugs.openjdk.java.net/browse/JDK-8078113 > > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8078113/webrev.00 > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5be37a65b137 > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Wed May 13 20:56:21 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 13 May 2015 13:56:21 -0700 Subject: [8u60] backport RFR(XS): 8078113: 8011102 changes may cause incorrect results In-Reply-To: <53D0DB85-D31D-415C-A13D-1B87AC983E86@oracle.com> References: <5553B8A5.8040606@oracle.com> <53D0DB85-D31D-415C-A13D-1B87AC983E86@oracle.com> Message-ID: <5553BA75.6070109@oracle.com> Thanks. Vladimir On 5/13/15 1:53 PM, Igor Veresov wrote: > Looks fine. > > igor > >> On May 13, 2015, at 1:48 PM, Vladimir Kozlov wrote: >> >> 8u60 backport request. Changes were pushed into jdk9 last month, no problems were found since then. Changes are applied to 8u cleanly. >> >> https://bugs.openjdk.java.net/browse/JDK-8078113 >> >> jdk9 webrev: >> http://cr.openjdk.java.net/~kvn/8078113/webrev.00 >> jdk9 changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5be37a65b137 >> >> Thanks, >> Vladimir > From roland.westrelin at oracle.com Wed May 13 20:56:44 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Wed, 13 May 2015 22:56:44 +0200 Subject: [8u60] backport RFR(XS): 8078113: 8011102 changes may cause incorrect results In-Reply-To: <5553B8A5.8040606@oracle.com> References: <5553B8A5.8040606@oracle.com> Message-ID: <32554C1B-FE1A-4A53-8944-E9354500E063@oracle.com> Looks good to me. Roland. > On May 13, 2015, at 10:48 PM, Vladimir Kozlov wrote: > > 8u60 backport request. Changes were pushed into jdk9 last month, no problems were found since then. Changes are applied to 8u cleanly. 
> > https://bugs.openjdk.java.net/browse/JDK-8078113 > > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8078113/webrev.00 > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5be37a65b137 > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Wed May 13 20:59:07 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 13 May 2015 13:59:07 -0700 Subject: [8u60] backport RFR(XS): 8078113: 8011102 changes may cause incorrect results In-Reply-To: <32554C1B-FE1A-4A53-8944-E9354500E063@oracle.com> References: <5553B8A5.8040606@oracle.com> <32554C1B-FE1A-4A53-8944-E9354500E063@oracle.com> Message-ID: <5553BB1B.50201@oracle.com> Thank you, Roland Vladimir On 5/13/15 1:56 PM, Roland Westrelin wrote: > > Looks good to me. > > Roland. > >> On May 13, 2015, at 10:48 PM, Vladimir Kozlov wrote: >> >> 8u60 backport request. Changes were pushed into jdk9 last month, no problems were found since then. Changes are applied to 8u cleanly. >> >> https://bugs.openjdk.java.net/browse/JDK-8078113 >> >> jdk9 webrev: >> http://cr.openjdk.java.net/~kvn/8078113/webrev.00 >> jdk9 changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5be37a65b137 >> >> Thanks, >> Vladimir > From mark.reinhold at oracle.com Wed May 13 22:55:14 2015 From: mark.reinhold at oracle.com (mark.reinhold at oracle.com) Date: Wed, 13 May 2015 15:55:14 -0700 (PDT) Subject: JEP 250: Store Interned Strings in CDS Archives Message-ID: <20150513225514.9572262A9B@eggemoggin.niobe.net> New JEP Candidate: http://openjdk.java.net/jeps/250 - Mark From gerard.ziemski at oracle.com Thu May 14 18:15:55 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 14 May 2015 13:15:55 -0500 Subject: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments Message-ID: <5554E65B.801@oracle.com> hi all, This JEP addresses a potential security concern where a runtime flag with an invalid value can cause VM crash or open up a security hole. 
To address this issue we introduce a mechanism that allows specification of a valid range per flag that is then used to automatically validate a given flag's value every time it changes. Range values must be constant and cannot change. Optionally, a constraint can also be specified and applied every time a flag value changes for those flags whose valid value cannot be trivially checked by a simple min and max (ex. whether it's a power of 2, or bigger or smaller than some other flag that can also change). I have chosen to modify the table macros (ex. RUNTIME_FLAGS in globals.hpp) instead of using a more sophisticated solution, such as C++ templates, because even though macros were unfriendly when initially developing, once a solution was arrived at, subsequent additions to the tables of new ranges or constraints are trivial from the developer's point of view. (The initial development unfriendliness of macros was mitigated by using a preprocessor, which for those using a modern IDE like Xcode, is easily available from a menu). Using macros also allowed for more minimal code changes. The presented solution is based on expansion of macros using variadic functions and can be readily seen in runtime/commandLineFlagConstraintList.cpp and runtime/commandLineFlagRangeList.cpp. In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, there is a bunch of classes and methods that seem to beg for C++ templates to be used. I have tried, but when the compiler tries to generate code for both uintx and size_t, which happen to have the same underlying type (on BSD), it fails to compile overridden methods with the same type, but different names. If someone has a way of simplifying the new code via C++ templates, however, we can file a new enhancement request to address that. Constraints are factored out and implemented in runtime/commandLineFlagConstraintsRuntime.cpp and runtime/commandLineFlagConstraintsGC.cpp to make the team ownership more clear.
The compiler team might need to add their own runtime/commandLineFlagConstraintsCompiler.cpp file as needed later. This webrev represents only the initial range checking framework and only 100 or so flags that were ported from an existing ad hoc range checking code to this new mechanism. There are about 250 remaining flags that still need their ranges determined and ported over to this new mechanism and they are tracked by individual subtask issues assigned to each team (Compiler, GC and Runtime). I had to modify several existing tests to change the error message that they expected when the VM refuses to run, which was changed to provide uniform error messages. To help with testing and subtask efforts I have introduced a new runtime flag: PrintFlagsRanges: "Print VM flags and their ranges and exit VM" which in addition to the already existing flags "PrintFlagsInitial" and "PrintFlagsFinal" allows for thorough examination of the flags' values and their ranges. The code change builds and passes JPRT (-testset hotspot) and UTE (vm.quick.testlist) References: Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ note: due to "awk" limit of 50 pats the Frames diff is not available for "src/share/vm/runtime/arguments.cpp" JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 Compiler subtask: https://bugs.openjdk.java.net/browse/JDK-8078554 GC subtask: https://bugs.openjdk.java.net/browse/JDK-8078555 Runtime subtask: https://bugs.openjdk.java.net/browse/JDK-8078556 hgstat: src/cpu/ppc/vm/globals_ppc.hpp | 2 +- src/cpu/sparc/vm/globals_sparc.hpp | 2 +- src/cpu/x86/vm/globals_x86.hpp | 2 +- src/cpu/zero/vm/globals_zero.hpp | 3 +- src/os/aix/vm/globals_aix.hpp | 4 +- src/os/bsd/vm/globals_bsd.hpp | 29 +- src/os/linux/vm/globals_linux.hpp | 9 +- src/os/solaris/vm/globals_solaris.hpp | 4 +- src/os/windows/vm/globals_windows.hpp | 5 +- src/share/vm/c1/c1_globals.cpp | 4 +- src/share/vm/c1/c1_globals.hpp | 17 +- src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +-
src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- src/share/vm/opto/c2_globals.cpp | 12 +- src/share/vm/opto/c2_globals.hpp | 39 ++- src/share/vm/prims/whitebox.cpp | 12 +- src/share/vm/runtime/arguments.cpp | 711 +++++++++++++++++++++++++++------------------------------------ src/share/vm/runtime/arguments.hpp | 24 +- src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 226 ++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 57 +++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 50 ++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 39 +++ src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 ++++++ src/share/vm/runtime/globals.cpp | 681 ++++++++++++++++++++++++++++++++++++++++++++++++------------ src/share/vm/runtime/globals.hpp | 302 +++++++++++++++++++++----- src/share/vm/runtime/globals_extension.hpp | 101 +++++++- src/share/vm/runtime/init.cpp | 6 +- src/share/vm/runtime/os.hpp | 17 + src/share/vm/runtime/os_ext.hpp | 7 +- src/share/vm/runtime/thread.cpp | 6 + src/share/vm/services/attachListener.cpp | 4 +- src/share/vm/services/classLoadingService.cpp | 6 +- src/share/vm/services/diagnosticCommand.cpp | 3 +- src/share/vm/services/management.cpp | 6 +- src/share/vm/services/memoryService.cpp | 2 +- src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- src/share/vm/services/writeableFlags.hpp | 52 +--- test/compiler/c2/7200264/Test7200264.sh | 5 +- test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- test/gc/arguments/TestHeapFreeRatio.java | 23 +- test/gc/g1/TestStringDeduplicationTools.java | 8 +- test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- test/runtime/CompressedOops/ObjectAlignment.java | 9 +- test/runtime/contended/Options.java 
| 12 +- 47 files changed, 2572 insertions(+), 835 deletions(-) From gerard.ziemski at oracle.com Thu May 14 19:57:33 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 14 May 2015 14:57:33 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments Message-ID: <5554FE2D.6020405@oracle.com> hi all, We introduce a new mechanism that allows specification of a valid range per flag that is then used to automatically validate a given flag's value every time it changes. Range values must be constant and cannot change. Optionally, a constraint can also be specified and applied every time a flag value changes for those flags whose valid value cannot be trivially checked by a simple min and max (ex. whether it's a power of 2, or bigger or smaller than some other flag that can also change). I have chosen to modify the table macros (ex. RUNTIME_FLAGS in globals.hpp) instead of using a more sophisticated solution, such as C++ templates, because even though macros were unfriendly when initially developing, once a solution was arrived at, subsequent additions to the tables of new ranges or constraints are trivial from the developer's point of view. (The initial development unfriendliness of macros was mitigated by using a pre-processor, which for those using a modern IDE like Xcode, is easily available from a menu). Using macros also allowed for more minimal code changes. The presented solution is based on expansion of macros using variadic functions and can be readily seen in runtime/commandLineFlagConstraintList.cpp and runtime/commandLineFlagRangeList.cpp. In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, there is a bunch of classes and methods that seem to beg for C++ templates to be used. I have tried, but when the compiler tries to generate code for both uintx and size_t, which happen to have the same underlying type (on BSD), it fails to compile overridden methods with the same type, but different names.
If someone has a way of simplifying the new code via C++ templates, however, we can file a new enhancement request to address that. This webrev represents only the initial range checking framework and only 100 or so flags that were ported from an existing ad hoc range checking code to this new mechanism. There are about 250 remaining flags that still need their ranges determined and ported over to this new mechanism and they are tracked by individual subtasks. I had to modify several existing tests to change the error message that they expected when the VM refuses to run, which was changed to provide uniform error messages. To help with testing and subtask efforts I have introduced a new runtime flag: PrintFlagsRanges: "Print VM flags and their ranges and exit VM" which in addition to the already existing flags "PrintFlagsInitial" and "PrintFlagsFinal" allows for thorough examination of the flags' values and their ranges. The code change builds and passes JPRT (-testset hotspot) and UTE (vm.quick.testlist) References: Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ note: due to "awk" limit of 50 pats the Frames diff is not available for "src/share/vm/runtime/arguments.cpp" JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 Compiler subtask: https://bugs.openjdk.java.net/browse/JDK-8078554 GC subtask: https://bugs.openjdk.java.net/browse/JDK-8078555 Runtime subtask: https://bugs.openjdk.java.net/browse/JDK-8078556 hgstat: src/cpu/ppc/vm/globals_ppc.hpp | 2 +- src/cpu/sparc/vm/globals_sparc.hpp | 2 +- src/cpu/x86/vm/globals_x86.hpp | 2 +- src/cpu/zero/vm/globals_zero.hpp | 3 +- src/os/aix/vm/globals_aix.hpp | 4 +- src/os/bsd/vm/globals_bsd.hpp | 29 +- src/os/linux/vm/globals_linux.hpp | 9 +- src/os/solaris/vm/globals_solaris.hpp | 4 +- src/os/windows/vm/globals_windows.hpp | 5 +- src/share/vm/c1/c1_globals.cpp | 4 +- src/share/vm/c1/c1_globals.hpp | 17 +- src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- src/share/vm/gc_implementation/g1/g1_globals.hpp |
38 ++- src/share/vm/opto/c2_globals.cpp | 12 +- src/share/vm/opto/c2_globals.hpp | 39 ++- src/share/vm/prims/whitebox.cpp | 12 +- src/share/vm/runtime/arguments.cpp | 711 +++++++++++++++++++++++++++------------------------------------ src/share/vm/runtime/arguments.hpp | 24 +- src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 226 ++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 57 +++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 50 ++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 39 +++ src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 ++++++ src/share/vm/runtime/globals.cpp | 681 ++++++++++++++++++++++++++++++++++++++++++++++++------------ src/share/vm/runtime/globals.hpp | 302 +++++++++++++++++++++----- src/share/vm/runtime/globals_extension.hpp | 101 +++++++- src/share/vm/runtime/init.cpp | 6 +- src/share/vm/runtime/os.hpp | 17 + src/share/vm/runtime/os_ext.hpp | 7 +- src/share/vm/runtime/thread.cpp | 6 + src/share/vm/services/attachListener.cpp | 4 +- src/share/vm/services/classLoadingService.cpp | 6 +- src/share/vm/services/diagnosticCommand.cpp | 3 +- src/share/vm/services/management.cpp | 6 +- src/share/vm/services/memoryService.cpp | 2 +- src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- src/share/vm/services/writeableFlags.hpp | 52 +--- test/compiler/c2/7200264/Test7200264.sh | 5 +- test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- test/gc/arguments/TestHeapFreeRatio.java | 23 +- test/gc/g1/TestStringDeduplicationTools.java | 8 +- test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- test/runtime/CompressedOops/ObjectAlignment.java | 9 +- test/runtime/contended/Options.java | 12 +- 47 files changed, 2572 insertions(+), 835 
deletions(-) From serguei.spitsyn at oracle.com Fri May 15 00:15:35 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Thu, 14 May 2015 17:15:35 -0700 Subject: RFR (XS) 8079644: memory stomping error with ResourceManagement and TestAgentStress.java Message-ID: <55553AA7.6000901@oracle.com> Please, review the jdk 9 fix for: https://bugs.openjdk.java.net/browse/JDK-8079644 9 hotspot webrev: http://cr.openjdk.java.net/~sspitsyn/webrevs/2015/hotspot/8079644-JVMTI-memstomp.1 Summary: The cached class file structure must be deallocated instead of the cached class file bytes. It is because the cached class file bytes array is a part of the cached class file structure. Testing in progress: In progress: nsk.redefine.testlist, JTREG com/sun/jdi tests, ad-hoc closed/serviceability/resource/TestAgentStress.java test from the bug report Thanks, Serguei From coleen.phillimore at oracle.com Fri May 15 20:27:36 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 15 May 2015 16:27:36 -0400 Subject: RFR (XS) 8079644: memory stomping error with ResourceManagement and TestAgentStress.java In-Reply-To: <55553AA7.6000901@oracle.com> References: <55553AA7.6000901@oracle.com> Message-ID: <555656B8.3020304@oracle.com> Serguei, This looks good. Coleen On 5/14/15, 8:15 PM, serguei.spitsyn at oracle.com wrote: > Please, review the jdk 9 fix for: > https://bugs.openjdk.java.net/browse/JDK-8079644 > > > 9 hotspot webrev: > http://cr.openjdk.java.net/~sspitsyn/webrevs/2015/hotspot/8079644-JVMTI-memstomp.1 > > > > Summary: > > The cached class file structure must be deallocated instead of the > cached class file bytes. > It is because the cached class file bytes array is a part of the > cached class file structure.
> > > Testing in progress: > In progress: nsk.redefine.testlist, JTREG com/sun/jdi tests, > ad-hog > closed/serviceability/resource/TestAgentStress.java test from the bug > report > > > Thanks, > Serguei From serguei.spitsyn at oracle.com Fri May 15 20:32:26 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Fri, 15 May 2015 13:32:26 -0700 Subject: RFR (XS) 8079644: memory stomping error with ResourceManagement and TestAgentStress.java In-Reply-To: <555656B8.3020304@oracle.com> References: <55553AA7.6000901@oracle.com> <555656B8.3020304@oracle.com> Message-ID: <555657DA.6020801@oracle.com> Thanks a lot, Coleen! Serguei On 5/15/15 1:27 PM, Coleen Phillimore wrote: > > Serguei, > This looks good. > Coleen > > On 5/14/15, 8:15 PM, serguei.spitsyn at oracle.com wrote: >> Please, review the jdk 9 fix for: >> https://bugs.openjdk.java.net/browse/JDK-8079644 >> >> >> 9 hotspot webrev: >> http://cr.openjdk.java.net/~sspitsyn/webrevs/2015/hotspot/8079644-JVMTI-memstomp.1 >> >> >> >> Summary: >> >> The cached class file structure must be deallocated instead of the >> cached class file bytes. >> It is because the cached class file bytes array is a part of the >> cached class file structure. >> >> >> Testing in progress: >> In progress: nsk.redefine.testlist, JTREG com/sun/jdi tests, >> ad-hog >> closed/serviceability/resource/TestAgentStress.java test from the bug >> report >> >> >> Thanks, >> Serguei > From coleen.phillimore at oracle.com Fri May 15 20:49:04 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 15 May 2015 16:49:04 -0400 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5554FE2D.6020405@oracle.com> References: <5554FE2D.6020405@oracle.com> Message-ID: <55565BC0.4000308@oracle.com> Gerard, This is significant work! The macro re-expansions are daunting but the simpler user interface makes it worth while. 
Someone better at macros should review this in more detail to see if there's any gotchas, especially wrt to C++11 and beyond. Hopefully there aren't any surprises. I think there's some things people should know about globals.hpp: *+ /* NB: The default value of UseLinuxPosixThreadCPUClocks may be */ \* *+ /* overridden in Arguments::parse_each_vm_init_arg. */ \* * product(bool, UseLinuxPosixThreadCPUClocks, true, \* * "enable fast Linux Posix clocks where available") \* * /* NB: The default value of UseLinuxPosixThreadCPUClocks may be \* * overridden in Arguments::parse_each_vm_init_arg. */ \* It looks like if you have a comment for an option, do you need to have it above the option, or is this just nicer? In http://cr.openjdk.java.net/~gziemski/8059557_rev0/src/share/vm/runtime/globals.cpp.udiff.html This should be out->print_cr() *+ if (printRanges == false) {* * out->print_cr("[Global flags]");* *+ } else {* *+ tty->print_cr("[Global flags ranges]");* *+ }* *+* There's some ifdef debugging code left in but I think that's ok for now, because it's not much and may be helpful but not helpful enough in the long run to add a XX:OnlyPrintProductFlags option. The name checkAllRangesAndConstraints should be check_all_ranges_and_constraints as per the coding standard, but the constraint functions in http://cr.openjdk.java.net/~gziemski/8059557_rev0/src/share/vm/runtime/commandLineFlagConstraintsGC.hpp.html seem okay in mixed case so you can find them when you're grepping for the flag. I didn't review the tests yet. If someone else reviews them I'd be glad. This looks very good with these minor suggestions! thanks, Coleen On 5/14/15, 3:57 PM, Gerard Ziemski wrote: > hi all, > > We introduce a new mechanism that allows specification of a valid > range per flag that is then used to automatically validate given > flag's value every time it changes. Ranges values must be constant and > can not change. 
Optionally, a constraint can also be specified and > applied every time a flag value changes for those flags whose valid > value can not be trivially checked by a simple min and max (ex. > whether it's power of 2, or bigger or smaller than some other flag > that can also change) > > I have chosen to modify the table macros (ex. RUNTIME_FLAGS in > globals.hpp) instead of using a more sophisticated solution, such as > C++ templates, because even though macros were unfriendly when > initially developing, once a solution was arrived at, subsequent > additions to the tables of new ranges, or constraint are trivial from > developer's point of view. (The intial development unfriendliness of > macros was mitigated by using a pre-processor, which for those using a > modern IDE like Xcode, is easily available from a menu). Using macros > also allowed for more minimal code changes. > > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is bunch of classes and methods that seems to beg for C++ > template to be used. I have tried, but when the compiler tries to > generate code for both uintx and size_t, which happen to have the same > underlying type (on BSD), it fails to compile overridden methods with > same type, but different name. If someone has a way of simplifying the > new code via C++ templates, however, we can file a new enhancement > request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining > flags that still need their ranges determined and ported over to this > new mechansim and they are tracked by individual subtasks. 
> > I had to modify several existing tests to change the error message > that they expected when VM refuses to run, which was changed to > provide uniform error messages. > > To help with testing and subtask efforts I have introduced a new > runtime flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" > > which in addition to the already existing flags: "PrintFlagsInitial" > and "PrintFlagsFinal" allow for thorough examination of the flags > values and their ranges. > > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) > > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 > > > hgstat: > > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 4 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 711 > +++++++++++++++++++++++++++------------------------------------ > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 > 
+++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 226 > ++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 57 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 50 ++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 39 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 > +++++++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 ++++++ > src/share/vm/runtime/globals.cpp | 681 > ++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 302 > +++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 > +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 > ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/g1/TestStringDeduplicationTools.java | 8 +- > test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 12 +- > 47 files changed, 2572 insertions(+), 835 deletions(-) > > > > From christopherberner at gmail.com Sat May 16 00:38:53 2015 From: christopherberner at gmail.com (Christopher Berner) Date: Fri, 15 May 2015 17:38:53 -0700 Subject: Very long safepoint pauses for RevokeBias Message-ID: I work on the Presto 
project, which is a distributed SQL engine, and intermittently (roughly once an hour) I see very long application stopped times (as reported by -XX:+PrintGCApplicationStoppedTime). I enabled safepoint statistics, and see that the pause seems to be coming from a RevokeBias safepoint. Any suggestions as to how I can debug this? I already tried adding -XX:+PerfDisableSharedMem, in case this was related to https://bugs.openjdk.java.net/browse/JDK-8076103 See below for safepoint statistics: vmop [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count 2528.893: RevokeBias [ 872 0 6 ] [ 0 12826 13142 6476 8177 ] 0 From david.holmes at oracle.com Sun May 17 20:54:00 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 18 May 2015 06:54:00 +1000 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: References: Message-ID: <5558FFE8.4090404@oracle.com> Hi Christopher, On 16/05/2015 10:38 AM, Christopher Berner wrote: > I work on the Presto project, which is a distributed SQL engine, and > intermittently (roughly once an hour) I see very long application stopped > times (as reported by > -XX:+PrintGCApplicationStoppedTime). I enabled safepoint statistics, and > see that the pause seems to be coming from a RevokeBias safepoint. > > Any suggestions as to how I can debug this? I already tried adding > -XX:+PerfDisableSharedMem, in case this was related to > https://bugs.openjdk.java.net/browse/JDK-8076103 > > See below for safepoint statistics: > > vmop [threads: total initially_running > wait_to_block] [time: spin block sync cleanup vmop] page_trap_count > > 2528.893: RevokeBias [ 872 0 > 6 ] [ 0 12826 13142 6476 8177 ] 0 Your email seems to be truncated? But just to be sure, does the application run okay with biased-locking disabled? 
David From david.holmes at oracle.com Mon May 18 05:28:00 2015 From: david.holmes at oracle.com (David Holmes) Date: Mon, 18 May 2015 15:28:00 +1000 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5554FE2D.6020405@oracle.com> References: <5554FE2D.6020405@oracle.com> Message-ID: <55597860.2050403@oracle.com> Hi Gerard, I did a lengthy but still preliminary walk through of this code. We need a lot of eyes on this. :) On 15/05/2015 5:57 AM, Gerard Ziemski wrote: > hi all, > > We introduce a new mechanism that allows specification of a valid range > per flag that is then used to automatically validate given flag's value > every time it changes. Ranges values must be constant and can not > change. Optionally, a constraint can also be specified and applied every > time a flag value changes for those flags whose valid value can not be > trivially checked by a simple min and max (ex. whether it's power of 2, > or bigger or smaller than some other flag that can also change) > > I have chosen to modify the table macros (ex. RUNTIME_FLAGS in > globals.hpp) instead of using a more sophisticated solution, such as C++ > templates, because even though macros were unfriendly when initially > developing, once a solution was arrived at, subsequent additions to the > tables of new ranges, or constraint are trivial from developer's point > of view. (The intial development unfriendliness of macros was mitigated > by using a pre-processor, which for those using a modern IDE like Xcode, > is easily available from a menu). Using macros also allowed for more > minimal code changes. > > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is bunch of classes and methods that seems to beg for C++ template > to be used. 
I have tried, but when the compiler tries to generate code > for both uintx and size_t, which happen to have the same underlying type > (on BSD), it fails to compile overridden methods with same type, but > different name. If someone has a way of simplifying the new code via C++ > templates, however, we can file a new enhancement request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining flags > that still need their ranges determined and ported over to this new > mechansim and they are tracked by individual subtasks. > > I had to modify several existing tests to change the error message that > they expected when VM refuses to run, which was changed to provide > uniform error messages. > > To help with testing and subtask efforts I have introduced a new runtime > flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" > > which in addition to the already existing flags: "PrintFlagsInitial" and > "PrintFlagsFinal" allow for thorough examination of the flags values and > their ranges. > > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) That was a lot to digest, though many of the changes are syntactic adjustments (check for success versus zero etc). But overall this looks really good (macro magic notwithstanding :) ). I like the way you handled the constraints, and it's nice to see the constraint functions in with the related code rather than being expressed in a checking function in arguments.cpp (though see comments for cases where this does not occur). A few comments: For constructs like: void emit_range_bool(const char* name) { (void)name; /* NOP */ } I seem to recall that this was not sufficient to avoid "unused" warnings with some compilers - which is presumably why you did it? 
--- In src/share/vm/runtime/globals.hpp it says: see "checkRanges" function in arguments.cpp but there is no such function in arguments.cpp --- src/share/vm/runtime/arguments.cpp The set_object_alignment() functions seems to have some redundant constraint assertions. As does verify_object_alignment() - seems to me that everything in verify_object_alignment should either be in the constraint function for ObjectAlignmentInBytes or one for SurvivorAlignmentInBytes - though the combination of verification and the actual setting of SurvivorAlignmentInBytes may be a problem in the new architecture. If you can't get rid of verify_object_alignment() I'd be tempted to not process it the new way at all, as splitting the constraint checking just leads to confusion IMO. The changes to test/runtime/contended/Options.java suggest to me that a constraint is missing on ContendedPaddingWidth - that it is a multiple of 8 (BytesPerLong). That is (still) checked in arguments.cpp (and given it is still checked there is no need to have removed it from the test). --- Some of the test changes, such as: test/gc/g1/TestStringDeduplicationTools.java seem to be losing some of what they test. Not only is the test checking the value is detected as erroneous, but it also detects that the user is told in what way it is erroneous. The updated test doesn't validate that part of the argument processing logic ---- There is some inconsistency in the test changes, sometimes you use: shouldContain("outside the allowed range") and sometimes: shouldContain("is outside the allowed range") That's all for now. 
Thanks, David ----- > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 > > > hgstat: > > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 4 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 711 > +++++++++++++++++++++++++++------------------------------------ > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 > +++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 226 > ++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 57 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 50 ++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 39 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 > +++++++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 ++++++ > src/share/vm/runtime/globals.cpp | 681 > 
++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 302 > +++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 > +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 > ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/g1/TestStringDeduplicationTools.java | 8 +- > test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 12 +- > 47 files changed, 2572 insertions(+), 835 deletions(-) > > > > From staffan.larsen at oracle.com Mon May 18 07:24:48 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 18 May 2015 09:24:48 +0200 Subject: RFR (XS) 8079644: memory stomping error with ResourceManagement and TestAgentStress.java In-Reply-To: <55553AA7.6000901@oracle.com> References: <55553AA7.6000901@oracle.com> Message-ID: <464DD67C-11FD-46CA-A5EA-568A4601EA46@oracle.com> Looks good! Thanks, /Staffan > On 15 maj 2015, at 02:15, serguei.spitsyn at oracle.com wrote: > > Please, review the jdk 9 fix for: > https://bugs.openjdk.java.net/browse/JDK-8079644 > > > 9 hotspot webrev: > http://cr.openjdk.java.net/~sspitsyn/webrevs/2015/hotspot/8079644-JVMTI-memstomp.1 > > > Summary: > > The cached class file structure must be deallocated instead of the cached class file bytes. 
> It is because the cached class file bytes array is a part of the cached class file structure. > > > Testing in progress: > In progress: nsk.redefine.testlist, JTREG com/sun/jdi tests, > ad-hoc closed/serviceability/resource/TestAgentStress.java test from the bug report > > > Thanks, > Serguei From david.simms at oracle.com Mon May 18 09:07:27 2015 From: david.simms at oracle.com (David Simms) Date: Mon, 18 May 2015 11:07:27 +0200 Subject: RFR JDK-8079466: JNI Specification Update and Clean-up Message-ID: <5559ABCF.7040009@oracle.com> Greetings, Posting this JNI Specification docs clean-up for public review/comment... JDK Bug: https://bugs.openjdk.java.net/browse/JDK-8079466 Web review: http://cr.openjdk.java.net/~dsimms/8079466/rev0/ Original Document for HTML comparison: http://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/jniTOC.html *** Summary of changes *** Wholly confined to documentation changes, no code modifications made. Since there were a number of conflicts with previous doc change review, I have incorporated all current patches for one push: ------------------------------------------------------------------------------------------ JDK-8051947 JNI spec for ExceptionDescribe contradicts hotspot behavior - Added text explaining pending exception is cleared as a side effect JDK-4907359 JNI spec should describe functions more strictly - Made the split between function definitions more obvious (hr) - Added a general note on OOM to the beginning of chapter 4: "Functions whose definition may both return NULL and throw an exception on error, may choose only to return NULL to indicate an error, but not throw any exception. For example, a JNI implementation may consider an "out of memory" condition temporary, and may not wish to throw an OutOfMemoryError since this would appear fatal (JDK API java.lang.Error documentation: "indicates serious problems that a reasonable application should not try to catch")."
- Every function needs "Parameters", "Returns" and "Throws" documented (chapters 4 & 5). -- Have documented parameters as must not be "null" when appropriate (although there are cases where the HotSpot reference implementation does not crash): JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was incomplete - Previously reviewed JDK-8034923 JNI: static linking assertions specs are incomplete and are in the wrong section of spec - Previously reviewed JDK-6462398 jni spec should specify which characters are unicode-escaped when mangling - Previously reviewed JDK-6590839 JNI Spec should point out Java objects created in JNI using AllocObject are not finalized - Previously reviewed JDK-8039184 JNI Spec missing documentation on calling default methods - Previously reviewed JDK-6616502 JNI specification should discuss multiple invocations of DetachCurrentThread -Previously reviewed JDK-6681965 The documentation for GetStringChars and GetStringUTFChars is unclear -Previously reviewed ------------------------------------------------------------------------------------------ Cheers /David Simms From dmitry.dmitriev at oracle.com Mon May 18 12:55:46 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 18 May 2015 15:55:46 +0300 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5554FE2D.6020405@oracle.com> References: <5554FE2D.6020405@oracle.com> Message-ID: <5559E152.50105@oracle.com> Hi Gerard, Can you please correct format string for jio_fprintf function in the following functions(all from src/share/vm/runtime/commandLineFlagRangeList.cpp): Flag::Error check_intx(intx value, bool verbose = true) Portion of format string from "intx %s = %ld is outside ..." to "intx %s = "INTX_FORMAT" is outside ..." Flag::Error check_uintx(uintx value, bool verbose = true) Portion of format string from "uintx %s = %lu is outside ..." to "uintx %s = "UINTX_FORMAT" is outside ..." 
Flag::Error check_uint64_t(uint64_t value, bool verbose = true) Portion of format string from "uint64_t %s = %lu is outside ..." to "uint64_t %s = "UINT64_FORMAT" is outside ..." Thank you, Dmitry On 14.05.2015 22:57, Gerard Ziemski wrote: > hi all, > > We introduce a new mechanism that allows specification of a valid > range per flag that is then used to automatically validate a given > flag's value every time it changes. Range values must be constant and > can not change. Optionally, a constraint can also be specified and > applied every time a flag value changes for those flags whose valid > value can not be trivially checked by a simple min and max (ex. > whether it's a power of 2, or bigger or smaller than some other flag > that can also change) > > I have chosen to modify the table macros (ex. RUNTIME_FLAGS in > globals.hpp) instead of using a more sophisticated solution, such as > C++ templates, because even though macros were unfriendly when > initially developing, once a solution was arrived at, subsequent > additions to the tables of new ranges, or constraint are trivial from > developer's point of view. (The initial development unfriendliness of > macros was mitigated by using a pre-processor, which for those using a > modern IDE like Xcode, is easily available from a menu). Using macros > also allowed for more minimal code changes. > > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is a bunch of classes and methods that seem to beg for C++ > templates to be used. I have tried, but when the compiler tries to > generate code for both uintx and size_t, which happen to have the same > underlying type (on BSD), it fails to compile overridden methods with > the same type, but different name.
If someone has a way of simplifying the > new code via C++ templates, however, we can file a new enhancement > request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining > flags that still need their ranges determined and ported over to this > new mechanism and they are tracked by individual subtasks. > > I had to modify several existing tests to change the error message > that they expected when the VM refuses to run, which was changed to > provide uniform error messages. > > To help with testing and subtask efforts I have introduced a new > runtime flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" > > which in addition to the already existing flags: "PrintFlagsInitial" > and "PrintFlagsFinal" allows for thorough examination of the flag > values and their ranges. > > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) > > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask: https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask: https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask: https://bugs.openjdk.java.net/browse/JDK-8078556 > > > hgstat: > > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 4 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > 
src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 711 > +++++++++++++++++++++++++++------------------------------------ > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 > +++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 226 > ++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 57 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 50 ++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 39 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 > +++++++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 ++++++ > src/share/vm/runtime/globals.cpp | 681 > ++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 302 > +++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 > +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 > ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/g1/TestStringDeduplicationTools.java | 8 +- > 
test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 12 +- > 47 files changed, 2572 insertions(+), 835 deletions(-) > > > > From edward.nevill at linaro.org Mon May 18 13:43:52 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Mon, 18 May 2015 14:43:52 +0100 Subject: RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails Message-ID: Hi, 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails with the error java.lang.IllegalArgumentException: Bad arguments This is due to it returning the incorrect length from generate_cipherBlockChaining_encryptAESCrypt This is because it saves the length in the wrong register (rscratch1) instead of rscratch2. Webrev at http://cr.openjdk.java.net/~enevill/8080586/webrev.00 Please review and let me know if it is OK to push, Thanks, Ed. From aph at redhat.com Mon May 18 14:00:12 2015 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 May 2015 15:00:12 +0100 Subject: [aarch64-port-dev ] RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails In-Reply-To: References: Message-ID: <5559F06C.3030808@redhat.com> On 05/18/2015 02:43 PM, Edward Nevill wrote: > Webrev at > > http://cr.openjdk.java.net/~enevill/8080586/webrev.00 > > Please review and let me know if it is OK to push, Yes, I'm sure that's right, but I am not a reviewer... Thanks, Andrew. From per.liden at oracle.com Mon May 18 14:32:36 2015 From: per.liden at oracle.com (Per Liden) Date: Mon, 18 May 2015 16:32:36 +0200 Subject: RFR: 8080581: Align SA with new GC directory structure Message-ID: <5559F804.3070908@oracle.com> Hi, This is a follow-up patch to the GC directory restructure change, which aligns the remaining part of the SA with the new GC structure. The patch moves some GC-related files into new locations and updates package and import lines.
Webrev: http://cr.openjdk.java.net/~pliden/8080581/webrev.0/ Bug: https://bugs.openjdk.java.net/browse/JDK-8080581 cheers, /Per From dmitry.samersoff at oracle.com Mon May 18 14:55:19 2015 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Mon, 18 May 2015 17:55:19 +0300 Subject: RFR: 8080581: Align SA with new GC directory structure In-Reply-To: <5559F804.3070908@oracle.com> References: <5559F804.3070908@oracle.com> Message-ID: <5559FD57.2040802@oracle.com> Looks good for me. On 2015-05-18 17:32, Per Liden wrote: > Hi, > > This is a follow up patch to the GC directory restructure change, which > aligns the remaining part of the SA with the new GC structure. > > The patch moves some GC-related files into new locations and updates > package and import lines. > > Webrev: http://cr.openjdk.java.net/~pliden/8080581/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8080581 > > cheers, > /Per -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From aph at redhat.com Mon May 18 14:56:49 2015 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 May 2015 15:56:49 +0100 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 Message-ID: <5559FDB1.3020808@redhat.com> Fixes all the mathexact tests. http://cr.openjdk.java.net/~aph/8080600/ Andrew. From roland.westrelin at oracle.com Mon May 18 15:01:39 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 18 May 2015 17:01:39 +0200 Subject: RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails In-Reply-To: References: Message-ID: <756F1DD7-221B-4549-A70A-D30125C26154@oracle.com> > http://cr.openjdk.java.net/~enevill/8080586/webrev.00 That looks good to me. Roland. 
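The register mix-up behind the 8080586 fix can be reduced to a toy model. The register names below mirror the discussion, but this is not the actual aarch64 stub code; the struct and functions are purely illustrative. The stub's epilogue reads the processed length from one scratch register, so saving the length into the other one returns a bogus value and produces the "Bad arguments" failure the test reported.

```cpp
#include <cassert>

// Toy model of the bug class -- not the actual HotSpot aarch64 stub.
// The CBC encrypt stub must return the processed length; the epilogue
// below reads rscratch2, so the length has to be saved there.
struct Regs { long rscratch1 = 0; long rscratch2 = 0; long r0 = 0; };

long buggy_stub(Regs r, long len) {
  r.rscratch1 = len;   // length saved in the wrong register...
  r.r0 = r.rscratch2;  // ...epilogue returns the other, stale one
  return r.r0;
}

long fixed_stub(Regs r, long len) {
  r.rscratch2 = len;   // save the length where the epilogue reads it
  r.r0 = r.rscratch2;
  return r.r0;
}
```

The one-line nature of the fix (change which scratch register holds the length) is why the review converged so quickly.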
From roland.westrelin at oracle.com Mon May 18 15:04:24 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Mon, 18 May 2015 17:04:24 +0200 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <5559FDB1.3020808@redhat.com> References: <5559FDB1.3020808@redhat.com> Message-ID: <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> > http://cr.openjdk.java.net/~aph/8080600/ That looks good to me. Roland. From per.liden at oracle.com Mon May 18 15:08:30 2015 From: per.liden at oracle.com (Per Liden) Date: Mon, 18 May 2015 17:08:30 +0200 Subject: RFR: 8080581: Align SA with new GC directory structure In-Reply-To: <5559FD57.2040802@oracle.com> References: <5559F804.3070908@oracle.com> <5559FD57.2040802@oracle.com> Message-ID: <555A006E.1030607@oracle.com> Thanks for reviewing Dmitry. /Per On 2015-05-18 16:55, Dmitry Samersoff wrote: > Looks good for me. > > On 2015-05-18 17:32, Per Liden wrote: >> Hi, >> >> This is a follow up patch to the GC directory restructure change, which >> aligns the remaining part of the SA with the new GC structure. >> >> The patch moves some GC-related files into new locations and updates >> package and import lines. >> >> Webrev: http://cr.openjdk.java.net/~pliden/8080581/webrev.0/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8080581 >> >> cheers, >> /Per > > From aph at redhat.com Mon May 18 15:10:24 2015 From: aph at redhat.com (Andrew Haley) Date: Mon, 18 May 2015 16:10:24 +0100 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> Message-ID: <555A00E0.8020309@redhat.com> On 05/18/2015 04:04 PM, Roland Westrelin wrote: >> http://cr.openjdk.java.net/~aph/8080600/ > > That looks good to me. OK, thanks. I can't push it to shared code. Andrew. 
From dmitry.dmitriev at oracle.com Mon May 18 15:48:08 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Mon, 18 May 2015 18:48:08 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" Message-ID: <555A09B8.7010402@oracle.com> Hello all, Please review the test set for verifying the functionality implemented by JEP 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). The review request for this JEP can be found here: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html I created 3 tests for verifying options with ranges. The tests mostly rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this file contains functions to get the options with ranges as a list (by parsing the output of the new option "-XX:+PrintFlagsRanges"), run command-line tests for a list of options, and more. The actual test code is contained in the common/optionsvalidation/JVMOption.java file - the testCommandLine(), testDynamic(), testJcmd() and testAttach() methods. The common/optionsvalidation/IntJVMOption.java and common/optionsvalidation/DoubleJVMOption.java source files contain classes derived from the JVMOption class for integer and double JVM options correspondingly. Here is a description of the tests: 1) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java This test gets all options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by starting Java and passing the options on the command line with valid and invalid values. Currently it verifies about 106 options which have ranges. Invalid values are values which are out of range. The test uses the values "min-1" and "max+1". In this case Java should always exit with code 1 and print an error message about the out-of-range value (with one exception: if the option is unsigned and a negative value is passed, then the out-of-range error message is not printed because the error occurred earlier). Valid values are values in range, e.g. min & max, and also several additional values.
In this case Java should successfully exit (exit code 0) or exit with error code 1 for other reasons (low memory with a certain option value, etc.). In any case, for values in range Java should not print messages about an out-of-range value. In any case Java should not crash. This test is excluded from JPRT because it takes a long time to execute and also fails - some options with a value in the valid range cause Java to crash (bugs are submitted). 2) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java This test gets all writeable options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by dynamically changing their values to valid and invalid values. Three methods are used for that: the DynamicVMOption isValidValue and isInvalidValue methods, jcmd, and the attach method. Currently 3 writeable options with ranges are verified by this test. This test passes in JPRT. 3) hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java This test verifies the output of jcmd when an out-of-range value is set to a writeable option or the value violates an option constraint. Also, this test verifies that jcmd does not write an error message to the target process. This test passes in JPRT. I did not write special tests for constraints for this JEP because tests for that already exist (e.g. test/runtime/CompressedOops/ObjectAlignment.java for ObjectAlignmentInBytes or hotspot/test/gc/arguments/TestHeapFreeRatio.java for MinHeapFreeRatio/MaxHeapFreeRatio).
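The boundary scheme described above - accept min and max, reject min-1 and max+1 - can be sketched as a small helper. This is illustrative only, not part of the actual test library (which is written in Java); the names are invented for the example.

```cpp
#include <cassert>
#include <vector>

// Illustrative helper: for a flag with an inclusive range [min, max],
// the test probes the two nearest valid and two nearest invalid values.
struct Probe { long value; bool expect_valid; };

std::vector<Probe> boundary_probes(long min, long max) {
  return { {min - 1, false}, {min, true}, {max, true}, {max + 1, false} };
}

// The oracle: a value is valid iff it lies inside the inclusive range.
bool in_range(long v, long min, long max) {
  return v >= min && v <= max;
}
```

Probing exactly at the range edges is what catches off-by-one errors in the generated range checks, which a random in-range or out-of-range value would usually miss.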
Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 Thanks, Dmitry From christopherberner at gmail.com Mon May 18 16:24:52 2015 From: christopherberner at gmail.com (Christopher Berner) Date: Mon, 18 May 2015 09:24:52 -0700 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: <5558FFE8.4090404@oracle.com> References: <5558FFE8.4090404@oracle.com> Message-ID: Don't think it got truncated, but I only included the first line of the log about the safepoint statistics. Turning off biased locks seems to have fixed those pauses, thanks! Just to help me understand, what circumstances would cause biased locking to induce 28sec pauses? Is that because a thread was holding the lock for 28sec? On Sun, May 17, 2015 at 1:54 PM, David Holmes wrote: > Hi Christopher, > > > On 16/05/2015 10:38 AM, Christopher Berner wrote: > >> I work on the Presto project, which is a distributed SQL engine, and >> intermittently (roughly once an hour) I see very long application stopped >> times (as reported by >> -XX:+PrintGCApplicationStoppedTime). I enabled safepoint statistics, >> and >> see that the pause seems to be coming from a RevokeBias safepoint. >> >> Any suggestions as to how I can debug this? I already tried adding >> -XX:+PerfDisableSharedMem, in case this was related to >> https://bugs.openjdk.java.net/browse/JDK-8076103 >> >> See below for safepoint statistics: >> >> vmop [threads: total initially_running >> wait_to_block] [time: spin block sync cleanup vmop] page_trap_count >> >> 2528.893: RevokeBias [ 872 0 >> 6 ] [ 0 12826 13142 6476 8177 ] 0 >> > > Your email seems to be truncated? > > But just to be sure, does the application run okay with biased-locking > disabled?
> > David > > From serguei.spitsyn at oracle.com Mon May 18 16:59:08 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Mon, 18 May 2015 09:59:08 -0700 Subject: RFR (XS) 8079644: memory stomping error with ResourceManagement and TestAgentStress.java In-Reply-To: <464DD67C-11FD-46CA-A5EA-568A4601EA46@oracle.com> References: <55553AA7.6000901@oracle.com> <464DD67C-11FD-46CA-A5EA-568A4601EA46@oracle.com> Message-ID: <555A1A5C.1020005@oracle.com> Thanks a lot, Staffan! Serguei On 5/18/15 12:24 AM, Staffan Larsen wrote: > Looks good! > > Thanks, > /Staffan > >> On 15 maj 2015, at 02:15, serguei.spitsyn at oracle.com wrote: >> >> Please, review the jdk 9 fix for: >> https://bugs.openjdk.java.net/browse/JDK-8079644 >> >> >> 9 hotspot webrev: >> http://cr.openjdk.java.net/~sspitsyn/webrevs/2015/hotspot/8079644-JVMTI-memstomp.1 >> >> >> Summary: >> >> The cached class file structure must be deallocated instead of the cached class file bytes. >> It is because the cached class file bytes array is a part of the cached class file structure. >> >> >> Testing in progress: >> In progress: nsk.redefine.testlist, JTREG com/sun/jdi tests, >> ad-hoc closed/serviceability/resource/TestAgentStress.java test from the bug report >> >> >> Thanks, >> Serguei From vladimir.x.ivanov at oracle.com Mon May 18 17:32:24 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 18 May 2015 20:32:24 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <5536975F.60108@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> Message-ID: <555A2228.3080908@oracle.com> Here's the updated version: http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 Moved ConstantPool::_resolved_references to mirror class instance.
Fixed a couple of issues in CDS and JVMTI (class redefinition) caused by this change. I had to hard code Class::resolved_references offset since it is used in template interpreter which is generated earlier than j.l.Class is loaded during VM bootstrap. Testing: hotspot/test, vm testbase (in progress) Best regards, Vladimir Ivanov On 4/21/15 9:30 PM, Vladimir Ivanov wrote: > Coleen, Chris, > > I'll proceed with moving ConstantPool::_resolved_references to j.l.Class > instance then. > > Thanks for the feedback. > > Best regards, > Vladimir Ivanov > > On 4/21/15 3:22 AM, Christian Thalinger wrote: >> >>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>> > >>> wrote: >>> >>> >>> Vladimir, >>> >>> I think that changing the format of the heap dump isn't a good idea >>> either. >>> >>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>> (sorry for really late response; just got enough time to return to >>>> the bug) >>> >>> I'd forgotten about it! >>>> >>>> Coleen, Staffan, >>>> >>>> Thanks a lot for the feedback! >>>> >>>> After thinking about the fix more, I don't think that using reserved >>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>> thing to do. IMO the change causes too much work for the users (heap >>>> dump analysis tools). >>>> >>>> It needs specification update and then heap dump analyzers should be >>>> updated as well. >>>> >>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>> >>>> - artificial class static field in the dump ("" >>>> + optional id to guarantee unique name); >>>> >>>> - add j.l.Class::_resolved_references field; >>>> Not sure how much overhead (mostly reads from bytecode) the move >>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>> it for now. >>> >>> I really like this second approach, so much so that I had a prototype >>> for moving resolved_references directly to the j.l.Class object about >>> a year ago. 
I couldn't find any benefit other than consolidating oops >>> so the GC would have less work to do. If the resolved_references are >>> moved to j.l.C instance, they can not be jobjects and the >>> ClassLoaderData::_handles area wouldn't have to contain them (but >>> there are other things that could go there so don't delete the >>> _handles field yet). >>> >>> The change I had was relatively simple. The only annoying part was >>> that getting to the resolved references has to be in macroAssembler >>> and do: >>> >>> go through method->cpCache->constants->instanceKlass->java_mirror() >>> rather than >>> method->cpCache->constants->resolved_references->jmethod indirection >>> >>> I think it only affects the interpreter so the extra indirection >>> wouldn't affect performance, so don't duplicate it! You don't want to >>> increase space used by j.l.C without taking it out somewhere else! >> >> I like this approach. Can we do this? >> >>> >>>> >>>> What do you think about that? >>> >>> Is this bug worth doing this? I don't know but I'd really like it. >>> >>> Coleen >>> >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>> This looks like a good approach. However, there are a couple of more >>>>> places that need to be updated. >>>>> >>>>> The hprof binary format is described in >>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>> in the same directory. >>>>> >>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>> it needs to be possible to find the resolved_references array via the >>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>> looked.
>>>>> >>>>> Finally, the Serviceability Agent implements yet another hprof >>>>> binary dumper in >>>>> hotspot/agent/src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>> >>>>> which also needs to write this reference. >>>>> >>>>> Thanks, >>>>> /Staffan >>>>> >>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>> > >>>>> wrote: >>>>> >>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>> >>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>> classes which have resolved references. >>>>>> >>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>> >>>>>> I've decided to use a reserved slot in the HPROF class header format. >>>>>> It requires an update in jhat to correctly display the new info. >>>>>> >>>>>> The other approach I tried was to dump the reference as a fake >>>>>> static field [1], but storing the VM internal >>>>>> ConstantPool::_resolved_references among user-defined fields looks >>>>>> confusing. >>>>>> >>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>> linked in Nashorn heap dump). >>>>>> >>>>>> Thanks! >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >> From vladimir.kozlov at oracle.com Mon May 18 22:04:07 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 May 2015 15:04:07 -0700 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <555A00E0.8020309@redhat.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> Message-ID: <555A61D7.8000804@oracle.com> Andrew, Please update the patch. Code in jdk9/hs-comp/hotspot was modified and your patch can't be applied.
Next change modified IntrinsicBase.java (PPC64): http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/67729f5f33c4 And next moved Platform.java to test/testlibrary/jdk/test/lib/Platform.java: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/ed6389f70257 Thanks, Vladimir On 5/18/15 8:10 AM, Andrew Haley wrote: > On 05/18/2015 04:04 PM, Roland Westrelin wrote: >>> http://cr.openjdk.java.net/~aph/8080600/ >> >> That looks good to me. > > OK, thanks. I can't push it to shared code. > > Andrew. > > From david.holmes at oracle.com Tue May 19 05:22:33 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 May 2015 15:22:33 +1000 Subject: [aarch64-port-dev ] RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails In-Reply-To: <5559F06C.3030808@redhat.com> References: <5559F06C.3030808@redhat.com> Message-ID: <555AC899.3080003@oracle.com> On 19/05/2015 12:00 AM, Andrew Haley wrote: > On 05/18/2015 02:43 PM, Edward Nevill wrote: >> Webrev at >> >> http://cr.openjdk.java.net/~enevill/8080586/webrev.00 >> >> Please review and let me know if it is OK to push, > > Yes, I'm sure that's right, but I am not a reviewer... Reviewed by proxy. ;-) David > Thanks, > Andrew. > From david.holmes at oracle.com Tue May 19 05:26:02 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 May 2015 15:26:02 +1000 Subject: [aarch64-port-dev ] RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails In-Reply-To: <555AC899.3080003@oracle.com> References: <5559F06C.3030808@redhat.com> <555AC899.3080003@oracle.com> Message-ID: <555AC96A.6060901@oracle.com> Looks like Roland already reviewed this somewhere but I don't see the email on hotspot-dev.
David On 19/05/2015 3:22 PM, David Holmes wrote: > On 19/05/2015 12:00 AM, Andrew Haley wrote: >> On 05/18/2015 02:43 PM, Edward Nevill wrote: >>> Webrev at >>> >>> http://cr.openjdk.java.net/~enevill/8080586/webrev.00 >>> >>> Please review and let me know if it is OK to push, >> >> Yes, I'm sure that's right, but I am not a reviewer... > > Reviewed by proxy. ;-) > > David > >> Thanks, >> Andrew. >> From staffan.larsen at oracle.com Tue May 19 06:37:52 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 19 May 2015 08:37:52 +0200 Subject: RFR: 8080581: Align SA with new GC directory structure In-Reply-To: <5559F804.3070908@oracle.com> References: <5559F804.3070908@oracle.com> Message-ID: Looks good! Thanks, /Staffan > On 18 May 2015, at 16:32, Per Liden wrote: > > Hi, > > This is a follow-up patch to the GC directory restructure change, which aligns the remaining part of the SA with the new GC structure. > > The patch moves some GC-related files into new locations and updates package and import lines. > > Webrev: http://cr.openjdk.java.net/~pliden/8080581/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8080581 > > cheers, > /Per From per.liden at oracle.com Tue May 19 07:29:55 2015 From: per.liden at oracle.com (Per Liden) Date: Tue, 19 May 2015 09:29:55 +0200 Subject: RFR: 8080581: Align SA with new GC directory structure In-Reply-To: References: <5559F804.3070908@oracle.com> Message-ID: <555AE673.7020608@oracle.com> Thanks Staffan. /Per On 2015-05-19 08:37, Staffan Larsen wrote: > Looks good! > > Thanks, > /Staffan > >> On 18 May 2015, at 16:32, Per Liden wrote: >> >> Hi, >> >> This is a follow-up patch to the GC directory restructure change, which aligns the remaining part of the SA with the new GC structure. >> >> The patch moves some GC-related files into new locations and updates package and import lines.
>> >> Webrev: http://cr.openjdk.java.net/~pliden/8080581/webrev.0/ >> Bug: https://bugs.openjdk.java.net/browse/JDK-8080581 >> >> cheers, >> /Per > From serguei.spitsyn at oracle.com Tue May 19 07:47:36 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 19 May 2015 00:47:36 -0700 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555A2228.3080908@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> Message-ID: <555AEA98.3060104@oracle.com> Hi Vladimir, It looks good in general. Some comments are below. src/share/vm/oops/cpCache.cpp @@ -281,11 +281,11 @@ // Competing writers must acquire exclusive access via a lock. // A losing writer waits on the lock until the winner writes f1 and leaves // the lock, so that when the losing writer returns, he can use the linked // cache entry. - objArrayHandle resolved_references = cpool->resolved_references(); + objArrayHandle resolved_references = cpool->pool_holder()->resolved_references(); // Use the resolved_references() lock for this cpCache entry. // resolved_references are created for all classes with Invokedynamic, MethodHandle // or MethodType constant pool cache entries.
assert(resolved_references() != NULL, "a resolved_references array should have been created for this class"); ------------------------------------------------------------------------ @@ -410,20 +410,20 @@ oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle cpool) { if (!has_appendix()) return NULL; const int ref_index = f2_as_index() + _indy_resolved_references_appendix_offset; - objArrayOop resolved_references = cpool->resolved_references(); + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); return resolved_references->obj_at(ref_index); } oop ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle cpool) { if (!has_method_type()) return NULL; const int ref_index = f2_as_index() + _indy_resolved_references_method_type_offset; - objArrayOop resolved_references = cpool->resolved_references(); + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); return resolved_references->obj_at(ref_index); } There is no need in the update above as the constant pool still has the function resolved_references(): +objArrayOop ConstantPool::resolved_references() const { + return pool_holder()->resolved_references(); +} The same is true for the files: src/share/vm/interpreter/interpreterRuntime.cpp src/share/vm/interpreter/bytecodeTracer.cpp src/share/vm/ci/ciEnv.cpp src/share/vm/runtime/vmStructs.cpp @@ -286,11 +286,10 @@ nonstatic_field(ConstantPool, _tags, Array<u1>*) \ nonstatic_field(ConstantPool, _cache, ConstantPoolCache*) \ nonstatic_field(ConstantPool, _pool_holder, InstanceKlass*) \ nonstatic_field(ConstantPool, _operands, Array<u2>*) \ nonstatic_field(ConstantPool, _length, int) \ - nonstatic_field(ConstantPool, _resolved_references, jobject) \ nonstatic_field(ConstantPool, _reference_map, Array<u2>*) \ nonstatic_field(ConstantPoolCache, _length, int) \ nonstatic_field(ConstantPoolCache, _constant_pool, ConstantPool*) \ nonstatic_field(InstanceKlass, _array_klasses, Klass*) \
nonstatic_field(InstanceKlass, _methods, Array<Method*>*) \ I guess, we need to cover the same field in the InstanceKlass instead. Thanks, Serguei On 5/18/15 10:32 AM, Vladimir Ivanov wrote: > Here's updated version: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 > > Moved ConstantPool::_resolved_references to mirror class instance. > > Fixed a couple of issues in CDS and JVMTI (class redefinition) caused > by this change. > > I had to hard code Class::resolved_references offset since it is used > in template interpreter which is generated earlier than j.l.Class is > loaded during VM bootstrap. > > Testing: hotspot/test, vm testbase (in progress) > > Best regards, > Vladimir Ivanov > > On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >> Coleen, Chris, >> >> I'll proceed with moving ConstantPool::_resolved_references to j.l.Class >> instance then. >> >> Thanks for the feedback. >> >> Best regards, >> Vladimir Ivanov >> >> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>> >>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>> > >>>> wrote: >>>> >>>> >>>> Vladimir, >>>> >>>> I think that changing the format of the heap dump isn't a good idea >>>> either. >>>> >>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>> (sorry for really late response; just got enough time to return to >>>>> the bug) >>>> >>>> I'd forgotten about it! >>>>> >>>>> Coleen, Staffan, >>>>> >>>>> Thanks a lot for the feedback! >>>>> >>>>> After thinking about the fix more, I don't think that using reserved >>>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>>> thing to do. IMO the change causes too much work for the users (heap >>>>> dump analysis tools). >>>>> >>>>> It needs specification update and then heap dump analyzers should be >>>>> updated as well.
>>>>> >>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>> >>>>> - artificial class static field in the dump ("" >>>>> + optional id to guarantee unique name); >>>>> >>>>> - add j.l.Class::_resolved_references field; >>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>> it for now. >>>> >>>> I really like this second approach, so much so that I had a prototype >>>> for moving resolved_references directly to the j.l.Class object about >>>> a year ago. I couldn't find any benefit other than consolidating oops >>>> so the GC would have less work to do. If the resolved_references are >>>> moved to j.l.C instance, they can not be jobjects and the >>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>> there are other things that could go there so don't delete the >>>> _handles field yet). >>>> >>>> The change I had was relatively simple. The only annoying part was >>>> that getting to the resolved references has to be in macroAssembler >>>> and do: >>>> >>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>> rather than >>>> method->cpCache->constants->resolved_references->jmethod indirection >>>> >>>> I think it only affects the interpreter so the extra indirection >>>> wouldn't affect performance, so don't duplicate it! You don't want to >>>> increase space used by j.l.C without taking it out somewhere else! >>> >>> I like this approach. Can we do this? >>> >>>> >>>>> >>>>> What do you think about that? >>>> >>>> Is this bug worth doing this? I don't know but I'd really like it. >>>> >>>> Coleen >>>> >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>> This looks like a good approach. However, there are a couple of more >>>>>> places that need to be updated. 
>>>>>> >>>>>> The hprof binary format is described in >>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>> to be updated. It?s also more formally specified in hprof_b_spec.h >>>>>> in the same directory. >>>>>> >>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>> it needs to be possible to find the resolved_refrences array via the >>>>>> JVMTI heap walking API. Perhaps that already works? - I haven?t >>>>>> looked. >>>>>> >>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>> binary dumper in >>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>> >>>>>> >>>>>> which also needs to write this reference. >>>>>> >>>>>> Thanks, >>>>>> /Staffan >>>>>> >>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>> > >>>>>> wrote: >>>>>> >>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>> >>>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>>> classes which have resolved references. >>>>>>> >>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>> >>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>> It requires an update in jhat to correctly display new info. >>>>>>> >>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>> static field [1], but storing VM internal >>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>> confusing. >>>>>>> >>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>> linked in Nashorn heap dump). >>>>>>> >>>>>>> Thanks! 
>>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>> From david.holmes at oracle.com Tue May 19 07:55:34 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 May 2015 17:55:34 +1000 Subject: [aarch64-port-dev ] RFR: 8080586: aarch64: hotspot test compiler/codegen/7184394/TestAESMain.java fails In-Reply-To: <555AC96A.6060901@oracle.com> References: <5559F06C.3030808@redhat.com> <555AC899.3080003@oracle.com> <555AC96A.6060901@oracle.com> Message-ID: <555AEC76.9090806@oracle.com> On 19/05/2015 3:26 PM, David Holmes wrote: > Looks like Roland already reviewed this somewhere but I don't see the > email on hotspot-dev. Different subject line due to the inserted [aarch64-port-dev ]. David > David > > On 19/05/2015 3:22 PM, David Holmes wrote: >> On 19/05/2015 12:00 AM, Andrew Haley wrote: >>> On 05/18/2015 02:43 PM, Edward Nevill wrote: >>>> Webrev at >>>> >>>> http://cr.openjdk.java.net/~enevill/8080586/webrev.00 >>>> >>>> Please revew and let me know if it is OK to push, >>> >>> Yes, I'm sure that's right, but I am not a reviewer... >> >> Reviewed by proxy. ;-) >> >> David >> >>> Thanks, >>> Andrew. >>> From david.holmes at oracle.com Tue May 19 08:01:42 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 19 May 2015 18:01:42 +1000 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: References: <5558FFE8.4090404@oracle.com> Message-ID: <555AEDE6.1010902@oracle.com> On 19/05/2015 2:24 AM, Christopher Berner wrote: > Don't think it got truncated, but I only included the first line of log > about the safepoint statistics. Turning off biased locks seem to have > fixed those pauses, thanks! > > Just to help me understand, what circumstances would cause biased > locking to induce 28sec pauses? Is that because a thread was holding the > lock for 28sec? No. Based on the stats you showed the actual revocation is only a fraction of the time spent. 
It is taking a long time to get the system to a safepoint and then a reasonable amount of time is also being spent on safepoint cleanup tasks. Do you have hundreds of active threads? David ----- > On Sun, May 17, 2015 at 1:54 PM, David Holmes > wrote: > > Hi Christopher, > > > On 16/05/2015 10:38 AM, Christopher Berner wrote: > > I work on the Presto project, which is a distributed SQL engine, and > intermittently (roughly once an hour) I see very long > application stopped > times (as reported by > -XX:+PrintGCApplicationStoppedTime). I enabled safepoint > statistics, and > see that the pause seems to be coming from a RevokeBias safepoint. > > Any suggestions as to how I can debug this? I already tried adding > -XX:+PerfDisableSharedMem, in case this was related to > https://bugs.openjdk.java.net/browse/JDK-8076103 > > See below for safepoint statistics: > > vmop [threads: total initially_running > wait_to_block] [time: spin block sync cleanup vmop] > page_trap_count > > 2528.893: RevokeBias [ 872 0 > 6 ] [ 0 12826 13142 6476 8177 ] 0 > > > Your email seems to be truncated? > > But just to be sure, does the application run okay with > biased-locking disabled? > > David > > From volker.simonis at gmail.com Tue May 19 13:54:59 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 19 May 2015 15:54:59 +0200 Subject: [8u60] request for approval: 8080190: PPC64: Fix wrong rotate instructions in the .ad file Message-ID: Hi, could you please approve the downport of the following, ppc-only fix to jdk8u/hs-dev/hotspot Bug: https://bugs.openjdk.java.net/browse/JDK-8080190 Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2015/8080190.8u/ Review: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-May/thread.html#17920 URL: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4140f485ba27 The change cleanly applies to the jdk8u/hs-dev/hotspot repository. 
I would really like to bring this into 8u60 as this is a serious bug which leads to incorrect computations. Is this still possible by going through jdk8u/hs-dev/hotspot? The other question is if I can push this ppc-only change myself to jdk8u/hs-dev/hotspot or if I need a sponsor? Thank you and best regards, Volker From aph at redhat.com Tue May 19 15:52:19 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 May 2015 16:52:19 +0100 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <555A61D7.8000804@oracle.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> Message-ID: <555B5C33.4060601@redhat.com> On 05/18/2015 11:04 PM, Vladimir Kozlov wrote: > Please, update patch. Code in jdk9/hs-comp/hotspot was modified and your > patch can't be applied. > > Next change modified IntrinsicBase.java (PPC64): > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/67729f5f33c4 > > And next moved Platform.java to test/testlibrary/jdk/test/lib/Platform.java: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/ed6389f70257 http://cr.openjdk.java.net/~aph/8080600-2/ OK. As a general rule should I base hotspot patches on hs-comp/hotspot rather than dev/hotspot? It's a bit awkward because I don't push to dev/hotspot. Thanks, Andrew. From roland.westrelin at oracle.com Tue May 19 16:06:13 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Tue, 19 May 2015 18:06:13 +0200 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <555B5C33.4060601@redhat.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> Message-ID: <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> > http://cr.openjdk.java.net/~aph/8080600-2/ I'm pushing it. > OK.
As a general rule should I base hotspot patches on hs-comp/hotspot > rather than dev/hotspot? It's a bit awkward because I don't push to > dev/hotspot. Patches should be made against the repo that they are the most likely to be pushed to, i.e. compiler changes against hs-comp. Roland. From volker.simonis at gmail.com Tue May 19 16:41:45 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 19 May 2015 18:41:45 +0200 Subject: RFR(S): 8080684: PPC64: Fix little-endian build after "8077838: Recent developments for ppc" Message-ID: Hi, can I please get a review for the following fix of the ppc64le build. @Sasha, Tiago: I would also be especially interested if somebody with a little-endian ppc64 system can check if the fix really works there as I have currently no access to such a system. https://bugs.openjdk.java.net/browse/JDK-8080684 http://cr.openjdk.java.net/~simonis/webrevs/2015/8080684/ The details follow: On big-endian ppc64 we need so-called 'function descriptors' instead of simple pointers in order to call functions. That's why the Assembler class on ppc64 offers an 'emit_fd()' method which can be used to create such a function descriptor. On little-endian ppc64 the ABI was changed (i.e. ABI_ELFv2) and function descriptors have been removed. That's why the aforementioned 'emit_fd()' is being excluded from the build with the help of preprocessor macros if HotSpot is being built in a little-endian environment: #if !defined(ABI_ELFv2) inline address emit_fd(...) #endif The drawback of this approach is that every call site which uses 'emit_fd()' has to conditionally handle the case where 'emit_fd()' isn't present as well. This was exactly the problem with change "8077838: Recent developments for ppc" which introduced an unconditional call to 'emit_fd()' in 'VM_Version::config_dscr()' which led to a build failure on little endian.
A better approach would be to make 'emit_fd()' available on both little- and big-endian platforms and handle the difference internally, within the function itself. On little-endian, the function will just return the current PC without emitting any code at all, while the big-endian variant emits the required function descriptor. Thank you and best regards, Volker From vladimir.x.ivanov at oracle.com Tue May 19 16:58:43 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Tue, 19 May 2015 19:58:43 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555AEA98.3060104@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> Message-ID: <555B6BC3.1020809@oracle.com> Thanks for the review, Serguei. Updated webrev in place: http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 Switched to ConstantPool::resolved_references() as you suggested. Regarding declaring the field in vmStructs.cpp, it is not needed since the field is located in the Java mirror and not in InstanceKlass. Best regards, Vladimir Ivanov On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: > Hi Vladimir, > > It looks good in general. > Some comments are below. > > src/share/vm/oops/cpCache.cpp > > @@ -281,11 +281,11 @@ > // Competing writers must acquire exclusive access via a lock. > // A losing writer waits on the lock until the winner writes f1 and leaves > // the lock, so that when the losing writer returns, he can use the linked > // cache entry. > > - objArrayHandle resolved_references = cpool->resolved_references(); > + objArrayHandle resolved_references = cpool->pool_holder()->resolved_references(); > // Use the resolved_references() lock for this cpCache entry.
> // resolved_references are created for all classes with Invokedynamic, MethodHandle > // or MethodType constant pool cache entries. > assert(resolved_references() != NULL, > "a resolved_references array should have been created for this class"); > > ------------------------------------------------------------------------ > > @@ -410,20 +410,20 @@ > > oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle cpool) { > if (!has_appendix()) > return NULL; > const int ref_index = f2_as_index() + _indy_resolved_references_appendix_offset; > - objArrayOop resolved_references = cpool->resolved_references(); > + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); > return resolved_references->obj_at(ref_index); > } > > > oop ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle cpool) { > if (!has_method_type()) > return NULL; > const int ref_index = f2_as_index() + _indy_resolved_references_method_type_offset; > - objArrayOop resolved_references = cpool->resolved_references(); > + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); > return resolved_references->obj_at(ref_index); > } > > There is no need in the update above as the constant pool still has the > function resolved_references(): > +objArrayOop ConstantPool::resolved_references() const { > + return pool_holder()->resolved_references(); > +} > > The same is true for the files: > src/share/vm/interpreter/interpreterRuntime.cpp > src/share/vm/interpreter/bytecodeTracer.cpp > src/share/vm/ci/ciEnv.cpp > > > src/share/vm/runtime/vmStructs.cpp > > @@ -286,11 +286,10 @@ > nonstatic_field(ConstantPool, _tags, > Array<u1>*) \ > nonstatic_field(ConstantPool, _cache, > ConstantPoolCache*) \ > nonstatic_field(ConstantPool, _pool_holder, > InstanceKlass*) \ > nonstatic_field(ConstantPool, _operands, > Array<u2>*) \ > nonstatic_field(ConstantPool, _length, > int) \ > - nonstatic_field(ConstantPool, _resolved_references, > jobject) \
nonstatic_field(ConstantPool, _reference_map, > Array<u2>*) \ > nonstatic_field(ConstantPoolCache, _length, > int) \ > nonstatic_field(ConstantPoolCache, _constant_pool, > ConstantPool*) \ > nonstatic_field(InstanceKlass, _array_klasses, > Klass*) \ > nonstatic_field(InstanceKlass, _methods, > Array<Method*>*) \ > > I guess, we need to cover the same field in the InstanceKlass instead. > > Thanks, > Serguei > > On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >> Here's updated version: >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >> >> Moved ConstantPool::_resolved_references to mirror class instance. >> >> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >> by this change. >> >> I had to hard code Class::resolved_references offset since it is used >> in template interpreter which is generated earlier than j.l.Class is >> loaded during VM bootstrap. >> >> Testing: hotspot/test, vm testbase (in progress) >> >> Best regards, >> Vladimir Ivanov >> >> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>> Coleen, Chris, >>> >>> I'll proceed with moving ConstantPool::_resolved_references to j.l.Class >>> instance then. >>> >>> Thanks for the feedback. >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>> >>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>> > >>>>> wrote: >>>>> >>>>> >>>>> Vladimir, >>>>> >>>>> I think that changing the format of the heap dump isn't a good idea >>>>> either. >>>>> >>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>> (sorry for really late response; just got enough time to return to >>>>>> the bug) >>>>> >>>>> I'd forgotten about it! >>>>>> >>>>>> Coleen, Staffan, >>>>>> >>>>>> Thanks a lot for the feedback! >>>>>> >>>>>> After thinking about the fix more, I don't think that using reserved >>>>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>>>> thing to do.
IMO the change causes too much work for the users (heap >>>>>> dump analysis tools). >>>>>> >>>>>> It needs specification update and then heap dump analyzers should be >>>>>> updated as well. >>>>>> >>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>> >>>>>> - artificial class static field in the dump ("" >>>>>> + optional id to guarantee unique name); >>>>>> >>>>>> - add j.l.Class::_resolved_references field; >>>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>>> it for now. >>>>> >>>>> I really like this second approach, so much so that I had a prototype >>>>> for moving resolved_references directly to the j.l.Class object about >>>>> a year ago. I couldn't find any benefit other than consolidating oops >>>>> so the GC would have less work to do. If the resolved_references are >>>>> moved to j.l.C instance, they can not be jobjects and the >>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>> there are other things that could go there so don't delete the >>>>> _handles field yet). >>>>> >>>>> The change I had was relatively simple. The only annoying part was >>>>> that getting to the resolved references has to be in macroAssembler >>>>> and do: >>>>> >>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>> rather than >>>>> method->cpCache->constants->resolved_references->jmethod indirection >>>>> >>>>> I think it only affects the interpreter so the extra indirection >>>>> wouldn't affect performance, so don't duplicate it! You don't want to >>>>> increase space used by j.l.C without taking it out somewhere else! >>>> >>>> I like this approach. Can we do this? >>>> >>>>> >>>>>> >>>>>> What do you think about that? >>>>> >>>>> Is this bug worth doing this? I don't know but I'd really like it. 
>>>>> >>>>> Coleen >>>>> >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>> This looks like a good approach. However, there are a couple of more >>>>>>> places that need to be updated. >>>>>>> >>>>>>> The hprof binary format is described in >>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>>> in the same directory. >>>>>>> >>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>>> it needs to be possible to find the resolved_references array via the >>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>> looked. >>>>>>> >>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>> binary dumper in >>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>> >>>>>>> >>>>>>> which also needs to write this reference. >>>>>>> >>>>>>> Thanks, >>>>>>> /Staffan >>>>>>> >>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>> > >>>>>>> wrote: >>>>>>> >>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>> >>>>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>>>> classes which have resolved references. >>>>>>>> >>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>> >>>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>>> It requires an update in jhat to correctly display new info.
>>>>>>>> >>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>> static field [1], but storing VM internal >>>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>>> confusing. >>>>>>>> >>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>> linked in Nashorn heap dump). >>>>>>>> >>>>>>>> Thanks! >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>>> > From vladimir.kozlov at oracle.com Tue May 19 17:25:02 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 19 May 2015 10:25:02 -0700 Subject: [8u60] request for approval: 8080190: PPC64: Fix wrong rotate instructions in the .ad file In-Reply-To: References: Message-ID: <555B71EE.30908@oracle.com> Reviewed and approved. Yes, you can push to jdk8u/hs-dev/hotspot. You have 8u committer status. Thanks, Vladimir On 5/19/15 6:54 AM, Volker Simonis wrote: > Hi, > > could you please approve the downport of the following, ppc-only fix > to jdk8u/hs-dev/hotspot > > Bug: https://bugs.openjdk.java.net/browse/JDK-8080190 > Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2015/8080190.8u/ > Review: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-May/thread.html#17920 > URL: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4140f485ba27 > > The change cleanly applies to the jdk8u/hs-dev/hotspot repository. > > I would really like to bring this into 8u60 as this is a serious bug > which leads to incorrect computations. > Is this still possible by goiong trough jdk8u/hs-dev/hotspot? > > The other question is if I can push this ppc-only change myself to > jdk8u/hs-dev/hotspot or if I need a sponsor? 
> > Thank you and best regards, > Volker > From rob.mckenna at oracle.com Tue May 19 16:46:29 2015 From: rob.mckenna at oracle.com (Rob McKenna) Date: Tue, 19 May 2015 17:46:29 +0100 Subject: [8u60] request for approval: 8080190: PPC64: Fix wrong rotate instructions in the .ad file In-Reply-To: References: Message-ID: <555B68E5.2010304@oracle.com> Hi Volker, You can push directly to the hotspot team repo. It will make 8u60 assuming you push before RDP 2. http://openjdk.java.net/projects/jdk8u/releases/8u60.html You should have committer access to that repo. Let us know if not. -Rob On 19/05/15 14:54, Volker Simonis wrote: > Hi, > > could you please approve the downport of the following, ppc-only fix > to jdk8u/hs-dev/hotspot > > Bug: https://bugs.openjdk.java.net/browse/JDK-8080190 > Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2015/8080190.8u/ > Review: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-May/thread.html#17920 > URL: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4140f485ba27 > > The change cleanly applies to the jdk8u/hs-dev/hotspot repository. > > I would really like to bring this into 8u60 as this is a serious bug > which leads to incorrect computations. > Is this still possible by goiong trough jdk8u/hs-dev/hotspot? > > The other question is if I can push this ppc-only change myself to > jdk8u/hs-dev/hotspot or if I need a sponsor? > > Thank you and best regards, > Volker > From alejandro.murillo at oracle.com Tue May 19 17:38:40 2015 From: alejandro.murillo at oracle.com (Alejandro E Murillo) Date: Tue, 19 May 2015 11:38:40 -0600 Subject: [8u60] request for approval: 8080190: PPC64: Fix wrong rotate instructions in the .ad file In-Reply-To: <555B68E5.2010304@oracle.com> References: <555B68E5.2010304@oracle.com> Message-ID: <555B7520.4070006@oracle.com> Volker, if this is only affecting ppc then go ahead and push manually. 
if not, let me know and I can grab the patch and push it via jprt cheers Alejandro On 5/19/2015 10:46 AM, Rob McKenna wrote: > Hi Volker, > > You can push directly to the hotspot team repo. It will make 8u60 > assuming you push before RDP 2. > > http://openjdk.java.net/projects/jdk8u/releases/8u60.html > > You should have committer access to that repo. Let us know if not. > > -Rob > > On 19/05/15 14:54, Volker Simonis wrote: >> Hi, >> >> could you please approve the downport of the following, ppc-only fix >> to jdk8u/hs-dev/hotspot >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8080190 >> Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2015/8080190.8u/ >> Review: >> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-May/thread.html#17920 >> URL: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4140f485ba27 >> >> The change cleanly applies to the jdk8u/hs-dev/hotspot repository. >> >> I would really like to bring this into 8u60 as this is a serious bug >> which leads to incorrect computations. >> Is this still possible by going through jdk8u/hs-dev/hotspot? >> >> The other question is if I can push this ppc-only change myself to >> jdk8u/hs-dev/hotspot or if I need a sponsor? >> >> Thank you and best regards, >> Volker >> -- Alejandro From roland.westrelin at oracle.com Tue May 19 18:08:31 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Tue, 19 May 2015 20:08:31 +0200 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> Message-ID: >> http://cr.openjdk.java.net/~aph/8080600-2/ > > I'm pushing it. 
The push failed:

test/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java failed on OSX:

Verifying method group: ARCH
Trying to evaluate predicate with name isARM
Predicate evaluated to: false
Trying to evaluate predicate with name isPPC
Predicate evaluated to: false
Trying to evaluate predicate with name isSparc
Predicate evaluated to: false
Trying to evaluate predicate with name isX86
Predicate evaluated to: false
Trying to evaluate predicate with name isX64
Predicate evaluated to: true
Verifying method group: BITNESS
Trying to evaluate predicate with name is32bit
Predicate evaluated to: false
Trying to evaluate predicate with name is64bit
Predicate evaluated to: true
Verifying method group: OS
Trying to evaluate predicate with name isAix
Predicate evaluated to: false
Trying to evaluate predicate with name isLinux
Predicate evaluated to: false
Trying to evaluate predicate with name isOSX
Predicate evaluated to: true
Trying to evaluate predicate with name isSolaris
Predicate evaluated to: false
Trying to evaluate predicate with name isWindows
Predicate evaluated to: false
Verifying method group: VM_TYPE
Trying to evaluate predicate with name isClient
Predicate evaluated to: false
Trying to evaluate predicate with name isServer
Predicate evaluated to: true
Trying to evaluate predicate with name isGraal
Predicate evaluated to: false
Trying to evaluate predicate with name isMinimal
Predicate evaluated to: false
Trying to evaluate predicate with name isZero
Predicate evaluated to: false

STDERR:
java.lang.RuntimeException: All Platform's methods with signature '():Z' should be tested
        at jdk.test.lib.Asserts.error(Asserts.java:444)
        at jdk.test.lib.Asserts.assertTrue(Asserts.java:371)
        at TestMutuallyExclusivePlatformPredicates.verifyCoverage(TestMutuallyExclusivePlatformPredicates.java:106)
        at TestMutuallyExclusivePlatformPredicates.main(TestMutuallyExclusivePlatformPredicates.java:70)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:502)
        at com.sun.javatest.regtest.agent.MainActionHelper$SameVMRunnable.run(MainActionHelper.java:218)
        at java.lang.Thread.run(Thread.java:745)

Any idea what's going on?

Roland.

From serguei.spitsyn at oracle.com Tue May 19 19:02:04 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 19 May 2015 12:02:04 -0700 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555B6BC3.1020809@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> Message-ID: <555B88AC.8050108@oracle.com> Thank you for the update! You are right about the vmStructs.cpp. One more minor comment: InterpreterMacroAssembler::get_resolved_references for aarch64 is defined in the .hpp but for other architectures it is in the .cpp which is inconsistent. But I leave it up to you, consider it reviewed. Now I wonder if we have to merge the resolved references array when a class is redefined and the old version is still in use. I'll check if there is already a bug covering this potential issue. Thanks, Serguei On 5/19/15 9:58 AM, Vladimir Ivanov wrote: > Thanks for the review, Serguei. > > Updated webrev in place: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 > > Switched to ConstantPool::resolved_references() as you suggested. > > Regarding declaring the field in vmStructs.cpp, it is not needed since > the field is located in Java mirror and not in InstanceKlass. 
> > Best regards, > Vladimir Ivanov > > On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >> Hi Vladimir, >> >> It looks good in general. >> Some comments are below. >> >> || *src/share/vm/oops/cpCache.cpp* >> >> @@ -281,11 +281,11 @@ >> // Competing writers must acquire exclusive access via a lock. >> // A losing writer waits on the lock until the winner writes f1 >> and leaves >> // the lock, so that when the losing writer returns, he can use >> the linked >> // cache entry. >> >> - objArrayHandle resolved_references = cpool->resolved_references(); >> + objArrayHandle resolved_references = >> cpool->pool_holder()->resolved_references(); >> // Use the resolved_references() lock for this cpCache entry. >> // resolved_references are created for all classes with >> Invokedynamic, MethodHandle >> // or MethodType constant pool cache entries. >> assert(resolved_references() != NULL, >> "a resolved_references array should have been created for >> this class"); >> >> ------------------------------------------------------------------------ >> >> @@ -410,20 +410,20 @@ >> >> oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >> cpool) { >> if (!has_appendix()) >> return NULL; >> const int ref_index = f2_as_index() + >> _indy_resolved_references_appendix_offset; >> - objArrayOop resolved_references = cpool->resolved_references(); >> + objArrayOop resolved_references = >> cpool->pool_holder()->resolved_references(); >> return resolved_references->obj_at(ref_index); >> } >> >> >> oop >> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >> cpool) { >> if (!has_method_type()) >> return NULL; >> const int ref_index = f2_as_index() + >> _indy_resolved_references_method_type_offset; >> - objArrayOop resolved_references = cpool->resolved_references(); >> + objArrayOop resolved_references = >> cpool->pool_holder()->resolved_references(); >> return resolved_references->obj_at(ref_index); >> } >> >> There is no need in the update above as the 
constant pool still has the >> function resolved_references(): >> +objArrayOop ConstantPool::resolved_references() const { >> + return pool_holder()->resolved_references(); >> +} >> >> The same is true for the files: >> src/share/vm/interpreter/interpreterRuntime.cpp >> src/share/vm/interpreter/bytecodeTracer.cpp >> || src/share/vm/ci/ciEnv.cpp >> >> >> || src/share/vm/runtime/vmStructs.cpp* >> >> *@@ -286,11 +286,10 @@ >> nonstatic_field(ConstantPool, _tags, >> Array*) \ >> nonstatic_field(ConstantPool, _cache, >> ConstantPoolCache*) \ >> nonstatic_field(ConstantPool, _pool_holder, >> InstanceKlass*) \ >> nonstatic_field(ConstantPool, _operands, >> Array*) \ >> nonstatic_field(ConstantPool, _length, >> int) \ >> - nonstatic_field(ConstantPool, _resolved_references, >> jobject) \ >> nonstatic_field(ConstantPool, _reference_map, >> Array*) \ >> nonstatic_field(ConstantPoolCache, _length, >> int) \ >> nonstatic_field(ConstantPoolCache, _constant_pool, >> ConstantPool*) \ >> nonstatic_field(InstanceKlass, _array_klasses, >> Klass*) \ >> nonstatic_field(InstanceKlass, _methods, >> Array*) \ >> * >> >> *I guess, we need to cover the same field in the InstanceKlass instead. >> >> Thanks, >> Serguei >> >> >> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>> Here's updated version: >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>> >>> Moved ConstantPool::_resolved_references to mirror class instance. >>> >>> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >>> by this change. >>> >>> I had to hard code Class::resolved_references offset since it is used >>> in template interpreter which is generated earlier than j.l.Class is >>> loaded during VM bootstrap. >>> >>> Testing: hotspot/test, vm testbase (in progress) >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>> Coleen, Chris, >>>> >>>> I'll proceed with moving ConstantPool::_resolved_references to >>>> j.l.Class >>>> instance then. 
>>>> >>>> Thanks for the feedback. >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>> >>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>> > >>>>>> wrote: >>>>>> >>>>>> >>>>>> Vladimir, >>>>>> >>>>>> I think that changing the format of the heap dump isn't a good idea >>>>>> either. >>>>>> >>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>> (sorry for really late response; just got enough time to return to >>>>>>> the bug) >>>>>> >>>>>> I'd forgotten about it! >>>>>>> >>>>>>> Coleen, Staffan, >>>>>>> >>>>>>> Thanks a lot for the feedback! >>>>>>> >>>>>>> After thinking about the fix more, I don't think that using >>>>>>> reserved >>>>>>> oop slot in CLASS DUMP for recording _resolved_references is the >>>>>>> best >>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>> (heap >>>>>>> dump analysis tools). >>>>>>> >>>>>>> It needs specification update and then heap dump analyzers >>>>>>> should be >>>>>>> updated as well. >>>>>>> >>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>> >>>>>>> - artificial class static field in the dump >>>>>>> ("" >>>>>>> + optional id to guarantee unique name); >>>>>>> >>>>>>> - add j.l.Class::_resolved_references field; >>>>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>>>> it for now. >>>>>> >>>>>> I really like this second approach, so much so that I had a >>>>>> prototype >>>>>> for moving resolved_references directly to the j.l.Class object >>>>>> about >>>>>> a year ago. I couldn't find any benefit other than consolidating >>>>>> oops >>>>>> so the GC would have less work to do. 
If the resolved_references >>>>>> are >>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>> there are other things that could go there so don't delete the >>>>>> _handles field yet). >>>>>> >>>>>> The change I had was relatively simple. The only annoying part was >>>>>> that getting to the resolved references has to be in macroAssembler >>>>>> and do: >>>>>> >>>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>>> rather than >>>>>> method->cpCache->constants->resolved_references->jmethod indirection >>>>>> >>>>>> I think it only affects the interpreter so the extra indirection >>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>> want to >>>>>> increase space used by j.l.C without taking it out somewhere else! >>>>> >>>>> I like this approach. Can we do this? >>>>> >>>>>> >>>>>>> >>>>>>> What do you think about that? >>>>>> >>>>>> Is this bug worth doing this? I don't know but I'd really like it. >>>>>> >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>> This looks like a good approach. However, there are a couple of >>>>>>>> more >>>>>>>> places that need to be updated. >>>>>>>> >>>>>>>> The hprof binary format is described in >>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>> needs >>>>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>>>> in the same directory. >>>>>>>> >>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>> via the >>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>> looked. 
>>>>>>>> >>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>> binary dumper in >>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> which also needs to write this reference. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> /Staffan >>>>>>>> >>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>> >>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>> >>>>>>>>> VM heap dump doesn't contain >>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>> classes which have resolved references. >>>>>>>>> >>>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>> >>>>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>> >>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>> static field [1], but storing VM internal >>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>> looks >>>>>>>>> confusing. >>>>>>>>> >>>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>>> linked in Nashorn heap dump). >>>>>>>>> >>>>>>>>> Thanks! 
>>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>>>> >> From aph at redhat.com Tue May 19 19:12:24 2015 From: aph at redhat.com (Andrew Haley) Date: Tue, 19 May 2015 20:12:24 +0100 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> Message-ID: <555B8B18.4020900@redhat.com> On 05/19/2015 07:08 PM, Roland Westrelin wrote: > Any idea what?s going on? No. :-) I suppose it's possible that I added isAArch64 to a Linux-only file. I'll check tomorrow. Thanks, Andrew. From vladimir.kozlov at oracle.com Tue May 19 21:55:40 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 19 May 2015 14:55:40 -0700 Subject: RFR(S): 8080684: PPC64: Fix little-endian build after "8077838: Recent developments for ppc" In-Reply-To: References: Message-ID: <555BB15C.4020305@oracle.com> Looks good to me. Thanks, Vladimir On 5/19/15 9:41 AM, Volker Simonis wrote: > Hi, > > can I please get a review for the following fix of the ppc64le build. > > @Sasha, Tiago: I would be also especially interested if somebody with > a little-endian ppc64 system can check if the fix really works there > as I have currently no access to such a system. > > https://bugs.openjdk.java.net/browse/JDK-8080684 > http://cr.openjdk.java.net/~simonis/webrevs/2015/8080684/ > > Following the details: > > On big-endian ppc64 we need so called 'function descriptors' instead > of simple pointers in order to call functions. That's why the > Assembler class on ppc64 offers an 'emit_fd()' method which can be > used to create such a function descriptor. > > On little-endian ppc64 the ABI was changed (i.e. ABI_ELFv2) and > function descriptors have been removed. 
That's why the before > mentioned 'emit_fd()' is being excluded from the build with the help > of preprocessor macros if the HotSpot is being built in a little > endian environment: > > #if !defined(ABI_ELFv2) > inline address emit_fd(...) > #endif > > The drawback of this approach is that every call site which uses > 'emit_fd()' has to conditionally handle the case where 'emit_fd()' > isn't present as well. This was exactly the problem with change > "8077838: Recent developments for ppc" which introduced an > unconditional call to 'emit_fd()' in 'VM_Version::config_dscr() which > led to a build failure on little endian. > > A better approach would be to make 'emit_fd()' available on both, > little- and big-endian platforms and handle the difference internally, > within the function itself. On little-endian, the function will just > return the current PC without emitting any code at all while the > big-endian variant emits the required function descriptor. > > Thank you and best regards, > Volker > From asmundak at google.com Wed May 20 01:25:58 2015 From: asmundak at google.com (Alexander Smundak) Date: Tue, 19 May 2015 18:25:58 -0700 Subject: RFR(S): 8080684: PPC64: Fix little-endian build after "8077838: Recent developments for ppc" In-Reply-To: References: Message-ID: It fails to compile because the code still references FileDescriptor, even if ABI_ELFv2 is defined. The revised patch which succeeds is here: http://cr.openjdk.java.net/~asmundak/8080684/hotspot/webrev (it's for jdk8u-dev branch) but IMHO the proposed change isn't semantically right -- if a method is called emit_fd, it should create a FileDescriptor. I am not sure that the rationale behind this patch is right -- a compile-time error is usually easy to fix and verify that the semantics is correct while fixing it, whereas a runtime error usually requires more effort. On Tue, May 19, 2015 at 9:41 AM, Volker Simonis wrote: > Hi, > > can I please get a review for the following fix of the ppc64le build. 
> > @Sasha, Tiago: I would be also especially interested if somebody with > a little-endian ppc64 system can check if the fix really works there > as I have currently no access to such a system. > > https://bugs.openjdk.java.net/browse/JDK-8080684 > http://cr.openjdk.java.net/~simonis/webrevs/2015/8080684/ > > Following the details: > > On big-endian ppc64 we need so called 'function descriptors' instead > of simple pointers in order to call functions. That's why the > Assembler class on ppc64 offers an 'emit_fd()' method which can be > used to create such a function descriptor. > > On little-endian ppc64 the ABI was changed (i.e. ABI_ELFv2) and > function descriptors have been removed. That's why the before > mentioned 'emit_fd()' is being excluded from the build with the help > of preprocessor macros if the HotSpot is being built in a little > endian environment: > > #if !defined(ABI_ELFv2) > inline address emit_fd(...) > #endif > > The drawback of this approach is that every call site which uses > 'emit_fd()' has to conditionally handle the case where 'emit_fd()' > isn't present as well. This was exactly the problem with change > "8077838: Recent developments for ppc" which introduced an > unconditional call to 'emit_fd()' in 'VM_Version::config_dscr() which > led to a build failure on little endian. > > A better approach would be to make 'emit_fd()' available on both, > little- and big-endian platforms and handle the difference internally, > within the function itself. On little-endian, the function will just > return the current PC without emitting any code at all while the > big-endian variant emits the required function descriptor. 
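[Editor's note: a toy model, in Python rather than HotSpot's C++ Assembler API, of the unified 'emit_fd()' behaviour Volker describes above. The CodeBuffer class and word layout are illustrative assumptions, not the real implementation: on the little-endian ELFv2 ABI there are no function descriptors, so nothing is emitted and the current pc is returned; on big-endian ELFv1 a three-word descriptor (entry point, TOC pointer, environment pointer) is emitted and its address returned.]

```python
WORD = 8  # bytes per 64-bit word in this toy model

class CodeBuffer:
    """Stand-in for an assembler code buffer; pc() is the current offset."""
    def __init__(self):
        self.words = []

    def pc(self):
        return len(self.words) * WORD

def emit_fd(buf, elfv2, entry, toc, env=0):
    # ELFv2 (little-endian): no descriptor exists; emit nothing,
    # just hand back the current pc so call sites need no #ifdef.
    if elfv2:
        return buf.pc()
    # ELFv1 (big-endian): emit the three-word function descriptor
    # and return the address where it was placed.
    fd = buf.pc()
    buf.words += [entry, toc, env]
    return fd

be, le = CodeBuffer(), CodeBuffer()
fd_be = emit_fd(be, elfv2=False, entry=0x1000, toc=0x2000)
fd_le = emit_fd(le, elfv2=True, entry=0x1000, toc=0x2000)
assert be.words == [0x1000, 0x2000, 0]   # descriptor emitted
assert le.words == [] and fd_le == 0     # nothing emitted, pc returned
```

The point of the design is visible in the last four lines: both ABIs go through the same call, and only the big-endian path grows the code buffer.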
> > Thank you and best regards, > Volker From christopherberner at gmail.com Wed May 20 01:29:54 2015 From: christopherberner at gmail.com (Christopher Berner) Date: Tue, 19 May 2015 18:29:54 -0700 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: <555AEDE6.1010902@oracle.com> References: <5558FFE8.4090404@oracle.com> <555AEDE6.1010902@oracle.com> Message-ID: Yep, we often have a hundred or so threads active. If I read the output correctly, it was only waiting on 6 threads which took 13secs and then the revocation took 8secs. Is that normal to take even 8secs? Our GC and other safe point pauses are typically in the hundreds of milliseconds. Thanks for all the help! Christopher Sent from my phone On May 19, 2015 1:01 AM, "David Holmes" wrote: > On 19/05/2015 2:24 AM, Christopher Berner wrote: > >> Don't think it got truncated, but I only included the first line of log >> about the safepoint statistics. Turning off biased locks seem to have >> fixed those pauses, thanks! >> >> Just to help me understand, what circumstances would cause biased >> locking to induce 28sec pauses? Is that because a thread was holding the >> lock for 28sec? >> > > No. Based on the stats you showed the actual revocation is only a fraction > of the time spent. It is taking a long time to get the system to a > safepoint and then a reasonable amount of time is also being spent on > safepoint cleanup tasks. > > Do you have hundreds of active threads? > > David > ----- > > On Sun, May 17, 2015 at 1:54 PM, David Holmes > > wrote: >> >> Hi Christopher, >> >> >> On 16/05/2015 10:38 AM, Christopher Berner wrote: >> >> I work on the Presto project, which is a distributed SQL engine, >> and >> intermittently (roughly once an hour) I see very long >> application stopped >> times (as reported by >> -XX:+PrintGCApplicationStoppedTime). I enabled safepoint >> statistics, and >> see that the pause seems to be coming from a RevokeBias safepoint. 
>> >> Any suggestions as to how I can debug this? I already tried adding >> -XX:+PerfDisableSharedMem, in case this was related to >> https://bugs.openjdk.java.net/browse/JDK-8076103 >> >> See below for safepoint statistics: >> >> vmop [threads: total >> initially_running >> wait_to_block] [time: spin block sync cleanup vmop] >> page_trap_count >> >> 2528.893: RevokeBias [ 872 0 >> 6 ] [ 0 12826 13142 6476 8177 ] 0 >> >> >> Your email seems to be truncated? >> >> But just to be sure, does the application run okay with >> biased-locking disabled? >> >> David >> >> >> From vitalyd at gmail.com Wed May 20 01:45:39 2015 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Tue, 19 May 2015 21:45:39 -0400 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: <555AEDE6.1010902@oracle.com> References: <5558FFE8.4090404@oracle.com> <555AEDE6.1010902@oracle.com> Message-ID: Slight tangent - is biased locking even a worthwhile feature on modern CPUs where non-contended (or even better, cache hitting) CAS is fairly cheap? sent from my phone On May 19, 2015 4:02 AM, "David Holmes" wrote: > On 19/05/2015 2:24 AM, Christopher Berner wrote: > >> Don't think it got truncated, but I only included the first line of log >> about the safepoint statistics. Turning off biased locks seem to have >> fixed those pauses, thanks! >> >> Just to help me understand, what circumstances would cause biased >> locking to induce 28sec pauses? Is that because a thread was holding the >> lock for 28sec? >> > > No. Based on the stats you showed the actual revocation is only a fraction > of the time spent. It is taking a long time to get the system to a > safepoint and then a reasonable amount of time is also being spent on > safepoint cleanup tasks. > > Do you have hundreds of active threads? 
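[Editor's note: to make the numbers being discussed concrete, here is a small illustrative parser for the -XX:+PrintSafepointStatistics line quoted above, reassembled without the quote markers. The regex and variable names are my own; only the column order comes from the log header: [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count, times in milliseconds.]

```python
import re

# The RevokeBias line from Christopher's log, quote markers removed:
line = "2528.893: RevokeBias [ 872 0 6 ] [ 0 12826 13142 6476 8177 ] 0"

m = re.match(
    r"(?P<stamp>[\d.]+):\s+(?P<vmop>\w+)\s+"
    r"\[\s*(?P<threads>[\d\s]+?)\s*\]\s+"
    r"\[\s*(?P<times>[\d\s]+?)\s*\]\s+(?P<traps>\d+)",
    line,
)
total, initially_running, wait_to_block = map(int, m.group("threads").split())
spin, block, sync, cleanup, vmop_ms = map(int, m.group("times").split())

# David's point in numbers: bringing 872 threads to the safepoint (sync,
# 13142 ms) plus safepoint cleanup (6476 ms) exceed the bias revocation
# operation itself (8177 ms).
assert sync + cleanup > vmop_ms
```

Reading the columns this way shows why disabling biased locking helped: most of the 28-second pause is not the revocation but reaching and cleaning up the safepoint it forces.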
> > David > ----- > > On Sun, May 17, 2015 at 1:54 PM, David Holmes > > wrote: >> >> Hi Christopher, >> >> >> On 16/05/2015 10:38 AM, Christopher Berner wrote: >> >> I work on the Presto project, which is a distributed SQL engine, >> and >> intermittently (roughly once an hour) I see very long >> application stopped >> times (as reported by >> -XX:+PrintGCApplicationStoppedTime). I enabled safepoint >> statistics, and >> see that the pause seems to be coming from a RevokeBias safepoint. >> >> Any suggestions as to how I can debug this? I already tried adding >> -XX:+PerfDisableSharedMem, in case this was related to >> https://bugs.openjdk.java.net/browse/JDK-8076103 >> >> See below for safepoint statistics: >> >> vmop [threads: total >> initially_running >> wait_to_block] [time: spin block sync cleanup vmop] >> page_trap_count >> >> 2528.893: RevokeBias [ 872 0 >> 6 ] [ 0 12826 13142 6476 8177 ] 0 >> >> >> Your email seems to be truncated? >> >> But just to be sure, does the application run okay with >> biased-locking disabled? >> >> David >> >> >> From christian.thalinger at oracle.com Wed May 20 01:54:37 2015 From: christian.thalinger at oracle.com (Christian Thalinger) Date: Tue, 19 May 2015 18:54:37 -0700 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555B6BC3.1020809@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> Message-ID: > On May 19, 2015, at 9:58 AM, Vladimir Ivanov wrote: > > Thanks for the review, Serguei. > > Updated webrev in place: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 Shouldn't there be some GC code in InstanceKlass that can be removed now? 
+ private transient Object[] resolved_references; We should follow Java naming conventions and use 'resolvedReferences'. > > Switched to ConstantPool::resolved_references() as you suggested. > > Regarding declaring the field in vmStructs.cpp, it is not needed since the field is located in Java mirror and not in InstanceKlass. > > Best regards, > Vladimir Ivanov > > On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >> Hi Vladimir, >> >> It looks good in general. >> Some comments are below. >> >> || *src/share/vm/oops/cpCache.cpp* >> >> @@ -281,11 +281,11 @@ >> // Competing writers must acquire exclusive access via a lock. >> // A losing writer waits on the lock until the winner writes f1 and leaves >> // the lock, so that when the losing writer returns, he can use the linked >> // cache entry. >> >> - objArrayHandle resolved_references = cpool->resolved_references(); >> + objArrayHandle resolved_references = cpool->pool_holder()->resolved_references(); >> // Use the resolved_references() lock for this cpCache entry. >> // resolved_references are created for all classes with Invokedynamic, MethodHandle >> // or MethodType constant pool cache entries. 
>> assert(resolved_references() != NULL, >> "a resolved_references array should have been created for this class"); >> >> ------------------------------------------------------------------------ >> >> @@ -410,20 +410,20 @@ >> >> oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle cpool) { >> if (!has_appendix()) >> return NULL; >> const int ref_index = f2_as_index() + _indy_resolved_references_appendix_offset; >> - objArrayOop resolved_references = cpool->resolved_references(); >> + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); >> return resolved_references->obj_at(ref_index); >> } >> >> >> oop ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle cpool) { >> if (!has_method_type()) >> return NULL; >> const int ref_index = f2_as_index() + _indy_resolved_references_method_type_offset; >> - objArrayOop resolved_references = cpool->resolved_references(); >> + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); >> return resolved_references->obj_at(ref_index); >> } >> >> There is no need in the update above as the constant pool still has the >> function resolved_references(): >> +objArrayOop ConstantPool::resolved_references() const { >> + return pool_holder()->resolved_references(); >> +} >> >> The same is true for the files: >> src/share/vm/interpreter/interpreterRuntime.cpp >> src/share/vm/interpreter/bytecodeTracer.cpp >> || src/share/vm/ci/ciEnv.cpp >> >> >> || src/share/vm/runtime/vmStructs.cpp* >> >> *@@ -286,11 +286,10 @@ >> nonstatic_field(ConstantPool, _tags, >> Array*) \ >> nonstatic_field(ConstantPool, _cache, >> ConstantPoolCache*) \ >> nonstatic_field(ConstantPool, _pool_holder, >> InstanceKlass*) \ >> nonstatic_field(ConstantPool, _operands, >> Array*) \ >> nonstatic_field(ConstantPool, _length, >> int) \ >> - nonstatic_field(ConstantPool, _resolved_references, >> jobject) \ >> nonstatic_field(ConstantPool, _reference_map, >> Array*) \ >> 
nonstatic_field(ConstantPoolCache, _length, >> int) \ >> nonstatic_field(ConstantPoolCache, _constant_pool, >> ConstantPool*) \ >> nonstatic_field(InstanceKlass, _array_klasses, >> Klass*) \ >> nonstatic_field(InstanceKlass, _methods, >> Array*) \ >> * >> >> *I guess, we need to cover the same field in the InstanceKlass instead. >> >> Thanks, >> Serguei >> >> >> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>> Here's updated version: >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>> >>> Moved ConstantPool::_resolved_references to mirror class instance. >>> >>> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >>> by this change. >>> >>> I had to hard code Class::resolved_references offset since it is used >>> in template interpreter which is generated earlier than j.l.Class is >>> loaded during VM bootstrap. >>> >>> Testing: hotspot/test, vm testbase (in progress) >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>> Coleen, Chris, >>>> >>>> I'll proceed with moving ConstantPool::_resolved_references to j.l.Class >>>> instance then. >>>> >>>> Thanks for the feedback. >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>> >>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>> > >>>>>> wrote: >>>>>> >>>>>> >>>>>> Vladimir, >>>>>> >>>>>> I think that changing the format of the heap dump isn't a good idea >>>>>> either. >>>>>> >>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>> (sorry for really late response; just got enough time to return to >>>>>>> the bug) >>>>>> >>>>>> I'd forgotten about it! >>>>>>> >>>>>>> Coleen, Staffan, >>>>>>> >>>>>>> Thanks a lot for the feedback! >>>>>>> >>>>>>> After thinking about the fix more, I don't think that using reserved >>>>>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>>>>> thing to do. 
IMO the change causes too much work for the users (heap >>>>>>> dump analysis tools). >>>>>>> >>>>>>> It needs specification update and then heap dump analyzers should be >>>>>>> updated as well. >>>>>>> >>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>> >>>>>>> - artificial class static field in the dump ("" >>>>>>> + optional id to guarantee unique name); >>>>>>> >>>>>>> - add j.l.Class::_resolved_references field; >>>>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>>>> it for now. >>>>>> >>>>>> I really like this second approach, so much so that I had a prototype >>>>>> for moving resolved_references directly to the j.l.Class object about >>>>>> a year ago. I couldn't find any benefit other than consolidating oops >>>>>> so the GC would have less work to do. If the resolved_references are >>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>> there are other things that could go there so don't delete the >>>>>> _handles field yet). >>>>>> >>>>>> The change I had was relatively simple. The only annoying part was >>>>>> that getting to the resolved references has to be in macroAssembler >>>>>> and do: >>>>>> >>>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>>> rather than >>>>>> method->cpCache->constants->resolved_references->jmethod indirection >>>>>> >>>>>> I think it only affects the interpreter so the extra indirection >>>>>> wouldn't affect performance, so don't duplicate it! You don't want to >>>>>> increase space used by j.l.C without taking it out somewhere else! >>>>> >>>>> I like this approach. Can we do this? >>>>> >>>>>> >>>>>>> >>>>>>> What do you think about that? >>>>>> >>>>>> Is this bug worth doing this? I don't know but I'd really like it. 
>>>>>> >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>> This looks like a good approach. However, there are a couple of more >>>>>>>> places that need to be updated. >>>>>>>> >>>>>>>> The hprof binary format is described in >>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>>>> in the same directory. >>>>>>>> >>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>>>> it needs to be possible to find the resolved_references array via the >>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>> looked. >>>>>>>> >>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>> binary dumper in >>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>> >>>>>>>> >>>>>>>> which also needs to write this reference. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> /Staffan >>>>>>>> >>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>> >>>>>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>>>>> classes which have resolved references. >>>>>>>>> >>>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>> >>>>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>>>> It requires an update in jhat to correctly display new info.
>>>>>>>>> >>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>> static field [1], but storing VM internal >>>>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>>>> confusing. >>>>>>>>> >>>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>>> linked in Nashorn heap dump). >>>>>>>>> >>>>>>>>> Thanks! >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>>>> >> From stefan.karlsson at oracle.com Wed May 20 06:15:53 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 20 May 2015 08:15:53 +0200 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> Message-ID: <555C2699.9030808@oracle.com> On 2015-05-20 03:54, Christian Thalinger wrote: >> On May 19, 2015, at 9:58 AM, Vladimir Ivanov wrote: >> >> Thanks for the review, Serguei. >> >> Updated webrev in place: >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 > Shouldn't there be some GC code in InstanceKlass that can be removed now? Yes, this is a nice patch from the GC's perspective, since it removes some of the work that we need to perform during the root processing. Unless I'm mistaken, you removed the only calls to ClassLoaderData::add_handle, so I think you should remove the handles block in ClassLoaderData. Thanks, StefanK > > + private transient Object[] resolved_references; > > We should follow Java naming conventions and use "resolvedReferences". > >> Switched to ConstantPool::resolved_references() as you suggested.
>> >> Regarding declaring the field in vmStructs.cpp, it is not needed since the field is located in Java mirror and not in InstanceKlass. >> >> Best regards, >> Vladimir Ivanov >> >> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>> Hi Vladimir, >>> >>> It looks good in general. >>> Some comments are below. >>> >>> || *src/share/vm/oops/cpCache.cpp* >>> >>> @@ -281,11 +281,11 @@ >>> // Competing writers must acquire exclusive access via a lock. >>> // A losing writer waits on the lock until the winner writes f1 and leaves >>> // the lock, so that when the losing writer returns, he can use the linked >>> // cache entry. >>> >>> - objArrayHandle resolved_references = cpool->resolved_references(); >>> + objArrayHandle resolved_references = cpool->pool_holder()->resolved_references(); >>> // Use the resolved_references() lock for this cpCache entry. >>> // resolved_references are created for all classes with Invokedynamic, MethodHandle >>> // or MethodType constant pool cache entries. 
>>> assert(resolved_references() != NULL, >>> "a resolved_references array should have been created for this class"); >>> >>> ------------------------------------------------------------------------ >>> >>> @@ -410,20 +410,20 @@ >>> >>> oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle cpool) { >>> if (!has_appendix()) >>> return NULL; >>> const int ref_index = f2_as_index() + _indy_resolved_references_appendix_offset; >>> - objArrayOop resolved_references = cpool->resolved_references(); >>> + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); >>> return resolved_references->obj_at(ref_index); >>> } >>> >>> >>> oop ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle cpool) { >>> if (!has_method_type()) >>> return NULL; >>> const int ref_index = f2_as_index() + _indy_resolved_references_method_type_offset; >>> - objArrayOop resolved_references = cpool->resolved_references(); >>> + objArrayOop resolved_references = cpool->pool_holder()->resolved_references(); >>> return resolved_references->obj_at(ref_index); >>> } >>> >>> There is no need in the update above as the constant pool still has the >>> function resolved_references(): >>> +objArrayOop ConstantPool::resolved_references() const { >>> + return pool_holder()->resolved_references(); >>> +} >>> >>> The same is true for the files: >>> src/share/vm/interpreter/interpreterRuntime.cpp >>> src/share/vm/interpreter/bytecodeTracer.cpp >>> || src/share/vm/ci/ciEnv.cpp >>> >>> >>> || src/share/vm/runtime/vmStructs.cpp* >>> >>> *@@ -286,11 +286,10 @@ >>> nonstatic_field(ConstantPool, _tags, >>> Array*) \ >>> nonstatic_field(ConstantPool, _cache, >>> ConstantPoolCache*) \ >>> nonstatic_field(ConstantPool, _pool_holder, >>> InstanceKlass*) \ >>> nonstatic_field(ConstantPool, _operands, >>> Array*) \ >>> nonstatic_field(ConstantPool, _length, >>> int) \ >>> - nonstatic_field(ConstantPool, _resolved_references, >>> jobject) \ >>> nonstatic_field(ConstantPool, 
_reference_map, >>> Array*) \ >>> nonstatic_field(ConstantPoolCache, _length, >>> int) \ >>> nonstatic_field(ConstantPoolCache, _constant_pool, >>> ConstantPool*) \ >>> nonstatic_field(InstanceKlass, _array_klasses, >>> Klass*) \ >>> nonstatic_field(InstanceKlass, _methods, >>> Array*) \ >>> * >>> >>> *I guess, we need to cover the same field in the InstanceKlass instead. >>> >>> Thanks, >>> Serguei >>> >>> >>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>> Here's updated version: >>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>> >>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>> >>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >>>> by this change. >>>> >>>> I had to hard code Class::resolved_references offset since it is used >>>> in template interpreter which is generated earlier than j.l.Class is >>>> loaded during VM bootstrap. >>>> >>>> Testing: hotspot/test, vm testbase (in progress) >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>> Coleen, Chris, >>>>> >>>>> I'll proceed with moving ConstantPool::_resolved_references to j.l.Class >>>>> instance then. >>>>> >>>>> Thanks for the feedback. >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>> > >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Vladimir, >>>>>>> >>>>>>> I think that changing the format of the heap dump isn't a good idea >>>>>>> either. >>>>>>> >>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>> (sorry for really late response; just got enough time to return to >>>>>>>> the bug) >>>>>>> I'd forgotten about it! >>>>>>>> Coleen, Staffan, >>>>>>>> >>>>>>>> Thanks a lot for the feedback! 
>>>>>>>> >>>>>>>> After thinking about the fix more, I don't think that using reserved >>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>>>>>> thing to do. IMO the change causes too much work for the users (heap >>>>>>>> dump analysis tools). >>>>>>>> >>>>>>>> It needs specification update and then heap dump analyzers should be >>>>>>>> updated as well. >>>>>>>> >>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>> >>>>>>>> - artificial class static field in the dump ("" >>>>>>>> + optional id to guarantee unique name); >>>>>>>> >>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>>>>> it for now. >>>>>>> I really like this second approach, so much so that I had a prototype >>>>>>> for moving resolved_references directly to the j.l.Class object about >>>>>>> a year ago. I couldn't find any benefit other than consolidating oops >>>>>>> so the GC would have less work to do. If the resolved_references are >>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>> there are other things that could go there so don't delete the >>>>>>> _handles field yet). >>>>>>> >>>>>>> The change I had was relatively simple. The only annoying part was >>>>>>> that getting to the resolved references has to be in macroAssembler >>>>>>> and do: >>>>>>> >>>>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>>>> rather than >>>>>>> method->cpCache->constants->resolved_references->jmethod indirection >>>>>>> >>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>> wouldn't affect performance, so don't duplicate it! You don't want to >>>>>>> increase space used by j.l.C without taking it out somewhere else! >>>>>> I like this approach. 
Can we do this? >>>>>> >>>>>>>> What do you think about that? >>>>>>> Is this bug worth doing this? I don't know but I'd really like it. >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>> This looks like a good approach. However, there are a couple of more >>>>>>>>> places that need to be updated. >>>>>>>>> >>>>>>>>> The hprof binary format is described in >>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>>>>> in the same directory. >>>>>>>>> >>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>>>>> it needs to be possible to find the resolved_references array via the >>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>>> looked. >>>>>>>>> >>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>> binary dumper in >>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>> >>>>>>>>> >>>>>>>>> which also needs to write this reference. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> /Staffan >>>>>>>>> >>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>> > >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>> >>>>>>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>>>>>> classes which have resolved references. >>>>>>>>>> >>>>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>> >>>>>>>>>> I've decided to use reserved slot in HPROF class header format.
>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>> >>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>>>>> confusing. >>>>>>>>>> >>>>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>> >>>>>>>>>> Thanks! >>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> Vladimir Ivanov >>>>>>>>>> >>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static From filipp.zhinkin at gmail.com Wed May 20 06:47:28 2015 From: filipp.zhinkin at gmail.com (Filipp Zhinkin) Date: Wed, 20 May 2015 09:47:28 +0300 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <555B8B18.4020900@redhat.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> <555B8B18.4020900@redhat.com> Message-ID: That's because "isAArch64" has to be added to TestMutuallyExclusivePlatformPredicates test: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/e8b95332ff4c/test/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java#l48 Regards, Filipp. On Tue, May 19, 2015 at 10:12 PM, Andrew Haley wrote: > On 05/19/2015 07:08 PM, Roland Westrelin wrote: >> Any idea what's going on? > > No. :-) > > I suppose it's possible that I added isAArch64 to a Linux-only file. > I'll check tomorrow. > > Thanks, > Andrew.
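The mutual-exclusivity check Filipp points to can be illustrated with a small standalone sketch. The class name and predicate bodies below are invented for illustration (the real jtreg test enumerates the predicates of the testlibrary's Platform class reflectively); the point is simply that at most one architecture predicate may report true on any machine, and a predicate the test does not know about goes unchecked, which is why a newly added isAArch64 must be registered there:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative sketch only: architecture predicates must be mutually
// exclusive, i.e. at most one of them may return true on any machine.
// The predicate bodies are simplified stand-ins, not the real
// testlibrary Platform implementation.
public class ExclusivePredicatesSketch {
    static String arch() { return System.getProperty("os.arch", ""); }

    static boolean isX64()     { return arch().equals("amd64") || arch().equals("x86_64"); }
    static boolean isX86()     { return arch().equals("i386")  || arch().equals("x86"); }
    static boolean isAArch64() { return arch().equals("aarch64"); }

    public static void main(String[] args) {
        // A predicate missing from this list is silently unchecked -- which
        // is why adding isAArch64 to Platform also means adding it here.
        List<BooleanSupplier> predicates = Arrays.asList(
                ExclusivePredicatesSketch::isX64,
                ExclusivePredicatesSketch::isX86,
                ExclusivePredicatesSketch::isAArch64);
        long trueCount = 0;
        for (BooleanSupplier p : predicates) {
            if (p.getAsBoolean()) {
                trueCount++;
            }
        }
        if (trueCount > 1) {
            throw new AssertionError("predicates are not mutually exclusive");
        }
        System.out.println("predicates evaluated: " + predicates.size());
    }
}
```

Running the sketch on any machine prints "predicates evaluated: 3" and fails only if two predicates claim the same architecture, which mirrors what the real test guards against.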
> From goetz.lindenmaier at sap.com Wed May 20 07:15:59 2015 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 20 May 2015 07:15:59 +0000 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> <555B8B18.4020900@redhat.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CFE284B@DEWDFEMB12A.global.corp.sap> Hi, I missed the initial discussion on this change, but I found before that the check for aarch is missing in test/test_env.sh. This is used by runtime/StackGuardPages/testme.sh and runtime/jsig/Test8017498.sh. Fixing this here would eventually fit to the basic intention of this change. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Filipp Zhinkin Sent: Mittwoch, 20. Mai 2015 08:47 To: Andrew Haley Cc: HotSpot Developers Subject: Re: RFR: 8080600: AARCH64: testlibrary does not support AArch64 That's because "isAArch64" has to be added to TestMutuallyExclusivePlatformPredicates test: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/e8b95332ff4c/test/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java#l48 Regards, Filipp. On Tue, May 19, 2015 at 10:12 PM, Andrew Haley wrote: > On 05/19/2015 07:08 PM, Roland Westrelin wrote: >> Any idea what's going on? > > No. :-) > > I suppose it's possible that I added isAArch64 to a Linux-only file. > I'll check tomorrow. > > Thanks, > Andrew.
> From david.holmes at oracle.com Wed May 20 07:26:07 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 20 May 2015 17:26:07 +1000 Subject: Very long safepoint pauses for RevokeBias In-Reply-To: References: <5558FFE8.4090404@oracle.com> <555AEDE6.1010902@oracle.com> Message-ID: <555C370F.2090500@oracle.com> On 20/05/2015 11:29 AM, Christopher Berner wrote: > Yep, we often have a hundred or so threads active. If I read the output > correctly, it was only waiting on 6 threads which took 13secs and then > the revocation took 8secs. Is that normal to take even 8secs? Our GC and > other safe point pauses are typically in the hundreds of milliseconds. 8 seconds seems an extreme amount of time, but it suggests to me that the VMThread couldn't get enough CPU cycles. How many active threads and how many processors on the system overall? 13 seconds to reach a safepoint is also extreme. It is very hard to try and remotely diagnose these things. You really need much more fine-grained information about the actual VM operation as it executes. :( David > Thanks for all the help! > Christopher > > Sent from my phone > > On May 19, 2015 1:01 AM, "David Holmes" > wrote: > > On 19/05/2015 2:24 AM, Christopher Berner wrote: > > Don't think it got truncated, but I only included the first line > of log > about the safepoint statistics. Turning off biased locks seem to > have > fixed those pauses, thanks! > > Just to help me understand, what circumstances would cause biased > locking to induce 28sec pauses? Is that because a thread was > holding the > lock for 28sec? > > > No. Based on the stats you showed the actual revocation is only a > fraction of the time spent. It is taking a long time to get the > system to a safepoint and then a reasonable amount of time is also > being spent on safepoint cleanup tasks. > > Do you have hundreds of active threads?
> > David > ----- > > On Sun, May 17, 2015 at 1:54 PM, David Holmes > > >> wrote: > > Hi Christopher, > > > On 16/05/2015 10:38 AM, Christopher Berner wrote: > > I work on the Presto project, which is a distributed > SQL engine, and > intermittently (roughly once an hour) I see very long > application stopped > times (as reported by > -XX:+PrintGCApplicationStoppedTime). I enabled > safepoint > statistics, and > see that the pause seems to be coming from a RevokeBias > safepoint. > > Any suggestions as to how I can debug this? I already > tried adding > -XX:+PerfDisableSharedMem, in case this was related to > https://bugs.openjdk.java.net/browse/JDK-8076103 > > See below for safepoint statistics: > > vmop [threads: total > initially_running > wait_to_block] [time: spin block sync cleanup vmop] > page_trap_count > > 2528.893: RevokeBias [ 872 > 0 > 6 ] [ 0 12826 13142 6476 8177 ] 0 > > > Your email seems to be truncated? > > But just to be sure, does the application run okay with > biased-locking disabled? > > David > > From david.holmes at oracle.com Wed May 20 08:08:31 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 20 May 2015 18:08:31 +1000 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CFE284B@DEWDFEMB12A.global.corp.sap> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> <555B8B18.4020900@redhat.com> <4295855A5C1DE049A61835A1887419CC2CFE284B@DEWDFEMB12A.global.corp.sap> Message-ID: <555C40FF.2000206@oracle.com> Hi Goetz, On 20/05/2015 5:15 PM, Lindenmaier, Goetz wrote: > Hi, > > I missed the initial discussion on this change, but I found before that the check > for aarch is missing in test/test_env.sh. 
This is being fixed in: https://bugs.openjdk.java.net/browse/JDK-8078834 which is not public unfortunately but the open webrev is here: http://cr.openjdk.java.net/~skovalev/8078834/webrev.00/test/test_env.sh.cdiff.html David ---- > This is used by runtime/StackGuardPages/testme.sh and runtime/jsig/Test8017498.sh. > > Fixing this here would eventually fit to the basic intention of this change. > > Best regards, > Goetz. > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Filipp Zhinkin > Sent: Mittwoch, 20. Mai 2015 08:47 > To: Andrew Haley > Cc: HotSpot Developers > Subject: Re: RFR: 8080600: AARCH64: testlibrary does not support AArch64 > > That's because "isAArch64" has to be added to > TestMutuallyExclusivePlatformPredicates test: > > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/e8b95332ff4c/test/testlibrary_tests/TestMutuallyExclusivePlatformPredicates.java#l48 > > Regards, > Filipp. > > On Tue, May 19, 2015 at 10:12 PM, Andrew Haley wrote: >> On 05/19/2015 07:08 PM, Roland Westrelin wrote: >>> Any idea what's going on? >> >> No. :-) >> >> I suppose it's possible that I added isAArch64 to a Linux-only file. >> I'll check tomorrow. >> >> Thanks, >> Andrew. >> From bengt.rutisson at oracle.com Wed May 20 08:15:33 2015 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Wed, 20 May 2015 10:15:33 +0200 Subject: RFR: JDK-8080627: JavaThread::satb_mark_queue_offset() is too big for an ARM ldrsb instruction Message-ID: <555C42A5.1050307@oracle.com> Hi everyone, This is a fix for the C1 generated G1 write barriers. It is a bit unclear if this is compiler or GC code, so I'm using the broader mailing list for this review. http://cr.openjdk.java.net/~brutisso/8080627/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8080627 The problem is that on ARM the T_BYTE type will boil down to using the ldrsb instruction, which has a limitation on the offset it can load from.
It can only load from offsets -256 to 256. But in the G1 pre barrier we want to load the _satb_mark_queue field in JavaThread, which is on offset 760. Changing the type from T_BYTE to T_BOOLEAN will use the unsigned instruction ldrb instead, which can handle offsets up to 4096. Ideally we would have a T_UBYTE type to use unsigned instructions for this load, but that does not exist. On the other platforms (x86 and Sparc) we treat T_BYTE and T_BOOLEAN the same, it is only on ARM that we have the distinction between these two types. I assume that is to get the sign extension for free when we use T_BYTE type. The fact that we treat T_BYTE and T_BOOLEAN the same on the other platforms makes it safe to do this change. I got some great help with this change from Dean Long. Thanks, Dean! I tried a couple of different solutions. Moving the _satb_mark_queue field earlier in JavaThread did not help since the Thread superclass already has enough members to exceed the 256 limit for offsets. It also didn't seem like a stable solution. Loading the field into a register would work, but keeping the load an immediate seems like a nicer solution. Changing to treat T_BYTE and T_BOOLEAN the same on ARM (similarly to x86 and Sparc) would mean having to do explicit sign extension, which seems like a more complex solution than just switching the type in this case. Bengt From stefan.johansson at oracle.com Wed May 20 08:58:02 2015 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Wed, 20 May 2015 10:58:02 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general Message-ID: <555C4C9A.4080508@oracle.com> Hi, Please review this change to generalize the oop iteration macros: https://bugs.openjdk.java.net/browse/JDK-8080746 Webrev: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ Summary: The macros for the oop_oop_iterate functions were defined for all *Klass types even though they were very similar.
This change extracts and generalizes the macros to klass.hpp and arrayKlass.hpp. For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we currently don't have a reverse implementation. Thanks, Stefan From aph at redhat.com Wed May 20 09:28:36 2015 From: aph at redhat.com (Andrew Haley) Date: Wed, 20 May 2015 10:28:36 +0100 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> <555B8B18.4020900@redhat.com> Message-ID: <555C53C4.7010709@redhat.com> On 05/20/2015 07:47 AM, Filipp Zhinkin wrote: > That's because "isAArch64" has to be added to > TestMutuallyExclusivePlatformPredicates test: Thank you, new webrev: http://cr.openjdk.java.net/~aph/8080600-3/ I looked at adding some comments beside these predicates to indicate that they must be added to TestMutuallyExclusivePlatformPredicates, but they are scattered over several files and I would have to add a great many comments to ensure they were noticed. Andrew. From stefan.karlsson at oracle.com Wed May 20 09:44:54 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 20 May 2015 11:44:54 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <555C4C9A.4080508@oracle.com> References: <555C4C9A.4080508@oracle.com> Message-ID: <555C5796.3070304@oracle.com> Hi Stefan, On 2015-05-20 10:58, Stefan Johansson wrote: > Hi, > > Please review this change to generalize the oop iteration macros: > https://bugs.openjdk.java.net/browse/JDK-8080746 > > Webrev: > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ This looks great! 
Here's a couple of cleanup/style comments: ======================================================================== http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html -------------------------------------------------------------------------------- Could you visually separate the DEFN and DECL defines so that it's more obvious that they serve different purposes. It might be worth adding a comment describing how the DEFN definitions are used. -------------------------------------------------------------------------------- + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ + int start, int end); Could you combine these two lines. -------------------------------------------------------------------------------- The indentation of the ending backslashes are inconsistent. -------------------------------------------------------------------------------- Pre-existing naming issue: + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* blk); +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure) { \ Could you change parameter name blk to closure? 
-------------------------------------------------------------------------------- ======================================================================== http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html -------------------------------------------------------------------------------- 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, OopClosureType, nv_suffix) \ 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, OopClosureType, nv_suffix) \ 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, OopClosureType, nv_suffix) \ 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, OopClosureType, nv_suffix) It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN -------------------------------------------------------------------------------- ======================================================================== http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html -------------------------------------------------------------------------------- 44 template 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { 46 return oop_oop_iterate_impl(obj, closure); 47 } 48 49 template 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { 51 return oop_oop_iterate_impl(obj, closure); 52 } I think you should add the inline keyword to these functions. -------------------------------------------------------------------------------- Thanks, StefanK > > Summary: > The macros for the oop_oop_iterate functions were defined for all > *Klass types even though they were very similar. This change extracts > and generalizes the macros to klass.hpp and arrayKlass.hpp. > > For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called > OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we currently > don't have a reverse implementation. 
> > Thanks, > Stefan From stefan.johansson at oracle.com Wed May 20 12:57:39 2015 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Wed, 20 May 2015 14:57:39 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <555C5796.3070304@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> Message-ID: <555C84C3.4090405@oracle.com> Thanks for looking at this Stefan, New webrevs: Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ On 2015-05-20 11:44, Stefan Karlsson wrote: > Hi Stefan, > > On 2015-05-20 10:58, Stefan Johansson wrote: >> Hi, >> >> Please review this change to generalize the oop iteration macros: >> https://bugs.openjdk.java.net/browse/JDK-8080746 >> >> Webrev: >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ > > This looks great! > > Here's a couple of cleanup/style comments: > > ======================================================================== > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html > > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html > > > -------------------------------------------------------------------------------- > > Could you visually separate the DEFN and DECL defines so that it's > more obvious that they serve different purposes. It might be worth > adding a comment describing how the DEFN definitions are used. > Fixed, added an extra new line and extended the comments. > -------------------------------------------------------------------------------- > > + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ > + int start, int end); > > Could you combine these two lines. > Fixed. > -------------------------------------------------------------------------------- > > The indentation of the ending backslashes are inconsistent. > Fixed. 
> -------------------------------------------------------------------------------- > > Pre-existing naming issue: > > + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* > blk); > > +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, > OopClosureType* closure) { \ > > Could you change parameter name blk to closure? > Fixed. > -------------------------------------------------------------------------------- > > > ======================================================================== > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html > > > -------------------------------------------------------------------------------- > > 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, > OopClosureType, nv_suffix) \ > 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, > OopClosureType, nv_suffix) \ > 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, > OopClosureType, nv_suffix) \ > 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, > OopClosureType, nv_suffix) > > It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. > -------------------------------------------------------------------------------- > > > ======================================================================== > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html > > > -------------------------------------------------------------------------------- > > 44 template > 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* > closure) { > 46 return oop_oop_iterate_impl(obj, closure); > 47 } > 48 > 49 template > 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, > OopClosureType* closure, MemRegion mr) { > 51 return oop_oop_iterate_impl(obj, closure); > 52 } > > I think you should add the inline keyword to these functions. 
Skipped this, does not seem to be needed and leaving it out matches how objArrayKlass.inline.hpp is handled. > > -------------------------------------------------------------------------------- > > Thanks for the review, StefanJ > Thanks, > StefanK > >> >> Summary: >> The macros for the oop_oop_iterate functions were defined for all >> *Klass types even though they were very similar. This change extracts >> and generalizes the macros to klass.hpp and arrayKlass.hpp. >> >> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called >> OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >> currently don't have a reverse implementation. >> >> Thanks, >> Stefan > From stefan.karlsson at oracle.com Wed May 20 13:00:29 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 20 May 2015 15:00:29 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <555C84C3.4090405@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> <555C84C3.4090405@oracle.com> Message-ID: <555C856D.4010208@oracle.com> On 2015-05-20 14:57, Stefan Johansson wrote: > Thanks for looking at this Stefan, > > New webrevs: > Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ > Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ Looks good. StefanK > > On 2015-05-20 11:44, Stefan Karlsson wrote: >> Hi Stefan, >> >> On 2015-05-20 10:58, Stefan Johansson wrote: >>> Hi, >>> >>> Please review this change to generalize the oop iteration macros: >>> https://bugs.openjdk.java.net/browse/JDK-8080746 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ >> >> This looks great! 
>> >> Here's a couple of cleanup/style comments: >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html >> >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html >> >> >> -------------------------------------------------------------------------------- >> >> Could you visually separate the DEFN and DECL defines so that it's >> more obvious that they serve different purposes. It might be worth >> adding a comment describing how the DEFN definitions are used. >> > Fixed, added an extra new line and extended the comments. >> -------------------------------------------------------------------------------- >> >> + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ >> + int start, int end); >> >> Could you combine these two lines. >> > Fixed. >> -------------------------------------------------------------------------------- >> >> The indentation of the ending backslashes are inconsistent. >> > Fixed. >> -------------------------------------------------------------------------------- >> >> Pre-existing naming issue: >> >> + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* >> blk); >> >> +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, >> OopClosureType* closure) { \ >> >> Could you change parameter name blk to closure? >> > Fixed. 
>> -------------------------------------------------------------------------------- >> >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html >> >> >> -------------------------------------------------------------------------------- >> >> 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, >> OopClosureType, nv_suffix) >> >> It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN > Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. >> -------------------------------------------------------------------------------- >> >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html >> >> >> -------------------------------------------------------------------------------- >> >> 44 template >> 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* >> closure) { >> 46 return oop_oop_iterate_impl(obj, closure); >> 47 } >> 48 >> 49 template >> 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, >> OopClosureType* closure, MemRegion mr) { >> 51 return oop_oop_iterate_impl(obj, closure); >> 52 } >> >> I think you should add the inline keyword to these functions. > Skipped this, does not seem to be needed and leaving it out matches > how objArrayKlass.inline.hpp is handled. 
>> >> -------------------------------------------------------------------------------- >> >> > Thanks for the review, > StefanJ >> Thanks, >> StefanK >> >>> >>> Summary: >>> The macros for the oop_oop_iterate functions were defined for all >>> *Klass types even though they were very similar. This change >>> extracts and generalizes the macros to klass.hpp and arrayKlass.hpp. >>> >>> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now >>> called OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >>> currently don't have a reverse implementation. >>> >>> Thanks, >>> Stefan >> > From vladimir.x.ivanov at oracle.com Wed May 20 13:06:44 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 20 May 2015 16:06:44 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> Message-ID: <555C86E4.2090109@oracle.com> Thanks for review, Chris. >> Updated webrev in place: >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 > > Shouldn?t there be some GC code in InstanceKlass that can be removed now? I don't think so. It was a handle, so no special GC logic was needed. And now it is a raw oop. > + private transient Object[] resolved_references; > > We should follow Java naming conventions and use ?resolvedReferences?. Good point. Will fix as you suggest. Best regards, Vladimir Ivanov > >> >> Switched to ConstantPool::resolved_references() as you suggested. >> >> Regarding declaring the field in vmStructs.cpp, it is not needed since >> the field is located in Java mirror and not in InstanceKlass. 
>> >> Best regards, >> Vladimir Ivanov >> >> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com >> wrote: >>> Hi Vladimir, >>> >>> It looks good in general. >>> Some comments are below. >>> >>> || *src/share/vm/oops/cpCache.cpp* >>> >>> @@ -281,11 +281,11 @@ >>> // Competing writers must acquire exclusive access via a lock. >>> // A losing writer waits on the lock until the winner writes f1 >>> and leaves >>> // the lock, so that when the losing writer returns, he can use >>> the linked >>> // cache entry. >>> >>> - objArrayHandle resolved_references = cpool->resolved_references(); >>> + objArrayHandle resolved_references = >>> cpool->pool_holder()->resolved_references(); >>> // Use the resolved_references() lock for this cpCache entry. >>> // resolved_references are created for all classes with >>> Invokedynamic, MethodHandle >>> // or MethodType constant pool cache entries. >>> assert(resolved_references() != NULL, >>> "a resolved_references array should have been created for >>> this class"); >>> >>> ------------------------------------------------------------------------ >>> >>> @@ -410,20 +410,20 @@ >>> >>> oop ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>> cpool) { >>> if (!has_appendix()) >>> return NULL; >>> const int ref_index = f2_as_index() + >>> _indy_resolved_references_appendix_offset; >>> - objArrayOop resolved_references = cpool->resolved_references(); >>> + objArrayOop resolved_references = >>> cpool->pool_holder()->resolved_references(); >>> return resolved_references->obj_at(ref_index); >>> } >>> >>> >>> oop >>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>> cpool) { >>> if (!has_method_type()) >>> return NULL; >>> const int ref_index = f2_as_index() + >>> _indy_resolved_references_method_type_offset; >>> - objArrayOop resolved_references = cpool->resolved_references(); >>> + objArrayOop resolved_references = >>> cpool->pool_holder()->resolved_references(); >>> return 
resolved_references->obj_at(ref_index); >>> } >>> >>> There is no need in the update above as the constant pool still has the >>> function resolved_references(): >>> +objArrayOop ConstantPool::resolved_references() const { >>> + return pool_holder()->resolved_references(); >>> +} >>> >>> The same is true for the files: >>> src/share/vm/interpreter/interpreterRuntime.cpp >>> src/share/vm/interpreter/bytecodeTracer.cpp >>> || src/share/vm/ci/ciEnv.cpp >>> >>> >>> || src/share/vm/runtime/vmStructs.cpp* >>> >>> *@@ -286,11 +286,10 @@ >>> nonstatic_field(ConstantPool, _tags, >>> Array*) \ >>> nonstatic_field(ConstantPool, _cache, >>> ConstantPoolCache*) \ >>> nonstatic_field(ConstantPool, _pool_holder, >>> InstanceKlass*) \ >>> nonstatic_field(ConstantPool, _operands, >>> Array*) \ >>> nonstatic_field(ConstantPool, _length, >>> int) \ >>> - nonstatic_field(ConstantPool, _resolved_references, >>> jobject) \ >>> nonstatic_field(ConstantPool, _reference_map, >>> Array*) \ >>> nonstatic_field(ConstantPoolCache, _length, >>> int) \ >>> nonstatic_field(ConstantPoolCache, _constant_pool, >>> ConstantPool*) \ >>> nonstatic_field(InstanceKlass, _array_klasses, >>> Klass*) \ >>> nonstatic_field(InstanceKlass, _methods, >>> Array*) \ >>> * >>> >>> *I guess, we need to cover the same field in the InstanceKlass instead. >>> >>> Thanks, >>> Serguei >>> >>> >>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>> Here's updated version: >>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>> >>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>> >>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >>>> by this change. >>>> >>>> I had to hard code Class::resolved_references offset since it is used >>>> in template interpreter which is generated earlier than j.l.Class is >>>> loaded during VM bootstrap. 
>>>> >>>> Testing: hotspot/test, vm testbase (in progress) >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>> Coleen, Chris, >>>>> >>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>> j.l.Class >>>>> instance then. >>>>> >>>>> Thanks for the feedback. >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>> >>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>> > >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Vladimir, >>>>>>> >>>>>>> I think that changing the format of the heap dump isn't a good idea >>>>>>> either. >>>>>>> >>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>> (sorry for really late response; just got enough time to return to >>>>>>>> the bug) >>>>>>> >>>>>>> I'd forgotten about it! >>>>>>>> >>>>>>>> Coleen, Staffan, >>>>>>>> >>>>>>>> Thanks a lot for the feedback! >>>>>>>> >>>>>>>> After thinking about the fix more, I don't think that using reserved >>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is the >>>>>>>> best >>>>>>>> thing to do. IMO the change causes too much work for the users (heap >>>>>>>> dump analysis tools). >>>>>>>> >>>>>>>> It needs specification update and then heap dump analyzers should be >>>>>>>> updated as well. >>>>>>>> >>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>> >>>>>>>> - artificial class static field in the dump ("" >>>>>>>> + optional id to guarantee unique name); >>>>>>>> >>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>>>>> it for now. >>>>>>> >>>>>>> I really like this second approach, so much so that I had a prototype >>>>>>> for moving resolved_references directly to the j.l.Class object about >>>>>>> a year ago. 
I couldn't find any benefit other than consolidating >>>>>>> oops >>>>>>> so the GC would have less work to do. If the resolved_references are >>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>> there are other things that could go there so don't delete the >>>>>>> _handles field yet). >>>>>>> >>>>>>> The change I had was relatively simple. The only annoying part was >>>>>>> that getting to the resolved references has to be in macroAssembler >>>>>>> and do: >>>>>>> >>>>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>>>> rather than >>>>>>> method->cpCache->constants->resolved_references->jmethod indirection >>>>>>> >>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>> want to >>>>>>> increase space used by j.l.C without taking it out somewhere else! >>>>>> >>>>>> I like this approach. Can we do this? >>>>>> >>>>>>> >>>>>>>> >>>>>>>> What do you think about that? >>>>>>> >>>>>>> Is this bug worth doing this? I don't know but I'd really like it. >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>> This looks like a good approach. However, there are a couple of >>>>>>>>> more >>>>>>>>> places that need to be updated. >>>>>>>>> >>>>>>>>> The hprof binary format is described in >>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>>>>> in the same directory. >>>>>>>>> >>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>>> via the >>>>>>>>> JVMTI heap walking API.
Perhaps that already works? - I haven't >>>>>>>>> looked. >>>>>>>>> >>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>> binary dumper in >>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>> >>>>>>>>> >>>>>>>>> which also needs to write this reference. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> /Staffan >>>>>>>>> >>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>> >>>>>>>> > >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>> >>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>> classes which have resolved references. >>>>>>>>>> >>>>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>> >>>>>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>> >>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>>>>> confusing. >>>>>>>>>> >>>>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>> >>>>>>>>>> Thanks!
>>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> Vladimir Ivanov >>>>>>>>>> >>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>>>>> >>> > From volker.simonis at gmail.com Wed May 20 13:43:09 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 20 May 2015 15:43:09 +0200 Subject: [8u60] request for approval: 8080190: PPC64: Fix wrong rotate instructions in the .ad file In-Reply-To: <555B7520.4070006@oracle.com> References: <555B68E5.2010304@oracle.com> <555B7520.4070006@oracle.com> Message-ID: Hi everybody, thanks a lot for your support. I've just successfully pushed the change to jdk8u/hs-dev/hotspot. Regards, Volker On Tue, May 19, 2015 at 7:38 PM, Alejandro E Murillo wrote: > > Volker, > if this is only affecting ppc then go ahead and push manually. > if not, let me know and I can grab the patch and push it via jprt > > cheers > Alejandro > > > On 5/19/2015 10:46 AM, Rob McKenna wrote: >> >> Hi Volker, >> >> You can push directly to the hotspot team repo. It will make 8u60 assuming >> you push before RDP 2. >> >> http://openjdk.java.net/projects/jdk8u/releases/8u60.html >> >> You should have committer access to that repo. Let us know if not. >> >> -Rob >> >> On 19/05/15 14:54, Volker Simonis wrote: >>> >>> Hi, >>> >>> could you please approve the downport of the following, ppc-only fix >>> to jdk8u/hs-dev/hotspot >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8080190 >>> Webrev: http://cr.openjdk.java.net/~simonis/webrevs/2015/8080190.8u/ >>> Review: >>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-May/thread.html#17920 >>> URL: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/4140f485ba27 >>> >>> The change cleanly applies to the jdk8u/hs-dev/hotspot repository. >>> >>> I would really like to bring this into 8u60 as this is a serious bug >>> which leads to incorrect computations. >>> Is this still possible by going through jdk8u/hs-dev/hotspot?
>>> >>> The other question is if I can push this ppc-only change myself to >>> jdk8u/hs-dev/hotspot or if I need a sponsor? >>> >>> Thank you and best regards, >>> Volker >>> > > -- > Alejandro > From roland.westrelin at oracle.com Wed May 20 14:59:36 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Wed, 20 May 2015 16:59:36 +0200 Subject: RFR: 8080600: AARCH64: testlibrary does not support AArch64 In-Reply-To: <555C53C4.7010709@redhat.com> References: <5559FDB1.3020808@redhat.com> <8F5EBAD7-7596-43E7-BD74-3552A51DBB3F@oracle.com> <555A00E0.8020309@redhat.com> <555A61D7.8000804@oracle.com> <555B5C33.4060601@redhat.com> <28A40172-DC4F-4EBA-929B-4D61E954A392@oracle.com> <555B8B18.4020900@redhat.com> <555C53C4.7010709@redhat.com> Message-ID: > http://cr.openjdk.java.net/~aph/8080600-3/ I'm pushing it. Roland. From vladimir.x.ivanov at oracle.com Wed May 20 15:09:04 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 20 May 2015 18:09:04 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555C2699.9030808@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> Message-ID: <555CA390.9000803@oracle.com> Stefan, Chris, Yes, you are right. ClassLoaderData::_handles isn't used anymore and can go away. Updated webrev: http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 Best regards, Vladimir Ivanov On 5/20/15 9:15 AM, Stefan Karlsson wrote: > On 2015-05-20 03:54, Christian Thalinger wrote: >>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>> wrote: >>> >>> Thanks for the review, Serguei.
>>> >>> Updated webrev in place: >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>> >> Shouldn't there be some GC code in InstanceKlass that can be removed now? > > Yes, this is a nice patch from the GCs perspective, since it removes > some of the work that we need to perform during the root processing. > > Unless I'm mistaken, you removed the only calls to > ClassLoaderData::add_handle, so I think you should remove the handles > block in ClassLoaderData. > > Thanks, > StefanK > >> >> + private transient Object[] resolved_references; >> >> We should follow Java naming conventions and use "resolvedReferences". >> >>> Switched to ConstantPool::resolved_references() as you suggested. >>> >>> Regarding declaring the field in vmStructs.cpp, it is not needed >>> since the field is located in Java mirror and not in InstanceKlass. >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>> Hi Vladimir, >>>> >>>> It looks good in general. >>>> Some comments are below. >>>> >>>> || *src/share/vm/oops/cpCache.cpp* >>>> >>>> @@ -281,11 +281,11 @@ >>>> // Competing writers must acquire exclusive access via a lock. >>>> // A losing writer waits on the lock until the winner writes f1 >>>> and leaves >>>> // the lock, so that when the losing writer returns, he can use >>>> the linked >>>> // cache entry. >>>> >>>> - objArrayHandle resolved_references = cpool->resolved_references(); >>>> + objArrayHandle resolved_references = >>>> cpool->pool_holder()->resolved_references(); >>>> // Use the resolved_references() lock for this cpCache entry. >>>> // resolved_references are created for all classes with >>>> Invokedynamic, MethodHandle >>>> // or MethodType constant pool cache entries.
>>>> assert(resolved_references() != NULL, >>>> "a resolved_references array should have been created for >>>> this class"); >>>> >>>> ------------------------------------------------------------------------ >>>> >>>> >>>> @@ -410,20 +410,20 @@ >>>> >>>> oop >>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>> cpool) { >>>> if (!has_appendix()) >>>> return NULL; >>>> const int ref_index = f2_as_index() + >>>> _indy_resolved_references_appendix_offset; >>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>> + objArrayOop resolved_references = >>>> cpool->pool_holder()->resolved_references(); >>>> return resolved_references->obj_at(ref_index); >>>> } >>>> >>>> >>>> oop >>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>> cpool) { >>>> if (!has_method_type()) >>>> return NULL; >>>> const int ref_index = f2_as_index() + >>>> _indy_resolved_references_method_type_offset; >>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>> + objArrayOop resolved_references = >>>> cpool->pool_holder()->resolved_references(); >>>> return resolved_references->obj_at(ref_index); >>>> } >>>> >>>> There is no need in the update above as the constant pool still has the >>>> function resolved_references(): >>>> +objArrayOop ConstantPool::resolved_references() const { >>>> + return pool_holder()->resolved_references(); >>>> +} >>>> >>>> The same is true for the files: >>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>> || src/share/vm/ci/ciEnv.cpp >>>> >>>> >>>> || src/share/vm/runtime/vmStructs.cpp* >>>> >>>> *@@ -286,11 +286,10 @@ >>>> nonstatic_field(ConstantPool, _tags, >>>> Array*) \ >>>> nonstatic_field(ConstantPool, _cache, >>>> ConstantPoolCache*) \ >>>> nonstatic_field(ConstantPool, _pool_holder, >>>> InstanceKlass*) \ >>>> nonstatic_field(ConstantPool, _operands, >>>> Array*) \ >>>> nonstatic_field(ConstantPool, _length, >>>> int) \ >>>> - 
nonstatic_field(ConstantPool, _resolved_references, >>>> jobject) \ >>>> nonstatic_field(ConstantPool, _reference_map, >>>> Array*) \ >>>> nonstatic_field(ConstantPoolCache, _length, >>>> int) \ >>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>> ConstantPool*) \ >>>> nonstatic_field(InstanceKlass, _array_klasses, >>>> Klass*) \ >>>> nonstatic_field(InstanceKlass, _methods, >>>> Array*) \ >>>> * >>>> >>>> *I guess, we need to cover the same field in the InstanceKlass instead. >>>> >>>> Thanks, >>>> Serguei >>>> >>>> >>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>> Here's updated version: >>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>> >>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>> >>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) caused >>>>> by this change. >>>>> >>>>> I had to hard code Class::resolved_references offset since it is used >>>>> in template interpreter which is generated earlier than j.l.Class is >>>>> loaded during VM bootstrap. >>>>> >>>>> Testing: hotspot/test, vm testbase (in progress) >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>> Coleen, Chris, >>>>>> >>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>> j.l.Class >>>>>> instance then. >>>>>> >>>>>> Thanks for the feedback. >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>> >>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> Vladimir, >>>>>>>> >>>>>>>> I think that changing the format of the heap dump isn't a good idea >>>>>>>> either. >>>>>>>> >>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>> (sorry for really late response; just got enough time to return to >>>>>>>>> the bug) >>>>>>>> I'd forgotten about it! >>>>>>>>> Coleen, Staffan, >>>>>>>>> >>>>>>>>> Thanks a lot for the feedback! 
>>>>>>>>> >>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>> reserved >>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>> the best >>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>> (heap >>>>>>>>> dump analysis tools). >>>>>>>>> >>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>> should be >>>>>>>>> updated as well. >>>>>>>>> >>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>> >>>>>>>>> - artificial class static field in the dump >>>>>>>>> ("" >>>>>>>>> + optional id to guarantee unique name); >>>>>>>>> >>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>> move >>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>> duplicate >>>>>>>>> it for now. >>>>>>>> I really like this second approach, so much so that I had a >>>>>>>> prototype >>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>> about >>>>>>>> a year ago. I couldn't find any benefit other than >>>>>>>> consolidating oops >>>>>>>> so the GC would have less work to do. If the >>>>>>>> resolved_references are >>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>>> there are other things that could go there so don't delete the >>>>>>>> _handles field yet). >>>>>>>> >>>>>>>> The change I had was relatively simple. 
The only annoying part was >>>>>>>> that getting to the resolved references has to be in macroAssembler >>>>>>>> and do: >>>>>>>> >>>>>>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>> rather than >>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>> indirection >>>>>>>> >>>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>> want to >>>>>>>> increase space used by j.l.C without taking it out somewhere else! >>>>>>> I like this approach. Can we do this? >>>>>>> >>>>>>>>> What do you think about that? >>>>>>>> Is this bug worth doing this? I don't know but I'd really like it. >>>>>>>> >>>>>>>> Coleen >>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>> of more >>>>>>>>>> places that need to be updated. >>>>>>>>>> >>>>>>>>>> The hprof binary format is described in >>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>> needs >>>>>>>>>> to be updated. It's also more formally specified in >>>>>>>>>> hprof_b_spec.h >>>>>>>>>> in the same directory. >>>>>>>>>> >>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>>>>>> need to be updated to show this field. Since this is a JVMTI >>>>>>>>>> agent >>>>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>>>> via the >>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>>>> looked. >>>>>>>>>> >>>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>>> binary dumper in >>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> which also needs to write this reference.
>>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> /Staffan >>>>>>>>>> >>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>> >>>>>>>>> > >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>> >>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>> classes which have resolved references. >>>>>>>>>>> >>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>> classes, >>>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>>> >>>>>>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>>> >>>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>> looks >>>>>>>>>>> confusing. >>>>>>>>>>> >>>>>>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>> >>>>>>>>>>> Thanks! 
>>>>>>>>>>> >>>>>>>>>>> Best regards, >>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>> >>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static > From stefan.karlsson at oracle.com Wed May 20 17:48:27 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 20 May 2015 19:48:27 +0200 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555CA390.9000803@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> <555CA390.9000803@oracle.com> Message-ID: <555CC8EB.90200@oracle.com> On 2015-05-20 17:09, Vladimir Ivanov wrote: > Stefan, Chris, > > Yes, you are right. ClassLoaderData::_handles isn't used anymore and > can go away. Great. > > Updated webrev: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.cpp.udiff.html // release the handles - if (_handles != NULL) { - JNIHandleBlock::release_block(_handles); - _handles = NULL; - } The comment should be removed. Could this include be removed? 63 #include "runtime/jniHandles.hpp" http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.hpp.frames.html 54 class JNIHandleBlock; The forward declaration could be removed. Thanks, StefanK > > Best regards, > Vladimir Ivanov > > On 5/20/15 9:15 AM, Stefan Karlsson wrote: >> On 2015-05-20 03:54, Christian Thalinger wrote: >>>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>>> wrote: >>>> >>>> Thanks for the review, Serguei. 
>>>> >>>> Updated webrev in place: >>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>> >>> Shouldn?t there be some GC code in InstanceKlass that can be removed >>> now? >> >> Yes, this is a nice patch from the GCs perspective, since it removes >> some of the work that we need to perform during the root processing. >> >> Unless I'm mistaken, you removed the only calls to >> ClassLoadeData::add_handle, so I think you should remove the handles >> block in ClassLoaderData. >> >> Thanks, >> StefanK >> >>> >>> + private transient Object[] resolved_references; >>> >>> We should follow Java naming conventions and use ?resolvedReferences?. >>> >>>> Switched to ConstantPool::resolved_references() as you suggested. >>>> >>>> Regarding declaring the field in vmStructs.cpp, it is not needed >>>> since the field is located in Java mirror and not in InstanceKlass. >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>>> Hi Vladimir, >>>>> >>>>> It looks good in general. >>>>> Some comments are below. >>>>> >>>>> || *src/share/vm/oops/cpCache.cpp* >>>>> >>>>> @@ -281,11 +281,11 @@ >>>>> // Competing writers must acquire exclusive access via a lock. >>>>> // A losing writer waits on the lock until the winner writes f1 >>>>> and leaves >>>>> // the lock, so that when the losing writer returns, he can use >>>>> the linked >>>>> // cache entry. >>>>> >>>>> - objArrayHandle resolved_references = cpool->resolved_references(); >>>>> + objArrayHandle resolved_references = >>>>> cpool->pool_holder()->resolved_references(); >>>>> // Use the resolved_references() lock for this cpCache entry. >>>>> // resolved_references are created for all classes with >>>>> Invokedynamic, MethodHandle >>>>> // or MethodType constant pool cache entries. 
>>>>> assert(resolved_references() != NULL, >>>>> "a resolved_references array should have been created for >>>>> this class"); >>>>> >>>>> ------------------------------------------------------------------------ >>>>> >>>>> >>>>> >>>>> @@ -410,20 +410,20 @@ >>>>> >>>>> oop >>>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>>> cpool) { >>>>> if (!has_appendix()) >>>>> return NULL; >>>>> const int ref_index = f2_as_index() + >>>>> _indy_resolved_references_appendix_offset; >>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>> + objArrayOop resolved_references = >>>>> cpool->pool_holder()->resolved_references(); >>>>> return resolved_references->obj_at(ref_index); >>>>> } >>>>> >>>>> >>>>> oop >>>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>>> cpool) { >>>>> if (!has_method_type()) >>>>> return NULL; >>>>> const int ref_index = f2_as_index() + >>>>> _indy_resolved_references_method_type_offset; >>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>> + objArrayOop resolved_references = >>>>> cpool->pool_holder()->resolved_references(); >>>>> return resolved_references->obj_at(ref_index); >>>>> } >>>>> >>>>> There is no need in the update above as the constant pool still >>>>> has the >>>>> function resolved_references(): >>>>> +objArrayOop ConstantPool::resolved_references() const { >>>>> + return pool_holder()->resolved_references(); >>>>> +} >>>>> >>>>> The same is true for the files: >>>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>>> || src/share/vm/ci/ciEnv.cpp >>>>> >>>>> >>>>> || src/share/vm/runtime/vmStructs.cpp* >>>>> >>>>> *@@ -286,11 +286,10 @@ >>>>> nonstatic_field(ConstantPool, _tags, >>>>> Array*) \ >>>>> nonstatic_field(ConstantPool, _cache, >>>>> ConstantPoolCache*) \ >>>>> nonstatic_field(ConstantPool, _pool_holder, >>>>> InstanceKlass*) \ >>>>> nonstatic_field(ConstantPool, _operands, >>>>> Array*) 
\ >>>>> nonstatic_field(ConstantPool, _length, >>>>> int) \ >>>>> - nonstatic_field(ConstantPool, _resolved_references, >>>>> jobject) \ >>>>> nonstatic_field(ConstantPool, _reference_map, >>>>> Array*) \ >>>>> nonstatic_field(ConstantPoolCache, _length, >>>>> int) \ >>>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>>> ConstantPool*) \ >>>>> nonstatic_field(InstanceKlass, _array_klasses, >>>>> Klass*) \ >>>>> nonstatic_field(InstanceKlass, _methods, >>>>> Array*) \ >>>>> * >>>>> >>>>> *I guess, we need to cover the same field in the InstanceKlass >>>>> instead. >>>>> >>>>> Thanks, >>>>> Serguei >>>>> >>>>> >>>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>>> Here's updated version: >>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>> >>>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>>> >>>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) >>>>>> caused >>>>>> by this change. >>>>>> >>>>>> I had to hard code Class::resolved_references offset since it is >>>>>> used >>>>>> in template interpreter which is generated earlier than j.l.Class is >>>>>> loaded during VM bootstrap. >>>>>> >>>>>> Testing: hotspot/test, vm testbase (in progress) >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>>> Coleen, Chris, >>>>>>> >>>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>>> j.l.Class >>>>>>> instance then. >>>>>>> >>>>>>> Thanks for the feedback. >>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>>> >>>>>>>> > >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> Vladimir, >>>>>>>>> >>>>>>>>> I think that changing the format of the heap dump isn't a good >>>>>>>>> idea >>>>>>>>> either. 
>>>>>>>>> >>>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>>> (sorry for really late response; just got enough time to >>>>>>>>>> return to >>>>>>>>>> the bug) >>>>>>>>> I'd forgotten about it! >>>>>>>>>> Coleen, Staffan, >>>>>>>>>> >>>>>>>>>> Thanks a lot for the feedback! >>>>>>>>>> >>>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>>> reserved >>>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>>> the best >>>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>>> (heap >>>>>>>>>> dump analysis tools). >>>>>>>>>> >>>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>>> should be >>>>>>>>>> updated as well. >>>>>>>>>> >>>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>>> >>>>>>>>>> - artificial class static field in the dump >>>>>>>>>> ("" >>>>>>>>>> + optional id to guarantee unique name); >>>>>>>>>> >>>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>>> move >>>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>>> duplicate >>>>>>>>>> it for now. >>>>>>>>> I really like this second approach, so much so that I had a >>>>>>>>> prototype >>>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>>> about >>>>>>>>> a year ago. I couldn't find any benefit other than >>>>>>>>> consolidating oops >>>>>>>>> so the GC would have less work to do. If the >>>>>>>>> resolved_references are >>>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>>>> there are other things that could go there so don't delete the >>>>>>>>> _handles field yet). >>>>>>>>> >>>>>>>>> The change I had was relatively simple. 
The only annoying >>>>>>>>> part was >>>>>>>>> that getting to the resolved references has to be in >>>>>>>>> macroAssembler >>>>>>>>> and do: >>>>>>>>> >>>>>>>>> go through >>>>>>>>> method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>>> rather than >>>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>>> indirection >>>>>>>>> >>>>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>>> want to >>>>>>>>> increase space used by j.l.C without taking it out somewhere >>>>>>>>> else! >>>>>>>> I like this approach. Can we do this? >>>>>>>> >>>>>>>>>> What do you think about that? >>>>>>>>> Is this bug worth doing this? I don't know but I'd really >>>>>>>>> like it. >>>>>>>>> >>>>>>>>> Coleen >>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> Vladimir Ivanov >>>>>>>>>> >>>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>>> of more >>>>>>>>>>> places that need to be updated. >>>>>>>>>>> >>>>>>>>>>> The hprof binary format is described in >>>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>>> needs >>>>>>>>>>> to be updated. It?s also more formally specified in >>>>>>>>>>> hprof_b_spec.h >>>>>>>>>>> in the same directory. >>>>>>>>>>> >>>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would >>>>>>>>>>> also >>>>>>>>>>> need to be updated to show this field. Since this is a JVMTI >>>>>>>>>>> agent >>>>>>>>>>> it needs to be possible to find the resolved_refrences array >>>>>>>>>>> via the >>>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven?t >>>>>>>>>>> looked. 
>>>>>>>>>>> >>>>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>>>> binary dumper in >>>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> which also needs to write this reference. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> /Staffan >>>>>>>>>>> >>>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>>> >>>>>>>>>> > >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>>> >>>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>>> classes which have resolved references. >>>>>>>>>>>> >>>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] >>>>>>>>>>>> holding >>>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>>> classes, >>>>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>>>> >>>>>>>>>>>> I've decided to use reserved slot in HPROF class header >>>>>>>>>>>> format. >>>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>>>> >>>>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>>> looks >>>>>>>>>>>> confusing. >>>>>>>>>>>> >>>>>>>>>>>> Testing: manual (verified that corresponding arrays are >>>>>>>>>>>> properly >>>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>>> >>>>>>>>>>>> Thanks! 
>>>>>>>>>>>> >>>>>>>>>>>> Best regards, >>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>> >>>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >> From vladimir.x.ivanov at oracle.com Wed May 20 17:58:13 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 20 May 2015 20:58:13 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555CC8EB.90200@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> <555CA390.9000803@oracle.com> <555CC8EB.90200@oracle.com> Message-ID: <555CCB35.8000305@oracle.com> Thanks for spotting that, Stefan. Updated webrev in place: http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 Best regards, Vladimir Ivanov On 5/20/15 8:48 PM, Stefan Karlsson wrote: > On 2015-05-20 17:09, Vladimir Ivanov wrote: >> Stefan, Chris, >> >> Yes, you are right. ClassLoaderData::_handles isn't used anymore and >> can go away. > > Great. >> >> Updated webrev: >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 > > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.cpp.udiff.html > > > // release the handles > - if (_handles != NULL) { > - JNIHandleBlock::release_block(_handles); > - _handles = NULL; > - } > > The comment should be removed. > > Could this include be removed? > > 63 #include "runtime/jniHandles.hpp" > > > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.hpp.frames.html > > > 54 class JNIHandleBlock; > > The forward declaration could be removed. 
> > Thanks, > StefanK >> >> Best regards, >> Vladimir Ivanov >> >> On 5/20/15 9:15 AM, Stefan Karlsson wrote: >>> On 2015-05-20 03:54, Christian Thalinger wrote: >>>>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>>>> wrote: >>>>> >>>>> Thanks for the review, Serguei. >>>>> >>>>> Updated webrev in place: >>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>> >>>> Shouldn?t there be some GC code in InstanceKlass that can be removed >>>> now? >>> >>> Yes, this is a nice patch from the GCs perspective, since it removes >>> some of the work that we need to perform during the root processing. >>> >>> Unless I'm mistaken, you removed the only calls to >>> ClassLoadeData::add_handle, so I think you should remove the handles >>> block in ClassLoaderData. >>> >>> Thanks, >>> StefanK >>> >>>> >>>> + private transient Object[] resolved_references; >>>> >>>> We should follow Java naming conventions and use ?resolvedReferences?. >>>> >>>>> Switched to ConstantPool::resolved_references() as you suggested. >>>>> >>>>> Regarding declaring the field in vmStructs.cpp, it is not needed >>>>> since the field is located in Java mirror and not in InstanceKlass. >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>>>> Hi Vladimir, >>>>>> >>>>>> It looks good in general. >>>>>> Some comments are below. >>>>>> >>>>>> || *src/share/vm/oops/cpCache.cpp* >>>>>> >>>>>> @@ -281,11 +281,11 @@ >>>>>> // Competing writers must acquire exclusive access via a lock. >>>>>> // A losing writer waits on the lock until the winner writes f1 >>>>>> and leaves >>>>>> // the lock, so that when the losing writer returns, he can use >>>>>> the linked >>>>>> // cache entry. >>>>>> >>>>>> - objArrayHandle resolved_references = cpool->resolved_references(); >>>>>> + objArrayHandle resolved_references = >>>>>> cpool->pool_holder()->resolved_references(); >>>>>> // Use the resolved_references() lock for this cpCache entry. 
>>>>>> // resolved_references are created for all classes with >>>>>> Invokedynamic, MethodHandle >>>>>> // or MethodType constant pool cache entries. >>>>>> assert(resolved_references() != NULL, >>>>>> "a resolved_references array should have been created for >>>>>> this class"); >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> >>>>>> >>>>>> >>>>>> @@ -410,20 +410,20 @@ >>>>>> >>>>>> oop >>>>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>>>> cpool) { >>>>>> if (!has_appendix()) >>>>>> return NULL; >>>>>> const int ref_index = f2_as_index() + >>>>>> _indy_resolved_references_appendix_offset; >>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>> + objArrayOop resolved_references = >>>>>> cpool->pool_holder()->resolved_references(); >>>>>> return resolved_references->obj_at(ref_index); >>>>>> } >>>>>> >>>>>> >>>>>> oop >>>>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>>>> cpool) { >>>>>> if (!has_method_type()) >>>>>> return NULL; >>>>>> const int ref_index = f2_as_index() + >>>>>> _indy_resolved_references_method_type_offset; >>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>> + objArrayOop resolved_references = >>>>>> cpool->pool_holder()->resolved_references(); >>>>>> return resolved_references->obj_at(ref_index); >>>>>> } >>>>>> >>>>>> There is no need in the update above as the constant pool still >>>>>> has the >>>>>> function resolved_references(): >>>>>> +objArrayOop ConstantPool::resolved_references() const { >>>>>> + return pool_holder()->resolved_references(); >>>>>> +} >>>>>> >>>>>> The same is true for the files: >>>>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>>>> || src/share/vm/ci/ciEnv.cpp >>>>>> >>>>>> >>>>>> || src/share/vm/runtime/vmStructs.cpp* >>>>>> >>>>>> *@@ -286,11 +286,10 @@ >>>>>> nonstatic_field(ConstantPool, _tags, >>>>>> Array*) \ 
>>>>>> nonstatic_field(ConstantPool, _cache, >>>>>> ConstantPoolCache*) \ >>>>>> nonstatic_field(ConstantPool, _pool_holder, >>>>>> InstanceKlass*) \ >>>>>> nonstatic_field(ConstantPool, _operands, >>>>>> Array*) \ >>>>>> nonstatic_field(ConstantPool, _length, >>>>>> int) \ >>>>>> - nonstatic_field(ConstantPool, _resolved_references, >>>>>> jobject) \ >>>>>> nonstatic_field(ConstantPool, _reference_map, >>>>>> Array*) \ >>>>>> nonstatic_field(ConstantPoolCache, _length, >>>>>> int) \ >>>>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>>>> ConstantPool*) \ >>>>>> nonstatic_field(InstanceKlass, _array_klasses, >>>>>> Klass*) \ >>>>>> nonstatic_field(InstanceKlass, _methods, >>>>>> Array*) \ >>>>>> * >>>>>> >>>>>> *I guess, we need to cover the same field in the InstanceKlass >>>>>> instead. >>>>>> >>>>>> Thanks, >>>>>> Serguei >>>>>> >>>>>> >>>>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>>>> Here's updated version: >>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>>> >>>>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>>>> >>>>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) >>>>>>> caused >>>>>>> by this change. >>>>>>> >>>>>>> I had to hard code Class::resolved_references offset since it is >>>>>>> used >>>>>>> in template interpreter which is generated earlier than j.l.Class is >>>>>>> loaded during VM bootstrap. >>>>>>> >>>>>>> Testing: hotspot/test, vm testbase (in progress) >>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>>>> Coleen, Chris, >>>>>>>> >>>>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>>>> j.l.Class >>>>>>>> instance then. >>>>>>>> >>>>>>>> Thanks for the feedback. 
>>>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>>>> >>>>>>>>> > >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Vladimir, >>>>>>>>>> >>>>>>>>>> I think that changing the format of the heap dump isn't a good >>>>>>>>>> idea >>>>>>>>>> either. >>>>>>>>>> >>>>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>>>> (sorry for really late response; just got enough time to >>>>>>>>>>> return to >>>>>>>>>>> the bug) >>>>>>>>>> I'd forgotten about it! >>>>>>>>>>> Coleen, Staffan, >>>>>>>>>>> >>>>>>>>>>> Thanks a lot for the feedback! >>>>>>>>>>> >>>>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>>>> reserved >>>>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>>>> the best >>>>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>>>> (heap >>>>>>>>>>> dump analysis tools). >>>>>>>>>>> >>>>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>>>> should be >>>>>>>>>>> updated as well. >>>>>>>>>>> >>>>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>>>> >>>>>>>>>>> - artificial class static field in the dump >>>>>>>>>>> ("" >>>>>>>>>>> + optional id to guarantee unique name); >>>>>>>>>>> >>>>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>>>> move >>>>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>>>> duplicate >>>>>>>>>>> it for now. >>>>>>>>>> I really like this second approach, so much so that I had a >>>>>>>>>> prototype >>>>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>>>> about >>>>>>>>>> a year ago. I couldn't find any benefit other than >>>>>>>>>> consolidating oops >>>>>>>>>> so the GC would have less work to do. 
If the >>>>>>>>>> resolved_references are >>>>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>>>>> there are other things that could go there so don't delete the >>>>>>>>>> _handles field yet). >>>>>>>>>> >>>>>>>>>> The change I had was relatively simple. The only annoying >>>>>>>>>> part was >>>>>>>>>> that getting to the resolved references has to be in >>>>>>>>>> macroAssembler >>>>>>>>>> and do: >>>>>>>>>> >>>>>>>>>> go through >>>>>>>>>> method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>>>> rather than >>>>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>>>> indirection >>>>>>>>>> >>>>>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>>>> want to >>>>>>>>>> increase space used by j.l.C without taking it out somewhere >>>>>>>>>> else! >>>>>>>>> I like this approach. Can we do this? >>>>>>>>> >>>>>>>>>>> What do you think about that? >>>>>>>>>> Is this bug worth doing this? I don't know but I'd really >>>>>>>>>> like it. >>>>>>>>>> >>>>>>>>>> Coleen >>>>>>>>>> >>>>>>>>>>> Best regards, >>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>> >>>>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>>>> of more >>>>>>>>>>>> places that need to be updated. >>>>>>>>>>>> >>>>>>>>>>>> The hprof binary format is described in >>>>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>>>> needs >>>>>>>>>>>> to be updated. It?s also more formally specified in >>>>>>>>>>>> hprof_b_spec.h >>>>>>>>>>>> in the same directory. >>>>>>>>>>>> >>>>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would >>>>>>>>>>>> also >>>>>>>>>>>> need to be updated to show this field. 
Since this is a JVMTI >>>>>>>>>>>> agent >>>>>>>>>>>> it needs to be possible to find the resolved_refrences array >>>>>>>>>>>> via the >>>>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven?t >>>>>>>>>>>> looked. >>>>>>>>>>>> >>>>>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>>>>> binary dumper in >>>>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> which also needs to write this reference. >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> /Staffan >>>>>>>>>>>> >>>>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>>>> >>>>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>>>> classes which have resolved references. >>>>>>>>>>>>> >>>>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] >>>>>>>>>>>>> holding >>>>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>>>> classes, >>>>>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>>>>> >>>>>>>>>>>>> I've decided to use reserved slot in HPROF class header >>>>>>>>>>>>> format. >>>>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>>>>> >>>>>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>>>> looks >>>>>>>>>>>>> confusing. >>>>>>>>>>>>> >>>>>>>>>>>>> Testing: manual (verified that corresponding arrays are >>>>>>>>>>>>> properly >>>>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks! 
>>>>>>>>>>>>> >>>>>>>>>>>>> Best regards, >>>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>>> >>>>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>> > From harold.seigel at oracle.com Wed May 20 19:38:51 2015 From: harold.seigel at oracle.com (harold seigel) Date: Wed, 20 May 2015 15:38:51 -0400 Subject: RFR JDK-8079466: JNI Specification Update and Clean-up In-Reply-To: <5559ABCF.7040009@oracle.com> References: <5559ABCF.7040009@oracle.com> Message-ID: <555CE2CB.6000501@oracle.com> Hi David, It looks like a lot of work! I have just a few small comments: 1. In functions.html, delete the 'a' before 'this' 944 reference. May be a NULL value, in which case a this function will 945 return NULL.

2. In function.html, perhaps some commas around 'for example' ? 1035 (e.g. JNI_ERR or JNI_EINVAL). The HotSpot JVM 1036 implementation for example uses the -XX:+MaxJNILocalCapacity flag 1037 (default: 65536).

3. In function.html, should the words "string length" be added to line 4339, like they are in line 4335? 4334

start: the index of the first unicode character in the string to 4335 copy. Must be greater than or equal to zero, and less than string length 4336 ("GetStringLength()").

4337 4338

len: the number of unicode characters to copy. Must be greater 4339 than or equal to zero, and "start + len" must be less than 4340 "GetStringLength()".

4. In function.html, what does "this number" refer to in line 4361? 4359

The len argument specifies the number of 4360 unicode characters. The resulting number modified UTF-8 encoding 4361 characters may be greater than this number. GetStringUTFLength() 4362 may be used to determine the maximum size of the required character buffer.

5. In function.html, line 4366, change "safetly" to "to safely" 4366 "memset()") before using this function, in order safetly perform 4367 strlen().

6. In jni-6.html, can the following be changed: 15

JNI has been enhanced in Java SE 6 with a few minor changes. The addition of 16 the GetObjectRefType function. Deprecated structures 17 JDK1_1InitArgs and JDK1_1AttachArgs have been removed. 18 And an increment in the JNI version number.

to 15

JNI has been enhanced in Java SE 6 with a few minor changes. The 16 GetObjectRefType function has been added. Deprecated structures 17 JDK1_1InitArgs and JDK1_1AttachArgs have been removed. 18 The JNI version number has also been incremented.

Thanks, Harold On 5/18/2015 5:07 AM, David Simms wrote: > Greetings, > > Posting this JNI Specification docs clean up for public review/comment... > > JDK Bug: https://bugs.openjdk.java.net/browse/JDK-8079466 > > Web review: http://cr.openjdk.java.net/~dsimms/8079466/rev0/ > > Original Document for HTML comparison: > http://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/jniTOC.html > > *** Summary of changes *** > > Wholly confined to documentation changes, no code modifications made. > Since there were a number of conflicts with previous doc change > review, all have incorporated all current patches for one push: > > ------------------------------------------------------------------------------------------ > > JDK-8051947 JNI spec for ExceptionDescribe contradicts hotspot behavior > - Added text explaining pending exception is cleared as a side effect > > JDK-4907359 JNI spec should describe functions more strictly > - Made the split between function definitions more obvious (hr) > - Added a general note on OOM to the beginning of chapter 4: > "Functions whose definition may both return NULL and throw an > exception on error, may choose only to return NULL to indicate an > error, but not throw any exception. For example, a JNI implementation > may consider an "out of memory" condition temporary, and may not wish > to throw an OutOfMemoryError since this would appear fatal (JDK API > java.lang.Error documentation: "indicates serious problems that a > reasonable application should not try to catch")." > - Every function needs "Parameters", "Returns" and "Throws" documented > (chapters 4 & 5). 
> -- Have documented parameters as must not be "null" when appropriate > (although there are cases where the HotSpot reference implementation > does not crash): > > JDK-7172129 Integration of the JNI spec updates for JDK 1.2 was > incomplete > - Previously reviewed > > JDK-8034923 JNI: static linking assertions specs are incomplete and > are in the wrong section of spec > - Previously reviewed > > JDK-6462398 jni spec should specify which characters are > unicode-escaped when mangling > - Previously reviewed > > JDK-6590839 JNI Spec should point out Java objects created in JNI > using AllocObject are not finalized > - Previously reviewed > > JDK-8039184 JNI Spec missing documentation on calling default methods > - Previously reviewed > > JDK-6616502 JNI specification should discuss multiple invocations of > DetachCurrentThread > -Previously reviewed > > JDK-6681965 The documentation for GetStringChars and GetStringUTFChars > is unclear > -Previously reviewed > ------------------------------------------------------------------------------------------ > > > Cheers > /David Simms From coleen.phillimore at oracle.com Wed May 20 21:27:36 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 20 May 2015 17:27:36 -0400 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555CCB35.8000305@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> <555CA390.9000803@oracle.com> <555CC8EB.90200@oracle.com> <555CCB35.8000305@oracle.com> Message-ID: <9858759F-F180-461F-B46B-02F5AFB68A47@oracle.com> I am on vacation and can't read this webrev from my iPhone. 
I assume that Stefan and Serguei's and others reviews are good. The CLD _handles field could be used to hold the mirrors in a later change and might be needed for jigsaw so might have to come back then. Thank you for doing this change!! Coleen Sent from my iPhone > On May 20, 2015, at 1:58 PM, Vladimir Ivanov wrote: > > Thanks for spotting that, Stefan. > > Updated webrev in place: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 > > Best regards, > Vladimir Ivanov > >> On 5/20/15 8:48 PM, Stefan Karlsson wrote: >>> On 2015-05-20 17:09, Vladimir Ivanov wrote: >>> Stefan, Chris, >>> >>> Yes, you are right. ClassLoaderData::_handles isn't used anymore and >>> can go away. >> >> Great. >>> >>> Updated webrev: >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 >> >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.cpp.udiff.html >> >> >> // release the handles >> - if (_handles != NULL) { >> - JNIHandleBlock::release_block(_handles); >> - _handles = NULL; >> - } >> >> The comment should be removed. >> >> Could this include be removed? >> >> 63 #include "runtime/jniHandles.hpp" >> >> >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.hpp.frames.html >> >> >> 54 class JNIHandleBlock; >> >> The forward declaration could be removed. >> >> Thanks, >> StefanK >>> >>> Best regards, >>> Vladimir Ivanov >>> >>>> On 5/20/15 9:15 AM, Stefan Karlsson wrote: >>>> On 2015-05-20 03:54, Christian Thalinger wrote: >>>>>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>>>>> wrote: >>>>>> >>>>>> Thanks for the review, Serguei. >>>>>> >>>>>> Updated webrev in place: >>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>> >>>>> Shouldn?t there be some GC code in InstanceKlass that can be removed >>>>> now? >>>> >>>> Yes, this is a nice patch from the GCs perspective, since it removes >>>> some of the work that we need to perform during the root processing. 
>>>> >>>> Unless I'm mistaken, you removed the only calls to >>>> ClassLoaderData::add_handle, so I think you should remove the handles >>>> block in ClassLoaderData. >>>> >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> + private transient Object[] resolved_references; >>>>> >>>>> We should follow Java naming conventions and use "resolvedReferences". >>>>>> Switched to ConstantPool::resolved_references() as you suggested. >>>>>> >>>>>> Regarding declaring the field in vmStructs.cpp, it is not needed >>>>>> since the field is located in Java mirror and not in InstanceKlass. >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>>>>> Hi Vladimir, >>>>>>> >>>>>>> It looks good in general. >>>>>>> Some comments are below. >>>>>>> >>>>>>> || *src/share/vm/oops/cpCache.cpp* >>>>>>> >>>>>>> @@ -281,11 +281,11 @@ >>>>>>> // Competing writers must acquire exclusive access via a lock. >>>>>>> // A losing writer waits on the lock until the winner writes f1 >>>>>>> and leaves >>>>>>> // the lock, so that when the losing writer returns, he can use >>>>>>> the linked >>>>>>> // cache entry. >>>>>>> >>>>>>> - objArrayHandle resolved_references = cpool->resolved_references(); >>>>>>> + objArrayHandle resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> // Use the resolved_references() lock for this cpCache entry. >>>>>>> // resolved_references are created for all classes with >>>>>>> Invokedynamic, MethodHandle >>>>>>> // or MethodType constant pool cache entries. 
>>>>>>> assert(resolved_references() != NULL, >>>>>>> "a resolved_references array should have been created for >>>>>>> this class"); >>>>>>> >>>>>>> ------------------------------------------------------------------------ >>>>>>> >>>>>>> >>>>>>> >>>>>>> @@ -410,20 +410,20 @@ >>>>>>> >>>>>>> oop >>>>>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>>>>> cpool) { >>>>>>> if (!has_appendix()) >>>>>>> return NULL; >>>>>>> const int ref_index = f2_as_index() + >>>>>>> _indy_resolved_references_appendix_offset; >>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>> + objArrayOop resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> return resolved_references->obj_at(ref_index); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> oop >>>>>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>>>>> cpool) { >>>>>>> if (!has_method_type()) >>>>>>> return NULL; >>>>>>> const int ref_index = f2_as_index() + >>>>>>> _indy_resolved_references_method_type_offset; >>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>> + objArrayOop resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> return resolved_references->obj_at(ref_index); >>>>>>> } >>>>>>> >>>>>>> There is no need in the update above as the constant pool still >>>>>>> has the >>>>>>> function resolved_references(): >>>>>>> +objArrayOop ConstantPool::resolved_references() const { >>>>>>> + return pool_holder()->resolved_references(); >>>>>>> +} >>>>>>> >>>>>>> The same is true for the files: >>>>>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>>>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>>>>> || src/share/vm/ci/ciEnv.cpp >>>>>>> >>>>>>> >>>>>>> || src/share/vm/runtime/vmStructs.cpp* >>>>>>> >>>>>>> *@@ -286,11 +286,10 @@ >>>>>>> nonstatic_field(ConstantPool, _tags, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPool, _cache, >>>>>>> ConstantPoolCache*) \ >>>>>>> 
nonstatic_field(ConstantPool, _pool_holder, >>>>>>> InstanceKlass*) \ >>>>>>> nonstatic_field(ConstantPool, _operands, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPool, _length, >>>>>>> int) \ >>>>>>> - nonstatic_field(ConstantPool, _resolved_references, >>>>>>> jobject) \ >>>>>>> nonstatic_field(ConstantPool, _reference_map, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPoolCache, _length, >>>>>>> int) \ >>>>>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>>>>> ConstantPool*) \ >>>>>>> nonstatic_field(InstanceKlass, _array_klasses, >>>>>>> Klass*) \ >>>>>>> nonstatic_field(InstanceKlass, _methods, >>>>>>> Array*) \ >>>>>>> * >>>>>>> >>>>>>> *I guess, we need to cover the same field in the InstanceKlass >>>>>>> instead. >>>>>>> >>>>>>> Thanks, >>>>>>> Serguei >>>>>>> >>>>>>> >>>>>>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>>>>> Here's updated version: >>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>>>> >>>>>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>>>>> >>>>>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) >>>>>>>> caused >>>>>>>> by this change. >>>>>>>> >>>>>>>> I had to hard code Class::resolved_references offset since it is >>>>>>>> used >>>>>>>> in template interpreter which is generated earlier than j.l.Class is >>>>>>>> loaded during VM bootstrap. >>>>>>>> >>>>>>>> Testing: hotspot/test, vm testbase (in progress) >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>>>>> Coleen, Chris, >>>>>>>>> >>>>>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>>>>> j.l.Class >>>>>>>>> instance then. >>>>>>>>> >>>>>>>>> Thanks for the feedback. 
>>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>>>>> >>>>>>>>>> > >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Vladimir, >>>>>>>>>>> >>>>>>>>>>> I think that changing the format of the heap dump isn't a good >>>>>>>>>>> idea >>>>>>>>>>> either. >>>>>>>>>>> >>>>>>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>>>>> (sorry for really late response; just got enough time to >>>>>>>>>>>> return to >>>>>>>>>>>> the bug) >>>>>>>>>>> I'd forgotten about it! >>>>>>>>>>>> Coleen, Staffan, >>>>>>>>>>>> >>>>>>>>>>>> Thanks a lot for the feedback! >>>>>>>>>>>> >>>>>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>>>>> reserved >>>>>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>>>>> the best >>>>>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>>>>> (heap >>>>>>>>>>>> dump analysis tools). >>>>>>>>>>>> >>>>>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>>>>> should be >>>>>>>>>>>> updated as well. >>>>>>>>>>>> >>>>>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>>>>> >>>>>>>>>>>> - artificial class static field in the dump >>>>>>>>>>>> ("" >>>>>>>>>>>> + optional id to guarantee unique name); >>>>>>>>>>>> >>>>>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>>>>> move >>>>>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>>>>> duplicate >>>>>>>>>>>> it for now. >>>>>>>>>>> I really like this second approach, so much so that I had a >>>>>>>>>>> prototype >>>>>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>>>>> about >>>>>>>>>>> a year ago. 
I couldn't find any benefit other than >>>>>>>>>>> consolidating oops >>>>>>>>>>> so the GC would have less work to do. If the >>>>>>>>>>> resolved_references are >>>>>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>>>>>> there are other things that could go there so don't delete the >>>>>>>>>>> _handles field yet). >>>>>>>>>>> >>>>>>>>>>> The change I had was relatively simple. The only annoying >>>>>>>>>>> part was >>>>>>>>>>> that getting to the resolved references has to be in >>>>>>>>>>> macroAssembler >>>>>>>>>>> and do: >>>>>>>>>>> >>>>>>>>>>> go through >>>>>>>>>>> method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>>>>> rather than >>>>>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>>>>> indirection >>>>>>>>>>> >>>>>>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>>>>> want to >>>>>>>>>>> increase space used by j.l.C without taking it out somewhere >>>>>>>>>>> else! >>>>>>>>>> I like this approach. Can we do this? >>>>>>>>>> >>>>>>>>>>>> What do you think about that? >>>>>>>>>>> Is this bug worth doing this? I don't know but I'd really >>>>>>>>>>> like it. >>>>>>>>>>> >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>>> Best regards, >>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>> >>>>>>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>>>>> of more >>>>>>>>>>>>> places that need to be updated. >>>>>>>>>>>>> >>>>>>>>>>>>> The hprof binary format is described in >>>>>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>>>>> needs >>>>>>>>>>>>> to be updated. It's also more formally specified in >>>>>>>>>>>>> hprof_b_spec.h >>>>>>>>>>>>> in the same directory. 
>>>>>>>>>>>>> >>>>>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would >>>>>>>>>>>>> also >>>>>>>>>>>>> need to be updated to show this field. Since this is a JVMTI >>>>>>>>>>>>> agent >>>>>>>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>>>>>>> via the >>>>>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>>>>>>> looked. >>>>>>>>>>>>> >>>>>>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>>>>>> binary dumper in >>>>>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> which also needs to write this reference. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> /Staffan >>>>>>>>>>>>> >>>>>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>>>>> >>>>>>>>>>>> > >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>>>>> >>>>>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>>>>> classes which have resolved references. >>>>>>>>>>>>>> >>>>>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] >>>>>>>>>>>>>> holding >>>>>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>>>>> classes, >>>>>>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>>>>>> >>>>>>>>>>>>>> I've decided to use reserved slot in HPROF class header >>>>>>>>>>>>>> format. >>>>>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>>>>> looks >>>>>>>>>>>>>> confusing. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Testing: manual (verified that corresponding arrays are >>>>>>>>>>>>>> properly >>>>>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>> >>>>>>>>>>>>>> Best regards, >>>>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>>>> >>>>>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >> From serguei.spitsyn at oracle.com Wed May 20 23:25:51 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 20 May 2015 16:25:51 -0700 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555CCB35.8000305@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> <555CA390.9000803@oracle.com> <555CC8EB.90200@oracle.com> <555CCB35.8000305@oracle.com> Message-ID: <555D17FF.7060003@oracle.com> Looks good. Thanks, Serguei On 5/20/15 10:58 AM, Vladimir Ivanov wrote: > Thanks for spotting that, Stefan. > > Updated webrev in place: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 > > Best regards, > Vladimir Ivanov > > On 5/20/15 8:48 PM, Stefan Karlsson wrote: >> On 2015-05-20 17:09, Vladimir Ivanov wrote: >>> Stefan, Chris, >>> >>> Yes, you are right. ClassLoaderData::_handles isn't used anymore and >>> can go away. >> >> Great. >>> >>> Updated webrev: >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 >> >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.cpp.udiff.html >> >> >> >> // release the handles >> - if (_handles != NULL) { >> - JNIHandleBlock::release_block(_handles); >> - _handles = NULL; >> - } >> >> The comment should be removed. 
>> >> Could this include be removed? >> >> 63 #include "runtime/jniHandles.hpp" >> >> >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.hpp.frames.html >> >> >> >> 54 class JNIHandleBlock; >> >> The forward declaration could be removed. >> >> Thanks, >> StefanK >>> >>> Best regards, >>> Vladimir Ivanov >>> >>> On 5/20/15 9:15 AM, Stefan Karlsson wrote: >>>> On 2015-05-20 03:54, Christian Thalinger wrote: >>>>>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>>>>> wrote: >>>>>> >>>>>> Thanks for the review, Serguei. >>>>>> >>>>>> Updated webrev in place: >>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>> >>>>> Shouldn't there be some GC code in InstanceKlass that can be removed >>>>> now? >>>> >>>> Yes, this is a nice patch from the GCs perspective, since it removes >>>> some of the work that we need to perform during the root processing. >>>> >>>> Unless I'm mistaken, you removed the only calls to >>>> ClassLoaderData::add_handle, so I think you should remove the handles >>>> block in ClassLoaderData. >>>> >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> + private transient Object[] resolved_references; >>>>> >>>>> We should follow Java naming conventions and use >>>>> "resolvedReferences". >>>>> >>>>>> Switched to ConstantPool::resolved_references() as you suggested. >>>>>> >>>>>> Regarding declaring the field in vmStructs.cpp, it is not needed >>>>>> since the field is located in Java mirror and not in InstanceKlass. >>>>>> >>>>>> Best regards, >>>>>> Vladimir Ivanov >>>>>> >>>>>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>>>>> Hi Vladimir, >>>>>>> >>>>>>> It looks good in general. >>>>>>> Some comments are below. >>>>>>> >>>>>>> || *src/share/vm/oops/cpCache.cpp* >>>>>>> >>>>>>> @@ -281,11 +281,11 @@ >>>>>>> // Competing writers must acquire exclusive access via a lock. 
>>>>>>> // A losing writer waits on the lock until the winner writes f1 >>>>>>> and leaves >>>>>>> // the lock, so that when the losing writer returns, he can use >>>>>>> the linked >>>>>>> // cache entry. >>>>>>> >>>>>>> - objArrayHandle resolved_references = >>>>>>> cpool->resolved_references(); >>>>>>> + objArrayHandle resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> // Use the resolved_references() lock for this cpCache entry. >>>>>>> // resolved_references are created for all classes with >>>>>>> Invokedynamic, MethodHandle >>>>>>> // or MethodType constant pool cache entries. >>>>>>> assert(resolved_references() != NULL, >>>>>>> "a resolved_references array should have been created >>>>>>> for >>>>>>> this class"); >>>>>>> >>>>>>> ------------------------------------------------------------------------ >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> @@ -410,20 +410,20 @@ >>>>>>> >>>>>>> oop >>>>>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>>>>> cpool) { >>>>>>> if (!has_appendix()) >>>>>>> return NULL; >>>>>>> const int ref_index = f2_as_index() + >>>>>>> _indy_resolved_references_appendix_offset; >>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>> + objArrayOop resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> return resolved_references->obj_at(ref_index); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> oop >>>>>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>>>>> cpool) { >>>>>>> if (!has_method_type()) >>>>>>> return NULL; >>>>>>> const int ref_index = f2_as_index() + >>>>>>> _indy_resolved_references_method_type_offset; >>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>> + objArrayOop resolved_references = >>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>> return resolved_references->obj_at(ref_index); >>>>>>> } >>>>>>> >>>>>>> There is no need in the update above as the constant pool still 
>>>>>>> has the >>>>>>> function resolved_references(): >>>>>>> +objArrayOop ConstantPool::resolved_references() const { >>>>>>> + return pool_holder()->resolved_references(); >>>>>>> +} >>>>>>> >>>>>>> The same is true for the files: >>>>>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>>>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>>>>> || src/share/vm/ci/ciEnv.cpp >>>>>>> >>>>>>> >>>>>>> || src/share/vm/runtime/vmStructs.cpp* >>>>>>> >>>>>>> *@@ -286,11 +286,10 @@ >>>>>>> nonstatic_field(ConstantPool, _tags, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPool, _cache, >>>>>>> ConstantPoolCache*) \ >>>>>>> nonstatic_field(ConstantPool, _pool_holder, >>>>>>> InstanceKlass*) \ >>>>>>> nonstatic_field(ConstantPool, _operands, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPool, _length, >>>>>>> int) \ >>>>>>> - nonstatic_field(ConstantPool, _resolved_references, >>>>>>> jobject) \ >>>>>>> nonstatic_field(ConstantPool, _reference_map, >>>>>>> Array*) \ >>>>>>> nonstatic_field(ConstantPoolCache, _length, >>>>>>> int) \ >>>>>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>>>>> ConstantPool*) \ >>>>>>> nonstatic_field(InstanceKlass, _array_klasses, >>>>>>> Klass*) \ >>>>>>> nonstatic_field(InstanceKlass, _methods, >>>>>>> Array*) \ >>>>>>> * >>>>>>> >>>>>>> *I guess, we need to cover the same field in the InstanceKlass >>>>>>> instead. >>>>>>> >>>>>>> Thanks, >>>>>>> Serguei >>>>>>> >>>>>>> >>>>>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>>>>> Here's updated version: >>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>>>> >>>>>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>>>>> >>>>>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) >>>>>>>> caused >>>>>>>> by this change. 
>>>>>>>> >>>>>>>> I had to hard code Class::resolved_references offset since it is >>>>>>>> used >>>>>>>> in template interpreter which is generated earlier than >>>>>>>> j.l.Class is >>>>>>>> loaded during VM bootstrap. >>>>>>>> >>>>>>>> Testing: hotspot/test, vm testbase (in progress) >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Vladimir Ivanov >>>>>>>> >>>>>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>>>>> Coleen, Chris, >>>>>>>>> >>>>>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>>>>> j.l.Class >>>>>>>>> instance then. >>>>>>>>> >>>>>>>>> Thanks for the feedback. >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>>>>> >>>>>>>>>> > >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Vladimir, >>>>>>>>>>> >>>>>>>>>>> I think that changing the format of the heap dump isn't a good >>>>>>>>>>> idea >>>>>>>>>>> either. >>>>>>>>>>> >>>>>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>>>>> (sorry for really late response; just got enough time to >>>>>>>>>>>> return to >>>>>>>>>>>> the bug) >>>>>>>>>>> I'd forgotten about it! >>>>>>>>>>>> Coleen, Staffan, >>>>>>>>>>>> >>>>>>>>>>>> Thanks a lot for the feedback! >>>>>>>>>>>> >>>>>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>>>>> reserved >>>>>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>>>>> the best >>>>>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>>>>> (heap >>>>>>>>>>>> dump analysis tools). >>>>>>>>>>>> >>>>>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>>>>> should be >>>>>>>>>>>> updated as well. 
>>>>>>>>>>>> >>>>>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>>>>> >>>>>>>>>>>> - artificial class static field in the dump >>>>>>>>>>>> ("" >>>>>>>>>>>> + optional id to guarantee unique name); >>>>>>>>>>>> >>>>>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>>>>> move >>>>>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>>>>> duplicate >>>>>>>>>>>> it for now. >>>>>>>>>>> I really like this second approach, so much so that I had a >>>>>>>>>>> prototype >>>>>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>>>>> about >>>>>>>>>>> a year ago. I couldn't find any benefit other than >>>>>>>>>>> consolidating oops >>>>>>>>>>> so the GC would have less work to do. If the >>>>>>>>>>> resolved_references are >>>>>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them >>>>>>>>>>> (but >>>>>>>>>>> there are other things that could go there so don't delete the >>>>>>>>>>> _handles field yet). >>>>>>>>>>> >>>>>>>>>>> The change I had was relatively simple. The only annoying >>>>>>>>>>> part was >>>>>>>>>>> that getting to the resolved references has to be in >>>>>>>>>>> macroAssembler >>>>>>>>>>> and do: >>>>>>>>>>> >>>>>>>>>>> go through >>>>>>>>>>> method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>>>>> rather than >>>>>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>>>>> indirection >>>>>>>>>>> >>>>>>>>>>> I think it only affects the interpreter so the extra >>>>>>>>>>> indirection >>>>>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>>>>> want to >>>>>>>>>>> increase space used by j.l.C without taking it out somewhere >>>>>>>>>>> else! >>>>>>>>>> I like this approach. Can we do this? >>>>>>>>>> >>>>>>>>>>>> What do you think about that? 
>>>>>>>>>>> Is this bug worth doing this? I don't know but I'd really >>>>>>>>>>> like it. >>>>>>>>>>> >>>>>>>>>>> Coleen >>>>>>>>>>> >>>>>>>>>>>> Best regards, >>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>> >>>>>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>>>>> of more >>>>>>>>>>>>> places that need to be updated. >>>>>>>>>>>>> >>>>>>>>>>>>> The hprof binary format is described in >>>>>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>>>>> needs >>>>>>>>>>>>> to be updated. It's also more formally specified in >>>>>>>>>>>>> hprof_b_spec.h >>>>>>>>>>>>> in the same directory. >>>>>>>>>>>>> >>>>>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would >>>>>>>>>>>>> also >>>>>>>>>>>>> need to be updated to show this field. Since this is a JVMTI >>>>>>>>>>>>> agent >>>>>>>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>>>>>>> via the >>>>>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I >>>>>>>>>>>>> haven't >>>>>>>>>>>>> looked. >>>>>>>>>>>>> >>>>>>>>>>>>> Finally, the Serviceability Agent implements yet another >>>>>>>>>>>>> hprof >>>>>>>>>>>>> binary dumper in >>>>>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> which also needs to write this reference. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> /Staffan >>>>>>>>>>>>> >>>>>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>>>>> >>>>>>>>>>>> > >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>>>>> >>>>>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>>>>> classes which have resolved references. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] >>>>>>>>>>>>>> holding >>>>>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>>>>> classes, >>>>>>>>>>>>>> linked CallSite & MethodType for invokedynamic >>>>>>>>>>>>>> instructions). >>>>>>>>>>>>>> >>>>>>>>>>>>>> I've decided to use reserved slot in HPROF class header >>>>>>>>>>>>>> format. >>>>>>>>>>>>>> It requires an update in jhat to correctly display new info. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The other approach I tried was to dump the reference as a >>>>>>>>>>>>>> fake >>>>>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>>>>> looks >>>>>>>>>>>>>> confusing. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Testing: manual (verified that corresponding arrays are >>>>>>>>>>>>>> properly >>>>>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>> >>>>>>>>>>>>>> Best regards, >>>>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>>>> >>>>>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>>> >> From vladimir.x.ivanov at oracle.com Thu May 21 10:55:31 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 21 May 2015 13:55:31 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <9858759F-F180-461F-B46B-02F5AFB68A47@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> <555AEA98.3060104@oracle.com> <555B6BC3.1020809@oracle.com> <555C2699.9030808@oracle.com> <555CA390.9000803@oracle.com> <555CC8EB.90200@oracle.com> <555CCB35.8000305@oracle.com> <9858759F-F180-461F-B46B-02F5AFB68A47@oracle.com> Message-ID: <555DB9A3.3010505@oracle.com> Thanks, Coleen. 
It shouldn't be a hassle to add CLD::_handles back if needed. Best regards, Vladimir Ivanov PS: have a good vacation! On 5/21/15 12:27 AM, Coleen Phillimore wrote: > I am on vacation and can't read this webrev from my iPhone. I assume that Stefan and Serguei's and others reviews are good. The CLD _handles field could be used to hold the mirrors in a later change and might be needed for jigsaw so might have to come back then. > Thank you for doing this change!! > Coleen > > Sent from my iPhone > >> On May 20, 2015, at 1:58 PM, Vladimir Ivanov wrote: >> >> Thanks for spotting that, Stefan. >> >> Updated webrev in place: >> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 >> >> Best regards, >> Vladimir Ivanov >> >>> On 5/20/15 8:48 PM, Stefan Karlsson wrote: >>>> On 2015-05-20 17:09, Vladimir Ivanov wrote: >>>> Stefan, Chris, >>>> >>>> Yes, you are right. ClassLoaderData::_handles isn't used anymore and >>>> can go away. >>> >>> Great. >>>> >>>> Updated webrev: >>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02 >>> >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.cpp.udiff.html >>> >>> >>> // release the handles >>> - if (_handles != NULL) { >>> - JNIHandleBlock::release_block(_handles); >>> - _handles = NULL; >>> - } >>> >>> The comment should be removed. >>> >>> Could this include be removed? >>> >>> 63 #include "runtime/jniHandles.hpp" >>> >>> >>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.02/hotspot/src/share/vm/classfile/classLoaderData.hpp.frames.html >>> >>> >>> 54 class JNIHandleBlock; >>> >>> The forward declaration could be removed. >>> >>> Thanks, >>> StefanK >>>> >>>> Best regards, >>>> Vladimir Ivanov >>>> >>>>> On 5/20/15 9:15 AM, Stefan Karlsson wrote: >>>>> On 2015-05-20 03:54, Christian Thalinger wrote: >>>>>>> On May 19, 2015, at 9:58 AM, Vladimir Ivanov >>>>>>> wrote: >>>>>>> >>>>>>> Thanks for the review, Serguei. 
>>>>>>> >>>>>>> Updated webrev in place: >>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>>> >>>>>> Shouldn't there be some GC code in InstanceKlass that can be removed >>>>>> now? >>>>> >>>>> Yes, this is a nice patch from the GCs perspective, since it removes >>>>> some of the work that we need to perform during the root processing. >>>>> >>>>> Unless I'm mistaken, you removed the only calls to >>>>> ClassLoaderData::add_handle, so I think you should remove the handles >>>>> block in ClassLoaderData. >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>>> >>>>>> + private transient Object[] resolved_references; >>>>>> >>>>>> We should follow Java naming conventions and use "resolvedReferences". >>>>>> >>>>>>> Switched to ConstantPool::resolved_references() as you suggested. >>>>>>> >>>>>>> Regarding declaring the field in vmStructs.cpp, it is not needed >>>>>>> since the field is located in Java mirror and not in InstanceKlass. >>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>>> On 5/19/15 10:47 AM, serguei.spitsyn at oracle.com wrote: >>>>>>>> Hi Vladimir, >>>>>>>> >>>>>>>> It looks good in general. >>>>>>>> Some comments are below. >>>>>>>> >>>>>>>> || *src/share/vm/oops/cpCache.cpp* >>>>>>>> >>>>>>>> @@ -281,11 +281,11 @@ >>>>>>>> // Competing writers must acquire exclusive access via a lock. >>>>>>>> // A losing writer waits on the lock until the winner writes f1 >>>>>>>> and leaves >>>>>>>> // the lock, so that when the losing writer returns, he can use >>>>>>>> the linked >>>>>>>> // cache entry. >>>>>>>> >>>>>>>> - objArrayHandle resolved_references = cpool->resolved_references(); >>>>>>>> + objArrayHandle resolved_references = >>>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>>> // Use the resolved_references() lock for this cpCache entry. >>>>>>>> // resolved_references are created for all classes with >>>>>>>> Invokedynamic, MethodHandle >>>>>>>> // or MethodType constant pool cache entries. 
>>>>>>>> assert(resolved_references() != NULL, >>>>>>>> "a resolved_references array should have been created for >>>>>>>> this class"); >>>>>>>> >>>>>>>> ------------------------------------------------------------------------ >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> @@ -410,20 +410,20 @@ >>>>>>>> >>>>>>>> oop >>>>>>>> ConstantPoolCacheEntry::appendix_if_resolved(constantPoolHandle >>>>>>>> cpool) { >>>>>>>> if (!has_appendix()) >>>>>>>> return NULL; >>>>>>>> const int ref_index = f2_as_index() + >>>>>>>> _indy_resolved_references_appendix_offset; >>>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>>> + objArrayOop resolved_references = >>>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>>> return resolved_references->obj_at(ref_index); >>>>>>>> } >>>>>>>> >>>>>>>> >>>>>>>> oop >>>>>>>> ConstantPoolCacheEntry::method_type_if_resolved(constantPoolHandle >>>>>>>> cpool) { >>>>>>>> if (!has_method_type()) >>>>>>>> return NULL; >>>>>>>> const int ref_index = f2_as_index() + >>>>>>>> _indy_resolved_references_method_type_offset; >>>>>>>> - objArrayOop resolved_references = cpool->resolved_references(); >>>>>>>> + objArrayOop resolved_references = >>>>>>>> cpool->pool_holder()->resolved_references(); >>>>>>>> return resolved_references->obj_at(ref_index); >>>>>>>> } >>>>>>>> >>>>>>>> There is no need in the update above as the constant pool still >>>>>>>> has the >>>>>>>> function resolved_references(): >>>>>>>> +objArrayOop ConstantPool::resolved_references() const { >>>>>>>> + return pool_holder()->resolved_references(); >>>>>>>> +} >>>>>>>> >>>>>>>> The same is true for the files: >>>>>>>> src/share/vm/interpreter/interpreterRuntime.cpp >>>>>>>> src/share/vm/interpreter/bytecodeTracer.cpp >>>>>>>> || src/share/vm/ci/ciEnv.cpp >>>>>>>> >>>>>>>> >>>>>>>> || src/share/vm/runtime/vmStructs.cpp* >>>>>>>> >>>>>>>> *@@ -286,11 +286,10 @@ >>>>>>>> nonstatic_field(ConstantPool, _tags, >>>>>>>> Array*) \ >>>>>>>> nonstatic_field(ConstantPool, 
_cache, >>>>>>>> ConstantPoolCache*) \ >>>>>>>> nonstatic_field(ConstantPool, _pool_holder, >>>>>>>> InstanceKlass*) \ >>>>>>>> nonstatic_field(ConstantPool, _operands, >>>>>>>> Array*) \ >>>>>>>> nonstatic_field(ConstantPool, _length, >>>>>>>> int) \ >>>>>>>> - nonstatic_field(ConstantPool, _resolved_references, >>>>>>>> jobject) \ >>>>>>>> nonstatic_field(ConstantPool, _reference_map, >>>>>>>> Array*) \ >>>>>>>> nonstatic_field(ConstantPoolCache, _length, >>>>>>>> int) \ >>>>>>>> nonstatic_field(ConstantPoolCache, _constant_pool, >>>>>>>> ConstantPool*) \ >>>>>>>> nonstatic_field(InstanceKlass, _array_klasses, >>>>>>>> Klass*) \ >>>>>>>> nonstatic_field(InstanceKlass, _methods, >>>>>>>> Array*) \ >>>>>>>> >>>>>>>> >>>>>>>> I guess, we need to cover the same field in the InstanceKlass >>>>>>>> instead. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Serguei >>>>>>>> >>>>>>>> >>>>>>>>> On 5/18/15 10:32 AM, Vladimir Ivanov wrote: >>>>>>>>> Here's updated version: >>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 >>>>>>>>> >>>>>>>>> Moved ConstantPool::_resolved_references to mirror class instance. >>>>>>>>> >>>>>>>>> Fixed a couple of issues in CDS and JVMTI (class redefinition) >>>>>>>>> caused >>>>>>>>> by this change. >>>>>>>>> >>>>>>>>> I had to hard code Class::resolved_references offset since it is >>>>>>>>> used >>>>>>>>> in template interpreter which is generated earlier than j.l.Class is >>>>>>>>> loaded during VM bootstrap. >>>>>>>>> >>>>>>>>> Testing: hotspot/test, vm testbase (in progress) >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Vladimir Ivanov >>>>>>>>> >>>>>>>>>> On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >>>>>>>>>> Coleen, Chris, >>>>>>>>>> >>>>>>>>>> I'll proceed with moving ConstantPool::_resolved_references to >>>>>>>>>> j.l.Class >>>>>>>>>> instance then. >>>>>>>>>> >>>>>>>>>> Thanks for the feedback.
>>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> Vladimir Ivanov >>>>>>>>>> >>>>>>>>>> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>>>>>>>>>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Vladimir, >>>>>>>>>>>> >>>>>>>>>>>> I think that changing the format of the heap dump isn't a good >>>>>>>>>>>> idea >>>>>>>>>>>> either. >>>>>>>>>>>> >>>>>>>>>>>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>>>>>>>>>> (sorry for really late response; just got enough time to >>>>>>>>>>>>> return to >>>>>>>>>>>>> the bug) >>>>>>>>>>>> I'd forgotten about it! >>>>>>>>>>>>> Coleen, Staffan, >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks a lot for the feedback! >>>>>>>>>>>>> >>>>>>>>>>>>> After thinking about the fix more, I don't think that using >>>>>>>>>>>>> reserved >>>>>>>>>>>>> oop slot in CLASS DUMP for recording _resolved_references is >>>>>>>>>>>>> the best >>>>>>>>>>>>> thing to do. IMO the change causes too much work for the users >>>>>>>>>>>>> (heap >>>>>>>>>>>>> dump analysis tools). >>>>>>>>>>>>> >>>>>>>>>>>>> It needs specification update and then heap dump analyzers >>>>>>>>>>>>> should be >>>>>>>>>>>>> updated as well. >>>>>>>>>>>>> >>>>>>>>>>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>>>>>>>>>> >>>>>>>>>>>>> - artificial class static field in the dump >>>>>>>>>>>>> ("" >>>>>>>>>>>>> + optional id to guarantee unique name); >>>>>>>>>>>>> >>>>>>>>>>>>> - add j.l.Class::_resolved_references field; >>>>>>>>>>>>> Not sure how much overhead (mostly reads from bytecode) the >>>>>>>>>>>>> move >>>>>>>>>>>>> from ConstantPool to j.l.Class adds, so I propose just to >>>>>>>>>>>>> duplicate >>>>>>>>>>>>> it for now. >>>>>>>>>>>> I really like this second approach, so much so that I had a >>>>>>>>>>>> prototype >>>>>>>>>>>> for moving resolved_references directly to the j.l.Class object >>>>>>>>>>>> about >>>>>>>>>>>> a year ago. 
I couldn't find any benefit other than >>>>>>>>>>>> consolidating oops >>>>>>>>>>>> so the GC would have less work to do. If the >>>>>>>>>>>> resolved_references are >>>>>>>>>>>> moved to j.l.C instance, they can not be jobjects and the >>>>>>>>>>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>>>>>>>>>> there are other things that could go there so don't delete the >>>>>>>>>>>> _handles field yet). >>>>>>>>>>>> >>>>>>>>>>>> The change I had was relatively simple. The only annoying >>>>>>>>>>>> part was >>>>>>>>>>>> that getting to the resolved references has to be in >>>>>>>>>>>> macroAssembler >>>>>>>>>>>> and do: >>>>>>>>>>>> >>>>>>>>>>>> go through >>>>>>>>>>>> method->cpCache->constants->instanceKlass->java_mirror() >>>>>>>>>>>> rather than >>>>>>>>>>>> method->cpCache->constants->resolved_references->jmethod >>>>>>>>>>>> indirection >>>>>>>>>>>> >>>>>>>>>>>> I think it only affects the interpreter so the extra indirection >>>>>>>>>>>> wouldn't affect performance, so don't duplicate it! You don't >>>>>>>>>>>> want to >>>>>>>>>>>> increase space used by j.l.C without taking it out somewhere >>>>>>>>>>>> else! >>>>>>>>>>> I like this approach. Can we do this? >>>>>>>>>>> >>>>>>>>>>>>> What do you think about that? >>>>>>>>>>>> Is this bug worth doing this? I don't know but I'd really >>>>>>>>>>>> like it. >>>>>>>>>>>> >>>>>>>>>>>> Coleen >>>>>>>>>>>> >>>>>>>>>>>>> Best regards, >>>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>>> >>>>>>>>>>>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>>>>>>>>>> This looks like a good approach. However, there are a couple >>>>>>>>>>>>>> of more >>>>>>>>>>>>>> places that need to be updated. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The hprof binary format is described in >>>>>>>>>>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and >>>>>>>>>>>>>> needs >>>>>>>>>>>>>> to be updated. It's also more formally specified in >>>>>>>>>>>>>> hprof_b_spec.h >>>>>>>>>>>>>> in the same directory.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would >>>>>>>>>>>>>> also >>>>>>>>>>>>>> need to be updated to show this field. Since this is a JVMTI >>>>>>>>>>>>>> agent >>>>>>>>>>>>>> it needs to be possible to find the resolved_references array >>>>>>>>>>>>>> via the >>>>>>>>>>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>>>>>>>>>> looked. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>>>>>>>>>> binary dumper in >>>>>>>>>>>>>> hotspot/agent//src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> which also needs to write this reference. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> /Staffan >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>>>>>>>>>> >>>>>>>>>>>>> > >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> VM heap dump doesn't contain >>>>>>>>>>>>>>> ConstantPool::_resolved_references for >>>>>>>>>>>>>>> classes which have resolved references. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ConstantPool::_resolved_references points to an Object[] >>>>>>>>>>>>>>> holding >>>>>>>>>>>>>>> resolved constant pool entries (patches for VM anonymous >>>>>>>>>>>>>>> classes, >>>>>>>>>>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I've decided to use a reserved slot in the HPROF class header >>>>>>>>>>>>>>> format. >>>>>>>>>>>>>>> It requires an update in jhat to correctly display new info.
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>>>>>>>>>> static field [1], but storing VM internal >>>>>>>>>>>>>>> ConstantPool::_resolved_references among user defined fields >>>>>>>>>>>>>>> looks >>>>>>>>>>>>>>> confusing. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Testing: manual (verified that corresponding arrays are >>>>>>>>>>>>>>> properly >>>>>>>>>>>>>>> linked in Nashorn heap dump). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Best regards, >>>>>>>>>>>>>>> Vladimir Ivanov >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>> From erik.osterlund at lnu.se Thu May 21 12:54:13 2015 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 21 May 2015 12:54:13 +0000 Subject: RFR: 8079315: UseCondCardMark broken in conjunction with CMS precleaning In-Reply-To: <5552838B.3020305@oracle.com> References: <5548D913.1030507@redhat.com> <5548EDE9.8090803@redhat.com> <5548F59D.6090008@oracle.com> <5549D681.3000806@redhat.com> <554A1CA0.6070801@oracle.com> <473C887F-8CC5-4E67-BD9A-1882660CFD3C@lnu.se> <554B3B5E.8000705@oracle.com> <8EE73C49-83CD-4162-A3EF-19DDA2C2CDFA@lnu.se> <554FC66F.20307@oracle.com> <55507BE4.3050605@redhat.com> <03C141D8-952E-4E7A-BDAD-024CE2239010@lnu.se> <55508B60.1000200@redhat.com> <8868A273-7C97-45F0-BF10-85DDE5859E85@lnu.se> <5550B182.7090009@redhat.com> <5551FA97.9090502@oracle.com> <5551FD72.1020504@oracle.com> <38CAAA0B-6F32-4508-818F-AD9D248E8B35@lnu.se> <55524C58.4030904@oracle.com> <55524E8C.6080709@redhat.com> <55526041.1030709@redhat.com> <7C4CBC57-D7B2-4779-9353-DEACCB626370@lnu.se> <5552838B.3020305@oracle.com> Message-ID: Hi Aleksey, Sorry I thought I sent a reply earlier but it looks like I didn't. 
:/ On 12/05/15 23:49, Aleksey Shipilev wrote: >On 12.05.2015 23:44, Erik Österlund wrote: >> I don't know what windows does because it's not open source but we only have >> x86 there and its hardware has no support for doing it any other way >> than with IPI messages which is all we need. And if we feel that scared, >> windows has a system call that does exactly what we want and with the >> architecture I propose it's trivial to specialize the barrier for >> windows to use this instead. > >I think I get what you tell, but I am not convinced. The thing about >reading stuff in the mutator is to align the actions in collector with >the actions in mutator. So what if you push the IPI to all processors. >Some lucky processor will get that interrupt *after* (e.g. too late!) >both the reference store and (reordered/stale) card mark read => same >problem, right? In other words, asking a mutator to do a fence-like op >after an already missed card mark update solves what? The IPI will be received for sure on all processors after mprotect begins and before it ends. Otherwise they wouldn't serve any purpose. The purpose of the cross call is to shoot down TLBs and make the new permissions visible. If IPIs were to be delayed until after mprotect returns, it simply would not work. And this is all we need. > >> If there was to suddenly pop up a magical fancy OS + hardware solution >> that is too clever for this optimization (seems unlikely to me) then >> there are other ways of issuing such a global fence. But I don't see the >> point in doing that now when there is no such problem in sight. > >When you are dealing with a platform that has a billion of >installations, millions of developers, countless different hardware and >OS flavors, it does not seem very sane to lock in the correctness >guarantees on an undocumented implementation detail and/or guesses.
>(Aside: doing that for performance is totally fine, we do that all the >time) I understand it might feel a bit icky to rely on OS implementations (even though unchanged for a long time) rather than an official contract. This solution was just a suggestion. Interestingly enough they had pretty much the same discussion in the urcu library mailing list. Somebody came up with such a fancy mprotect scheme and they discussed whether it was safe, and if so if it was wise. Conclusion was that kernel people did not want to put linux into a corner (you probably know how they care a lot about not breaking userspace). They have therefore been pushing for a sys_membarrier system call for linux since 2010. It didn't make it back then because it was believed that urcu is the only library needing such a mechanism. But now it seems like it is making it to linux 4.1 anyway if I understood the discussions right. Maybe we should push it a bit too so they know more people are interested too? Anyway, if we want stronger contracts and not rely on the state of OS/hardware implementation, the following seems possible: 1) Fence on all platforms: Obvious and easy. But... :( I really don't like fences! 2) Use the already implemented ADS mechanism with pretty solid OS contract: easy, but the global fence takes on my machine around ~15 micro seconds and during that time mutators may be locked (globally) and unable to make reference writes. Meh. 3) Try and do better on platforms offering better contracts: Windows/linux: Detect windows and linux system calls for explicit global fencing and use them if available, otherwise fall back to naked mprotect: if a new OS version wants to support fancy TLB sniping hardware, then that future OS will certainly have the system call, and the naked mprotect covers backward compatibility to current and older OS when such fancy TLB sniping did not exist. BSD: Fence, or some other fancy scheme such as sending fencing handshakes via signals to JavaThreads. 
Use cpuid instruction to find out which physical processors rather than threads have seen the handshake so an oversaturated system with too many threads for its own good can exit early without handshaking all threads, just enough to cover all cores. Darwin can do even better with the mach microkernel allowing introspection of thread states using thread_info. This way, "offline" threads not currently on CPU automatically handshake (don't need flushing/interrupts). I tried this and the global fence is slower unless the yieldpoint cooperates in the handshaking (requires switch from global polling page to thread-local flag like in Jikes). Then it becomes faster than mprotect schemes. But then again, does that guy with 4000 cores have darwin? Hmm! solaris: Tell the guys to fix a system call like linux did/is doing :p aix: No idea I'm afraid, but maybe similar to BSD or just fence. Now two questions remain: 1) Will it blend? Maybe. 2) Is it worth it? Maybe not if G1 is gonna replace CMS anyway. Personally I think a global fence with no requirements on mutator barrier instructions or mutator side global locking seems like a *VERY* useful tool in hotspot that I think we could use more often and not only for thread transitions. Who knows, maybe it even becomes interesting for G1. The problem now is that exactly when the fence is a problem for G1, other stuff is so much more of a problem that the gain in removing it seems not worth it. But maybe if we can make that other stuff slim and smooth, it starts becoming worth looking over that fence too. What do you think? 
Thanks, /Erik > >Thanks, >-Aleksey > From roland.westrelin at oracle.com Thu May 21 13:57:15 2015 From: roland.westrelin at oracle.com (Roland Westrelin) Date: Thu, 21 May 2015 15:57:15 +0200 Subject: RFR: JDK-8080627: JavaThread::satb_mark_queue_offset() is too big for an ARM ldrsb instruction In-Reply-To: <555C42A5.1050307@oracle.com> References: <555C42A5.1050307@oracle.com> Message-ID: Hi Bengt, > This is a fix for the C1 generated G1 write barriers. It is a bit unclear if this is compiler or GC code, so I'm using the broader mailing list for this review. > > http://cr.openjdk.java.net/~brutisso/8080627/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8080627 > > The problem is that on ARM the T_BYTE type will boil down to using the ldrsb instruction, which has a limitation on the offset it can load from. It can only load from offsets -256 to 256. But in the G1 pre barrier we want to load the _satb_mark_queue field in JavaThread, which is on offset 760. > > Changing the type from T_BYTE to T_BOOLEAN will use the unsigned instruction ldrb instead, which can handle offsets up to 4096. Ideally we would have a T_UBYTE type to use unsigned instructions for this load, but that does not exist. > > On the other platforms (x86 and Sparc) we treat T_BYTE and T_BOOLEAN the same, it is only on ARM that we have the distinction between these two types. I assume that is to get the sign extension for free when we use T_BYTE type. The fact that we treat T_BYTE and T_BOOLEAN the same on the other platforms makes it safe to do this change. That looks good to me. Please add a comment. Roland. > > I got some great help with this change from Dean Long. Thanks, Dean! > > I tried a couple of different solutions. Moving the _satb_mark_queue field earlier in JavaThread did not help since the Thread superclass already has enough members to exceed the 256 limit for offsets. It also didn't seem like a stable solution.
Loading the field into a register would work, but keeping the load an immediate seems like a nicer solution. Changing to treat T_BYTE and T_BOOLEAN the same on ARM (similarly to x86 and Sparc) would mean having to do explicit sign extension, which seems like a more complex solution than just switching the type in this case. > > Bengt From gerard.ziemski at oracle.com Thu May 21 15:56:24 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 21 May 2015 10:56:24 -0500 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5554FE2D.6020405@oracle.com> References: <5554FE2D.6020405@oracle.com> Message-ID: <555E0028.8070108@oracle.com> hi all, Here is revision 1 of the feature taking into account feedback from Coleen, Dmitry and David. I will be responding to each of the feedback emails shortly. We introduce a new mechanism that allows specification of a valid range per flag that is then used to automatically validate given flag's value every time it changes. Range values must be constant and can not change. Optionally, a constraint can also be specified and applied every time a flag value changes for those flags whose valid value can not be trivially checked by a simple min and max (ex. whether it's a power of 2, or bigger or smaller than some other flag that can also change) I have chosen to modify the table macros (ex. RUNTIME_FLAGS in globals.hpp) instead of using a more sophisticated solution, such as C++ templates, because even though macros were unfriendly when initially developing, once a solution was arrived at, subsequent additions to the tables of new ranges, or constraints are trivial from developer's point of view. (The initial development unfriendliness of macros was mitigated by using a pre-processor, which for those using a modern IDE like Xcode, is easily available from a menu). Using macros also allowed for more minimal code changes.
The presented solution is based on expansion of macros using variadic functions and can be readily seen in runtime/commandLineFlagConstraintList.cpp and runtime/commandLineFlagRangeList.cpp In commandLineFlagConstraintList.cpp or commandLineFlagRangeList.cpp, there is a bunch of classes and methods that seem to beg for C++ templates to be used. I have tried, but when the compiler tries to generate code for both uintx and size_t, which happen to have the same underlying type (on BSD), it fails to compile overridden methods with same type, but different name. If someone has a way of simplifying the new code via C++ templates, however, we can file a new enhancement request to address that. This webrev represents only the initial range checking framework and only 100 or so flags that were ported from an existing ad hoc range checking code to this new mechanism. There are about 250 remaining flags that still need their ranges determined and ported over to this new mechanism and they are tracked by individual subtasks. I had to modify several existing tests to change the error message that they expected when VM refuses to run, which was changed to provide uniform error messages. To help with testing and subtask efforts I have introduced a new runtime flag: PrintFlagsRanges: "Print VM flags and their ranges and exit VM" which in addition to the already existing flags: "PrintFlagsInitial" and "PrintFlagsFinal" allows for thorough examination of the flags values and their ranges.
The code change builds and passes JPRT (-testset hotspot) and UTE (vm.quick.testlist) References: Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev1/ note: due to "awk" limit of 50 pats the Frames diff is not available for "src/share/vm/runtime/arguments.cpp" JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 # hgstat: src/cpu/ppc/vm/globals_ppc.hpp | 2 +- src/cpu/sparc/vm/globals_sparc.hpp | 2 +- src/cpu/x86/vm/globals_x86.hpp | 2 +- src/cpu/zero/vm/globals_zero.hpp | 3 +- src/os/aix/vm/globals_aix.hpp | 2 +- src/os/bsd/vm/globals_bsd.hpp | 29 +- src/os/linux/vm/globals_linux.hpp | 9 +- src/os/solaris/vm/globals_solaris.hpp | 4 +- src/os/windows/vm/globals_windows.hpp | 5 +- src/share/vm/c1/c1_globals.cpp | 4 +- src/share/vm/c1/c1_globals.hpp | 17 +- src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- src/share/vm/opto/c2_globals.cpp | 12 +- src/share/vm/opto/c2_globals.hpp | 39 ++- src/share/vm/prims/whitebox.cpp | 12 +- src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++++++++++++++---------------------------------------- src/share/vm/runtime/arguments.hpp | 24 +- src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++++++++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 +++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 +++ src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++++ src/share/vm/runtime/globals.cpp | 699 
+++++++++++++++++++++++++++++++++++++++++++++++++------------ src/share/vm/runtime/globals.hpp | 310 ++++++++++++++++++++++----- src/share/vm/runtime/globals_extension.hpp | 101 +++++++- src/share/vm/runtime/init.cpp | 6 +- src/share/vm/runtime/os.hpp | 17 + src/share/vm/runtime/os_ext.hpp | 7 +- src/share/vm/runtime/thread.cpp | 6 + src/share/vm/services/attachListener.cpp | 4 +- src/share/vm/services/classLoadingService.cpp | 6 +- src/share/vm/services/diagnosticCommand.cpp | 3 +- src/share/vm/services/management.cpp | 6 +- src/share/vm/services/memoryService.cpp | 2 +- src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- src/share/vm/services/writeableFlags.hpp | 52 +--- test/compiler/c2/7200264/Test7200264.sh | 5 +- test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- test/gc/arguments/TestHeapFreeRatio.java | 23 +- test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 4 +- test/gc/g1/TestStringDeduplicationTools.java | 6 +- test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- test/runtime/CompressedOops/ObjectAlignment.java | 9 +- test/runtime/contended/Options.java | 10 +- 48 files changed, 2641 insertions(+), 878 deletions(-) From gerard.ziemski at oracle.com Thu May 21 15:58:59 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 21 May 2015 10:58:59 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55565BC0.4000308@oracle.com> References: <5554FE2D.6020405@oracle.com> <55565BC0.4000308@oracle.com> Message-ID: <555E00C3.9040601@oracle.com> hi Coleen, Thank you for taking the time to review this considerable change. I have responded to your points below: On 5/15/2015 3:49 PM, Coleen Phillimore wrote: > > Gerard, > This is significant work! The macro re-expansions are daunting but > the simpler user interface makes it worth while. Someone better at > macros should review this in more detail to see if there's any > gotchas, especially wrt to C++11 and beyond. 
Hopefully there aren't > any surprises. I think there's some things people should know about > globals.hpp: > > + /* NB: The default value of UseLinuxPosixThreadCPUClocks may be */ \ > + /* overridden in Arguments::parse_each_vm_init_arg. */ \ > product(bool, UseLinuxPosixThreadCPUClocks, true, \ > "enable fast Linux Posix clocks where available") \ > /* NB: The default value of UseLinuxPosixThreadCPUClocks may be \ > overridden in Arguments::parse_each_vm_init_arg. */ \ > It looks like if you have a comment for an option, do you need to have > it above the option, or is this just nicer? To make it nicer, but more importantly to make it more consistent. > > In > http://cr.openjdk.java.net/~gziemski/8059557_rev0/src/share/vm/runtime/globals.cpp.udiff.html > > This should be out->print_cr() > > + if (printRanges == false) { > out->print_cr("[Global flags]"); > + } else { > + tty->print_cr("[Global flags ranges]"); > + } > + Done. > There's some ifdef debugging code left in but I think that's ok for > now, because it's not much and may be helpful but not helpful enough > in the long run to add a XX:OnlyPrintProductFlags option. > > The name checkAllRangesAndConstraints should be > check_all_ranges_and_constraints as per the coding standard, but the > constraint functions in > http://cr.openjdk.java.net/~gziemski/8059557_rev0/src/share/vm/runtime/commandLineFlagConstraintsGC.hpp.html > seem okay in mixed case so you can find them when you're grepping for > the flag. Done.
cheers From dmitry.dmitriev at oracle.com Thu May 21 16:05:01 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 21 May 2015 19:05:01 +0300 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0028.8070108@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> Message-ID: <555E022D.7030108@oracle.com> Hi Gerard, Your webrev link actually points to old revision: http://cr.openjdk.java.net/~gziemski/8059557_rev0/ So, actual webrev is located here: http://cr.openjdk.java.net/~gziemski/8059557_rev1/ Thanks, Dmitry On 21.05.2015 18:56, Gerard Ziemski wrote: > hi all, > > Here is a revision 1 of the feature taking into account feedback from > Coleen, Dmitry and David. I will be responding to each of the feedback > emails shortly. > > We introduce a new mechanism that allows specification of a valid > range per flag that is then used to automatically validate given > flag's value every time it changes. Ranges values must be constant and > can not change. Optionally, a constraint can also be specified and > applied every time a flag value changes for those flags whose valid > value can not be trivially checked by a simple min and max (ex. > whether it's power of 2, or bigger or smaller than some other flag > that can also change) > > I have chosen to modify the table macros (ex. RUNTIME_FLAGS in > globals.hpp) instead of using a more sophisticated solution, such as > C++ templates, because even though macros were unfriendly when > initially developing, once a solution was arrived at, subsequent > additions to the tables of new ranges, or constraint are trivial from > developer's point of view. (The intial development unfriendliness of > macros was mitigated by using a pre-processor, which for those using a > modern IDE like Xcode, is easily available from a menu). Using macros > also allowed for more minimal code changes. 
> > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is bunch of classes and methods that seems to beg for C++ > template to be used. I have tried, but when the compiler tries to > generate code for both uintx and size_t, which happen to have the same > underlying type (on BSD), it fails to compile overridden methods with > same type, but different name. If someone has a way of simplifying the > new code via C++ templates, however, we can file a new enhancement > request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining > flags that still need their ranges determined and ported over to this > new mechansim and they are tracked by individual subtasks. > > I had to modify several existing tests to change the error message > that they expected when VM refuses to run, which was changed to > provide uniform error messages. > > To help with testing and subtask efforts I have introduced a new > runtime flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" > > which in addition to the already existing flags: "PrintFlagsInitial" > and "PrintFlagsFinal" allow for thorough examination of the flags > values and their ranges. 
> > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) > > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev1/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 > > > # hgstat: > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 2 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++++++++++++++---------------------------------------- > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ > 
src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++++ > src/share/vm/runtime/globals.cpp | 699 +++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 310 ++++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 4 +- > test/gc/g1/TestStringDeduplicationTools.java | 6 +- > test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 10 +- > 48 files changed, 2641 insertions(+), 878 deletions(-) > From gerard.ziemski at oracle.com Thu May 21 16:09:01 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 21 May 2015 11:09:01 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55597860.2050403@oracle.com> References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> Message-ID: <555E031D.8070906@oracle.com> hi David, Thank you for taking the time to review this considerable change. 
I have responded to your points below: On 5/18/2015 12:28 AM, David Holmes wrote: > > A few comments: > > For constructs like: > > void emit_range_bool(const char* name) { (void)name; /* NOP > */ } > > I seem to recall that this was not sufficient to avoid "unused" > warnings with some compilers - which is presumably why you did it? That's indeed the reason why I did it. A search on the internet reveals that this seems to be the preferred way of handling such a case, and it works for all the currently building JPRT platforms. Is that enough? If there is a question of whether this works on some unusual compiler, can we leave it up to the engineers dealing with that compiler, who would certainly be more expert in such a situation than me? If not, how would you recommend we handle this differently now? > > --- > > In src/share/vm/runtime/globals.hpp it says: > > see "checkRanges" function in arguments.cpp > > but there is no such function in arguments.cpp Done. > > --- > > src/share/vm/runtime/arguments.cpp > > The set_object_alignment() functions seems to have some redundant > constraint assertions. As does verify_object_alignment() - seems to me > that everything in verify_object_alignment should either be in the > constraint function for ObjectAlignmentInBytes or one for > SurvivorAlignmentInBytes - though the combination of verification and > the actual setting of SurvivorAlignmentInBytes may be a problem in the > new architecture. If you can't get rid of verify_object_alignment() > I'd be tempted to not process it the new way at all, as splitting the > constraint checking just leads to confusion IMO. I moved the code to constraints and in fact this little bit of refactoring looks very nice indeed. > > The changes to test/runtime/contended/Options.java suggest to me that > a constraint is missing on ContendedPaddingWidth - that it is a > multiple of 8 (BytesPerLong).
That is (still) checked in arguments.cpp > (and given it is still checked there is no need to have removed it > from the test). I could argue whether the test really loses value by displaying only one error at a time, but I tried to modify my feature so that it can also detect and report more than one violation at a time, and it turned out to be doable in these cases, so this is now resolved, with no perceived functionality regression. > > --- > > Some of the test changes, such as: > > test/gc/g1/TestStringDeduplicationTools.java > > seem to be losing some of what they test. Not only is the test > checking the value is detected as erroneous, but it also detects that > the user is told in what way it is erroneous. The updated test doesn't > validate that part of the argument processing logic Done, with same fix as above. > > ---- > > There is some inconsistency in the test changes, sometimes you use: > > shouldContain("outside the allowed range") > > and sometimes: > > shouldContain("is outside the allowed range") Done. cheers From gerard.ziemski at oracle.com Thu May 21 16:13:24 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 21 May 2015 11:13:24 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5559E152.50105@oracle.com> References: <5554FE2D.6020405@oracle.com> <5559E152.50105@oracle.com> Message-ID: <555E0424.9000403@oracle.com> hi Dmitry, Thank you for taking the time to review this considerable change for an n-th time now :-) On 5/18/2015 7:55 AM, Dmitry Dmitriev wrote: > Hi Gerard, > > Can you please correct format string for jio_fprintf function in the > following functions(all from > src/share/vm/runtime/commandLineFlagRangeList.cpp): > Flag::Error check_intx(intx value, bool verbose = true) > Portion of format string from "intx %s = %ld is outside ..." to "intx > %s = "INTX_FORMAT" is outside ..."
> > Flag::Error check_uintx(uintx value, bool verbose = true) > Portion of format string from "uintx %s = %lu is outside ..." to > "uintx %s = "UINTX_FORMAT" is outside ..." > > Flag::Error check_uint64_t(uint64_t value, bool verbose = true) > Portion of format string from "uint64_t %s = %lu is outside ..." to > "uint64_t %s = "UINT64_FORMAT" is outside ..." Are you sure you are looking at the current webrev? I know I sent you many before internally, but the rev0 and rev1 seem to already include the changes you request? I have, however, modified CICompilerCount range max value as you requested, ie. to "max_jint" instead of "max_intx" to avoid a potential overflow issue you found. cheers From alexander.harlap at oracle.com Thu May 21 16:26:58 2015 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Thu, 21 May 2015 12:26:58 -0400 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E031D.8070906@oracle.com> References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> <555E031D.8070906@oracle.com> Message-ID: <555E0752.5080902@oracle.com> And what about this: void emit_range_bool(const char* /* name */) { } Alex On 5/21/2015 12:09 PM, Gerard Ziemski wrote: > hi David, > > Thank you for taking the time to review this considerable change. I > have responded to your points below: > > > On 5/18/2015 12:28 AM, David Holmes wrote: >> >> A few comments: >> >> For constructs like: >> >> void emit_range_bool(const char* name) { (void)name; /* >> NOP */ } >> >> I seem to recall that this was not sufficient to avoid "unused" >> warnings with some compilers - which is presumably why you did it? > > That's indeed the reason why I did. A search on internet reveals that > this seems to be the preferred way of handling such case, and it works > for all the currently building JPRT platforms. Is that enough? 
If > there is a question of whether this works on some unusual compiler, > can we leave it up to the the engineers dealing with that compiler, > which certainly would make them more of experts in such situation than > me? If not, how would you recommend we handle this differently now? > > >> >> --- >> >> In src/share/vm/runtime/globals.hpp it says: >> >> see "checkRanges" function in arguments.cpp >> >> but there is no such function in arguments.cpp > > Done. > >> >> --- >> >> src/share/vm/runtime/arguments.cpp >> >> The set_object_alignment() functions seems to have some redundant >> constraint assertions. As does verify_object_alignment() - seems to >> me that everything in verify_object_alignment should either be in the >> constraint function for ObjectAlignmentInBytes or one for >> SurvivorAlignmentInBytes - though the combination of verification and >> the actual setting of SurvivorAlignmentInBytes may be a problem in >> the new architecture. If you can't get rid of >> verify_object_alignment() I'd be tempted to not process it the new >> way at all, as splitting the constraint checking just leads to >> confusion IMO. > > I moved the code to constraints and in fact this little bit of > refactoring looks very nice indeed. > > >> >> The changes to test/runtime/contended/Options.java suggest to me that >> a constraint is missing on ContendedPaddingWidth - that it is a >> multiple of 8 (BytesPerLong). That is (still) checked in >> arguments.cpp (and given it is still checked there is no need to have >> removed it from the test). > > I could argue whether the tests is really loosing a value in > displaying only one error at the time, but I tried to modify my > feature, so that it could also detect and report more than one > violation at the time, and it turned out as doable in these cases, so > this is now resolved, with no perceived functionality regression. 
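As an aside for readers following the thread without the webrev: a constraint function of the kind being discussed is just a predicate over the proposed flag value that reports an error when the value is unacceptable. A minimal sketch of the ContendedPaddingWidth multiple-of-8 check might look like the following (the names and error codes here are illustrative stand-ins, not the actual HotSpot sources):

```cpp
#include <cstdio>

// Illustrative stand-ins for HotSpot's Flag::Error values (assumed names).
enum FlagError { FLAG_SUCCESS, FLAG_VIOLATES_CONSTRAINT };

const long BytesPerLong = 8;

// Sketch of a constraint function in the style described in the thread:
// ContendedPaddingWidth must be a non-negative multiple of BytesPerLong.
FlagError ContendedPaddingWidthConstraintFunc(long value, bool verbose) {
  if (value < 0 || (value % BytesPerLong) != 0) {
    if (verbose) {
      std::fprintf(stderr,
                   "ContendedPaddingWidth (%ld) must be a multiple of %ld\n",
                   value, BytesPerLong);
    }
    return FLAG_VIOLATES_CONSTRAINT;
  }
  return FLAG_SUCCESS;
}
```

With this sketch, a value such as 12 is rejected while 0, 8 or 16 pass; the real implementation would presumably also enforce an upper bound through the separate range mechanism discussed elsewhere in this review.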
> > >> >> --- >> >> Some of the test changes, such as: >> >> test/gc/g1/TestStringDeduplicationTools.java >> >> seem to be losing some of what they test. Not only is the test >> checking the value is detected as erroneous, but it also detects that >> the user is told in what way it is erroneous. The updated test >> doesn't validate that part of the argument processing logic > > Done, with same fix as above. > > >> >> ---- >> >> There is some inconsistency in the test changes, sometimes you use: >> >> shouldContain("outside the allowed range") >> >> and sometimes: >> >> shouldContain("is outside the allowed range") > > Done. > > > cheers > From gerard.ziemski at oracle.com Thu May 21 16:54:09 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 21 May 2015 11:54:09 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0752.5080902@oracle.com> References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> <555E031D.8070906@oracle.com> <555E0752.5080902@oracle.com> Message-ID: <555E0DB1.4040009@oracle.com> hi Alexander, Yes, handling it this way was an alternative, but I thought it required a C++ compiler with some specific feature support? C++ 11? It was my understanding that such solution was more restrictive, than the one I used. Thank you. On 5/21/2015 11:26 AM, Alexander Harlap wrote: > And what about this: > > void emit_range_bool(const char* /* name */) { } > > Alex > > On 5/21/2015 12:09 PM, Gerard Ziemski wrote: >> hi David, >> >> Thank you for taking the time to review this considerable change. I >> have responded to your points below: >> >> >> On 5/18/2015 12:28 AM, David Holmes wrote: >>> >>> A few comments: >>> >>> For constructs like: >>> >>> void emit_range_bool(const char* name) { (void)name; /* >>> NOP */ } >>> >>> I seem to recall that this was not sufficient to avoid "unused" >>> warnings with some compilers - which is presumably why you did it? 
>> >> That's indeed the reason why I did. A search on internet reveals that >> this seems to be the preferred way of handling such case, and it >> works for all the currently building JPRT platforms. Is that enough? >> If there is a question of whether this works on some unusual >> compiler, can we leave it up to the the engineers dealing with that >> compiler, which certainly would make them more of experts in such >> situation than me? If not, how would you recommend we handle this >> differently now? >> >> >>> >>> --- >>> >>> In src/share/vm/runtime/globals.hpp it says: >>> >>> see "checkRanges" function in arguments.cpp >>> >>> but there is no such function in arguments.cpp >> >> Done. >> >>> >>> --- >>> >>> src/share/vm/runtime/arguments.cpp >>> >>> The set_object_alignment() functions seems to have some redundant >>> constraint assertions. As does verify_object_alignment() - seems to >>> me that everything in verify_object_alignment should either be in >>> the constraint function for ObjectAlignmentInBytes or one for >>> SurvivorAlignmentInBytes - though the combination of verification >>> and the actual setting of SurvivorAlignmentInBytes may be a problem >>> in the new architecture. If you can't get rid of >>> verify_object_alignment() I'd be tempted to not process it the new >>> way at all, as splitting the constraint checking just leads to >>> confusion IMO. >> >> I moved the code to constraints and in fact this little bit of >> refactoring looks very nice indeed. >> >> >>> >>> The changes to test/runtime/contended/Options.java suggest to me >>> that a constraint is missing on ContendedPaddingWidth - that it is a >>> multiple of 8 (BytesPerLong). That is (still) checked in >>> arguments.cpp (and given it is still checked there is no need to >>> have removed it from the test). 
>> I could argue whether the tests is really loosing a value in >> displaying only one error at the time, but I tried to modify my >> feature, so that it could also detect and report more than one >> violation at the time, and it turned out as doable in these cases, so >> this is now resolved, with no perceived functionality regression. >> >> >>> >>> --- >>> >>> Some of the test changes, such as: >>> >>> test/gc/g1/TestStringDeduplicationTools.java >>> >>> seem to be losing some of what they test. Not only is the test >>> checking the value is detected as erroneous, but it also detects >>> that the user is told in what way it is erroneous. The updated test >>> doesn't validate that part of the argument processing logic >> >> Done, with same fix as above. >> >> >>> >>> ---- >>> >>> There is some inconsistency in the test changes, sometimes you use: >>> >>> shouldContain("outside the allowed range") >>> >>> and sometimes: >>> >>> shouldContain("is outside the allowed range") >> >> Done. >> >> >> cheers >> > > > From alexander.harlap at oracle.com Thu May 21 17:05:17 2015 From: alexander.harlap at oracle.com (Alexander Harlap) Date: Thu, 21 May 2015 13:05:17 -0400 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0DB1.4040009@oracle.com> References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> <555E031D.8070906@oracle.com> <555E0752.5080902@oracle.com> <555E0DB1.4040009@oracle.com> Message-ID: <555E104D.7040403@oracle.com> Hi Gerard, Unnamed parameter is standard C++. No restrictions at all. Alex On 5/21/2015 12:54 PM, Gerard Ziemski wrote: > hi Alexander, > > Yes, handling it this way was an alternative, but I thought it > required a C++ compiler with some specific feature support? C++ 11? > > It was my understanding that such solution was more restrictive, than > the one I used. > > Thank you.
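To make the two idioms under discussion concrete, here is a compilable side-by-side sketch. The emit_range_bool name follows the thread; the call counter is added here purely so the example has something observable, and is not part of either idiom. Both forms are accepted by pre-C++11 compilers: the cast-to-void silences "unused parameter" warnings explicitly, while an unnamed parameter has been legal since C++98 (and the ARM before it), so there is nothing for the compiler to warn about:

```cpp
// A call counter just so the sketch has observable behavior.
static int g_emit_calls = 0;

// Idiom 1: named parameter, explicitly cast to void to suppress
// "unused parameter" warnings on compilers that emit them.
void emit_range_bool_cast(const char* name) {
  (void)name; /* NOP */
  ++g_emit_calls;
}

// Idiom 2: unnamed parameter -- standard C++ since C++98; a parameter
// with no name cannot be reported as unused.
void emit_range_bool_unnamed(const char* /* name */) {
  ++g_emit_calls;
}

int emit_call_count() { return g_emit_calls; }
```

Both functions compile cleanly under -Wall -Wextra with the common compilers; which spelling to prefer is purely a style question.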
> > On 5/21/2015 11:26 AM, Alexander Harlap wrote: >> And what about this: >> >> void emit_range_bool(const char* /* name */) { } >> >> Alex >> >> On 5/21/2015 12:09 PM, Gerard Ziemski wrote: >>> hi David, >>> >>> Thank you for taking the time to review this considerable change. I >>> have responded to your points below: >>> >>> >>> On 5/18/2015 12:28 AM, David Holmes wrote: >>>> >>>> A few comments: >>>> >>>> For constructs like: >>>> >>>> void emit_range_bool(const char* name) { (void)name; /* >>>> NOP */ } >>>> >>>> I seem to recall that this was not sufficient to avoid "unused" >>>> warnings with some compilers - which is presumably why you did it? >>> >>> That's indeed the reason why I did. A search on internet reveals >>> that this seems to be the preferred way of handling such case, and >>> it works for all the currently building JPRT platforms. Is that >>> enough? If there is a question of whether this works on some unusual >>> compiler, can we leave it up to the the engineers dealing with that >>> compiler, which certainly would make them more of experts in such >>> situation than me? If not, how would you recommend we handle this >>> differently now? >>> >>> >>>> >>>> --- >>>> >>>> In src/share/vm/runtime/globals.hpp it says: >>>> >>>> see "checkRanges" function in arguments.cpp >>>> >>>> but there is no such function in arguments.cpp >>> >>> Done. >>> >>>> >>>> --- >>>> >>>> src/share/vm/runtime/arguments.cpp >>>> >>>> The set_object_alignment() functions seems to have some redundant >>>> constraint assertions. As does verify_object_alignment() - seems to >>>> me that everything in verify_object_alignment should either be in >>>> the constraint function for ObjectAlignmentInBytes or one for >>>> SurvivorAlignmentInBytes - though the combination of verification >>>> and the actual setting of SurvivorAlignmentInBytes may be a problem >>>> in the new architecture. 
If you can't get rid of >>>> verify_object_alignment() I'd be tempted to not process it the new >>>> way at all, as splitting the constraint checking just leads to >>>> confusion IMO. >>> >>> I moved the code to constraints and in fact this little bit of >>> refactoring looks very nice indeed. >>> >>> >>>> >>>> The changes to test/runtime/contended/Options.java suggest to me >>>> that a constraint is missing on ContendedPaddingWidth - that it is >>>> a multiple of 8 (BytesPerLong). That is (still) checked in >>>> arguments.cpp (and given it is still checked there is no need to >>>> have removed it from the test). >>> >>> I could argue whether the tests is really loosing a value in >>> displaying only one error at the time, but I tried to modify my >>> feature, so that it could also detect and report more than one >>> violation at the time, and it turned out as doable in these cases, >>> so this is now resolved, with no perceived functionality regression. >>> >>> >>>> >>>> --- >>>> >>>> Some of the test changes, such as: >>>> >>>> test/gc/g1/TestStringDeduplicationTools.java >>>> >>>> seem to be losing some of what they test. Not only is the test >>>> checking the value is detected as erroneous, but it also detects >>>> that the user is told in what way it is erroneous. The updated test >>>> doesn't validate that part of the argument processing logic >>> >>> Done, with same fix as above. >>> >>> >>>> >>>> ---- >>>> >>>> There is some inconsistency in the test changes, sometimes you use: >>>> >>>> shouldContain("outside the allowed range") >>>> >>>> and sometimes: >>>> >>>> shouldContain("is outside the allowed range") >>> >>> Done. 
>>> >>> cheers >>> >> >> >> > From dmitry.dmitriev at oracle.com Thu May 21 20:25:41 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 21 May 2015 23:25:41 +0300 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0424.9000403@oracle.com> References: <5554FE2D.6020405@oracle.com> <5559E152.50105@oracle.com> <555E0424.9000403@oracle.com> Message-ID: <555E3F45.3060100@oracle.com> Gerard, My comments inline. On 21.05.2015 19:13, Gerard Ziemski wrote: > hi Dmitry, > > Thank you for taking the time to review this considerable change for > an n-th time now :-) You are welcome! :) > > On 5/18/2015 7:55 AM, Dmitry Dmitriev wrote: >> Hi Gerard, >> >> Can you please correct format string for jio_fprintf function in the >> following functions(all from >> src/share/vm/runtime/commandLineFlagRangeList.cpp): >> Flag::Error check_intx(intx value, bool verbose = true) >> Portion of format string from "intx %s = %ld is outside ..." to "intx >> %s = "INTX_FORMAT" is outside ..." >> >> Flag::Error check_uintx(uintx value, bool verbose = true) >> Portion of format string from "uintx %s = %lu is outside ..." to >> "uintx %s = "UINTX_FORMAT" is outside ..." >> >> Flag::Error check_uint64_t(uint64_t value, bool verbose = true) >> Portion of format string from "uint64_t %s = %lu is outside ..." to >> "uint64_t %s = "UINT64_FORMAT" is outside ..." > > Are you sure you are looking at the current webrev? I know I sent you > many before internally, but the rev0 and rev1 seem to already include > the changes you request? > > I have, however, modified CICompilerCount range max value as you > requested, ie. to "max_jint" instead of "max_intx" to avoid a > potential overflow issue you found. Thank you for modifying CICompilerCount! Concerning the format issue... Yes, I think it still exists. Unfortunately I didn't catch it before. So, I still see the following code in the src/share/vm/runtime/commandLineFlagRangeList.cpp module.
For example, from this link in rev1: http://cr.openjdk.java.net/~gziemski/8059557_rev1/src/share/vm/runtime/commandLineFlagRangeList.cpp.html 44 Flag::Error check_intx(intx value, bool verbose = true) { 45 if ((value < _min) || (value > _max)) { 46 if (verbose == true) { 47 jio_fprintf(defaultStream::error_stream(), 48 "intx %s=%ld is outside the allowed range [ "INTX_FORMAT" ... "INTX_FORMAT" ]\n", 49 _name, value, _min, _max); 50 } On line 48, after the "=" sign, I still see the "%ld" format, but I think it must be INTX_FORMAT with double quotes around it. The same thing applies on lines 75 and 102 for the uintx and uint64_t types, which must use the UINTX_FORMAT and UINT64_FORMAT formats correspondingly. size_t has correct formats (all SIZE_FORMAT for numeric values, line 129). Thank you, Dmitry > > cheers > From dmitry.dmitriev at oracle.com Thu May 21 20:57:11 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 21 May 2015 23:57:11 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <555A09B8.7010402@oracle.com> References: <555A09B8.7010402@oracle.com> Message-ID: <555E46A7.4020402@oracle.com> Hello all, Recently I corrected several typos, so here is a new webrev for the tests: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ Thanks, Dmitry On 18.05.2015 18:48, Dmitry Dmitriev wrote: > Hello all, > > Please review the test set verifying the functionality implemented by JEP > 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). The review > request for this JEP can be found here: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html > > I created 3 tests for verifying options with ranges. The tests mostly > rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this > file contains functions to get the options with ranges as a list (by parsing > the output of the new "-XX:+PrintFlagsRanges" option), run command-line tests for > a list of options, and more.
The actual test code is contained in > the common/optionsvalidation/JVMOption.java file - the testCommandLine(), > testDynamic(), testJcmd() and testAttach() methods. The > common/optionsvalidation/IntJVMOption.java and > common/optionsvalidation/DoubleJVMOption.java source files contain > classes derived from the JVMOption class for integer and double JVM > options correspondingly. > > Here is a description of the tests: > 1) > hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java > > This test gets all options with ranges by parsing the output of the new option > "-XX:+PrintFlagsRanges" and verifies these options by starting Java and > passing options on the command line with valid and invalid values. > Currently it verifies about 106 options which have ranges. > Invalid values are values which are out of range. The test uses the values > "min-1" and "max+1". In this case Java should always exit with code 1 > and print an error message about the out-of-range value (with one exception: > if the option is unsigned and a negative value is passed, the out-of-range > error message is not printed because the error occurs earlier). > Valid values are values in range, e.g. min and max, and also several > additional values. In this case Java should successfully exit (exit > code 0) or exit with error code 1 for other reasons (low memory with a > certain option value etc.). In any case, for values in range Java > should not print messages about an out-of-range value. > In any case Java should not crash. > This test is excluded from JPRT because it takes a long time to execute and > also fails - some options with values in the valid range cause Java to > crash (bugs are submitted). > > 2) > hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java > > This test gets all writeable options with ranges by parsing the output of > the new option "-XX:+PrintFlagsRanges" and verifies these options by > dynamically changing their values to valid and invalid values.
Three > methods are used for that: the DynamicVMOption isValidValue and isInvalidValue > methods, Jcmd, and the attach method. Currently 3 writeable options with > ranges are verified by this test. > This test passes in JPRT. > > 3) hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java > > This test verifies the output of Jcmd when an out-of-range value is set to > a writeable option or a value violates an option constraint. Also, this > test verifies that jcmd does not write an error message to the target process. > This test passes in JPRT. > > > I did not write special tests for constraints for this JEP because > tests for that already exist (e.g. > test/runtime/CompressedOops/ObjectAlignment.java for > ObjectAlignmentInBytes or > hotspot/test/gc/arguments/TestHeapFreeRatio.java for > MinHeapFreeRatio/MaxHeapFreeRatio). > > Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ > > > JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 > > Thanks, > Dmitry > From kim.barrett at oracle.com Thu May 21 21:31:33 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 May 2015 17:31:33 -0400 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0DB1.4040009@oracle.com> References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> <555E031D.8070906@oracle.com> <555E0752.5080902@oracle.com> <555E0DB1.4040009@oracle.com> Message-ID: On May 21, 2015, at 12:54 PM, Gerard Ziemski wrote: > > hi Alexander, > > Yes, handling it this way was an alternative, but I thought it required a C++ compiler with some specific feature support? C++ 11? > > It was my understanding that such solution was more restrictive, than the one I used. > > Thank you. > > On 5/21/2015 11:26 AM, Alexander Harlap wrote: >> And what about this: >> >> void emit_range_bool(const char* /* name */) { } Unnamed arguments in C++ are not new; see, for example, C++ ARM 8.2.5 (p.140).
See also C++03 8.3.5/8 Functions, and Note regarding them in 8.4/6 Function definitions. That's the standard way to deal with the problem. I've not heard of a compiler that is so stupid as to warn about that usage. From kim.barrett at oracle.com Thu May 21 21:55:53 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 21 May 2015 17:55:53 -0400 Subject: RFR: JDK-8080627: JavaThread::satb_mark_queue_offset() is too big for an ARM ldrsb instruction In-Reply-To: <555C42A5.1050307@oracle.com> References: <555C42A5.1050307@oracle.com> Message-ID: <996D1316-494E-4C5E-AE9C-7CA2FB2FBC4C@oracle.com> On May 20, 2015, at 4:15 AM, Bengt Rutisson wrote: > > > Hi everyone, > > This is a fix for the C1 generated G1 write barriers. It is a bit unclear if this is compiler or GC code, so I'm using the broader mailing list for this review. > > http://cr.openjdk.java.net/~brutisso/8080627/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8080627 Using a Java type designator for a C++ value seems fishy, but not unique to here, so oh well. Looks good. > > The problem is that on ARM the T_BYTE type will boil down to using the ldrsb instruction, which has a limitation on the offset it can load from. It can only load from offsets -256 to 256. But in the G1 pre barrier we want to load the _satb_mark_queue field in JavaThread, which is on offset 760. > > Changing the type from T_BYTE to T_BOOLEAN will use the unsigned instruction ldrb instead, which can handle offsets up to 4096. Ideally we would have a T_UBYTE type to use unsigned instructions for this load, but that does not exist. > > On the other platforms (x86 and Sparc) we treat T_BYTE and T_BOOLEAN the same, it is only on ARM that we have the distinction between these two types. I assume that is to get the sign extension for free when we use the T_BYTE type. The fact that we treat T_BYTE and T_BOOLEAN the same on the other platforms makes it safe to do this change. > > I got some great help with this change from Dean Long.
Thanks, Dean! > > I tried a couple of different solutions. Moving the _satb_mark_queue field earlier in JavaThread did not help since the Thread superclass already has enough members to exceed the 256 limit for offsets. It also didn't seem like a stable solution. Loading the field into a register would work, but keeping the load an immediate seems like a nicer solution. Changing to treat T_BYTE and T_BOOLEAN the same on ARM (similarly to x86 and Sparc) would mean having to do explicit sign extension, which seems like a more complex solution than just switching the type in this case. > > Bengt From bengt.rutisson at oracle.com Fri May 22 07:30:45 2015 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Fri, 22 May 2015 09:30:45 +0200 Subject: RFR: JDK-8080627: JavaThread::satb_mark_queue_offset() is too big for an ARM ldrsb instruction In-Reply-To: References: <555C42A5.1050307@oracle.com> Message-ID: <555EDB25.8000908@oracle.com> On 21/05/15 15:57, Roland Westrelin wrote: > Hi Bengt, > >> This is a fix for the C1 generated G1 write barriers. It is a bit unclear if this is compiler or GC code, so I'm using the broader mailing list for this review. >> >> http://cr.openjdk.java.net/~brutisso/8080627/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8080627 >> >> The problem is that on ARM the T_BYTE type will boil down to using the ldrsb instruction, which has a limitation on the offset it can load from. It can only load from offsets -256 to 256. But in the G1 pre barrier we want to load the _satb_mark_queue field in JavaThread, which is on offset 760. >> >> Changing the type from T_BYTE to T_BOOLEAN will use the unsigned instruction ldrb instead, which can handle offsets up to 4096. Ideally we would have a T_UBYTE type to use unsigned instructions for this load, but that does not exist. >> >> On the other platforms (x86 and Sparc) we treat T_BYTE and T_BOOLEAN the same, it is only on ARM that we have the distinction between these two types.
I assume that is to get the sign extension for free when we use T_BYTE type. The fact that we treat T_BYTE and T_BOOLEAN the same on the other platforms makes it safe to do this change. > That looks good to me. Please add a comment. Thanks Roland. I'll add this comment before I push: // Use unsigned type T_BOOLEAN here rather than signed T_BYTE since some platforms, eg. ARM, // need to use unsigned instructions to use the large offset to load the satb_mark_queue. flag_type = T_BOOLEAN; Thanks, Bengt > > Roland. > >> I got some great help with this change from Dean Long. Thanks, Dean! >> >> I tried a couple of different solutions. Moving the _satb_mark_queue field earlier in JavaThread did not help since the Thread superclass already has enough members to exceed the 256 limit for offsets. It also didn't seem like a stable solution. Loading the field into a register would work, but keeping the load an immediate seems like a nicer solution. Changing to treat T_BYTE and T_BOOLEAN the same on ARM (similarly to x86 and Sparc) would mean to have to do explicit sign extension, which seems like a more complex solution than just switching the type in this case. >> >> Bengt From david.simms at oracle.com Fri May 22 08:30:21 2015 From: david.simms at oracle.com (David Simms) Date: Fri, 22 May 2015 10:30:21 +0200 Subject: RFR JDK-8079466: JNI Specification Update and Clean-up In-Reply-To: <555CE2CB.6000501@oracle.com> References: <5559ABCF.7040009@oracle.com> <555CE2CB.6000501@oracle.com> Message-ID: <555EE91D.6040702@oracle.com> Thanks Harold, that was quick work ! Updated web review: http://cr.openjdk.java.net/~dsimms/8079466/rev1/ Adjusted as per your comments: On 20/05/15 21:38, harold seigel wrote: > Hi David, > > It looks like a lot of work! I have just a few small comments: > > 1. In functions.html, delete the 'a' before 'this' > > 944 reference. May be a NULL value, in which case a this > function will > 945 return NULL.

> Done > > 2. In function.html, perhaps some commas around 'for example' ? > > 1035 (e.g. JNI_ERR or JNI_EINVAL). The > HotSpot JVM > 1036 implementation for example uses the > -XX:+MaxJNILocalCapacity flag > 1037 (default: 65536).

> Done > > 3. In function.html, should the words "string length" be added to line > 4339, like they are in line 4335? > > 4334

start: the index of the first unicode character > in the string to > 4335 copy. Must be greater than or equal to zero, and less than string > length > 4336 ("GetStringLength()").

> 4337 > 4338

len: the number of unicode characters to copy. > Must be greater > 4339 than or equal to zero, and "start + len" must be > less than > 4340 "GetStringLength()".

> > Done, further updated all "string length" and "array length" to be the same form. > 4. In function.html, what does "this number" refer to in line 4361? > > 4359

The len argument specifies the number of > 4360 unicode characters. The resulting number modified UTF-8 > encoding > 4361 characters may be greater than this number. > GetStringUTFLength() > 4362 may be used to determine the maximum size of the required > character buffer.

Done: "greater than the given len argument." > > 5. In function.html, line 4366, change "safetly" to "to safely" > > 4366 "memset()") before using this function, in order safetly perform > 4367 strlen().

> > Done. Nice spotting. > 6. In jni-6.html can the following: > > 15

JNI has been enhanced in Java SE 6 with a few minor changes. > The addition of > 16 the GetObjectRefType function. Deprecated structures > 17 JDK1_1InitArgs and JDK1_1AttachArgs > have been removed. > 18 And an increment in the JNI version number.

> > to > > 15

JNI has been enhanced in Java SE 6 with a few minor changes. The > 16 GetObjectRefType function has been added. Deprecated > structures > 17 JDK1_1InitArgs and JDK1_1AttachArgs > have been removed. > 18 The JNI version number has also been incremented.

> Done. > Thanks, Harold > Cheers /David Simms From bengt.rutisson at oracle.com Fri May 22 09:22:06 2015 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Fri, 22 May 2015 11:22:06 +0200 Subject: RFR: JDK-8080627: JavaThread::satb_mark_queue_offset() is too big for an ARM ldrsb instruction In-Reply-To: <996D1316-494E-4C5E-AE9C-7CA2FB2FBC4C@oracle.com> References: <555C42A5.1050307@oracle.com> <996D1316-494E-4C5E-AE9C-7CA2FB2FBC4C@oracle.com> Message-ID: <555EF53E.6050206@oracle.com> Hi Kim, On 21/05/15 23:55, Kim Barrett wrote: > On May 20, 2015, at 4:15 AM, Bengt Rutisson wrote: >> >> Hi everyone, >> >> This is a fix for the C1 generated G1 write barriers. It is a bit unclear if this is compiler or GC code, so I'm using the broader mailing list for this review. >> >> http://cr.openjdk.java.net/~brutisso/8080627/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8080627 > Using a Java type designator for a C++ value seems fishy, but not unique to here, so oh well. I totally agree. > > Looks good. Thanks for the review! Bengt > >> The problem is that on ARM the T_BYTE type will boil down to using the ldrsb instruction, which has a limitation on the offset it can load from. It can only load from offsets -256 to 256. But in the G1 pre barrier we want to load the _satb_mark_queue field in JavaThread, which is on offset 760. >> >> Changing the type from T_BYTE to T_BOOLEAN will use the unsigned instruction ldrb instead, which can handle offsets up to 4096. Ideally we would have a T_UBYTE type to use unsigned instructions for this load, but that does not exist. >> >> On the other platforms (x86 and Sparc) we treat T_BYTE and T_BOOLEAN the same, it is only on ARM that we have the distinction between these to types. I assume that is to get the sign extension for free when we use T_BYTE type. The fact that we treat T_BYTE and T_BOOLEAN the same on the other platforms makes it safe to do this change. >> >> I got some great help with this change from Dean Long. 
Thanks, Dean! >> >> I tried a couple of different solutions. Moving the _satb_mark_queue field earlier in JavaThread did not help since the Thread superclass already has enough members to exceed the 256 limit for offsets. It also didn't seem like a stable solution. Loading the field into a register would work, but keeping the load an immediate seems like a nicer solution. Changing to treat T_BYTE and T_BOOLEAN the same on ARM (similarly to x86 and Sparc) would mean to have to do explicit sign extension, which seems like a more complex solution than just switching the type in this case. >> >> Bengt > From per.liden at oracle.com Fri May 22 13:45:55 2015 From: per.liden at oracle.com (Per Liden) Date: Fri, 22 May 2015 15:45:55 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <555C84C3.4090405@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> <555C84C3.4090405@oracle.com> Message-ID: <555F3313.9010200@oracle.com> On 2015-05-20 14:57, Stefan Johansson wrote: > Thanks for looking at this Stefan, > > New webrevs: > Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ > Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ Looks good, just one minor thing. We now get: oop_oop_iterate_range##nv_suffix() oop_oop_iterate##nv_suffix##_m() oop_oop_iterate_range##nv_suffix() oop_oop_iterate_backwards##nv_suffix() Could align the _m version into this: oop_oop_iterate_range##nv_suffix() oop_oop_iterate_bounded##nv_suffix() oop_oop_iterate_range##nv_suffix() oop_oop_iterate_backwards##nv_suffix() cheers, /Per > > On 2015-05-20 11:44, Stefan Karlsson wrote: >> Hi Stefan, >> >> On 2015-05-20 10:58, Stefan Johansson wrote: >>> Hi, >>> >>> Please review this change to generalize the oop iteration macros: >>> https://bugs.openjdk.java.net/browse/JDK-8080746 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ >> >> This looks great! 
>> >> Here's a couple of cleanup/style comments: >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html >> >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html >> >> >> -------------------------------------------------------------------------------- >> >> Could you visually separate the DEFN and DECL defines so that it's >> more obvious that they serve different purposes. It might be worth >> adding a comment describing how the DEFN definitions are used. >> > Fixed, added an extra new line and extended the comments. >> -------------------------------------------------------------------------------- >> >> + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ >> + int start, int end); >> >> Could you combine these two lines. >> > Fixed. >> -------------------------------------------------------------------------------- >> >> The indentation of the ending backslashes are inconsistent. >> > Fixed. >> -------------------------------------------------------------------------------- >> >> Pre-existing naming issue: >> >> + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* >> blk); >> >> +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, >> OopClosureType* closure) { \ >> >> Could you change parameter name blk to closure? >> > Fixed. 
>> -------------------------------------------------------------------------------- >> >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html >> >> >> -------------------------------------------------------------------------------- >> >> 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, >> OopClosureType, nv_suffix) \ >> 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, >> OopClosureType, nv_suffix) >> >> It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN > Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. >> -------------------------------------------------------------------------------- >> >> >> ======================================================================== >> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html >> >> >> -------------------------------------------------------------------------------- >> >> 44 template >> 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* >> closure) { >> 46 return oop_oop_iterate_impl(obj, closure); >> 47 } >> 48 >> 49 template >> 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, >> OopClosureType* closure, MemRegion mr) { >> 51 return oop_oop_iterate_impl(obj, closure); >> 52 } >> >> I think you should add the inline keyword to these functions. > Skipped this, does not seem to be needed and leaving it out matches how > objArrayKlass.inline.hpp is handled. 
>> >> -------------------------------------------------------------------------------- >> >> > Thanks for the review, > StefanJ >> Thanks, >> StefanK >> >>> >>> Summary: >>> The macros for the oop_oop_iterate functions were defined for all >>> *Klass types even though they were very similar. This change extracts >>> and generalizes the macros to klass.hpp and arrayKlass.hpp. >>> >>> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called >>> OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >>> currently don't have a reverse implementation. >>> >>> Thanks, >>> Stefan >> > From daniel.smith at oracle.com Fri May 22 22:20:11 2015 From: daniel.smith at oracle.com (Dan Smith) Date: Fri, 22 May 2015 16:20:11 -0600 Subject: Call for Speakers -- 2015 JVM Language Summit In-Reply-To: References: Message-ID: <6FC0BD44-66FB-4C93-A10F-DA7DC33CC037@oracle.com> Reminder: last call for speaker submissions. General registration is also open. We've hit our initial allotment for early registration, but you can register as a "Wait List Attendee" to be notified when we increase the limit (should be soon). ?Dan > On Apr 15, 2015, at 11:02 AM, Dan Smith wrote: > > CALL FOR SPEAKERS -- JVM LANGUAGE SUMMIT, AUGUST 2015 > > We are pleased to announce the 2015 JVM Language Summit to be held at Oracle's Santa Clara campus on August 10-12, 2015. Registration is now open for speaker submissions (presentations and workshops) and will remain open until May 22, 2015. There is no registration fee for speakers. > > The JVM Language Summit is an open technical collaboration among language designers, compiler writers, tool builders, runtime engineers, and VM architects. We will share our experiences as creators of both the JVM and programming languages for the JVM. We also welcome non-JVM developers of similar technologies to attend or speak on their runtime, VM, or language of choice. 
> > Presentations will be recorded and made available to the public via the Oracle Technology Network. > > This event is being organized by language and JVM engineers; no marketers involved! So bring your slide rules and be prepared for some seriously geeky discussions. > > Format > > The summit is held in a single classroom-style room to support direct communication between participants. About 80-100 attendees are expected. > > As in previous years, we will divide the schedule between traditional presentations and "workshops." Workshops are informal, facilitated discussion groups among smaller, self-selected participants, and should enable deeper "dives" into the subject matter. If there is interest, there will also be impromptu "lightning talks." Traditional presentations (about 7 each day) will be given in a single track, while workshops (2-3 each day) will occur in parallel. > > Instructions for Speaker Registration > > If you'd like to give a presentation or lead a workshop, please register as a Speaker and include a detailed abstract. There is no fee. You will be notified about whether your proposal has been accepted; if not, you will be able to register as a regular attendee. > > For a successful presentation or workshop submission, please note the following: > > - All talks should be deeply technical, given by designers and implementors to designers and implementors. We all speak Code here! > > - Each talk, we hope and expect, will inform the audience, in detail, about the state of the art of language design and implementation on the JVM, or will explore the present and future capabilities of the JVM itself. (Some will do so indirectly by discussing non-JVM technologies.) > > - Know your audience: attendees may not be likely to ever use your specific language or tool, but could learn something from your interactions with the JVM.
A broad goal of the summit is to inspire us to work together on JVM-based technologies that enable a rich ecosystem at higher layers. > > We encourage speakers to submit both a presentation and a workshop; we will arrange to schedule the presentation before the workshop, so that the presentation can spark people's interest and the workshop will allow those who are really interested to go deeper into the subject area. Workshop facilitators may, but are not expected to, prepare presentation materials; in any case, they should come prepared to guide a deep technical discussion. > > To register: > http://regonline.com/jvmls2015 > > For further information: > http://jvmlangsummit.com > > Questions: > inquire at jvmlangsummit.com > > We hope to see you in August! From dmitry.dmitriev at oracle.com Sun May 24 20:52:29 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Sun, 24 May 2015 23:52:29 +0300 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0028.8070108@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> Message-ID: <55623A0D.2080002@oracle.com> Hi Gerard, I have found several new issues. Here are my comments for 2 modules: 1) In src/share/vm/runtime/commandLineFlagConstraintsGC.cpp: 156 Flag::Error G1MaxNewSizePercentConstraintFunc(bool verbose, uintx* value) { 157 if ((CommandLineFlags::finishedInitializing() == true) && (*value < G1NewSizePercent)) { 158 if (verbose == true) { 159 jio_fprintf(defaultStream::error_stream(), 160 "G1MaxNewSizePercent (" UINTX_FORMAT ") must be less than or " 161 "equal to G1NewSizePercent (" UINTX_FORMAT ")\n", 162 *value, G1NewSizePercent); 163 } The message on line 160 must state that G1MaxNewSizePercent must be greater than G1NewSizePercent.
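To make Dmitry's first point concrete, here is a minimal standalone sketch of the corrected check and message. Everything in it is a hypothetical, simplified stand-in for the real HotSpot code (Flag::Error, jio_fprintf, the flag globals); it is not the actual patch:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Hypothetical stand-ins for HotSpot's Flag::Error values.
enum FlagError { FLAG_SUCCESS, FLAG_VIOLATES_CONSTRAINT };

static unsigned int G1NewSizePercent = 5;  // stand-in for the real VM flag

// The check rejects values *below* G1NewSizePercent, so the message
// must say "greater than or equal to", not "less than or equal to".
FlagError G1MaxNewSizePercentConstraint(unsigned int value,
                                        char* msg, std::size_t msg_len) {
  if (value < G1NewSizePercent) {
    std::snprintf(msg, msg_len,
                  "G1MaxNewSizePercent (%u) must be greater than or "
                  "equal to G1NewSizePercent (%u)",
                  value, G1NewSizePercent);
    return FLAG_VIOLATES_CONSTRAINT;
  }
  return FLAG_SUCCESS;
}
```

The CMSOldPLABMax point below has the same shape: the comparison should be against CMSOldPLABMin, with the message adjusted to name both flags correctly.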
186 Flag::Error CMSOldPLABMaxConstraintFunc(bool verbose, size_t* value) { 187 if ((CommandLineFlags::finishedInitializing() == true) && (*value < CMSOldPLABMax)) { 188 if (verbose == true) { 189 jio_fprintf(defaultStream::error_stream(), 190 "CMSOldPLABMax (" SIZE_FORMAT ") must be greater than or " 191 "equal to CMSOldPLABMax (" SIZE_FORMAT ")\n", 192 *value, CMSOldPLABMax); 193 } 194 return Flag::VIOLATES_CONSTRAINT; It seems that this function performs the wrong check. It verifies the value for CMSOldPLABMax but compares it against CMSOldPLABMax itself. I think that it should be compared against CMSOldPLABMin. In that case the error message should be corrected as well. 228 Flag::Error SurvivorAlignmentInBytesConstraintFunc(bool verbose, intx* value) { 229 if (CommandLineFlags::finishedInitializing() == true) { 230 if (*value != 0) { 231 if (!is_power_of_2(*value)) { 232 if (verbose == true) { 233 jio_fprintf(defaultStream::error_stream(), 234 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be power of 2\n", 235 *value); 236 } 237 return Flag::VIOLATES_CONSTRAINT; 238 } 239 if (SurvivorAlignmentInBytes < ObjectAlignmentInBytes) { 240 if (verbose == true) { 241 jio_fprintf(defaultStream::error_stream(), 242 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be greater " 243 "than ObjectAlignmentInBytes (" INTX_FORMAT ") \n", 244 *value, ObjectAlignmentInBytes); 245 } 246 return Flag::VIOLATES_CONSTRAINT; 247 } On line 239, "*value" should be used instead of "SurvivorAlignmentInBytes". 2) In src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp: 33 Flag::Error ObjectAlignmentInBytesConstraintFunc(bool verbose, intx* value) { 34 if (!is_power_of_2(*value)) { 35 if (verbose == true) { 36 jio_fprintf(defaultStream::error_stream(), 37 "ObjectAlignmentInBytes=%d must be power of 2\n", 38 (int)*value); 39 } 40 return Flag::VIOLATES_CONSTRAINT; 41 } 42 // In case page size is very small.
43 if ((int)*value >= os::vm_page_size()) { 44 if (verbose == true) { 45 jio_fprintf(defaultStream::error_stream(), 46 "ObjectAlignmentInBytes=%d must be less than page size %d\n", 47 (int)*value, os::vm_page_size()); 48 } I understand that ObjectAlignmentInBytesConstraintFunc does not have a huge upper range and does not introduce problems in this code, so it can be left as is. I think that on lines 37-38 and 46 it is unnecessary to convert "*value" to "int", because instead of the "%d" format you can use INTX_FORMAT. Also, on line 43 os::vm_page_size can be converted to the wider type (from int to intx) instead of converting *value to the narrower type (on 64-bit systems, from intx to int), i.e. use the following comparison: (*value >= (intx)os::vm_page_size()). Regards, Dmitry On 21.05.2015 18:56, Gerard Ziemski wrote: > hi all, > > Here is a revision 1 of the feature taking into account feedback from > Coleen, Dmitry and David. I will be responding to each of the feedback > emails shortly. > > We introduce a new mechanism that allows specification of a valid > range per flag that is then used to automatically validate the given > flag's value every time it changes. Range values must be constant and > can not change. Optionally, a constraint can also be specified and > applied every time a flag value changes for those flags whose valid > value can not be trivially checked by a simple min and max (ex. > whether it's power of 2, or bigger or smaller than some other flag > that can also change)
(The initial development unfriendliness of > macros was mitigated by using a pre-processor, which for those using a > modern IDE like Xcode, is easily available from a menu). Using macros > also allowed for more minimal code changes. > > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is a bunch of classes and methods that seem to beg for C++ > templates to be used. I have tried, but when the compiler tries to > generate code for both uintx and size_t, which happen to have the same > underlying type (on BSD), it fails to compile overridden methods with > the same type, but different names. If someone has a way of simplifying the > new code via C++ templates, however, we can file a new enhancement > request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining > flags that still need their ranges determined and ported over to this > new mechanism and they are tracked by individual subtasks. > > I had to modify several existing tests to change the error message > that they expected when the VM refuses to run, which was changed to > provide uniform error messages. > > To help with testing and subtask efforts I have introduced a new > runtime flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" >
> > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) > > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev1/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 > > > # hgstat: > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 2 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++++++++++++++---------------------------------------- > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ > 
src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++++ > src/share/vm/runtime/globals.cpp | 699 +++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 310 ++++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 4 +- > test/gc/g1/TestStringDeduplicationTools.java | 6 +- > test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 10 +- > 48 files changed, 2641 insertions(+), 878 deletions(-) > From stefan.johansson at oracle.com Mon May 25 08:42:59 2015 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 25 May 2015 10:42:59 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <555F3313.9010200@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> <555C84C3.4090405@oracle.com> <555F3313.9010200@oracle.com> Message-ID: <5562E093.9010101@oracle.com> Thanks Per for looking at this, New webrevs: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.02/ http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01-02/ On 2015-05-22 15:45, Per Liden wrote: > On 2015-05-20 14:57, 
Stefan Johansson wrote: >> Thanks for looking at this Stefan, >> >> New webrevs: >> Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ >> Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ > > > Looks good, just one minor thing. We now get: > > oop_oop_iterate_range##nv_suffix() > oop_oop_iterate##nv_suffix##_m() > oop_oop_iterate_range##nv_suffix() > oop_oop_iterate_backwards##nv_suffix() > > Could align the _m version into this: > > oop_oop_iterate_range##nv_suffix() > oop_oop_iterate_bounded##nv_suffix() > oop_oop_iterate_range##nv_suffix() > oop_oop_iterate_backwards##nv_suffix() > Fixed. Thanks, Stefan > cheers, > /Per > >> >> On 2015-05-20 11:44, Stefan Karlsson wrote: >>> Hi Stefan, >>> >>> On 2015-05-20 10:58, Stefan Johansson wrote: >>>> Hi, >>>> >>>> Please review this change to generalize the oop iteration macros: >>>> https://bugs.openjdk.java.net/browse/JDK-8080746 >>>> >>>> Webrev: >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ >>> >>> This looks great! >>> >>> Here's a couple of cleanup/style comments: >>> >>> ======================================================================== >>> >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html >>> >>> >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html >>> >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> >>> Could you visually separate the DEFN and DECL defines so that it's >>> more obvious that they serve different purposes. It might be worth >>> adding a comment describing how the DEFN definitions are used. >>> >> Fixed, added an extra new line and extended the comments. >>> -------------------------------------------------------------------------------- >>> >>> >>> + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ >>> + int start, int end); >>> >>> Could you combine these two lines. 
>>> >> Fixed. >>> -------------------------------------------------------------------------------- >>> >>> >>> The indentation of the ending backslashes are inconsistent. >>> >> Fixed. >>> -------------------------------------------------------------------------------- >>> >>> >>> Pre-existing naming issue: >>> >>> + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* >>> blk); >>> >>> +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, >>> OopClosureType* closure) { \ >>> >>> Could you change parameter name blk to closure? >>> >> Fixed. >>> -------------------------------------------------------------------------------- >>> >>> >>> >>> ======================================================================== >>> >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html >>> >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> >>> 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, >>> OopClosureType, nv_suffix) \ >>> 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, >>> OopClosureType, nv_suffix) \ >>> 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, >>> OopClosureType, nv_suffix) \ >>> 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, >>> OopClosureType, nv_suffix) >>> >>> It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN >> Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. 
>>> -------------------------------------------------------------------------------- >>> >>> >>> >>> ======================================================================== >>> >>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html >>> >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> >>> 44 template >>> 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* >>> closure) { >>> 46 return oop_oop_iterate_impl(obj, closure); >>> 47 } >>> 48 >>> 49 template >>> 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, >>> OopClosureType* closure, MemRegion mr) { >>> 51 return oop_oop_iterate_impl(obj, closure); >>> 52 } >>> >>> I think you should add the inline keyword to these functions. >> Skipped this, does not seem to be needed and leaving it out matches how >> objArrayKlass.inline.hpp is handled. >>> >>> -------------------------------------------------------------------------------- >>> >>> >>> >> Thanks for the review, >> StefanJ >>> Thanks, >>> StefanK >>> >>>> >>>> Summary: >>>> The macros for the oop_oop_iterate functions were defined for all >>>> *Klass types even though they were very similar. This change extracts >>>> and generalizes the macros to klass.hpp and arrayKlass.hpp. >>>> >>>> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called >>>> OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >>>> currently don't have a reverse implementation. 
>>>> >>>> Thanks, >>>> Stefan >>> >> From stefan.karlsson at oracle.com Mon May 25 09:03:30 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 25 May 2015 11:03:30 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <5562E093.9010101@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> <555C84C3.4090405@oracle.com> <555F3313.9010200@oracle.com> <5562E093.9010101@oracle.com> Message-ID: <5562E562.3000902@oracle.com> On 25/05/15 10:42, Stefan Johansson wrote: > Thanks Per for looking at this, > > New webrevs: > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.02/ > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01-02/ The rename looks good to me. Thanks, StefanK > > On 2015-05-22 15:45, Per Liden wrote: >> On 2015-05-20 14:57, Stefan Johansson wrote: >>> Thanks for looking at this Stefan, >>> >>> New webrevs: >>> Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ >>> Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ >> >> >> Looks good, just one minor thing. We now get: >> >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate##nv_suffix##_m() >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_backwards##nv_suffix() >> >> Could align the _m version into this: >> >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_bounded##nv_suffix() >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_backwards##nv_suffix() >> > Fixed. > > Thanks, > Stefan >> cheers, >> /Per >> >>> >>> On 2015-05-20 11:44, Stefan Karlsson wrote: >>>> Hi Stefan, >>>> >>>> On 2015-05-20 10:58, Stefan Johansson wrote: >>>>> Hi, >>>>> >>>>> Please review this change to generalize the oop iteration macros: >>>>> https://bugs.openjdk.java.net/browse/JDK-8080746 >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ >>>> >>>> This looks great! 
>>>> >>>> Here's a couple of cleanup/style comments: >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html >>>> >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> Could you visually separate the DEFN and DECL defines so that it's >>>> more obvious that they serve different purposes. It might be worth >>>> adding a comment describing how the DEFN definitions are used. >>>> >>> Fixed, added an extra new line and extended the comments. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* >>>> blk, \ >>>> + int start, int end); >>>> >>>> Could you combine these two lines. >>>> >>> Fixed. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> The indentation of the ending backslashes are inconsistent. >>>> >>> Fixed. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> Pre-existing naming issue: >>>> >>>> + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* >>>> blk); >>>> >>>> +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, >>>> OopClosureType* closure) { \ >>>> >>>> Could you change parameter name blk to closure? >>>> >>> Fixed. 
>>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, >>>> OopClosureType, nv_suffix) >>>> >>>> It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN >>> Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> 44 template >>>> 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* >>>> closure) { >>>> 46 return oop_oop_iterate_impl(obj, closure); >>>> 47 } >>>> 48 >>>> 49 template >>>> 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, >>>> OopClosureType* closure, MemRegion mr) { >>>> 51 return oop_oop_iterate_impl(obj, closure); >>>> 52 } >>>> >>>> I think you should add the inline keyword to these functions. >>> Skipped this, does not seem to be needed and leaving it out matches how >>> objArrayKlass.inline.hpp is handled. 
>>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>> Thanks for the review, >>> StefanJ >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> Summary: >>>>> The macros for the oop_oop_iterate functions were defined for all >>>>> *Klass types even though they were very similar. This change extracts >>>>> and generalizes the macros to klass.hpp and arrayKlass.hpp. >>>>> >>>>> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called >>>>> OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >>>>> currently don't have a reverse implementation. >>>>> >>>>> Thanks, >>>>> Stefan >>>> >>> > From per.liden at oracle.com Mon May 25 09:15:14 2015 From: per.liden at oracle.com (Per Liden) Date: Mon, 25 May 2015 11:15:14 +0200 Subject: RFR: 8080746: Refactor oop iteration macros to be more general In-Reply-To: <5562E093.9010101@oracle.com> References: <555C4C9A.4080508@oracle.com> <555C5796.3070304@oracle.com> <555C84C3.4090405@oracle.com> <555F3313.9010200@oracle.com> <5562E093.9010101@oracle.com> Message-ID: <5562E822.10208@oracle.com> Looks good. /Per On 2015-05-25 10:42, Stefan Johansson wrote: > Thanks Per for looking at this, > > New webrevs: > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.02/ > http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01-02/ > > On 2015-05-22 15:45, Per Liden wrote: >> On 2015-05-20 14:57, Stefan Johansson wrote: >>> Thanks for looking at this Stefan, >>> >>> New webrevs: >>> Full: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.01/ >>> Inc: http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00-01/ >> >> >> Looks good, just one minor thing. 
We now get: >> >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate##nv_suffix##_m() >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_backwards##nv_suffix() >> >> Could align the _m version into this: >> >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_bounded##nv_suffix() >> oop_oop_iterate_range##nv_suffix() >> oop_oop_iterate_backwards##nv_suffix() >> > Fixed. > > Thanks, > Stefan >> cheers, >> /Per >> >>> >>> On 2015-05-20 11:44, Stefan Karlsson wrote: >>>> Hi Stefan, >>>> >>>> On 2015-05-20 10:58, Stefan Johansson wrote: >>>>> Hi, >>>>> >>>>> Please review this change to generalize the oop iteration macros: >>>>> https://bugs.openjdk.java.net/browse/JDK-8080746 >>>>> >>>>> Webrev: >>>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/ >>>> >>>> This looks great! >>>> >>>> Here's a couple of cleanup/style comments: >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/klass.hpp.udiff.html >>>> >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/arrayKlass.hpp.udiff.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> Could you visually separate the DEFN and DECL defines so that it's >>>> more obvious that they serve different purposes. It might be worth >>>> adding a comment describing how the DEFN definitions are used. >>>> >>> Fixed, added an extra new line and extended the comments. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> + int oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* blk, \ >>>> + int start, int end); >>>> >>>> Could you combine these two lines. >>>> >>> Fixed. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> The indentation of the ending backslashes are inconsistent. >>>> >>> Fixed. 
>>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> Pre-existing naming issue: >>>> >>>> + int oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* >>>> blk); >>>> >>>> +int KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, >>>> OopClosureType* closure) { \ >>>> >>>> Could you change parameter name blk to closure? >>>> >>> Fixed. >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/objArrayKlass.inline.hpp.frames.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> 155 OOP_OOP_ITERATE_DEFN( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 156 OOP_OOP_ITERATE_DEFN_m( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 157 OOP_OOP_ITERATE_RANGE_DEFN( ObjArrayKlass, >>>> OopClosureType, nv_suffix) \ >>>> 158 OOP_OOP_ITERATE_NO_BACKWARDS_DEFN(ObjArrayKlass, >>>> OopClosureType, nv_suffix) >>>> >>>> It would be nice to prefix all these macros with OOP_OOP_ITERATE_DEFN >>> Fixed, did the same for OOP_OOP_ITERATE_DECL. Change _m to BOUNDED. 
>>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>>> ======================================================================== >>>> >>>> http://cr.openjdk.java.net/~sjohanss/8080746/hotspot.00/src/share/vm/oops/typeArrayKlass.inline.hpp.frames.html >>>> >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> 44 template >>>> 45 int TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* >>>> closure) { >>>> 46 return oop_oop_iterate_impl(obj, closure); >>>> 47 } >>>> 48 >>>> 49 template >>>> 50 int TypeArrayKlass::oop_oop_iterate_bounded(oop obj, >>>> OopClosureType* closure, MemRegion mr) { >>>> 51 return oop_oop_iterate_impl(obj, closure); >>>> 52 } >>>> >>>> I think you should add the inline keyword to these functions. >>> Skipped this, does not seem to be needed and leaving it out matches how >>> objArrayKlass.inline.hpp is handled. >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>> >>> Thanks for the review, >>> StefanJ >>>> Thanks, >>>> StefanK >>>> >>>>> >>>>> Summary: >>>>> The macros for the oop_oop_iterate functions were defined for all >>>>> *Klass types even though they were very similar. This change extracts >>>>> and generalizes the macros to klass.hpp and arrayKlass.hpp. >>>>> >>>>> For the arrays the *_OOP_OOP_ITERATE_BACKWARDS_* macros is now called >>>>> OOP_OOP_ITERATE_NO_BACKWARDS_* to reflect that for arrays we >>>>> currently don't have a reverse implementation. 
>>>>> >>>>> Thanks, >>>>> Stefan >>>> >>> > From vladimir.x.ivanov at oracle.com Mon May 25 15:05:53 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 25 May 2015 18:05:53 +0300 Subject: [9] RFR (XS): 8081000: gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java fails with RuntimeException: field "_resolved_references" not found in type ConstantPool Message-ID: <55633A51.6010809@oracle.com> http://cr.openjdk.java.net/~vlivanov/8081000/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8081000 A followup on 8059340 (moved ConstantPool::_resolved_references to j.l.Class). Missed relevant SA code. Thanks! Best regards, Vladimir Ivanov From pujarimahesh_kumar at yahoo.com Mon May 25 15:52:59 2015 From: pujarimahesh_kumar at yahoo.com (Mahesh Pujari) Date: Mon, 25 May 2015 15:52:59 +0000 (UTC) Subject: Issues with dtrace enabled in OpenJdk 9 Message-ID: <816652759.861398.1432569179316.JavaMail.yahoo@mail.yahoo.com> Hi, I am trying to build OpenJDK 9 with dtrace enabled on my Ubuntu machine (Linux 3.13.0-45-generic #74-Ubuntu), I have asked this question on build-dev at openjdk.java.net (http://mail.openjdk.java.net/pipermail/build-dev/2015-May/014969.html) and I was directed to this mailing list (including distro-pkg mailing list, but had no luck there, so trying out here). If SDT headers are found then dtrace is enabled by default, this is what I understood. Now when I build, I end up with the errors below ... ... vmThread.o: In function `VMOperationQueue::add(VM_Operation*)': /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:156: undefined reference to `__dtrace_hotspot___vmops__request' vmThread.o: In function `VMThread::evaluate_operation(VM_Operation*)': /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:354: undefined reference to `__dtrace_hotspot___vmops__begin' /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:374: undefined reference to `__dtrace_hotspot___vmops__end' ... ..
classLoadingService.o: In function `ClassLoadingService::notify_class_unloaded(InstanceKlass*)': /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:119: undefined reference to `__dtrace_hotspot___class__unloaded' classLoadingService.o: In function `ClassLoadingService::notify_class_loaded(InstanceKlass*, bool)': /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:144: undefined reference to `__dtrace_hotspot___class__loaded' compileBroker.o: In function `CompileBroker::invoke_compiler_on_method(CompileTask*)': /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:1927: undefined reference to `__dtrace_hotspot___method__compile__begin' /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:2028: undefined reference to `__dtrace_hotspot___method__compile__end' ... ... Compilation is successful but during linkage things fail. Can someone help me with this, any directions to what I am missing? thanks and regards, Mahesh Pujari From serguei.spitsyn at oracle.com Mon May 25 20:48:27 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Mon, 25 May 2015 13:48:27 -0700 Subject: [9] RFR (XS): 8081000: gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java fails with RuntimeException: field "_resolved_references" not found in type ConstantPool In-Reply-To: <55633A51.6010809@oracle.com> References: <55633A51.6010809@oracle.com> Message-ID: <55638A9B.8040306@oracle.com> Looks good. Thanks, Serguei On 5/25/15 8:05 AM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8081000/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8081000 > > A followup on 8059340 (moved ConstantPool::_resolved_references to > j.l.Class). Missed relevant SA code. > > Thanks!
> > Best regards, > Vladimir Ivanov From staffan.larsen at oracle.com Tue May 26 06:19:27 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 26 May 2015 08:19:27 +0200 Subject: [9] RFR (XS): 8081000: gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java fails with RuntimeException: field "_resolved_references" not found in type ConstantPool In-Reply-To: <55633A51.6010809@oracle.com> References: <55633A51.6010809@oracle.com> Message-ID: <77CA72D9-9BBE-4152-94E3-6093F8FB6184@oracle.com> Looks good! Thanks, /Staffan > On 25 May 2015, at 17:05, Vladimir Ivanov wrote: > > http://cr.openjdk.java.net/~vlivanov/8081000/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8081000 > > A followup on 8059340 (moved ConstantPool::_resolved_references to j.l.Class). Missed relevant SA code. > > Thanks! > > Best regards, > Vladimir Ivanov From david.holmes at oracle.com Tue May 26 08:10:14 2015 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 May 2015 18:10:14 +1000 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0028.8070108@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> Message-ID: <55642A66.4050308@oracle.com> Hi Gerard, Progress review so I can send this out tonight - I still have to complete the review and double-check the responses to my previous comments. In globals.hpp, looking at all the "stack parameters" I expect to see a constraint function specified somewhere, but there isn't one. So now I'm a bit confused about how constraint functions are specified and used. If there has to be a relationship maintained between A, B and C, is the constraint function specified for all of them or none of them and simply executed as post argument processing step?
A few minor specific comments below: src/share/vm/c1/c1_globals.hpp Minor nit: Could you change: range(1, NOT_LP64(K) LP64_ONLY(32*K)) to range(1, NOT_LP64(1*K) LP64_ONLY(32*K)) or even: range(1, NOT_LP64(1) LP64_ONLY(32) *K) I find the (K) by itself a little odd-looking. --- src/share/vm/runtime/globals.cpp + if (withComments) { + #ifndef PRODUCT + st->print("%s", _doc); + #endif + } The ifdef should be around the whole if block (as it should for the existing code just before this change). --- src/share/vm/runtime/globals.hpp range(1, NOT_LP64(K) LP64_ONLY(M)) 1*K and 1*M please. 1328 /* 8K is well beyond the reasonable HW cache line size, even with the */\ delete the end 'the' 1329 /* aggressive prefetching, while still leaving the room for segregating */\ delete 'the'. Nit: 1981 range(ReferenceProcessor::DiscoveryPolicyMin, \ 1982 ReferenceProcessor::DiscoveryPolicyMax) \ 1982 should be indented so the Reference's line up. 2588 range((intx)Arguments::get_min_number_of_compiler_threads(), \ 2589 max_jint) \ Seems odd for a range to be expressed this way - seems more like a constraint. And get_min_number_of_compiler_threads doesn't really seem like an API for Arguments. That's all for now. Thanks, David ----- On 22/05/2015 1:56 AM, Gerard Ziemski wrote: > hi all, > > Here is a revision 1 of the feature taking into account feedback from > Coleen, Dmitry and David. I will be responding to each of the feedback > emails shortly. > > We introduce a new mechanism that allows specification of a valid range > per flag that is then used to automatically validate given flag's value > every time it changes. Ranges values must be constant and can not > change. Optionally, a constraint can also be specified and applied every > time a flag value changes for those flags whose valid value can not be > trivially checked by a simple min and max (ex. 
whether it's power of 2, > or bigger or smaller than some other flag that can also change) > > I have chosen to modify the table macros (ex. RUNTIME_FLAGS in > globals.hpp) instead of using a more sophisticated solution, such as C++ > templates, because even though macros were unfriendly when initially > developing, once a solution was arrived at, subsequent additions to the > tables of new ranges, or constraint are trivial from developer's point > of view. (The intial development unfriendliness of macros was mitigated > by using a pre-processor, which for those using a modern IDE like Xcode, > is easily available from a menu). Using macros also allowed for more > minimal code changes. > > The presented solution is based on expansion of macros using variadic > functions and can be readily seen in > runtime/commandLineFlagConstraintList.cpp and > runtime/commandLineFlagRangeList.cpp > > In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp, > there is bunch of classes and methods that seems to beg for C++ template > to be used. I have tried, but when the compiler tries to generate code > for both uintx and size_t, which happen to have the same underlying type > (on BSD), it fails to compile overridden methods with same type, but > different name. If someone has a way of simplifying the new code via C++ > templates, however, we can file a new enhancement request to address that. > > This webrev represents only the initial range checking framework and > only 100 or so flags that were ported from an existing ad hoc range > checking code to this new mechanism. There are about 250 remaining flags > that still need their ranges determined and ported over to this new > mechansim and they are tracked by individual subtasks. > > I had to modify several existing tests to change the error message that > they expected when VM refuses to run, which was changed to provide > uniform error messages. 
> > To help with testing and subtask efforts I have introduced a new runtime > flag: > > PrintFlagsRanges: "Print VM flags and their ranges and exit VM" > > which in addition to the already existing flags: "PrintFlagsInitial" and > "PrintFlagsFinal" allow for thorough examination of the flags values and > their ranges. > > The code change builds and passes JPRT (-testset hotspot) and UTE > (vm.quick.testlist) > > > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev1/ > note: due to "awk" limit of 50 pats the Frames diff is not > available for "src/share/vm/runtime/arguments.cpp" > > JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 > Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 > GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 > Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 > > > # hgstat: > src/cpu/ppc/vm/globals_ppc.hpp | 2 +- > src/cpu/sparc/vm/globals_sparc.hpp | 2 +- > src/cpu/x86/vm/globals_x86.hpp | 2 +- > src/cpu/zero/vm/globals_zero.hpp | 3 +- > src/os/aix/vm/globals_aix.hpp | 2 +- > src/os/bsd/vm/globals_bsd.hpp | 29 +- > src/os/linux/vm/globals_linux.hpp | 9 +- > src/os/solaris/vm/globals_solaris.hpp | 4 +- > src/os/windows/vm/globals_windows.hpp | 5 +- > src/share/vm/c1/c1_globals.cpp | 4 +- > src/share/vm/c1/c1_globals.hpp | 17 +- > src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- > src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 ++- > src/share/vm/opto/c2_globals.cpp | 12 +- > src/share/vm/opto/c2_globals.hpp | 39 ++- > src/share/vm/prims/whitebox.cpp | 12 +- > src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++++++++++++++---------------------------------------- > src/share/vm/runtime/arguments.hpp | 24 +- > src/share/vm/runtime/commandLineFlagConstraintList.cpp | 242 +++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagConstraintList.hpp | 72 ++++++ > src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++++++++++++ > 
src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++++ > src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 +++ > src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 +++++++++++++++++++++++++++ > src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++++ > src/share/vm/runtime/globals.cpp | 699 +++++++++++++++++++++++++++++++++++++++++++++++++------------ > src/share/vm/runtime/globals.hpp | 310 ++++++++++++++++++++++----- > src/share/vm/runtime/globals_extension.hpp | 101 +++++++- > src/share/vm/runtime/init.cpp | 6 +- > src/share/vm/runtime/os.hpp | 17 + > src/share/vm/runtime/os_ext.hpp | 7 +- > src/share/vm/runtime/thread.cpp | 6 + > src/share/vm/services/attachListener.cpp | 4 +- > src/share/vm/services/classLoadingService.cpp | 6 +- > src/share/vm/services/diagnosticCommand.cpp | 3 +- > src/share/vm/services/management.cpp | 6 +- > src/share/vm/services/memoryService.cpp | 2 +- > src/share/vm/services/writeableFlags.cpp | 161 ++++++++++---- > src/share/vm/services/writeableFlags.hpp | 52 +--- > test/compiler/c2/7200264/Test7200264.sh | 5 +- > test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- > test/gc/arguments/TestHeapFreeRatio.java | 23 +- > test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 4 +- > test/gc/g1/TestStringDeduplicationTools.java | 6 +- > test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- > test/runtime/CompressedOops/ObjectAlignment.java | 9 +- > test/runtime/contended/Options.java | 10 +- > 48 files changed, 2641 insertions(+), 878 deletions(-) > From daniel.daugherty at oracle.com Tue May 26 13:42:13 2015 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Tue, 26 May 2015 07:42:13 -0600 Subject: Issues with dtrace enabled in OpenJdk 9 In-Reply-To: <816652759.861398.1432569179316.JavaMail.yahoo@mail.yahoo.com> References: <816652759.861398.1432569179316.JavaMail.yahoo@mail.yahoo.com> Message-ID: <55647835.2050401@oracle.com> Adding the Serviceability alias. Dan On 5/25/15 9:52 AM, Mahesh Pujari wrote: > Hi, > > I am trying to build OpenJDK 9 with dtrace enabled on my Ubuntu machine (Linux 3.13.0-45-generic #74-Ubuntu), I have asked this question on build-dev at openjdk.java.net (http://mail.openjdk.java.net/pipermail/build-dev/2015-May/014969.html) and I was directed to this mailing list (including distro-pkg mailing list, but had no luck there, so trying out here). > > If SDT headers are found then dtrace is enabled by default, this is what I understood. Now when I build, I end-up with below errors > ... > ... > vmThread.o: In function `VMOperationQueue::add(VM_Operation*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:156: undefined reference to `__dtrace_hotspot___vmops__request' > vmThread.o: In function `VMThread::evaluate_operation(VM_Operation*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:354: undefined reference to `__dtrace_hotspot___vmops__begin' > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:374: undefined reference to `__dtrace_hotspot___vmops__end' > ... > .. 
> classLoadingService.o: In function `ClassLoadingService::notify_class_unloaded(InstanceKlass*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:119: undefined reference to `__dtrace_hotspot___class__unloaded' > classLoadingService.o: In function `ClassLoadingService::notify_class_loaded(InstanceKlass*, bool)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:144: undefined reference to `__dtrace_hotspot___class__loaded' > compileBroker.o: In function `CompileBroker::invoke_compiler_on_method(CompileTask*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:1927: undefined reference to `__dtrace_hotspot___method__compile__begin' > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:2028: undefined reference to `__dtrace_hotspot___method__compile__end' > ... > ... > > Compilation is success but during linkage things fail. Can someone help me with this, any directions to what I am missing. > > thanks and regards, > Mahesh Pujari > From harold.seigel at oracle.com Tue May 26 14:12:03 2015 From: harold.seigel at oracle.com (harold seigel) Date: Tue, 26 May 2015 10:12:03 -0400 Subject: RFR JDK-8079466: JNI Specification Update and Clean-up In-Reply-To: <555EE91D.6040702@oracle.com> References: <5559ABCF.7040009@oracle.com> <555CE2CB.6000501@oracle.com> <555EE91D.6040702@oracle.com> Message-ID: <55647F33.5060003@oracle.com> Hi David, The changes look good. Thanks, Harold On 5/22/2015 4:30 AM, David Simms wrote: > > Thanks Harold, that was quick work ! > > Updated web review: http://cr.openjdk.java.net/~dsimms/8079466/rev1/ > > Adjusted as per your comments: > > On 20/05/15 21:38, harold seigel wrote: >> Hi David, >> >> It looks like a lot of work! I have just a few small comments: >> >> 1. In functions.html, delete the 'a' before 'this' >> >> 944 reference. May be a NULL value, in which case a >> this function will >> 945 return NULL.

>> > Done >> >> 2. In function.html, perhaps some commas around 'for example' ? >> >> 1035 (e.g. JNI_ERR or JNI_EINVAL). The >> HotSpot JVM >> 1036 implementation for example uses the >> -XX:+MaxJNILocalCapacity flag >> 1037 (default: 65536).

>> > Done >> >> 3. In function.html, should the words "string length" be added to >> line 4339, like they are in line 4335? >> >> 4334

start: the index of the first unicode character >> in the string to >> 4335 copy. Must be greater than or equal to zero, and less than >> string length >> 4336 ("GetStringLength()").

>> 4337 >> 4338

len: the number of unicode characters to copy. >> Must be greater >> 4339 than or equal to zero, and "start + len" must be >> less than >> 4340 "GetStringLength()".

>> >> > Done, further updated all "string length" and "array length" to be the > same form. >> 4. In function.html, what does "this number" refer to in line 4361? >> >> 4359

The len argument specifies the number of >> 4360 unicode characters. The resulting number modified UTF-8 >> encoding >> 4361 characters may be greater than this number. >> GetStringUTFLength() >> 4362 may be used to determine the maximum size of the required >> character buffer.

> Done: "greater than the given len argument." >> >> 5. In function.htlm, line 4366, change "safetly' to "to safely" >> >> 4366 "memset()") before using this function, in order >> safetly perform >> 4367 strlen().

>> >> > Done. Nice spotting. >> 6. In jni-6.html can the following: >> >> 15

JNI has been enhanced in Java SE 6 with a few minor changes. >> The addition of >> 16 the GetObjectRefType function. Deprecated structures >> 17 JDK1_1InitArgs and JDK1_1AttachArgs >> have been removed. >> 18 And an increment in the JNI version number.

>> >> to >> >> 15

JNI has been enhanced in Java SE 6 with a few minor changes. The >> 16 GetObjectRefType function has been added. >> Deprecated structures >> 17 JDK1_1InitArgs and JDK1_1AttachArgs >> have been removed. >> 18 The JNI version number has also been incremented.

>> > Done. >> Thanks, Harold >> > > Cheers > /David Simms From edward.nevill at linaro.org Tue May 26 16:33:37 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Tue, 26 May 2015 17:33:37 +0100 Subject: RFR: 8079565: aarch64: Add vectorization support for aarch64 Message-ID: <1432658017.17486.32.camel@mylittlepony.linaroharston> Hi, The following webrev http://cr.openjdk.java.net/~enevill/8079565/webrev.00/ adds support for vectorization on aarch64. This is an initial pass at adding vectorization. There are a number of limitations. - Only 128 bit vectors are supported. The current implementation only supports vectors which are exactly 128 bits in length. Support needs to be added for shorter vectors (64 / 32??). - The pack/unpack vectorizations are missing - The Replicate opcode is suboptimal. Currently it just uses a sequence of MOVI/ORRI or MVNI/BICI instructions to replicate an immediate value across a vector. This can take up to 4 instructions to replicate a 32 bit value across the vector (1 MOVI and 3 ORRIs ot 1 MVNI and 3 BICIs). - The cost model needs tuning. At the moment most vectorizations are just costed at 1 X instruction cost. - It needs benchmarking and tuning across different partners hardware. For example, on some partners hardware it may not be worthwhile performing the Long or Double vectorizations. 
I have done some performance testing on one partner's hardware using the hotspot vector tests with the following results:- Byte Vectors: approx 3-4 X improvement Short Vectors: approx 2.5-3.25 X improvement Int Vectors: approx 1.5-2.5 X improvement Long Vectors: approx 1.0-1.33 X improvement Float Vectors: approx 1.4-1.8 X improvement Double Vectors: approx 0.85-1.25 X improvement I have also implemented the scalar reduction optimization with the following results:- Scalar Sum Reduction Int: ~4.6 X improvement Scalar Sum Reduction Float: ~2.3 X improvement Scalar Sum Reduction Double: ~1.9 X improvement Scalar Product Reduction Int: ~1.2 X improvement Scalar Product Reduction Float: ~1.1 X improvement Scalar Product Reduction Double: ~0.8 X improvement Tested with JTreg hotspot Original: Test results: passed: 814; failed: 32; error: 3 Revised : Test results: passed: 814; failed: 32; error: 3 Langtools: Test results: passed: 3,222; error: 11 Please review, Thanks, Ed. From gerard.ziemski at oracle.com Tue May 26 17:11:04 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Tue, 26 May 2015 12:11:04 -0500 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55623A0D.2080002@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55623A0D.2080002@oracle.com> Message-ID: <5564A928.2020506@oracle.com> Thank you Dmitry for the corrections.
I have fixed the string format issues you pointed out in your other email, and responded to the other issues inline here: > On May 26, 2015, at 11:30 AM, Gerard Ziemski wrote: > > > > > -------- Forwarded Message -------- > Subject: Re: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments > Date: Sun, 24 May 2015 23:52:29 +0300 > From: Dmitry Dmitriev > Organization: Oracle Corporation > To: Gerard Ziemski, hotspot-dev at openjdk.java.net Developers, David Holmes, Coleen Phillimore > CC: david.therkelsen at oracle.com, Thomas Stüfe, sangheon.kim > > Hi Gerard, > > I have found several new issues. Here are my comments for 2 modules: > > 1) In src/share/vm/runtime/commandLineFlagConstraintsGC.cpp: > 156 Flag::Error G1MaxNewSizePercentConstraintFunc(bool verbose, uintx* value) { > 157 if ((CommandLineFlags::finishedInitializing() == true) && (*value < G1NewSizePercent)) { > 158 if (verbose == true) { > 159 jio_fprintf(defaultStream::error_stream(), > 160 "G1MaxNewSizePercent (" UINTX_FORMAT ") must be less than or " > 161 "equal to G1NewSizePercent (" UINTX_FORMAT ")\n", > 162 *value, G1NewSizePercent); > 163 } > > Message on line 160 must state that G1MaxNewSizePercent must be greater than G1NewSizePercent. Done. > 186 Flag::Error CMSOldPLABMaxConstraintFunc(bool verbose, size_t* value) { > 187 if ((CommandLineFlags::finishedInitializing() == true) && (*value < CMSOldPLABMax)) { > 188 if (verbose == true) { > 189 jio_fprintf(defaultStream::error_stream(), > 190 "CMSOldPLABMax (" SIZE_FORMAT ") must be greater than or " > 191 "equal to CMSOldPLABMax (" SIZE_FORMAT ")\n", > 192 *value, CMSOldPLABMax); > 193 } > 194 return Flag::VIOLATES_CONSTRAINT; > > It seems that this function performs the wrong check. It verifies the value for CMSOldPLABMax and compares it against CMSOldPLABMax. I think that it should be compared against CMSOldPLABMin. In this case the error message should be corrected.
This constraint implements the previous ad hoc code logic: status = status && verify_min_value(CMSOldPLABMax, 1, "CMSOldPLABMax"); I think the desire here was to limit CMSOldPLABMax to be between 1 and the default value (the current CMSOldPLABMax value). > 228 Flag::Error SurvivorAlignmentInBytesConstraintFunc(bool verbose, intx* value) { > 229 if (CommandLineFlags::finishedInitializing() == true) { > 230 if (*value != 0) { > 231 if (!is_power_of_2(*value)) { > 232 if (verbose == true) { > 233 jio_fprintf(defaultStream::error_stream(), > 234 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be power of 2\n", > 235 *value); > 236 } > 237 return Flag::VIOLATES_CONSTRAINT; > 238 } > 239 if (SurvivorAlignmentInBytes < ObjectAlignmentInBytes) { > 240 if (verbose == true) { > 241 jio_fprintf(defaultStream::error_stream(), > 242 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be greater " > 243 "than ObjectAlignmentInBytes (" INTX_FORMAT ") \n", > 244 *value, ObjectAlignmentInBytes); > 245 } > 246 return Flag::VIOLATES_CONSTRAINT; > 247 } > > On line 239 "*value" should be used instead of "SurvivorAlignmentInBytes". Done. > 2) In src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp: > 33 Flag::Error ObjectAlignmentInBytesConstraintFunc(bool verbose, intx* value) { > 34 if (!is_power_of_2(*value)) { > 35 if (verbose == true) { > 36 jio_fprintf(defaultStream::error_stream(), > 37 "ObjectAlignmentInBytes=%d must be power of 2\n", > 38 (int)*value); > 39 } > 40 return Flag::VIOLATES_CONSTRAINT; > 41 } > 42 // In case page size is very small. > 43 if ((int)*value >= os::vm_page_size()) { > 44 if (verbose == true) { > 45 jio_fprintf(defaultStream::error_stream(), > 46 "ObjectAlignmentInBytes=%d must be less than page size %d\n", > 47 (int)*value, os::vm_page_size()); > 48 } > > I understand that ObjectAlignmentInBytesConstraintFunc does not have a huge upper range and this code does not introduce problems, so it can be left as is.
I think that on lines 37-38 and 46 it is unnecessary to convert "*value" to "int", because instead of "%d" format you can use INTX_FORMAT. Also, on line 43 os::vm_page_size can be converted to a wide type (from int to intx) instead of converting *value to a narrow type (on 64 bit systems from intx to int), i.e. use the following compare statement (*value >= (intx)os::vm_page_size()). I copied the old ad hoc code "as is", but we can tighten it up a bit. I will be presenting webrev 2 shortly. cheers From gerard.ziemski at oracle.com Tue May 26 17:15:22 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Tue, 26 May 2015 12:15:22 -0500 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55642A66.4050308@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55642A66.4050308@oracle.com> Message-ID: <5564AA2A.6090801@oracle.com> Thank you David for feedback. Please see my answers inline: > -------- Forwarded Message -------- > Subject: Re: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments > Date: Tue, 26 May 2015 18:10:14 +1000 > From: David Holmes > Organization: Oracle Corporation > To: Gerard Ziemski, hotspot-dev at openjdk.java.net Developers, Coleen Phillimore, Dmitry Dmitriev > CC: david.therkelsen at oracle.com, Thomas Stüfe, sangheon.kim > > Hi Gerard, > > Progress review so I can send this out tonight - I still have to > complete the review and double-check the responses to my previous comments. > > In globals.hpp, looking at all the "stack parameters" I expect to see a > constraint function specified somewhere, but there isn't one. So now I'm > a bit confused about how constraint functions are specified and used. If > there has to be a relationship maintained between A, B and C, is the > constraint function specified for all of them or none of them and simply > executed as post argument processing step?
Can you elaborate on when > constraint functions may be used, and must be used, and how they are > processed? > Constraints were not meant as a framework that imposes restrictions as to when and how to be used. It?s a helper framework that makes it easy for a developer to implement the kind of a constraint that a particular flag(s) demands. The decision as to what goes into it is left to the engineer responsible for a particular flag. The process of implementing constraints and ranges is still ongoing for many of the flags, and there are 3 subtasks tracking the issue. This webrev covers the introduction of the range/constraint framework and a subset of ranges/constraints implemented for those flags for which I was able to find existing ad hoc code or comments describing them. > A few minor specific comments below: > > src/share/vm/c1/c1_globals.hpp > > Minor nit: Could you change: > > range(1, NOT_LP64(K) LP64_ONLY(32*K)) > > to > > range(1, NOT_LP64(1*K) LP64_ONLY(32*K)) > > or even: > > range(1, NOT_LP64(1) LP64_ONLY(32) *K) > > I find the (K) by itself a little odd-looking. > Done. > --- > > src/share/vm/runtime/globals.cpp > > + if (withComments) { > + #ifndef PRODUCT > + st->print("%s", _doc); > + #endif > + } > > The ifdef should be around the whole if block (as it should for the > existing code just before this change). Done. > --- > > src/share/vm/runtime/globals.hpp > > range(1, NOT_LP64(K) LP64_ONLY(M)) > > 1*K and 1*M please. Done. > 1328 /* 8K is well beyond the reasonable HW cache line size, even with > the */\ > > delete the end 'the' > > 1329 /* aggressive prefetching, while still leaving the room for > segregating */\ > > delete 'the?. Done. > Nit: > > 1981 range(ReferenceProcessor::DiscoveryPolicyMin, > \ > 1982 ReferenceProcessor::DiscoveryPolicyMax) > \ > > 1982 should be indented so the Reference's line up. Done. 
> 2588 > range((intx)Arguments::get_min_number_of_compiler_threads(), \ > 2589 max_jint) > \ > > Seems odd for a range to be expressed this way - seems more like a > constraint. And get_min_number_of_compiler_threads doesn't really seem > like an API for Arguments. > All ranges could be expressed as constraints. Ranges are just 'trivial' constraints, which cover most of the cases, and are supposed to be easy for the developer to use. get_min_number_of_compiler_threads() in this context of expressing a range is a 'constant'. I need that value; it used to be accessed from Arguments internally, so it did not need to be exposed before, but with the range/constraints factored out, I need to be able to get at the value. I will be presenting webrev 2 shortly after I do a build and some testing. cheers From gerard.ziemski at oracle.com Tue May 26 17:20:31 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Tue, 26 May 2015 12:20:31 -0500 Subject: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: References: <5554FE2D.6020405@oracle.com> <55597860.2050403@oracle.com> <555E031D.8070906@oracle.com> <555E0752.5080902@oracle.com> <555E0DB1.4040009@oracle.com> Message-ID: <5564AB5F.1040708@oracle.com> Thank you for the feedback. I have adopted the unnamed-arguments-based solution. cheers On 5/21/2015 4:31 PM, Kim Barrett wrote: > On May 21, 2015, at 12:54 PM, Gerard Ziemski wrote: >> hi Alexander, >> >> Yes, handling it this way was an alternative, but I thought it required a C++ compiler with some specific feature support? C++11? >> >> It was my understanding that such a solution was more restrictive than the one I used. >> >> Thank you. >> >> On 5/21/2015 11:26 AM, Alexander Harlap wrote: >>> And what about this: >>> >>> void emit_range_bool(const char* /* name */) { } > Unnamed arguments in C++ are not new; see, for example, C++ ARM 8.2.5 (p.140). > See also C++03 8.3.5/8 Functions, and the Note regarding them in 8.4/6 Function definitions. 
> That's the standard way to deal with the problem. I've not heard of a compiler that is so > stupid as to warn about that usage. > > > > From vladimir.x.ivanov at oracle.com Tue May 26 19:22:58 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Tue, 26 May 2015 22:22:58 +0300 Subject: [9] RFR (XS): 8081000: gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java fails with RuntimeException: field "_resolved_references" not found in type ConstantPool In-Reply-To: <77CA72D9-9BBE-4152-94E3-6093F8FB6184@oracle.com> References: <55633A51.6010809@oracle.com> <77CA72D9-9BBE-4152-94E3-6093F8FB6184@oracle.com> Message-ID: <5564C812.30002@oracle.com> Stefan, Serguei, thanks for review. Best regards, Vladimir Ivanov On 5/26/15 9:19 AM, Staffan Larsen wrote: > Looks good! > > Thanks, > /Staffan > >> On 25 maj 2015, at 17:05, Vladimir Ivanov wrote: >> >> http://cr.openjdk.java.net/~vlivanov/8081000/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8081000 >> >> A followup on 8059340 (moved ConstantPool::_resolved_references to j.l.Class). Missed relevant SA code. >> >> Thanks! >> >> Best regards, >> Vladimir Ivanov > From vladimir.x.ivanov at oracle.com Tue May 26 19:24:15 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Tue, 26 May 2015 22:24:15 +0300 Subject: [9] RFR (XS): 8081000: gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java fails with RuntimeException: field "_resolved_references" not found in type ConstantPool In-Reply-To: <5564C812.30002@oracle.com> References: <55633A51.6010809@oracle.com> <77CA72D9-9BBE-4152-94E3-6093F8FB6184@oracle.com> <5564C812.30002@oracle.com> Message-ID: <5564C85F.7050904@oracle.com> Sorry, Staffan. Misspelled your name. Best regards, Vladimir Ivanov On 5/26/15 10:22 PM, Vladimir Ivanov wrote: > Stefan, Serguei, thanks for review. > > Best regards, > Vladimir Ivanov > > On 5/26/15 9:19 AM, Staffan Larsen wrote: >> Looks good! 
>> >> Thanks, >> /Staffan >> >>> On 25 maj 2015, at 17:05, Vladimir Ivanov >>> wrote: >>> >>> http://cr.openjdk.java.net/~vlivanov/8081000/webrev.00/ >>> https://bugs.openjdk.java.net/browse/JDK-8081000 >>> >>> A followup on 8059340 (moved ConstantPool::_resolved_references to >>> j.l.Class). Missed relevant SA code. >>> >>> Thanks! >>> >>> Best regards, >>> Vladimir Ivanov >> From dmitry.dmitriev at oracle.com Tue May 26 19:33:15 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Tue, 26 May 2015 22:33:15 +0300 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5564A928.2020506@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55623A0D.2080002@oracle.com> <5564A928.2020506@oracle.com> Message-ID: <5564CA7B.5080100@oracle.com> Hi Gerard, Thank you for fixing the code. Please, look at my comments inline about CMSOldPLABMax and SurvivorAlignmentInBytes constraints: On 26.05.2015 20:11, Gerard Ziemski wrote: > Thank you Dmitry for the corrections. > > I have fixed the string format issues you pointed out in your other email, and responded to the other issues inline here: > > >> On May 26, 2015, at 11:30 AM, Gerard Ziemski wrote: >> >> >> >> >> -------- Forwarded Message -------- >> Subject: Re: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments >> Date: Sun, 24 May 2015 23:52:29 +0300 >> From: Dmitry Dmitriev >> Organization: Oracle Corporation >> To: Gerard Ziemski,hotspot-dev at openjdk.java.net Developers, David Holmes, Coleen Phillimore >> CC: david.therkelsen at oracle.com, Thomas St?fe, sangheon.kim >> >> Hi Gerard, >> >> I have found new several issues. 
Here my comments for 2 modules: >> >> 1) In src/share/vm/runtime/commandLineFlagConstraintsGC.cpp: >> 156 Flag::Error G1MaxNewSizePercentConstraintFunc(bool verbose, uintx* value) { >> 157 if ((CommandLineFlags::finishedInitializing() == true) && (*value < G1NewSizePercent)) { >> 158 if (verbose == true) { >> 159 jio_fprintf(defaultStream::error_stream(), >> 160 "G1MaxNewSizePercent (" UINTX_FORMAT ") must be less than or " >> 161 "equal to G1NewSizePercent (" UINTX_FORMAT ")\n", >> 162 *value, G1NewSizePercent); >> 163 } >> >> Message on line 160 must state that G1MaxNewSizePercent must be greater than G1NewSizePercent. > Done. > > >> 186 Flag::Error CMSOldPLABMaxConstraintFunc(bool verbose, size_t* value) { >> 187 if ((CommandLineFlags::finishedInitializing() == true) && (*value < CMSOldPLABMax)) { >> 188 if (verbose == true) { >> 189 jio_fprintf(defaultStream::error_stream(), >> 190 "CMSOldPLABMax (" SIZE_FORMAT ") must be greater than or " >> 191 "equal to CMSOldPLABMax (" SIZE_FORMAT ")\n", >> 192 *value, CMSOldPLABMax); >> 193 } >> 194 return Flag::VIOLATES_CONSTRAINT; >> >> It seems that this function perform wrong check. It verifies value for CMSOldPLABMax and compare it against CMSOldPLABMax. I think that it should be compared against CMSOldPLABMin. In this case error message should be corrected. > This constraint implements the previous ad hoc code logic: > > status = status && verify_min_value(CMSOldPLABMax, 1, "CMSOldPLABMax"); > > I think the desire here was to limit CMSOldPLABMax to be between 1 and the default value (the current CMSOldPLABMax value). I think that verify_min_value only verify parameter for minimal value, i.e. CMSOldPLABMax should not be less than 1. So, it seems that CMSOldPLABMax doesn't need constraint. 
> > >> 228 Flag::Error SurvivorAlignmentInBytesConstraintFunc(bool verbose, intx* value) { >> 229 if (CommandLineFlags::finishedInitializing() == true) { >> 230 if (*value != 0) { >> 231 if (!is_power_of_2(*value)) { >> 232 if (verbose == true) { >> 233 jio_fprintf(defaultStream::error_stream(), >> 234 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be power of 2\n", >> 235 *value); >> 236 } >> 237 return Flag::VIOLATES_CONSTRAINT; >> 238 } >> 239 if (SurvivorAlignmentInBytes < ObjectAlignmentInBytes) { >> 240 if (verbose == true) { >> 241 jio_fprintf(defaultStream::error_stream(), >> 242 "SurvivorAlignmentInBytes (" INTX_FORMAT ") must be greater " >> 243 "than ObjectAlignmentInBytes (" INTX_FORMAT ") \n", >> 244 *value, ObjectAlignmentInBytes); >> 245 } >> 246 return Flag::VIOLATES_CONSTRAINT; >> 247 } >> >> On line 239, "*value" should be used instead of "SurvivorAlignmentInBytes". > Done. I overlooked that the error message is also incomplete. It should state that SurvivorAlignmentInBytes must be greater than or equal to ObjectAlignmentInBytes. Currently it states only that SurvivorAlignmentInBytes must be greater. Can you please fix that? Thanks, Dmitry 
I think that on lines 37-38 and 46 it is unnecessary to convert "*value" to "int", because instead of "%d" format you can use INTX_FORMAT. Also, on line 43 os::vm_page_size can be converted to wide type(from int to intx) instead of converting *value to narrow type(on 64 bit systems from intx to int), i.e. use following compare statement (*value >= (intx)os::vm_page_size()). > I copied the old ad hoc code "as is?, but we can tighten it up a bit. > > I will be presenting web rev 2 shortly. > > > cheers > > > From christian.tornqvist at oracle.com Tue May 26 21:55:06 2015 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 26 May 2015 17:55:06 -0400 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <555E46A7.4020402@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> Message-ID: <024b01d097fe$a0db0520$e2910f60$@oracle.com> Hi Dmitry, First of all, the code looks really good. One thing that I noticed is that it seems like the invalid values (out of range) is done for all the options with ranges, this seems redundant to me. It should be enough to test one for each data type instead. I'll continue to review the code tomorrow :) Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Dmitry Dmitriev Sent: Thursday, May 21, 2015 4:57 PM To: hotspot-dev at openjdk.java.net; Gerard Ziemski Subject: Re: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" Hello all, Recently I correct several typos, so here a new webrev for tests: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ Thanks, Dmitry On 18.05.2015 18:48, Dmitry Dmitriev wrote: > Hello all, > > Please review test set for verifying functionality implemented by JEP > 245 "Validate JVM Command-Line Flag Arguments"(JDK-8059557). 
Review > request for this JEP can be found there: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.htm > l > > I create 3 tests for verifying options with ranges. The tests mostly > rely on common/optionsvalidation/JVMOptionsUtils.java. Class in this > file contains functions to get options with ranges as list(by parsing > new option "-XX:+PrintFlagsRanges" output), run command line test for > list of options and other. The actual test code contained in > common/optionsvalidation/JVMOption.java file - testCommandLine(), > testDynamic(), testJcmd() and testAttach() methods. > common/optionsvalidation/IntJVMOption.java and > common/optionsvalidation/DoubleJVMOption.java source files contain > classes derived from JVMOption class for integer and double JVM > options correspondingly. > > Here are description of the tests: > 1) > hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRang > es.java > > This test get all options with ranges by parsing output of new option > "-XX:+PrintFlagsRanges" and verify these options by starting Java and > passing options in command line with valid and invalid values. > Currently it verifies about 106 options which have ranges. > Invalid values are values which out-of-range. In test used values > "min-1" and "max+1".In this case Java should always exit with code 1 > and print error message about out-of-range value(with one exception, > if option is unsigned and passing negative value, then out-of-range > error message is not printed because error occurred earlier). > Valid values are values in range, e.g. min&max and also several > additional values. In this case Java should successfully exit(exit > code 0) or exit with error code 1 for other reasons(low memory with > certain option value etc.). In any case for values in range Java > should not print messages about out of range value. > In any case Java should not crash. 
> This test excluded from JPRT because it takes long time to execute and > also fails - some options with value in valid range cause Java to > crash(bugs are submitted). > > 2) > hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRang > es.java > > This test get all writeable options with ranges by parsing output of > new option "-XX:+PrintFlagsRanges" and verify these options by > dynamically changing it's values to the valid and invalid values. Used > 3 methods for that: DynamicVMOption isValidValue and isInvalidValue > methods, Jcmd and by attach method. Currently 3 writeable options with > ranges are verified by this test. > This test pass in JPRT. > > 3) > hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java > > This test verified output of Jcmd when out-of-range value is set to > the writeable option or value violates option constraint. Also this > test verify that jcmd not write error message to the target process. > This test pass in JPRT. > > > I am not write special tests for constraints for this JEP because > there are exist test for that(e.g. > test/runtime/CompressedOops/ObjectAlignment.java for > ObjectAlignmentInBytes or > hotspot/test/gc/arguments/TestHeapFreeRatio.java for > MinHeapFreeRatio/MaxHeapFreeRatio). > > Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ > > > JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 > > Thanks, > Dmitry > From david.holmes at oracle.com Wed May 27 00:43:26 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 May 2015 10:43:26 +1000 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5564AA2A.6090801@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55642A66.4050308@oracle.com> <5564AA2A.6090801@oracle.com> Message-ID: <5565132E.2030406@oracle.com> On 27/05/2015 3:15 AM, Gerard Ziemski wrote: > Thank you David for feedback. 
Please see my answers inline: > >> -------- Forwarded Message -------- >> Subject: Re: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments >> Date: Tue, 26 May 2015 18:10:14 +1000 >> From: David Holmes >> Organization: Oracle Corporation >> To: Gerard Ziemski,hotspot-dev at openjdk.java.net Developers, Coleen Phillimore, Dmitry Dmitriev >> CC: david.therkelsen at oracle.com, Thomas St?fe, sangheon.kim >> >> Hi Gerard, >> >> Progress review so I can send this out tonight - I still have to >> complete the review and double-check the responses to my previous comments. >> >> In globals.hpp, looking at all the "stack parameters" I expect to see a >> constraint function specified somewhere, but there isn't one. So now I'm >> a bit confused about how constraint functions are specified and used. If >> there has to be a relationship maintained between A, B and C, is the >> constraint function specified for all of them or none of them and simply >> executed as post argument processing step? Can you elaborate on when >> constraint functions may be used, and must be used, and how they are >> processed? >> > Constraints were not meant as a framework that imposes restrictions as to when and how to be used. It?s a helper framework that makes it easy for a developer to implement the kind of a constraint that a particular flag(s) demands. The decision as to what goes into it is left to the engineer responsible for a particular flag. The process of implementing constraints and ranges is still ongoing for many of the flags, and there are 3 subtasks tracking the issue. This webrev covers the introduction of the range/constraint framework and a subset of ranges/constraints implemented for those flags for which I was able to find existing ad hoc code or comments describing them. That's not really answering the question. Let me assume from this that the stack parameters have not been updated yet - fine. 
Now lets suppose that I want to update them using a constraint function. How do I do that? Do I specify the constraint function on each argument involved in the constraint? When will the constraint function be executed? Thanks, David > >> A few minor specific comments below: >> >> src/share/vm/c1/c1_globals.hpp >> >> Minor nit: Could you change: >> >> range(1, NOT_LP64(K) LP64_ONLY(32*K)) >> >> to >> >> range(1, NOT_LP64(1*K) LP64_ONLY(32*K)) >> >> or even: >> >> range(1, NOT_LP64(1) LP64_ONLY(32) *K) >> >> I find the (K) by itself a little odd-looking. >> > Done. > > >> --- >> >> src/share/vm/runtime/globals.cpp >> >> + if (withComments) { >> + #ifndef PRODUCT >> + st->print("%s", _doc); >> + #endif >> + } >> >> The ifdef should be around the whole if block (as it should for the >> existing code just before this change). > > Done. > > >> --- >> >> src/share/vm/runtime/globals.hpp >> >> range(1, NOT_LP64(K) LP64_ONLY(M)) >> >> 1*K and 1*M please. > > Done. > > >> 1328 /* 8K is well beyond the reasonable HW cache line size, even with >> the */\ >> >> delete the end 'the' >> >> 1329 /* aggressive prefetching, while still leaving the room for >> segregating */\ >> >> delete 'the?. > > Done. > > >> Nit: >> >> 1981 range(ReferenceProcessor::DiscoveryPolicyMin, >> \ >> 1982 ReferenceProcessor::DiscoveryPolicyMax) >> \ >> >> 1982 should be indented so the Reference's line up. > > Done. > > >> 2588 >> range((intx)Arguments::get_min_number_of_compiler_threads(), \ >> 2589 max_jint) >> \ >> >> Seems odd for a range to be expressed this way - seems more like a >> constraint. And get_min_number_of_compiler_threads doesn't really seem >> like an API for Arguments. >> > All ranges could be expressed as constraints. Ranges are just ?trivial? constraint, which is most of them, and are supposed to be easy to use by the developer. get_min_number_of_compiler_threads() in this context of expressing a range is ?constant?. 
I need that value; it used to be accessed from Arguments internally, so it did not need to be exposed before, but with the range/constraints factored out, I need to be able to get at the value. > > I will be persenting webrev 2 shortly after I do build and some testing. > > > cheers > From david.holmes at oracle.com Wed May 27 01:18:58 2015 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 May 2015 11:18:58 +1000 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <555E46A7.4020402@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> Message-ID: <55651B82.3040103@oracle.com> Hi Dmitry, Just browsing through ... test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java 60 if (name.startsWith("CMS")) { 61 option.addPrepend("-XX:+UseConcMarkSweepGC"); 62 } Not all CMS related options start with CMS. Is this just looking for the particular subset of options that do start with CMS? Is starting with CMS or starting with G1 significant with regards to the type of option? 64 switch (name) { 65 case "MinHeapFreeRatio": 66 option.addPrepend("-XX:MaxHeapFreeRatio=100"); 67 break; 68 case "MaxHeapFreeRatio": 69 option.addPrepend("-XX:MinHeapFreeRatio=0"); 70 break; It isn't at all clear why these options have to have special treatment, or how the prepended option relates to the option under test? 106 static private void addTypeDependency(JVMOption option, String type) { 107 if (type.contains("C1") || type.contains("C2")) { 108 /* Run in compiler mode for compiler flags */ 109 option.addPrepend("-Xcomp"); 110 } 111 } Don't you need to ensure -client or -server to deal with C1 and C2 options? --- test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/IntJVMOption.java 58 * Is this value is signed or unsigned 63 * Is this value is 64 bit unsigned delete 'is' I'm confused by the MIN_LONG/MAX_LONG etc variables. Are these supposed to be representing C-types? 
We don't use C "long" as a rule. Also unclear why you need to use BigInteger here instead of just Java long for everything ?? --- test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/DoubleJVMOption.java 31 private final double MAX_DOUBLE = 18446744073709551616.000; where does this magic number come from? (And why 3 zeroes after the decimal point ??) Thanks, David On 22/05/2015 6:57 AM, Dmitry Dmitriev wrote: > Hello all, > > Recently I correct several typos, so here a new webrev for tests: > http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ > > > Thanks, > Dmitry > > On 18.05.2015 18:48, Dmitry Dmitriev wrote: >> Hello all, >> >> Please review test set for verifying functionality implemented by JEP >> 245 "Validate JVM Command-Line Flag Arguments"(JDK-8059557). Review >> request for this JEP can be found there: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >> >> I create 3 tests for verifying options with ranges. The tests mostly >> rely on common/optionsvalidation/JVMOptionsUtils.java. Class in this >> file contains functions to get options with ranges as list(by parsing >> new option "-XX:+PrintFlagsRanges" output), run command line test for >> list of options and other. The actual test code contained in >> common/optionsvalidation/JVMOption.java file - testCommandLine(), >> testDynamic(), testJcmd() and testAttach() methods. >> common/optionsvalidation/IntJVMOption.java and >> common/optionsvalidation/DoubleJVMOption.java source files contain >> classes derived from JVMOption class for integer and double JVM >> options correspondingly. >> >> Here are description of the tests: >> 1) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> >> This test get all options with ranges by parsing output of new option >> "-XX:+PrintFlagsRanges" and verify these options by starting Java and >> passing options in command line with valid and invalid values. 
>> Currently it verifies about 106 options which have ranges. >> Invalid values are values which out-of-range. In test used values >> "min-1" and "max+1".In this case Java should always exit with code 1 >> and print error message about out-of-range value(with one exception, >> if option is unsigned and passing negative value, then out-of-range >> error message is not printed because error occurred earlier). >> Valid values are values in range, e.g. min&max and also several >> additional values. In this case Java should successfully exit(exit >> code 0) or exit with error code 1 for other reasons(low memory with >> certain option value etc.). In any case for values in range Java >> should not print messages about out of range value. >> In any case Java should not crash. >> This test excluded from JPRT because it takes long time to execute and >> also fails - some options with value in valid range cause Java to >> crash(bugs are submitted). >> >> 2) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> >> This test get all writeable options with ranges by parsing output of >> new option "-XX:+PrintFlagsRanges" and verify these options by >> dynamically changing it's values to the valid and invalid values. Used >> 3 methods for that: DynamicVMOption isValidValue and isInvalidValue >> methods, Jcmd and by attach method. Currently 3 writeable options with >> ranges are verified by this test. >> This test pass in JPRT. >> >> 3) hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >> >> This test verified output of Jcmd when out-of-range value is set to >> the writeable option or value violates option constraint. Also this >> test verify that jcmd not write error message to the target process. >> This test pass in JPRT. >> >> >> I am not write special tests for constraints for this JEP because >> there are exist test for that(e.g. 
>> test/runtime/CompressedOops/ObjectAlignment.java for >> ObjectAlignmentInBytes or >> hotspot/test/gc/arguments/TestHeapFreeRatio.java for >> MinHeapFreeRatio/MaxHeapFreeRatio). >> >> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >> >> >> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >> >> Thanks, >> Dmitry >> > From vladimir.x.ivanov at oracle.com Wed May 27 10:00:35 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 27 May 2015 13:00:35 +0300 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <555A2228.3080908@oracle.com> References: <542971DC.2090803@oracle.com> <24133E27-C024-41D8-80EF-B5707CFF0D0C@oracle.com> <552FFBAA.6050103@oracle.com> <55358D0A.1020407@oracle.com> <21C42551-29AA-44DC-AFE5-6156D681503D@oracle.com> <5536975F.60108@oracle.com> <555A2228.3080908@oracle.com> Message-ID: <556595C3.7080800@oracle.com> Unfortunately, it broke class redefinition (see JDK-8081004 [1]) and I don't see a reasonable way to fix that. The problem is that multiple class versions can be alive at the same moment w.r.t. class redefinition. Every class version should have its dedicated set of resolved_reference slots for invokedynamic/invokehandle instructions. They still have dedicated constant pools (and hence constant pool caches), but resolved_references array is shared now. It's not viable to simply append slots to the array during class redefinition since it causes memory leak (array size grows indefinitely). Compactification strategies are also complex, since slot indices should be either (1) stable (they are recorded in CPCEs) and slots are reference counted (in order to free and reuse them on next class redefinition); or (2) all class versions are patched with new indices on every class redefinition. IMO both approaches complicate things too much compared to keeping dedicated resolved_references per ConstantPool instance. Serguei, Staffan, what do you think? 
Do you see any viable solutions to the problem? Otherwise, I'm inclined to backout the change. Best regards, Vladimir Ivanov [1] https://bugs.openjdk.java.net/browse/JDK-8081004 On 5/18/15 8:32 PM, Vladimir Ivanov wrote: > Here's updated version: > http://cr.openjdk.java.net/~vlivanov/8059340/webrev.01 > > Moved ConstantPool::_resolved_references to mirror class instance. > > Fixed a couple of issues in CDS and JVMTI (class redefinition) caused by > this change. > > I had to hard code Class::resolved_references offset since it is used in > template interpreter which is generated earlier than j.l.Class is loaded > during VM bootstrap. > > Testing: hotspot/test, vm testbase (in progress) > > Best regards, > Vladimir Ivanov > > On 4/21/15 9:30 PM, Vladimir Ivanov wrote: >> Coleen, Chris, >> >> I'll proceed with moving ConstantPool::_resolved_references to j.l.Class >> instance then. >> >> Thanks for the feedback. >> >> Best regards, >> Vladimir Ivanov >> >> On 4/21/15 3:22 AM, Christian Thalinger wrote: >>> >>>> On Apr 20, 2015, at 4:34 PM, Coleen Phillimore >>>> > >>>> wrote: >>>> >>>> >>>> Vladimir, >>>> >>>> I think that changing the format of the heap dump isn't a good idea >>>> either. >>>> >>>> On 4/16/15, 2:12 PM, Vladimir Ivanov wrote: >>>>> (sorry for really late response; just got enough time to return to >>>>> the bug) >>>> >>>> I'd forgotten about it! >>>>> >>>>> Coleen, Staffan, >>>>> >>>>> Thanks a lot for the feedback! >>>>> >>>>> After thinking about the fix more, I don't think that using reserved >>>>> oop slot in CLASS DUMP for recording _resolved_references is the best >>>>> thing to do. IMO the change causes too much work for the users (heap >>>>> dump analysis tools). >>>>> >>>>> It needs specification update and then heap dump analyzers should be >>>>> updated as well. 
>>>>> >>>>> I have 2 alternative approaches (hacky and not-so-hacky :-)): >>>>> >>>>> - artificial class static field in the dump ("" >>>>> + optional id to guarantee unique name); >>>>> >>>>> - add j.l.Class::_resolved_references field; >>>>> Not sure how much overhead (mostly reads from bytecode) the move >>>>> from ConstantPool to j.l.Class adds, so I propose just to duplicate >>>>> it for now. >>>> >>>> I really like this second approach, so much so that I had a prototype >>>> for moving resolved_references directly to the j.l.Class object about >>>> a year ago. I couldn't find any benefit other than consolidating oops >>>> so the GC would have less work to do. If the resolved_references are >>>> moved to j.l.C instance, they can not be jobjects and the >>>> ClassLoaderData::_handles area wouldn't have to contain them (but >>>> there are other things that could go there so don't delete the >>>> _handles field yet). >>>> >>>> The change I had was relatively simple. The only annoying part was >>>> that getting to the resolved references has to be in macroAssembler >>>> and do: >>>> >>>> go through method->cpCache->constants->instanceKlass->java_mirror() >>>> rather than >>>> method->cpCache->constants->resolved_references->jmethod indirection >>>> >>>> I think it only affects the interpreter so the extra indirection >>>> wouldn't affect performance, so don't duplicate it! You don't want to >>>> increase space used by j.l.C without taking it out somewhere else! >>> >>> I like this approach. Can we do this? >>> >>>> >>>>> >>>>> What do you think about that? >>>> >>>> Is this bug worth doing this? I don't know but I'd really like it. >>>> >>>> Coleen >>>> >>>>> >>>>> Best regards, >>>>> Vladimir Ivanov >>>>> >>>>> On 10/6/14 11:35 AM, Staffan Larsen wrote: >>>>>> This looks like a good approach. However, there are a couple of more >>>>>> places that need to be updated. 
>>>>>> >>>>>> The hprof binary format is described in >>>>>> jdk/src/jdk.hprof.agent/share/native/libhprof/manual.html and needs >>>>>> to be updated. It's also more formally specified in hprof_b_spec.h >>>>>> in the same directory. >>>>>> >>>>>> The hprof JVMTI agent in jdk/src/jdk.hprof.agent code would also >>>>>> need to be updated to show this field. Since this is a JVMTI agent >>>>>> it needs to be possible to find the resolved_references array via the >>>>>> JVMTI heap walking API. Perhaps that already works? - I haven't >>>>>> looked. >>>>>> >>>>>> Finally, the Serviceability Agent implements yet another hprof >>>>>> binary dumper in >>>>>> hotspot/agent/src/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java >>>>>> >>>>>> >>>>>> which also needs to write this reference. >>>>>> >>>>>> Thanks, >>>>>> /Staffan >>>>>> >>>>>> On 29 sep 2014, at 16:51, Vladimir Ivanov >>>>>> > >>>>>> wrote: >>>>>> >>>>>>> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8059340 >>>>>>> >>>>>>> VM heap dump doesn't contain ConstantPool::_resolved_references for >>>>>>> classes which have resolved references. >>>>>>> >>>>>>> ConstantPool::_resolved_references points to an Object[] holding >>>>>>> resolved constant pool entries (patches for VM anonymous classes, >>>>>>> linked CallSite & MethodType for invokedynamic instructions). >>>>>>> >>>>>>> I've decided to use reserved slot in HPROF class header format. >>>>>>> It requires an update in jhat to correctly display new info. >>>>>>> >>>>>>> The other approach I tried was to dump the reference as a fake >>>>>>> static field [1], but storing VM internal >>>>>>> ConstantPool::_resolved_references among user defined fields looks >>>>>>> confusing. >>>>>>> >>>>>>> Testing: manual (verified that corresponding arrays are properly >>>>>>> linked in Nashorn heap dump). >>>>>>> >>>>>>> Thanks! 
>>>>>>> >>>>>>> Best regards, >>>>>>> Vladimir Ivanov >>>>>>> >>>>>>> [1] http://cr.openjdk.java.net/~vlivanov/8059340/static >>> From yekaterina.kantserova at oracle.com Wed May 27 13:02:00 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Wed, 27 May 2015 15:02:00 +0200 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows Message-ID: <5565C048.4050801@oracle.com> Hi, Could I please have a review of this fix. bug: https://bugs.openjdk.java.net/browse/JDK-8081037 webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 webrev hotspot: http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 From the bug: "The problem is most likely that SA will pause the target process while it is running. In this case, the target process is the same as the process that launched SA. That process is also handling the output from SA over a pipe, but when that pipe fills up the process cannot empty it and the SA process is blocked because it cannot write any more output. Deadlock." The solution is to start a separate target process. Dmitry Samersoff has already created a test application for such cases, so I've decided to move it to the top-level library instead of duplicating it. The test application will reside under test/lib/share/classes/jdk/test/lib/apps and the test under test/lib-test/jdk/test/lib/apps. Thanks, Katja From jaroslav.bachorik at oracle.com Wed May 27 13:05:09 2015 From: jaroslav.bachorik at oracle.com (Jaroslav Bachorik) Date: Wed, 27 May 2015 15:05:09 +0200 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows In-Reply-To: <5565C048.4050801@oracle.com> References: <5565C048.4050801@oracle.com> Message-ID: <5565C105.7060900@oracle.com> Looks fine! -JB- On 27.5.2015 15:02, Yekaterina Kantserova wrote: > Hi, > > Could I please have a review of this fix. 
> > bug: https://bugs.openjdk.java.net/browse/JDK-8081037 > webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 > webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 > webrev hotspot: > http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 > > From the bug: > "The problem is most likely that SA will pause the target process while > it is running. In this case, the target process is the same as the > process that launched SA. That process is also handling the output from > SA over a pipe, but when that pipe fills up the process cannot empty it > and the SA process is blocked because it cannot write any more output. > Deadlock." > > The solutions is to start a separate target process. Dmitry Samersoff > has already created a test application for such cases so I've decided to > move it on the top level library instead of duplicating it. The test > application will reside under test/lib/share/classes/jdk/test/lib/apps > and the test under test/lib-test/jdk/test/lib/apps. > > Thanks, > Katja From gerard.ziemski at oracle.com Wed May 27 13:18:32 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Wed, 27 May 2015 08:18:32 -0500 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5565132E.2030406@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55642A66.4050308@oracle.com> <5564AA2A.6090801@oracle.com> <5565132E.2030406@oracle.com> Message-ID: <5565C428.4080907@oracle.com> On 5/26/2015 7:43 PM, David Holmes wrote: >>> In globals.hpp, looking at all the "stack parameters" I expect to see a >>> constraint function specified somewhere, but there isn't one. So now >>> I'm >>> a bit confused about how constraint functions are specified and >>> used. 
If >>> there has to be a relationship maintained between A, B and C, is the >>> constraint function specified for all of them or none of them and >>> simply >>> executed as post argument processing step? Can you elaborate on when >>> constraint functions may be used, and must be used, and how they are >>> processed? >>> >> Constraints were not meant as a framework that imposes restrictions >> as to when and how to be used. It's a helper framework that makes it >> easy for a developer to implement the kind of a constraint that a >> particular flag(s) demands. The decision as to what goes into it is >> left to the engineer responsible for a particular flag. The process >> of implementing constraints and ranges is still ongoing for many of >> the flags, and there are 3 subtasks tracking the issue. This webrev >> covers the introduction of the range/constraint framework and a >> subset of ranges/constraints implemented for those flags for which I >> was able to find existing ad hoc code or comments describing them. > > That's not really answering the question. Let me assume from this that > the stack parameters have not been updated yet - fine. Now let's > suppose that I want to update them using a constraint function. How do > I do that? Do I specify the constraint function on each argument > involved in the constraint? When will the constraint function be > executed? Constraints are not methods and should not be used to set values - they are read-only value verification functions. Constraints are called whenever the flag in question changes its value (via CommandLineFlags::*AtPut for those flags set by external tools like jcmd), and also a final check is run (CommandLineFlags::check_all_ranges_and_constraints()) right after Arguments::apply_ergo(), at which point it is assumed that all flags have their final values set. 
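(Editor's note: the lifecycle described above, a read-only check on every *AtPut write plus one final sweep after ergonomics, can be modeled in a few lines of Java. This is an illustrative sketch only, not HotSpot code; the Flags class and its define/atPut/checkAll methods are hypothetical names.)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongPredicate;

// Toy model of the constraint lifecycle: a constraint is a read-only
// predicate, consulted on every write and once more in a final pass.
class Flags {
    private final Map<String, Long> values = new HashMap<>();
    private final Map<String, LongPredicate> constraints = new HashMap<>();

    void define(String name, long initial, LongPredicate constraint) {
        values.put(name, initial);
        constraints.put(name, constraint);
    }

    // Mirrors the *AtPut idea: reject the write if the constraint fails,
    // leaving the stored value untouched.
    boolean atPut(String name, long v) {
        if (!constraints.get(name).test(v)) {
            return false;
        }
        values.put(name, v);
        return true;
    }

    // Mirrors the final check_all_ranges_and_constraints() sweep:
    // a read-only pass over every flag's current value.
    boolean checkAll() {
        return values.entrySet().stream()
                .allMatch(e -> constraints.get(e.getKey()).test(e.getValue()));
    }

    long get(String name) {
        return values.get(name);
    }
}

public class ConstraintDemo {
    public static void main(String[] args) {
        Flags flags = new Flags();
        flags.define("MinHeapFreeRatio", 40, v -> v >= 0 && v <= 100);
        System.out.println(flags.atPut("MinHeapFreeRatio", 150)); // false: rejected
        System.out.println(flags.get("MinHeapFreeRatio"));        // 40: unchanged
        System.out.println(flags.checkAll());                     // true
    }
}
```

The key property is that the write-time check never mutates state on failure, so the final sweep can assume every stored value already passed its constraint.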
There is an Arguments::post_final_range_and_constraint_check() method provided for any custom code that can assume that all range and constraint checks have been performed at that point and set any dependent values. Setting values of the flags themselves, however, should be performed before CommandLineFlags::check_all_ranges_and_constraints() is called. cheers From staffan.larsen at oracle.com Wed May 27 13:19:31 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 27 May 2015 15:19:31 +0200 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows In-Reply-To: <5565C048.4050801@oracle.com> References: <5565C048.4050801@oracle.com> Message-ID: <821F5B9A-D18B-4458-81A1-D94B839F9CDC@oracle.com> Looks good! Thanks, /Staffan > On 27 maj 2015, at 15:02, Yekaterina Kantserova wrote: > > Hi, > > Could I please have a review of this fix. > > bug: https://bugs.openjdk.java.net/browse/JDK-8081037 > webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 > webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 > webrev hotspot: http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 > > From the bug: > "The problem is most likely that SA will pause the target process while it is running. In this case, the target process is the same as the process that launched SA. That process is also handling the output from SA over a pipe, but when that pipe fills up the process cannot empty it and the SA process is blocked because it cannot write any more output. Deadlock." > > The solutions is to start a separate target process. Dmitry Samersoff has already created a test application for such cases so I've decided to move it on the top level library instead of duplicating it. The test application will reside under test/lib/share/classes/jdk/test/lib/apps and the test under test/lib-test/jdk/test/lib/apps. 
> > Thanks, > Katja From edward.nevill at linaro.org Wed May 27 13:21:20 2015 From: edward.nevill at linaro.org (Edward Nevill) Date: Wed, 27 May 2015 14:21:20 +0100 Subject: RFR: 8081289: aarch64: add support for RewriteFrequentPairs in interpreter Message-ID: <1432732880.11287.10.camel@mylittlepony.linaroharston> Hi, The following webrev adds support for RewriteFrequentPairs to the template interpreter for aarch64. http://cr.openjdk.java.net/~enevill/8081289/webrev.00 This was contributed by Alexander Alexeev (alexander.alexeev at caviumnetworks.com) This gives a small improvement to the interpreter on aarch64, and brings it in line with all the other ports (x86, sparc, ppc, zero) which all support RewriteFrequentPairs. I have done some performance measurement using -Xint with some micro benchmarks and I see a small improvement on each. java dhrystone: +9% embedded caffeinemark: +4% grinderbench: +1% dacapo (avrora): +1% Tested with hotspot jtreg:- Original: Test results: passed: 787; failed: 24; error: 44 With patch: Test results: passed: 785; failed: 24; error: 46 The difference in the # of errors is due to timeouts because we are running -Xint. Please review and if OK I will push. All the best, Ed. From dmitry.samersoff at oracle.com Wed May 27 13:26:34 2015 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Wed, 27 May 2015 16:26:34 +0300 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows In-Reply-To: <5565C048.4050801@oracle.com> References: <5565C048.4050801@oracle.com> Message-ID: <5565C60A.9090404@oracle.com> Katja, Looks good for me. Thank you for doing it! -Dmitry On 2015-05-27 16:02, Yekaterina Kantserova wrote: > Hi, > > Could I please have a review of this fix. 
> > bug: https://bugs.openjdk.java.net/browse/JDK-8081037 > webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 > webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 > webrev hotspot: > http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 > > From the bug: > "The problem is most likely that SA will pause the target process while > it is running. In this case, the target process is the same as the > process that launched SA. That process is also handling the output from > SA over a pipe, but when that pipe fills up the process cannot empty it > and the SA process is blocked because it cannot write any more output. > Deadlock." > > The solutions is to start a separate target process. Dmitry Samersoff > has already created a test application for such cases so I've decided to > move it on the top level library instead of duplicating it. The test > application will reside under test/lib/share/classes/jdk/test/lib/apps > and the test under test/lib-test/jdk/test/lib/apps. > > Thanks, > Katja -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From yekaterina.kantserova at oracle.com Wed May 27 13:38:11 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Wed, 27 May 2015 15:38:11 +0200 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows In-Reply-To: <5565C105.7060900@oracle.com> References: <5565C048.4050801@oracle.com> <5565C105.7060900@oracle.com> Message-ID: <5565C8C3.4040901@oracle.com> Jaroslav, Staffan, Dmitry, thank you for the reviews! // Katja On 05/27/2015 03:05 PM, Jaroslav Bachorik wrote: > Looks fine! > > -JB- > > On 27.5.2015 15:02, Yekaterina Kantserova wrote: >> Hi, >> >> Could I please have a review of this fix. 
>> >> bug: https://bugs.openjdk.java.net/browse/JDK-8081037 >> webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 >> webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 >> webrev hotspot: >> http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 >> >> From the bug: >> "The problem is most likely that SA will pause the target process while >> it is running. In this case, the target process is the same as the >> process that launched SA. That process is also handling the output from >> SA over a pipe, but when that pipe fills up the process cannot empty it >> and the SA process is blocked because it cannot write any more output. >> Deadlock." >> >> The solutions is to start a separate target process. Dmitry Samersoff >> has already created a test application for such cases so I've decided to >> move it on the top level library instead of duplicating it. The test >> application will reside under test/lib/share/classes/jdk/test/lib/apps >> and the test under test/lib-test/jdk/test/lib/apps. >> >> Thanks, >> Katja > From dmitry.dmitriev at oracle.com Wed May 27 16:24:35 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Wed, 27 May 2015 19:24:35 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <024b01d097fe$a0db0520$e2910f60$@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> <024b01d097fe$a0db0520$e2910f60$@oracle.com> Message-ID: <5565EFC3.2040109@oracle.com> Hi Christian, Thank you for reviewing the code! I will limit number of options to test. But I think instead of one option for each type I will add several options for each type with different combination of min/max range(e.g. max range = maximum number for type, min = small or min = minimum number for type, max = not huge and so on). I expect to have about 15 options in this case. 
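(Editor's note: the boundary-value scheme discussed in this thread, "min-1" and "max+1" as out-of-range values plus the range endpoints and a midpoint as valid values, can be sketched as follows. This is illustrative Java only, not the actual test library; the BoundaryValues class and its methods are hypothetical names.)

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper that picks boundary test values for an option range,
// in the spirit of the min-1 / max+1 scheme described above.
public class BoundaryValues {
    static List<Long> invalidValues(long min, long max) {
        List<Long> out = new ArrayList<>();
        // Guard against overflow at the extremes of the type,
        // where "min-1" or "max+1" simply does not exist.
        if (min != Long.MIN_VALUE) out.add(min - 1); // just below the range
        if (max != Long.MAX_VALUE) out.add(max + 1); // just above the range
        return out;
    }

    static List<Long> validValues(long min, long max) {
        // Endpoints plus a midpoint; all must be accepted by the VM.
        return List.of(min, max, min + (max - min) / 2);
    }

    public static void main(String[] args) {
        System.out.println(invalidValues(0, 100)); // [-1, 101]
        System.out.println(validValues(0, 100));   // [0, 100, 50]
    }
}
```

The overflow guard matters for exactly the cases raised elsewhere in this thread: when an option's range already spans the whole representable type, there is no value "just outside" it to test.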
Thanks, Dmitry On 27.05.2015 0:55, Christian Tornqvist wrote: > Hi Dmitry, > > First of all, the code looks really good. One thing that I noticed is that it seems like the invalid values (out of range) is done for all the options with ranges, this seems redundant to me. It should be enough to test one for each data type instead. > > I'll continue to review the code tomorrow :) > > Thanks, > Christian > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Dmitry Dmitriev > Sent: Thursday, May 21, 2015 4:57 PM > To: hotspot-dev at openjdk.java.net; Gerard Ziemski > Subject: Re: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" > > Hello all, > > Recently I correct several typos, so here a new webrev for tests: > http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ > > > Thanks, > Dmitry > > On 18.05.2015 18:48, Dmitry Dmitriev wrote: >> Hello all, >> >> Please review test set for verifying functionality implemented by JEP >> 245 "Validate JVM Command-Line Flag Arguments"(JDK-8059557). Review >> request for this JEP can be found there: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.htm >> l >> >> I create 3 tests for verifying options with ranges. The tests mostly >> rely on common/optionsvalidation/JVMOptionsUtils.java. Class in this >> file contains functions to get options with ranges as list(by parsing >> new option "-XX:+PrintFlagsRanges" output), run command line test for >> list of options and other. The actual test code contained in >> common/optionsvalidation/JVMOption.java file - testCommandLine(), >> testDynamic(), testJcmd() and testAttach() methods. >> common/optionsvalidation/IntJVMOption.java and >> common/optionsvalidation/DoubleJVMOption.java source files contain >> classes derived from JVMOption class for integer and double JVM >> options correspondingly. 
>> >> Here are description of the tests: >> 1) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRang >> es.java >> >> This test get all options with ranges by parsing output of new option >> "-XX:+PrintFlagsRanges" and verify these options by starting Java and >> passing options in command line with valid and invalid values. >> Currently it verifies about 106 options which have ranges. >> Invalid values are values which out-of-range. In test used values >> "min-1" and "max+1".In this case Java should always exit with code 1 >> and print error message about out-of-range value(with one exception, >> if option is unsigned and passing negative value, then out-of-range >> error message is not printed because error occurred earlier). >> Valid values are values in range, e.g. min&max and also several >> additional values. In this case Java should successfully exit(exit >> code 0) or exit with error code 1 for other reasons(low memory with >> certain option value etc.). In any case for values in range Java >> should not print messages about out of range value. >> In any case Java should not crash. >> This test excluded from JPRT because it takes long time to execute and >> also fails - some options with value in valid range cause Java to >> crash(bugs are submitted). >> >> 2) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRang >> es.java >> >> This test get all writeable options with ranges by parsing output of >> new option "-XX:+PrintFlagsRanges" and verify these options by >> dynamically changing it's values to the valid and invalid values. Used >> 3 methods for that: DynamicVMOption isValidValue and isInvalidValue >> methods, Jcmd and by attach method. Currently 3 writeable options with >> ranges are verified by this test. >> This test pass in JPRT. 
>> >> 3) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >> >> This test verified output of Jcmd when out-of-range value is set to >> the writeable option or value violates option constraint. Also this >> test verify that jcmd not write error message to the target process. >> This test pass in JPRT. >> >> >> I am not write special tests for constraints for this JEP because >> there are exist test for that(e.g. >> test/runtime/CompressedOops/ObjectAlignment.java for >> ObjectAlignmentInBytes or >> hotspot/test/gc/arguments/TestHeapFreeRatio.java for >> MinHeapFreeRatio/MaxHeapFreeRatio). >> >> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >> >> >> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >> >> Thanks, >> Dmitry >> > From vladimir.x.ivanov at oracle.com Wed May 27 16:41:45 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 27 May 2015 19:41:45 +0300 Subject: [9] RFR (M): Backout JDK-8059340: ConstantPool::_resolved_references is missing in heap dump Message-ID: <5565F3C9.2000200@oracle.com> http://cr.openjdk.java.net/~vlivanov/8081320/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8081320 Backout JDK-8059340: "ConstantPool::_resolved_references is missing in heap dump." The fix breaks JVMTI and there's no feasible fix for the problem. Testing: jprt Thanks! Best regards, Vladimir Ivanov From dmitry.dmitriev at oracle.com Wed May 27 16:57:12 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Wed, 27 May 2015 19:57:12 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <55651B82.3040103@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> <55651B82.3040103@oracle.com> Message-ID: <5565F768.6090600@oracle.com> Hello David, Thank you for reviewing the code! Please see my comments inline. On 27.05.2015 4:18, David Holmes wrote: > Hi Dmitry, > > Just browsing through ... 
> > test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java > > 60 if (name.startsWith("CMS")) { > 61 option.addPrepend("-XX:+UseConcMarkSweepGC"); > 62 } > > Not all CMS related options start with CMS. Is this just looking for > the particular subset of options that do start with CMS? Is starting > with CMS or starting with G1 significant with regards to the type of > option? The intention for this function and addTypeDependency is the following: it is good if the option is actually used during start-up, because this can help to catch possible problems (VM crashes, hangs) when the option has min/max values or other valid values (within range). Thus I add several dependencies. This approach is quite rough, so I add "-XX:+UseConcMarkSweepGC" for options which start with "CMS" and "-XX:+UseG1GC" for options which start with "G1". > > 64 switch (name) { > 65 case "MinHeapFreeRatio": > 66 option.addPrepend("-XX:MaxHeapFreeRatio=100"); > 67 break; > 68 case "MaxHeapFreeRatio": > 69 option.addPrepend("-XX:MinHeapFreeRatio=0"); > 70 break; > > It isn't at all clear why these options have to have special > treatment, or how the prepended option relates to the option under test? The intention is the same as above: these options have constraints, and to avoid violating a constraint while testing valid values I add these dependencies. This is only to help catch possible problems like a crash. The VM can exit with an error for a valid (in-range) value, and this is not considered a failure because it may be unable to continue for other reasons (e.g. not enough memory). So these dependencies are needed to bypass the constraint check and successfully start the VM. > > 106 static private void addTypeDependency(JVMOption option, String > type) { > 107 if (type.contains("C1") || type.contains("C2")) { > 108 /* Run in compiler mode for compiler flags */ > 109 option.addPrepend("-Xcomp"); > 110 } > 111 } > > Don't you need to ensure -client or -server to deal with C1 and C2 > options? 
The intention is the same as above: it is good if the option is used during start-up, and thus I run the VM in compiler mode for compiler options. I did not add an explicit "client" or "server" because I verify options with the same VM arguments as the test. But actually "client" or "server" can be added explicitly. > > --- > > test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/IntJVMOption.java > > 58 * Is this value is signed or unsigned > 63 * Is this value is 64 bit unsigned > > delete 'is' Fixed. > > I'm confused by the MIN_LONG/MAX_LONG etc variables. Are these supposed to be representing C-types? We don't use C "long" as a rule. The names of these variables are probably confusing. MIN_LONG/MAX_LONG are needed to hold the minimum/maximum value for the intx HotSpot type. For 32-bit Java it equals the min/max for a 32-bit long, and for 64-bit Java it equals the min/max for a 64-bit long. This is needed to determine that an option has the minimum/maximum allowed value for its type and not pass a value equal to "min-1" or "max+1". > > Also unclear why you need to use BigInteger here instead of just Java > long for everything ?? My first approach was to use Java long, but unfortunately Java long is not enough to hold values of type uintx and similar types on 64-bit systems. On 64-bit systems uintx can have a value greater than 9223372036854775807 and therefore I use BigInteger. > > --- > > test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/DoubleJVMOption.java > > 31 private final double MAX_DOUBLE = 18446744073709551616.000; > > where does this magic number come from? (And why 3 zeroes after the > decimal point ??) > It came from one double option which has a range, but this construction no longer seems correct. Thank you for pointing this out. I will rewrite this logic. 
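(Editor's note: the uintx point is easy to demonstrate. The largest uintx value on a 64-bit VM is 2^64 - 1, which does not fit in a Java long (max 2^63 - 1), while BigInteger holds it exactly. Incidentally, 18446744073709551616 is exactly 2^64, which is presumably where the MAX_DOUBLE constant above came from. A small self-contained Java sketch:)

```java
import java.math.BigInteger;

// Why Java long cannot represent uintx on 64-bit systems: max uintx is
// 2^64 - 1, which overflows long. BigInteger represents it exactly,
// and "max + 1" for an out-of-range test value stays exact as well.
public class UintxRange {
    public static void main(String[] args) {
        BigInteger maxUintx = BigInteger.ONE.shiftLeft(64).subtract(BigInteger.ONE);
        BigInteger maxLong  = BigInteger.valueOf(Long.MAX_VALUE);

        System.out.println(maxUintx);                        // 18446744073709551615
        System.out.println(maxUintx.compareTo(maxLong) > 0); // true: does not fit in long
        System.out.println(maxUintx.add(BigInteger.ONE));    // 18446744073709551616 (= 2^64)
    }
}
```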
Regards, Dmitry > Thanks, > David > > On 22/05/2015 6:57 AM, Dmitry Dmitriev wrote: >> Hello all, >> >> Recently I correct several typos, so here a new webrev for tests: >> http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ >> >> >> Thanks, >> Dmitry >> >> On 18.05.2015 18:48, Dmitry Dmitriev wrote: >>> Hello all, >>> >>> Please review test set for verifying functionality implemented by JEP >>> 245 "Validate JVM Command-Line Flag Arguments"(JDK-8059557). Review >>> request for this JEP can be found there: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >>> >>> I create 3 tests for verifying options with ranges. The tests mostly >>> rely on common/optionsvalidation/JVMOptionsUtils.java. Class in this >>> file contains functions to get options with ranges as list(by parsing >>> new option "-XX:+PrintFlagsRanges" output), run command line test for >>> list of options and other. The actual test code contained in >>> common/optionsvalidation/JVMOption.java file - testCommandLine(), >>> testDynamic(), testJcmd() and testAttach() methods. >>> common/optionsvalidation/IntJVMOption.java and >>> common/optionsvalidation/DoubleJVMOption.java source files contain >>> classes derived from JVMOption class for integer and double JVM >>> options correspondingly. >>> >>> Here are description of the tests: >>> 1) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>> >>> >>> >>> This test get all options with ranges by parsing output of new option >>> "-XX:+PrintFlagsRanges" and verify these options by starting Java and >>> passing options in command line with valid and invalid values. >>> Currently it verifies about 106 options which have ranges. >>> Invalid values are values which out-of-range. 
In test used values >>> "min-1" and "max+1".In this case Java should always exit with code 1 >>> and print error message about out-of-range value(with one exception, >>> if option is unsigned and passing negative value, then out-of-range >>> error message is not printed because error occurred earlier). >>> Valid values are values in range, e.g. min&max and also several >>> additional values. In this case Java should successfully exit(exit >>> code 0) or exit with error code 1 for other reasons(low memory with >>> certain option value etc.). In any case for values in range Java >>> should not print messages about out of range value. >>> In any case Java should not crash. >>> This test excluded from JPRT because it takes long time to execute and >>> also fails - some options with value in valid range cause Java to >>> crash(bugs are submitted). >>> >>> 2) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>> >>> >>> >>> This test get all writeable options with ranges by parsing output of >>> new option "-XX:+PrintFlagsRanges" and verify these options by >>> dynamically changing it's values to the valid and invalid values. Used >>> 3 methods for that: DynamicVMOption isValidValue and isInvalidValue >>> methods, Jcmd and by attach method. Currently 3 writeable options with >>> ranges are verified by this test. >>> This test pass in JPRT. >>> >>> 3) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >>> >>> This test verified output of Jcmd when out-of-range value is set to >>> the writeable option or value violates option constraint. Also this >>> test verify that jcmd not write error message to the target process. >>> This test pass in JPRT. >>> >>> >>> I am not write special tests for constraints for this JEP because >>> there are exist test for that(e.g. 
>>> test/runtime/CompressedOops/ObjectAlignment.java for >>> ObjectAlignmentInBytes or >>> hotspot/test/gc/arguments/TestHeapFreeRatio.java for >>> MinHeapFreeRatio/MaxHeapFreeRatio). >>> >>> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >>> >>> >>> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >>> >>> Thanks, >>> Dmitry >>> >> From coleen.phillimore at oracle.com Wed May 27 17:02:26 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 27 May 2015 13:02:26 -0400 Subject: RFR (2nd round) 8071627: Code refactoring to override == operator of Symbol* In-Reply-To: <804BDA1C-1318-4D27-B9CC-42CB21004AE3@oracle.com> References: <552EC286.5000005@oracle.com> <55380E8B.2060904@oracle.com> <804BDA1C-1318-4D27-B9CC-42CB21004AE3@oracle.com> Message-ID: <5565F8A2.9020704@oracle.com> Hi John, Sorry I didn't answer this thread. We were waiting for more data to answer the question whether the benefit and need for this change was worth it, and decided it was not at this time. On 4/30/15 3:42 PM, John Rose wrote: > I have a few followup points on this question. Actually, it reminds me a lot of some struggles in Java with object references and their sometimes-broken == operation. > > Calvin's V2 patch shows that there are about 200 of the risky Symbol*::== operations. It introduces 347 new lines to fix them. With high confidence it finds all the risky operations (because he started with handles). As you point out, Coleen, our confidence about introducing new bugs is not so good. But I like the patch because it is relatively localized. Though it is not as localized as I had hoped, since 200 is not a small count. > > Calvin's V1 patch, by comparison, has 1530 new lines, four times more than V2. Nearly all of those are trivial changes from Symbol* to SymbolRef (or maybe SymbolHandle in V3). If we go with that then we invest in more FooHandles instead of Foo* types. 
The change to a class incrementally increases the cost of working with the code. TempNewSymbol did this, and was painful though worth it (and hard to get right) because it helped us get our reference counting right. Will SymbolHandle be worth it enough? > > This is a real question, because C++ programmers know that some pointer operations are just tricky, and so they already have a certain amount of defensiveness about expressions like p==q (oh, was that strcmp I wanted?) or p++ (oh, wait, I wanted p->increment()). Given that defensiveness, the incremental benefit of adding more defenses is... debatable. (Hence this debate.) > > On Apr 22, 2015, at 2:11 PM, Coleen Phillimore wrote: >> Hi Calvin, and fellow JVM engineers, >> >> I prefer a modification of your first version of this change much better. >> >> I really don't like this... It feels very unsafe to me. I don't know how to run any tools to make sure I don't break this! Honestly this seems wrong and there are too many places that compare Symbols even though the changeset is smaller. >> >> We triage bugs in the runtime group weekly with SQE. This change will cause bugs that have various symptoms and will be hard to trace to this root cause. The bugs will mostly land in the Runtime component of the JVM because in fact, the Symbol class is mostly used by the runtime component of the JVM. In addition, running internal tools to find these errors *monthly* is too late and running them individually adds overhead and friction to making changes in the JVM. More overhead is the last thing we need! > I'm talking with the Parfait folks today about whether their tools can be configured to diagnose suspicious uses of Symbol*::==, and also things like char*::== and Foo*::+(int) and Foo*::-(Foo*). I think they can. Would that help, say, if we had nightly scans to find problems that escape programmer diligence? 
The programmer diligence is always the first line of defense though: There is no excuse for not knowing how C pointers work. > >> Having to use ->equals() is clunky too. > It's clunky but in thousands fewer places, if you agree with me that SymbolHandle is clunky also. > >> For better or worse, the JVM is written in C++ which has operator overloading for these purposes. Modern C++ programming already avoids raw pointers in favor of smart pointers! > This, and the previous point about access to tools, is the strongest point in your argument, IMO. > >> The JVM code has historically avoided raw metadata pointers, first because of PermGen but now because the values pointed to have semantics that we want to encapsulate. I admit that it was nice using raw pointers and brought all of us back to a simpler day but they're not safe in general for this sort of system software. >> >> In the JVM code, we have Handles: >> >> 1. oops => Handle because they may move with garbage collection. >> 2. Method* => methodHandle because they may get deallocated with class redefinition (same for Klass eventually) >> 3. Symbol* => SymbolHandle because pointer equality isn't sufficient to ensure equality >> >> The other objection for Calvin's first change was that it's a lot of code changed. But there's a lot of other large code changes going forward at this time. This is the simplest of large changes, ie. simple renaming. This feature is needed for others going forward to support our important customers. This amount of code change is justified. >> >> Embedded in this mail, if you've read this far, is a suggestion to rename SymbolRef (a name I hate) to SymbolHandle. Because that's what it now is. > I am leaning towards agreeing, despite the thousands of lines of noisy changes required, because we won't need tooling support if we bite the bullet and dump the Symbol pointers. But I keep coming back to this: Where does it stop? How many of our object classes need auxiliary handle classes? 
Are all of our pointer types just bugs waiting to happen? They might be. We tried to get rid of all Handles, other than ones around oops, when we eliminated PermGen but had to keep methodHandles and constantPoolHandles for class redefinition. There are still instanceKlassHandles and KlassHandles but they are dummies now; they will be needed if we change the redefinition code. I hope we don't need more handles than that for our metadata at least. One note about methodHandles. Since they have nontrivial copy constructors, they should be passed as const references to prevent copy constructor calls. > If so, why are we fixating on Symbol*::==? If not, why is Symbol*::== such a uniquely bad problem? Symbol was because we wanted to allow the same Symbol to be in two different storage locations, but we don't need this change in jdk9 at least. Thanks, Coleen > > Thanks, Coleen. > > -- John > >> Thanks, >> Coleen >> >> >> On 4/15/15, 3:56 PM, Calvin Cheung wrote: >>> Please review this second version of the fix. >>> >>> This version has 2 new functions (equals() and not_equals()) in the Symbol class. >>> It replaces the Symbol* == and != comparisons with those 2 function calls. From gerard.ziemski at oracle.com Wed May 27 18:19:58 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Wed, 27 May 2015 13:19:58 -0500 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <555E46A7.4020402@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> Message-ID: <55660ACE.3060000@oracle.com> Nice code Dmitry! I have a bit of feedback, but I might be off on some of them, and some reflect my personal preference: 1.http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java.html 1 a. Line 27, from: "value which not allowd by constraint. Also check that" to: "value which is not allowed by constraint. Also check that" 1 b.
Line 28, from: "jcmd not print error message to the target process output." to: "jcmd does not print an error message to the target process output." 1 c. Line 49, from: "System.out.println("Verify jcmd error message and that jcmd not write errors to the target process output");" to: "System.out.println("Verify jcmd error message and that jcmd does not write errors to the target process output");" 1 d. Line 67: "67 Asserts.assertGT(minHeapFreeRatio, 0, "MinHeapFreeRatio must be greater than 0");" Shouldn't that be "Asserts.assertGTE (>=)", not "Asserts.assertGT"? 1 e. Line 68: "Asserts.assertLT(maxHeapFreeRatio, 100, "MaxHeapFreeRatio must be less than 100");" Shouldn't that be "Asserts.assertLTE (<=)", not "Asserts.assertLT"? 2.http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/DoubleJVMOption.java.html 2 a. Line 31: "private final double MAX_DOUBLE = 18446744073709551616.000;" Just curious: why can't we use Double.MAX_VALUE? Where does your value come from? 2 b. Line 126, from : "if (min == Double.MIN_VALUE && max == MAX_DOUBLE) {" to : "if ((Double.compare(min, Double.MIN_VALUE) == 0) && (Double.compare(max, MAX_DOUBLE) == 0)) {" 2 c. Line 127: "validValues.add(String.format("%f", -1.5));" Line 129: validValues.add(String.format("%f", 0.85)); Just curious: why did you pick -1.5 and 0.85? I personally would have selected min/2.0 and max/2.0 and maybe -1.0 and 1.0 2 d.
We need to change from: 145 if (min != Double.MIN_VALUE) { 146 invalidValues.add(String.format("%f", min - 0.01)); 147 } 148 149 if (max != MAX_DOUBLE) { 150 invalidValues.add(String.format("%f", max + 0.01)); 151 } to: 145 if ((Double.compare(min, Double.MIN_VALUE) != 0) && (Double.isNaN(min - 0.01) == false)) { 146 invalidValues.add(String.format("%f", min - 0.01)); 147 } 148 149 if ((Double.compare(max, MAX_DOUBLE) != 0) && (Double.isNaN(max + 0.01) == false)) { 150 invalidValues.add(String.format("%f", max + 0.01)); 151 } 3.http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOption.java.html 3 a. Line 331, from : * @param valid Indicates, should JVM failed or not to : * @param valid Indicates whether the JVM should fail or not 3 b. Line 361, from : } else if (returnCode == 1 && out.getOutput().isEmpty()) { to : } else if ((returnCode == 1) && (out.getOutput().isEmpty() == true)) { 4.http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java.html 4 a. 125 static private Map getJVMOptions(Reader inputReader, 126 boolean withRanges, Predicate acceptOrigin) throws IOException { I see that the implementation uses hardcoded positions, ex. "line.substring(0, 9)". Would it be hard to make the code character position independent by considering an implementation based on a "grep" or some other pattern recognition? If we ever tweak the output in ranges printout, even ever so slightly, this code will need to be updated.
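The floating-point concerns behind items 2b and 2d (no exact-equality `==` on double boundaries, and guarding against NaN when stepping past a bound) can be sketched as follows; this is a hypothetical C++ rendering of the idea, not the actual Java test code:

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Collect out-of-range probe values for an inclusive [min, max]
// double range. A probe is skipped when the bound already sits at a
// representable extreme, or when stepping past the bound yields NaN.
std::vector<double> invalid_probes(double min, double max) {
  std::vector<double> probes;
  double below = min - 0.01;
  if (min != std::numeric_limits<double>::lowest() && !std::isnan(below)) {
    probes.push_back(below);  // just below the range
  }
  double above = max + 0.01;
  if (max != std::numeric_limits<double>::max() && !std::isnan(above)) {
    probes.push_back(above);  // just above the range
  }
  return probes;
}
```

For a flag ranged over [0.0, 100.0] this yields two invalid probes; for a flag whose range already spans the whole double domain it yields none, which is the case the NaN/extreme guards exist for.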
cheers > On May 26, 2015, at 1:24 PM, Gerard Ziemski wrote: > > > > > -------- Forwarded Message -------- > Subject: Re: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" > Date: Thu, 21 May 2015 23:57:11 +0300 > From: Dmitry Dmitriev > Organization: Oracle Corporation > To: hotspot-dev at openjdk.java.net, Gerard Ziemski > > Hello all, > > Recently I corrected several typos, so here is a new webrev for the tests: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ > > Thanks, > Dmitry > > On 18.05.2015 18:48, Dmitry Dmitriev wrote: >> Hello all, >> >> Please review this test set for verifying the functionality implemented by JEP 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). The review request for this JEP can be found here: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >> >> I created 3 tests for verifying options with ranges. The tests mostly rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this file contains functions to get the options with ranges as a list (by parsing the output of the new option "-XX:+PrintFlagsRanges"), to run a command-line test for a list of options, and more. The actual test code is contained in the common/optionsvalidation/JVMOption.java file - the testCommandLine(), testDynamic(), testJcmd() and testAttach() methods. The common/optionsvalidation/IntJVMOption.java and common/optionsvalidation/DoubleJVMOption.java source files contain classes derived from the JVMOption class for integer and double JVM options respectively. >> >> Here is a description of the tests: >> 1) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> This test gets all options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by starting Java and passing the options on the command line with valid and invalid values. Currently it verifies about 106 options which have ranges. >> Invalid values are values which are out of range.
The test uses the values "min-1" and "max+1". In this case Java should always exit with code 1 and print an error message about the out-of-range value (with one exception: if the option is unsigned and a negative value is passed, the out-of-range error message is not printed because the error occurs earlier). >> Valid values are values in range, e.g. min and max, and also several additional values. In this case Java should either exit successfully (exit code 0) or exit with error code 1 for other reasons (low memory with a certain option value, etc.). In any case, for values in range, Java should not print messages about an out-of-range value. >> In any case Java should not crash. >> This test is excluded from JPRT because it takes a long time to execute and also fails - some options with values in the valid range cause Java to crash (bugs have been submitted). >> >> 2) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> This test gets all writeable options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by dynamically changing their values to valid and invalid values. Three methods are used for that: the DynamicVMOption isValidValue and isInvalidValue methods, jcmd, and the attach method. Currently 3 writeable options with ranges are verified by this test. >> This test passes in JPRT. >> >> 3) hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >> >> This test verifies the output of jcmd when an out-of-range value is set on a writeable option or a value violates an option constraint. This test also verifies that jcmd does not write an error message to the target process. >> This test passes in JPRT. >> >> >> I did not write special tests for constraints for this JEP because tests for that already exist (e.g. test/runtime/CompressedOops/ObjectAlignment.java for ObjectAlignmentInBytes or hotspot/test/gc/arguments/TestHeapFreeRatio.java for MinHeapFreeRatio/MaxHeapFreeRatio).
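The "min-1"/"max+1" probing scheme described above can be sketched like this; the real tests are the Java classes in optionsvalidation, so this is only an illustrative C++ analogue:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// For an integer option with an inclusive [min, max] range, valid
// probes include the boundaries themselves; invalid probes step one
// past each boundary, guarding against wraparound at the type limits
// (the "unsigned option with a negative value" caveat above is the
// same kind of edge case).
struct Probes {
  std::vector<int64_t> valid;
  std::vector<int64_t> invalid;
};

Probes make_probes(int64_t min, int64_t max) {
  Probes p;
  p.valid.push_back(min);                       // boundary: must pass
  p.valid.push_back(max);                       // boundary: must pass
  if (min > INT64_MIN) p.invalid.push_back(min - 1);  // must be rejected
  if (max < INT64_MAX) p.invalid.push_back(max + 1);  // must be rejected
  return p;
}
```

Each valid probe is expected to start the VM cleanly (or fail for unrelated reasons), while each invalid probe is expected to produce exit code 1 with an out-of-range message.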
>> >> Webrev:http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >> >> JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 >> >> Thanks, >> Dmitry >> > From david.holmes at oracle.com Wed May 27 20:38:53 2015 From: david.holmes at oracle.com (David Holmes) Date: Thu, 28 May 2015 06:38:53 +1000 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <5565C428.4080907@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55642A66.4050308@oracle.com> <5564AA2A.6090801@oracle.com> <5565132E.2030406@oracle.com> <5565C428.4080907@oracle.com> Message-ID: <55662B5D.9000303@oracle.com> On 27/05/2015 11:18 PM, Gerard Ziemski wrote: > > > On 5/26/2015 7:43 PM, David Holmes wrote: >>>> In globals.hpp, looking at all the "stack parameters" I expect to see a >>>> constraint function specified somewhere, but there isn't one. So now >>>> I'm >>>> a bit confused about how constraint functions are specified and >>>> used. If >>>> there has to be a relationship maintained between A, B and C, is the >>>> constraint function specified for all of them or none of them and >>>> simply >>>> executed as post argument processing step? Can you elaborate on when >>>> constraint functions may be used, and must be used, and how they are >>>> processed? >>>> >>> Constraints were not meant as a framework that imposes restrictions >>> as to when and how to be used. It's a helper framework that makes it >>> easy for a developer to implement the kind of a constraint that a >>> particular flag(s) demands. The decision as to what goes into it is >>> left to the engineer responsible for a particular flag. The process >>> of implementing constraints and ranges is still ongoing for many of >>> the flags, and there are 3 subtasks tracking the issue.
This webrev >>> covers the introduction of the range/constraint framework and a >>> subset of ranges/constraints implemented for those flags for which I >>> was able to find existing ad hoc code or comments describing them. >> >> That's not really answering the question. Let me assume from this that >> the stack parameters have not been updated yet - fine. Now lets >> suppose that I want to update them using a constraint function. How do >> I do that? Do I specify the constraint function on each argument >> involved in the constraint? When will the constraint function be >> executed? > > Constraints are not methods and should not be used to set values - they > are read-only value verification functions. Sorry I meant to write "update them to start using a constraint function" - I didn't literally mean to update the flag values using the constraint function. :) > Constraints are called whenever the flag in question changes its value > (via CommandLineFlags::*AtPut for those flags set by external tools > like jcmd), and also a final check is run > (CommandLineFlags::check_all_ranges_and_constraints()) right after > Arguments::apply_ergo(), at which point it is assumed that all flags > have their final values set. Ok. So I would specify the constraint function on each of the flags involved. When the flags are processed individually the constraint check may not make sense (because they don't have their final values) so the constraint function will be predicated on CommandLineFlags::finishedInitializing(). > There is Arguments::post_final_range_and_constraint_check() method > provided for any custom code that can assume that all range and > constraints checks have been performed at that point and set any > dependent values. Setting values of the flags themselves, however, > should be performed before > CommandLineFlags::check_all_ranges_and_constraints() is called. Ok. Thanks for clarifying. 
David > > cheers > > > cheers From gerard.ziemski at oracle.com Wed May 27 21:06:34 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Wed, 27 May 2015 16:06:34 -0500 Subject: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55662B5D.9000303@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55642A66.4050308@oracle.com> <5564AA2A.6090801@oracle.com> <5565132E.2030406@oracle.com> <5565C428.4080907@oracle.com> <55662B5D.9000303@oracle.com> Message-ID: <556631DA.6070203@oracle.com> On 5/27/2015 3:38 PM, David Holmes wrote: > >> Constraints are called whenever the flag in question changes its value >> (via CommandLineFlags::*AtPut for those flags set by external tools >> like jcmd), and also a final check is run >> (CommandLineFlags::check_all_ranges_and_constraints()) right after >> Arguments::apply_ergo(), at which point it is assumed that all flags >> have their final values set. > > Ok. So I would specify the constraint function on each of the flags > involved. When the flags are processed individually the constraint > check may not make sense (because they don't have their final values) > so the constraint function will be predicated on > CommandLineFlags::finishedInitializing(). That's indeed why we introduced CommandLineFlags::finishedInitializing() API - it might not be perfect (maybe I should have named it finalCheck() instead?), but given the way current code works and the new requirements coming from JEP (changing writeable flags values must be range/constraint validated) it does the job. Thank you for taking the time to raise questions and providing feedback. 
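A constraint function of the kind discussed in this exchange - one that relates two flags and is predicated on a "finished initializing" query so it only fires once all flags have their final values - might look roughly like the following. This is a hypothetical sketch with invented names, not the actual hotspot code:

```cpp
#include <cassert>

// Hypothetical stand-ins for a flag global and the "are we past
// argument processing?" query discussed above.
static int MaxHeapFreeRatio = 70;
static bool flags_finished_initializing = false;

enum FlagError { FLAG_OK, FLAG_VIOLATES_CONSTRAINT };

// Cross-flag constraint: MinHeapFreeRatio must not exceed
// MaxHeapFreeRatio. While arguments are still being processed the
// other flag may not have its final value yet, so the check is
// deferred until initialization is finished.
FlagError MinHeapFreeRatioConstraint(int value) {
  if (flags_finished_initializing && value > MaxHeapFreeRatio) {
    return FLAG_VIOLATES_CONSTRAINT;
  }
  return FLAG_OK;  // deferred, or within the constraint
}
```

During argument parsing the function accepts any value (the final check runs after apply_ergo-style processing); once initialization is flagged as finished, an over-limit value is rejected.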
cheers From gerard.ziemski at oracle.com Wed May 27 21:28:32 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Wed, 27 May 2015 16:28:32 -0500 Subject: Revision2: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <555E0028.8070108@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> Message-ID: <55663700.5050606@oracle.com> hi all, Here is a revision 2 of the feature taking into account feedback from Dmitry, David, Kim and Alexander. One significant change in this rev is the addition of runtime/commandLineFlagConstraintsCompiler.hpp/.cpp with one simple constraint needed by Dmitry's test framework. We introduce a new mechanism that allows specification of a valid range per flag that is then used to automatically validate the given flag's value every time it changes. Range values must be constant and cannot change. Optionally, a constraint can also be specified and applied every time a flag value changes, for those flags whose valid value cannot be trivially checked by a simple min and max (e.g. whether it is a power of 2, or bigger or smaller than some other flag that can also change). I have chosen to modify the table macros (e.g. RUNTIME_FLAGS in globals.hpp) instead of using a more sophisticated solution, such as C++ templates, because even though macros were unfriendly during initial development, once a solution was arrived at, subsequent additions of new ranges or constraints to the tables are trivial from the developer's point of view. (The initial development unfriendliness of macros was mitigated by using a pre-processor, which, for those using a modern IDE like Xcode, is easily available from a menu.) Using macros also allowed for more minimal code changes.
The presented solution is based on expansion of macros using variadic functions and can be readily seen in runtime/commandLineFlagConstraintList.cpp and runtime/commandLineFlagRangeList.cpp. In commandLineFlagConstraintList.cpp and commandLineFlagRangeList.cpp, there is a bunch of classes and methods that seem to beg for C++ templates to be used. I have tried, but when the compiler tries to generate code for both uintx and size_t, which happen to have the same underlying type (on BSD), it fails to compile overridden methods with the same type but different names. If someone has a way of simplifying the new code via C++ templates, however, we can file a new enhancement request to address that. This webrev represents only the initial range checking framework and only the 100 or so flags that were ported from existing ad hoc range checking code to this new mechanism. There are about 250 remaining flags that still need their ranges determined and ported over to this new mechanism, and they are tracked by individual subtasks. I had to modify several existing tests to change the error message that they expected when the VM refuses to run, which was changed to provide uniform error messages. To help with testing and the subtask efforts I have introduced a new runtime flag: PrintFlagsRanges: "Print VM flags and their ranges and exit VM" which, in addition to the already existing flags "PrintFlagsInitial" and "PrintFlagsFinal", allows for a thorough examination of the flag values and their ranges.
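The table-macro expansion technique Gerard describes - one table as the single source of truth, expanded with different visitor macros for declaration and for validation - can be illustrated with a tiny standalone analogue. This is purely illustrative: the flag names are real hotspot flags but the defaults and ranges shown here are made up, and the real tables in globals.hpp are far richer:

```cpp
#include <cassert>
#include <cstring>

// A miniature flag table. The same FLAG_TABLE macro is expanded
// twice with different "visitor" macros: once to declare the
// globals, once to emit range checks.
#define FLAG_TABLE(f)                         \
  f(int, StackShadowPages, 20, 1, 50)         \
  f(int, ObjectAlignmentInBytes, 8, 8, 256)

// Visitor 1: declare each flag with its default value.
#define DECLARE_FLAG(type, name, def, min, max) type name = def;
FLAG_TABLE(DECLARE_FLAG)

// Visitor 2: emit a range check for the named flag.
#define CHECK_RANGE(type, name, def, min, max)        \
  if (std::strcmp(which, #name) == 0)                 \
    return value >= (min) && value <= (max);

bool flag_in_range(const char* which, int value) {
  FLAG_TABLE(CHECK_RANGE)
  return false;  // unknown flag
}
```

Adding a new ranged flag means adding one table row; both the declaration and the validation follow automatically, which is the "trivial subsequent additions" property the email argues for.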
The code change builds and passes JPRT (-testset hotspot) and UTE (vm.quick.testlist) References: Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev2 note: due to "awk" limit of 50 pats the Frames diff is not available for "src/share/vm/runtime/arguments.cpp" JEP:https://bugs.openjdk.java.net/browse/JDK-8059557 Compiler subtask:https://bugs.openjdk.java.net/browse/JDK-8078554 GC subtask:https://bugs.openjdk.java.net/browse/JDK-8078555 Runtime subtask:https://bugs.openjdk.java.net/browse/JDK-8078556 hgstat: src/cpu/ppc/vm/globals_ppc.hpp | 2 +- src/cpu/sparc/vm/globals_sparc.hpp | 2 +- src/cpu/x86/vm/globals_x86.hpp | 2 +- src/cpu/zero/vm/globals_zero.hpp | 3 +- src/os/aix/vm/globals_aix.hpp | 2 +- src/os/bsd/vm/globals_bsd.hpp | 29 +- src/os/linux/vm/globals_linux.hpp | 9 +- src/os/solaris/vm/globals_solaris.hpp | 4 +- src/os/windows/vm/globals_windows.hpp | 5 +- src/share/vm/c1/c1_globals.cpp | 4 +- src/share/vm/c1/c1_globals.hpp | 17 +- src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +- src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 +- src/share/vm/opto/c2_globals.cpp | 12 +- src/share/vm/opto/c2_globals.hpp | 40 +- src/share/vm/prims/whitebox.cpp | 12 +- src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++---------------------- src/share/vm/runtime/arguments.hpp | 24 +- src/share/vm/runtime/commandLineFlagConstraintList.cpp | 243 +++++++++++ src/share/vm/runtime/commandLineFlagConstraintList.hpp | 73 +++ src/share/vm/runtime/commandLineFlagConstraintsCompiler.cpp | 46 ++ src/share/vm/runtime/commandLineFlagConstraintsCompiler.hpp | 39 + src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++ src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 ++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++ src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 ++ src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 ++++++++++++++ src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++ 
src/share/vm/runtime/globals.cpp | 699 +++++++++++++++++++++++++++------ src/share/vm/runtime/globals.hpp | 310 ++++++++++++-- src/share/vm/runtime/globals_extension.hpp | 101 +++- src/share/vm/runtime/init.cpp | 6 +- src/share/vm/runtime/os.hpp | 17 + src/share/vm/runtime/os_ext.hpp | 7 +- src/share/vm/runtime/thread.cpp | 6 + src/share/vm/services/attachListener.cpp | 4 +- src/share/vm/services/classLoadingService.cpp | 6 +- src/share/vm/services/diagnosticCommand.cpp | 3 +- src/share/vm/services/management.cpp | 6 +- src/share/vm/services/memoryService.cpp | 2 +- src/share/vm/services/writeableFlags.cpp | 161 +++++-- src/share/vm/services/writeableFlags.hpp | 52 +- test/compiler/c2/7200264/Test7200264.sh | 5 +- test/compiler/startup/NumCompilerThreadsCheck.java | 2 +- test/gc/arguments/TestHeapFreeRatio.java | 23 +- test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 6 +- test/gc/g1/TestStringDeduplicationTools.java | 6 +- test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +- test/runtime/CompressedOops/ObjectAlignment.java | 9 +- test/runtime/contended/Options.java | 10 +- 50 files changed, 2730 insertions(+), 879 deletions(-) From serguei.spitsyn at oracle.com Wed May 27 22:24:13 2015 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 27 May 2015 15:24:13 -0700 Subject: [9] RFR (M): Backout JDK-8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <5565F3C9.2000200@oracle.com> References: <5565F3C9.2000200@oracle.com> Message-ID: <5566440D.9010109@oracle.com> Vladimir, The fix looks good. I've checked that anti-delta matches the original fix. Thanks, Serguei On 5/27/15 9:41 AM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8081320/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8081320 > > Backout JDK-8059340: "ConstantPool::_resolved_references is missing in > heap dump." > > The fix breaks JVMTI and there's no feasible fix for the problem. 
> > Testing: jprt > > Thanks! > > Best regards, > Vladimir Ivanov From bill.pittore at oracle.com Wed May 27 22:36:13 2015 From: bill.pittore at oracle.com (bill pittore) Date: Wed, 27 May 2015 18:36:13 -0400 Subject: RFR 8081202 C++11 requires a space between literal and identifier Message-ID: <556646DD.6040707@oracle.com> As part of some work I'm doing I had to fix this particular problem with string literals and macros. A few people mentioned to me that it would be a good idea to just get this pushed into JDK 9 so here is the webrev. I tested this with gcc 5.1.0 using -std=c++11 option as well as Visual Studio 2015 RC. Note that there are other issues WRT building using C++11 but this webrev only deals with the string literal issue. In my workspace, hg diff -w shows no files with diffs meaning that all the changes in this webrev are whitespace only. Ran through JPRT with no issues. This will most likely be pushed after the hs-gc repo effectively merges into hs-rt repo, sometime in the next week or so pending approval. http://cr.openjdk.java.net/~bpittore/8081202/ thanks, bill From kim.barrett at oracle.com Thu May 28 00:18:24 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 27 May 2015 20:18:24 -0400 Subject: RFR 8081202 C++11 requires a space between literal and identifier In-Reply-To: <556646DD.6040707@oracle.com> References: <556646DD.6040707@oracle.com> Message-ID: <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> On May 27, 2015, at 6:36 PM, bill pittore wrote: > > As part of some work I'm doing I had to fix this particular problem with string literals and macros. A few people mentioned to me that it would be a good idea to just get this pushed into JDK 9 so here is the webrev. I tested this with gcc 5.1.0 using -std=c++11 option as well as Visual Studio 2015 RC. Note that there are other issues WRT building using C++11 but this webrev only deals with the string literal issue. 
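The underlying C++11 issue is that an identifier glued directly onto a string literal (as in `"ptr: "PTR_FORMAT`) is now parsed as a user-defined literal suffix rather than as two adjacent literals to concatenate, so a separating space is required. A minimal self-contained illustration, using a hypothetical PTR_FORMAT definition rather than hotspot's own:

```cpp
#include <cinttypes>
#include <cstdio>
#include <string>

// Hypothetical stand-in for hotspot's PTR_FORMAT macro; the space
// before PRIxPTR is itself required by C++11 for the same reason.
#define PTR_FORMAT "0x%" PRIxPTR

std::string format_ptr(uintptr_t p) {
  char buf[64];
  // "ptr: " PTR_FORMAT concatenates adjacent literals at compile
  // time; written without the space, C++11 would try to treat
  // PTR_FORMAT as a user-defined literal suffix and fail to compile.
  std::snprintf(buf, sizeof(buf), "ptr: " PTR_FORMAT, p);
  return std::string(buf);
}
```

This is why the fix is whitespace-only: inserting the space restores the adjacent-literal concatenation that C++03 performed with or without it.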
In my workspace, hg diff -w shows no files with diffs meaning that all the changes in this webrev are whitespace only. Ran through JPRT with no issues. This will most likely be pushed after the hs-gc repo effectively merges into hs-rt repo, sometime in the next week or so pending approval. > > http://cr.openjdk.java.net/~bpittore/8081202/ > > thanks, > bill ------------------------------------------------------------------------------ src/share/vm/gc/g1/concurrentMark.inline.hpp 200 err_msg("Trying to access not available bitmap " PTR_FORMAT \ 201 " corresponding to " PTR_FORMAT " (%u)", \ Line continuation backslashes no longer lined up with others in the same macro expansion. ------------------------------------------------------------------------------ src/share/vm/prims/methodHandles.cpp 1365 static JNINativeMethod MHN_methods[] = { ... The FN_PTR parameters in the initializers used to be aligned. ------------------------------------------------------------------------------ src/share/vm/prims/perf.cpp 301 static JNINativeMethod perfmethods[] = { ... The FN_PTR parameters in the initializers used to be aligned. ------------------------------------------------------------------------------ src/share/vm/prims/unsafe.cpp Lots of FN_PTR initializers are no longer aligned. ------------------------------------------------------------------------------ Otherwise, looks good. From bill.pittore at oracle.com Thu May 28 02:15:44 2015 From: bill.pittore at oracle.com (bill pittore) Date: Wed, 27 May 2015 22:15:44 -0400 Subject: RFR 8081202 C++11 requires a space between literal and identifier In-Reply-To: <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> References: <556646DD.6040707@oracle.com> <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> Message-ID: <55667A50.6060301@oracle.com> Thanks Kim. I fixed most of the alignments. One or two lines were so far off that I just left them as outliers so I didn't have to move all the lines over. 
Updated the webrev: http://cr.openjdk.java.net/~bpittore/8081202/hotspot-webrev.01/ thanks, bill On 5/27/2015 8:18 PM, Kim Barrett wrote: > On May 27, 2015, at 6:36 PM, bill pittore wrote: >> As part of some work I'm doing I had to fix this particular problem with string literals and macros. A few people mentioned to me that it would be a good idea to just get this pushed into JDK 9 so here is the webrev. I tested this with gcc 5.1.0 using -std=c++11 option as well as Visual Studio 2015 RC. Note that there are other issues WRT building using C++11 but this webrev only deals with the string literal issue. In my workspace, hg diff -w shows no files with diffs meaning that all the changes in this webrev are whitespace only. Ran through JPRT with no issues. This will most likely be pushed after the hs-gc repo effectively merges into hs-rt repo, sometime in the next week or so pending approval. >> >> http://cr.openjdk.java.net/~bpittore/8081202/ >> >> thanks, >> bill > ------------------------------------------------------------------------------ > src/share/vm/gc/g1/concurrentMark.inline.hpp > 200 err_msg("Trying to access not available bitmap " PTR_FORMAT \ > 201 " corresponding to " PTR_FORMAT " (%u)", \ > > Line continuation backslashes no longer lined up with others in the > same macro expansion. > > ------------------------------------------------------------------------------ > src/share/vm/prims/methodHandles.cpp > 1365 static JNINativeMethod MHN_methods[] = { > ... > > The FN_PTR parameters in the initializers used to be aligned. > > ------------------------------------------------------------------------------ > src/share/vm/prims/perf.cpp > 301 static JNINativeMethod perfmethods[] = { > ... > > The FN_PTR parameters in the initializers used to be aligned. > > ------------------------------------------------------------------------------ > src/share/vm/prims/unsafe.cpp > > Lots of FN_PTR initializers are no longer aligned. 
> > ------------------------------------------------------------------------------ > > Otherwise, looks good. > From coleen.phillimore at oracle.com Thu May 28 02:39:17 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 27 May 2015 22:39:17 -0400 Subject: [9] RFR (M): Backout JDK-8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <5565F3C9.2000200@oracle.com> References: <5565F3C9.2000200@oracle.com> Message-ID: <55667FD5.3030600@oracle.com> This is unfortunate, but I've reviewed this change. Was there an SA change also that goes with this that has to be backed out too? Thanks, Coleen On 5/27/15 12:41 PM, Vladimir Ivanov wrote: > http://cr.openjdk.java.net/~vlivanov/8081320/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8081320 > > Backout JDK-8059340: "ConstantPool::_resolved_references is missing in > heap dump." > > The fix breaks JVMTI and there's no feasible fix for the problem. > > Testing: jprt > > Thanks! > > Best regards, > Vladimir Ivanov From kim.barrett at oracle.com Thu May 28 05:52:02 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 May 2015 01:52:02 -0400 Subject: RFR 8081202 C++11 requires a space between literal and identifier In-Reply-To: <55667A50.6060301@oracle.com> References: <556646DD.6040707@oracle.com> <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> <55667A50.6060301@oracle.com> Message-ID: <8A9D2CFA-7E6F-480C-9F77-5D2A7AC99E4C@oracle.com> On May 27, 2015, at 10:15 PM, bill pittore wrote: > > Thanks Kim. I fixed most of the alignments. One or two lines were so far off that I just left them as outliers so I didn't have to move all the lines over. Updated the webrev: http://cr.openjdk.java.net/~bpittore/8081202/hotspot-webrev.01/ > > thanks, > bill Looks good. > > On 5/27/2015 8:18 PM, Kim Barrett wrote: >> On May 27, 2015, at 6:36 PM, bill pittore wrote: >>> As part of some work I'm doing I had to fix this particular problem with string literals and macros. 
A few people mentioned to me that it would be a good idea to just get this pushed into JDK 9 so here is the webrev. I tested this with gcc 5.1.0 using -std=c++11 option as well as Visual Studio 2015 RC. Note that there are other issues WRT building using C++11 but this webrev only deals with the string literal issue. In my workspace, hg diff -w shows no files with diffs meaning that all the changes in this webrev are whitespace only. Ran through JPRT with no issues. This will most likely be pushed after the hs-gc repo effectively merges into hs-rt repo, sometime in the next week or so pending approval. >>> >>> http://cr.openjdk.java.net/~bpittore/8081202/ >>> >>> thanks, >>> bill >> ------------------------------------------------------------------------------ >> src/share/vm/gc/g1/concurrentMark.inline.hpp >> 200 err_msg("Trying to access not available bitmap " PTR_FORMAT \ >> 201 " corresponding to " PTR_FORMAT " (%u)", \ >> >> Line continuation backslashes no longer lined up with others in the >> same macro expansion. >> >> ------------------------------------------------------------------------------ >> src/share/vm/prims/methodHandles.cpp >> 1365 static JNINativeMethod MHN_methods[] = { >> ... >> >> The FN_PTR parameters in the initializers used to be aligned. >> >> ------------------------------------------------------------------------------ >> src/share/vm/prims/perf.cpp >> 301 static JNINativeMethod perfmethods[] = { >> ... >> >> The FN_PTR parameters in the initializers used to be aligned. >> >> ------------------------------------------------------------------------------ >> src/share/vm/prims/unsafe.cpp >> >> Lots of FN_PTR initializers are no longer aligned. >> >> ------------------------------------------------------------------------------ >> >> Otherwise, looks good. 
From stefan.karlsson at oracle.com Thu May 28 06:40:09 2015 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 28 May 2015 08:40:09 +0200 Subject: RFR 8081202 C++11 requires a space between literal and identifier In-Reply-To: <55667A50.6060301@oracle.com> References: <556646DD.6040707@oracle.com> <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> <55667A50.6060301@oracle.com> Message-ID: <5566B849.8040906@oracle.com> Hi Bill, On 2015-05-28 04:15, bill pittore wrote: > Thanks Kim. I fixed most of the alignments. One or two lines were so > far off that I just left them as outliers so I didn't have to move all > the lines over. Updated the webrev: > http://cr.openjdk.java.net/~bpittore/8081202/hotspot-webrev.01/ src/share/vm/gc/g1/g1BlockOffsetTable.cpp @@ -514,7 +514,7 @@ void G1BlockOffsetArrayContigSpace::print_on(outputStream* out) { G1BlockOffsetArray::print_on(out); - out->print_cr(" next offset threshold: "PTR_FORMAT, p2i(_next_offset_threshold)); - out->print_cr(" next offset index: "SIZE_FORMAT, _next_offset_index); + out->print_cr(" next offset threshold: " PTR_FORMAT, p2i(_next_offset_threshold)); + out->print_cr(" next offset index: " SIZE_FORMAT, _next_offset_index); } #endif // !PRODUCT You did whitespace changes inside the string literals: " next offset threshold: " and " next offset index: ". Was that intentional? Otherwise, looks good. Thanks, StefanK > > thanks, > bill > > On 5/27/2015 8:18 PM, Kim Barrett wrote: >> On May 27, 2015, at 6:36 PM, bill pittore >> wrote: >>> As part of some work I'm doing I had to fix this particular problem >>> with string literals and macros. A few people mentioned to me that >>> it would be a good idea to just get this pushed into JDK 9 so here >>> is the webrev. I tested this with gcc 5.1.0 using -std=c++11 option >>> as well as Visual Studio 2015 RC. Note that there are other issues >>> WRT building using C++11 but this webrev only deals with the string >>> literal issue. 
In my workspace, hg diff -w shows no files with >>> diffs meaning that all the changes in this webrev are whitespace >>> only. Ran through JPRT with no issues. This will most likely be >>> pushed after the hs-gc repo effectively merges into hs-rt repo, >>> sometime in the next week or so pending approval. >>> >>> http://cr.openjdk.java.net/~bpittore/8081202/ >>> >>> thanks, >>> bill >> ------------------------------------------------------------------------------ >> >> src/share/vm/gc/g1/concurrentMark.inline.hpp >> 200 err_msg("Trying to access not available bitmap " >> PTR_FORMAT \ >> 201 " corresponding to " PTR_FORMAT " >> (%u)", \ >> >> Line continuation backslashes no longer lined up with others in the >> same macro expansion. >> >> ------------------------------------------------------------------------------ >> >> src/share/vm/prims/methodHandles.cpp >> 1365 static JNINativeMethod MHN_methods[] = { >> ... >> >> The FN_PTR parameters in the initializers used to be aligned. >> >> ------------------------------------------------------------------------------ >> >> src/share/vm/prims/perf.cpp >> 301 static JNINativeMethod perfmethods[] = { >> ... >> >> The FN_PTR parameters in the initializers used to be aligned. >> >> ------------------------------------------------------------------------------ >> >> src/share/vm/prims/unsafe.cpp >> >> Lots of FN_PTR initializers are no longer aligned. >> >> ------------------------------------------------------------------------------ >> >> >> Otherwise, looks good. 
>> > From john.r.rose at oracle.com Thu May 28 08:26:29 2015 From: john.r.rose at oracle.com (John Rose) Date: Thu, 28 May 2015 01:26:29 -0700 Subject: RFR (2nd round) 8071627: Code refactoring to override == operator of Symbol* In-Reply-To: <5565F8A2.9020704@oracle.com> References: <552EC286.5000005@oracle.com> <55380E8B.2060904@oracle.com> <804BDA1C-1318-4D27-B9CC-42CB21004AE3@oracle.com> <5565F8A2.9020704@oracle.com> Message-ID: <32DB56BD-7A6F-4A84-B9BA-EE6E83D05735@oracle.com> On May 27, 2015, at 10:02 AM, Coleen Phillimore wrote: > > One note about methodHandles. Since they have nontrivial copy constructors, they should be passed as const references to prevent copy constructor calls. I would love to have a robust enough C++ linter to be able to issue and check for rules like, "this type should not implicitly call its copy constructor". I know some of those "gotchas" can be defended against using new C++11 features ("explicit"), but by no means all of them. Linting for Symbol*::operator== is one of them that will never (?) be supported by the core language. -- John From erik.helin at oracle.com Thu May 28 11:12:10 2015 From: erik.helin at oracle.com (Erik Helin) Date: Thu, 28 May 2015 13:12:10 +0200 Subject: RFR (2nd round) 8071627: Code refactoring to override == operator of Symbol* In-Reply-To: <32DB56BD-7A6F-4A84-B9BA-EE6E83D05735@oracle.com> References: <552EC286.5000005@oracle.com> <55380E8B.2060904@oracle.com> <804BDA1C-1318-4D27-B9CC-42CB21004AE3@oracle.com> <5565F8A2.9020704@oracle.com> <32DB56BD-7A6F-4A84-B9BA-EE6E83D05735@oracle.com> Message-ID: <20150528111209.GR2552@ehelin.jrpg.bea.com> On 2015-05-28, John Rose wrote: > On May 27, 2015, at 10:02 AM, Coleen Phillimore wrote: > > > > One note about methodHandles. Since they have nontrivial copy constructors, they should be passed as const references to prevent copy constructor calls.
> > I would love to have a robust enough C++ linter to be able to issue and check for rules like, "this type should not implicitly call its copy constructor". I know some of those "gotchas" can be defended against using new C++11 features ("explicit"), but by no means all of them. Linting for Symbol*::operator== is one of them that will never (?) be supported by the core language. What about clang-tidy [0]? I haven't used it myself but it certainly looks powerful enough, see some examples regarding constructors in the slides at [1]. You could also probably implement a RecursiveASTVisitor [2] using the clang framework to do the analysis. We should use clang + llvm more :) Thanks, Erik [0]: http://clang.llvm.org/extra/clang-tidy.html [1]: http://llvm.org/devmtg/2014-04/PDFs/Talks/clang-tidy%20LLVM%20Euro%202014.pdf [2]: http://clang.llvm.org/docs/RAVFrontendAction.html > -- John From david.lindholm at oracle.com Thu May 28 11:28:52 2015 From: david.lindholm at oracle.com (David Lindholm) Date: Thu, 28 May 2015 13:28:52 +0200 Subject: RFR: 8080947: Add uint as a valid VM flag type Message-ID: <5566FBF4.1060401@oracle.com> Hi, Please review this patch that adds uint and int as valid VM flag types. This patch adds the possibility to specify VM flags with types int and uint; it does not change the type of any flags.
Webrev: http://cr.openjdk.java.net/~david/JDK-8080947/webrev.hotspot.00/ Webrev: http://cr.openjdk.java.net/~david/JDK-8080947/webrev.jdk.00/ Bug: https://bugs.openjdk.java.net/browse/JDK-8080947 Testing: Passed JPRT Thanks, David From dmitry.dmitriev at oracle.com Thu May 28 12:03:21 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 28 May 2015 15:03:21 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <55660ACE.3060000@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> <55660ACE.3060000@oracle.com> Message-ID: <55670409.4090605@oracle.com> Hi Gerard, Thank you for reviewing the code! Please see my comments inline. On 27.05.2015 21:19, Gerard Ziemski wrote: > Nice code Dmitry! > > I have a bit of feedback, but I might be off on some of them, and some reflect my personal preference: > > > 1. http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java.html > > 1 a. > Line 27, from: "value which not allowd by constraint. Also check that" > to: "value which is not allowed by constraint. Also check that" Fixed! > > 1 b. > Line 28, from: "jcmd not print error message to the target process output." > to: "jcmd does not print an error message to the target process output." Fixed! > 1 c. > Line 49, from: "System.out.println("Verify jcmd error message and that jcmd not write errors to the target process output");" > to: "System.out.println("Verify jcmd error message and that jcmd does not write errors to the target process output");" Fixed! > > > 1 d. > Line 67: "67 Asserts.assertGT(minHeapFreeRatio, 0, "MinHeapFreeRatio must be greater than 0");" > > Shouldn't that be "Asserts.assertGTOE (>=)", not "Asserts.assertGT"? Actually, for this test MinHeapFreeRatio should be strictly greater than 0, because otherwise I would be unable to violate the constraint, i.e. when MinHeapFreeRatio/MaxHeapFreeRatio have valid values (in range) and MaxHeapFreeRatio < MinHeapFreeRatio. > 1 e. > Line 68: "Asserts.assertLT(maxHeapFreeRatio, 100, "MaxHeapFreeRatio must be less than 100");" > > Shouldn't that be "Asserts.assertLTOE (<=)", not "Asserts.assertLT"? The same as above. MaxHeapFreeRatio must be strictly less than 100 to violate the constraint. > > 2. http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/DoubleJVMOption.java.html > > 2 a. > Line 31: "private final double MAX_DOUBLE = 18446744073709551616.000;" > > Just curious: why can't we use Double.MAX_VALUE? where does your value come from? Fixed and removed mention of this old value! Actually, I reworked the code for the double option. > > > 2 b. > Line 126, from : "if (min == Double.MIN_VALUE && max == MAX_DOUBLE) {" > to : "if ((Double.equal(min, Double.MIN_VALUE) == 0) && (Double.equal(max, MAX_DOUBLE) == 0)) {" Fixed! I use Double.compare for comparing doubles (I think you meant that method). > > 2 c. > Line 127: "validValues.add(String.format("%f", -1.5));" > Line 129: validValues.add(String.format("%f", 0.85)); > > Just curious: why did you pick -1.5 and 0.85? I personally would have selected min/2.0 and max/2.0 and maybe -1.0 and 1.0 I picked values with a fractional part. Actually, I reworked this part of the code and slightly changed these values (I also define them via constants now). > 2 d. > We need to change from: > > 145 if (min != Double.MIN_VALUE) { > 146 invalidValues.add(String.format("%f", min - 0.01)); > 147 } > 148 > 149 if (max != MAX_DOUBLE) { > 150 invalidValues.add(String.format("%f", max + 0.01)); > 151 } > > > to: > > 145 if ((Double.equal(min, Double.MIN_VALUE) != 0) && (Double.isNaN(min-0.01) == false)) { > 146 invalidValues.add(String.format("%f", min - 0.01)); > 147 } > 148 > 149 if ((Double.equal(max, MAX_DOUBLE) != 0) && (Double.isNaN(max+0.01) == false)) { > 150 invalidValues.add(String.format("%f", max + 0.01)); > 151 } Fixed! Also, I changed max + 0.01 to max * 1.001 (for positive max), because max + 0.01 does not work for huge max values. > > > 3. http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOption.java.html > > > 3 a. > > Line 331, from : * @param valid Indicates, should JVM failed or not > to : * @param valid Indicates whether the JVM should fail or not Fixed! > > 3 b. > > Line 361, from : } else if (returnCode == 1 && out.getOutput().isEmpty()) { > to : } else if ((returnCode == 1) && (out.getOutput().isEmpty() == true)) { Fixed! > > > > 4. http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/test/runtime/CommandLine/OptionsValidation/common/optionsvalidation/JVMOptionsUtils.java.html > > 4 a. > > 125 static private Map getJVMOptions(Reader inputReader, > 126 boolean withRanges, Predicate acceptOrigin) throws IOException { > > I see that the implementation uses hardcoded positions, ex. "line.substring(0, 9)". Would it be hard to make the code character-position independent by considering an implementation based on a "grep" or some other pattern recognition? If we ever tweak the output in the ranges printout, even ever so slightly, this code will need to be updated. I rewrote this code and do not use StringTokenizer to parse the line with the option.
Regards, Dmitry > > cheers > > >> On May 26, 2015, at 1:24 PM, Gerard Ziemski wrote: >> >> >> >> >> -------- Forwarded Message -------- >> Subject: Re: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" >> Date: Thu, 21 May 2015 23:57:11 +0300 >> From: Dmitry Dmitriev >> Organization: Oracle Corporation >> To: hotspot-dev at openjdk.java.net, Gerard Ziemski >> >> Hello all, >> >> Recently I corrected several typos, so here is a new webrev for the tests: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ >> >> Thanks, >> Dmitry >> >> On 18.05.2015 18:48, Dmitry Dmitriev wrote: >>> Hello all, >>> >>> Please review the test set verifying the functionality implemented by JEP 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). The review request for this JEP can be found here: http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >>> >>> I created 3 tests for verifying options with ranges. The tests mostly rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this file contains functions to get the options with ranges as a list (by parsing the output of the new option "-XX:+PrintFlagsRanges"), to run a command line test for a list of options, and more. The actual test code is contained in the common/optionsvalidation/JVMOption.java file - the testCommandLine(), testDynamic(), testJcmd() and testAttach() methods. The common/optionsvalidation/IntJVMOption.java and common/optionsvalidation/DoubleJVMOption.java source files contain classes derived from the JVMOption class for integer and double JVM options respectively. >>> >>> Here is a description of the tests: >>> 1) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>> >>> This test gets all options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by starting Java and passing the options on the command line with valid and invalid values. Currently it verifies about 106 options which have ranges. >>> Invalid values are values which are out of range. The test uses the values "min-1" and "max+1". In this case Java should always exit with code 1 and print an error message about the out-of-range value (with one exception: if the option is unsigned and a negative value is passed, the out-of-range error message is not printed because the error occurred earlier). >>> Valid values are values in range, e.g. min & max, and also several additional values. In this case Java should successfully exit (exit code 0) or exit with error code 1 for other reasons (low memory with a certain option value etc.). In any case, for values in range Java should not print messages about an out-of-range value. >>> In any case Java should not crash. >>> This test is excluded from JPRT because it takes a long time to execute and also fails - some options with a value in the valid range cause Java to crash (bugs are submitted). >>> >>> 2) hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRangesDynamic.java >>> >>> This test gets all writeable options with ranges by parsing the output of the new option "-XX:+PrintFlagsRanges" and verifies these options by dynamically changing their values to valid and invalid values. Three methods are used for that: the DynamicVMOption isValidValue and isInvalidValue methods, jcmd, and the attach method. Currently 3 writeable options with ranges are verified by this test. >>> This test passes in JPRT. >>> >>> 3) hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >>> >>> This test verifies the output of jcmd when an out-of-range value is set to a writeable option or a value violates an option constraint. This test also verifies that jcmd does not write an error message to the target process. >>> This test passes in JPRT. >>> >>> >>> I did not write special tests for constraints for this JEP because tests for that already exist (e.g. test/runtime/CompressedOops/ObjectAlignment.java for ObjectAlignmentInBytes or hotspot/test/gc/arguments/TestHeapFreeRatio.java for MinHeapFreeRatio/MaxHeapFreeRatio).
>>> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >>> >>> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >>> >>> Thanks, >>> Dmitry >>> From coleen.phillimore at oracle.com Thu May 28 12:09:25 2015 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Thu, 28 May 2015 08:09:25 -0400 Subject: RFR: 8080947: Add uint as a valid VM flag type In-Reply-To: <5566FBF4.1060401@oracle.com> References: <5566FBF4.1060401@oracle.com> Message-ID: <55670575.6060401@oracle.com> Can you hold off this change until the command line verification change is checked in? This requires additional code to be added to the command line argument verification. The command line argument verification change also needs a reviewer from someone who knows the GC. See subject titled: Revision1: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments Thanks, Coleen On 5/28/15 7:28 AM, David Lindholm wrote: > Hi, > > Please review this patch that adds uint and int as valid VM flag > types. This patch adds the possibility to specify VM flags with types > int and uint; it does not change the type of any flags. > > > Webrev: http://cr.openjdk.java.net/~david/JDK-8080947/webrev.hotspot.00/ > Webrev: http://cr.openjdk.java.net/~david/JDK-8080947/webrev.jdk.00/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8080947 > > > Testing: Passed JPRT > > > Thanks, > David From dmitry.dmitriev at oracle.com Thu May 28 13:32:54 2015 From: dmitry.dmitriev at oracle.com (Dmitry Dmitriev) Date: Thu, 28 May 2015 16:32:54 +0300 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <555E46A7.4020402@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> Message-ID: <55671906.10000@oracle.com> Hello all, Here is version 3 of the tests, taking into account feedback from Christian, David and Gerard. I limited the number of options in TestOptionsWithRanges.java to 15.
This includes options of different types (intx, uintx etc.) and with different combinations of min/max range. TestOptionsWithRangesDynamic.java is left as is, because the number of manageable numeric options is very small and currently only 3 of them have a range. Also, I improved the code for the double option. Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.03/ Thanks, Dmitry On 21.05.2015 23:57, Dmitry Dmitriev wrote: > Hello all, > > Recently I corrected several typos, so here is a new webrev for the tests: > http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ > > > Thanks, > Dmitry > > On 18.05.2015 18:48, Dmitry Dmitriev wrote: >> Hello all, >> >> Please review the test set verifying the functionality implemented by JEP >> 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). The review >> request for this JEP can be found here: >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >> >> I created 3 tests for verifying options with ranges. The tests mostly >> rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this >> file contains functions to get the options with ranges as a list (by parsing >> the output of the new option "-XX:+PrintFlagsRanges"), to run a command line >> test for a list of options, and more. The actual test code is contained in >> common/optionsvalidation/JVMOption.java - the testCommandLine(), >> testDynamic(), testJcmd() and testAttach() methods. >> common/optionsvalidation/IntJVMOption.java and >> common/optionsvalidation/DoubleJVMOption.java source files contain >> classes derived from the JVMOption class for integer and double JVM options >> respectively. >> >> Here is a description of the tests: >> 1) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >> >> This test gets all options with ranges by parsing the output of the new option >> "-XX:+PrintFlagsRanges" and verifies these options by starting Java and >> passing the options on the command line with valid and invalid values. >> Currently it verifies about 106 options which have ranges. >> Invalid values are values which are out of range. The test uses the values >> "min-1" and "max+1". In this case Java should always exit with code 1 >> and print an error message about the out-of-range value (with one exception: >> if the option is unsigned and a negative value is passed, the out-of-range >> error message is not printed because the error occurred earlier). >> Valid values are values in range, e.g. min & max, and also several >> additional values. In this case Java should successfully exit (exit >> code 0) or exit with error code 1 for other reasons (low memory with >> a certain option value etc.). In any case, for values in range Java >> should not print messages about an out-of-range value. >> In any case Java should not crash. >> This test is excluded from JPRT because it takes a long time to execute >> and also fails - some options with a value in the valid range cause Java to >> crash (bugs are submitted). >> >> 2) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRangesDynamic.java >> >> This test gets all writeable options with ranges by parsing the output of >> the new option "-XX:+PrintFlagsRanges" and verifies these options by >> dynamically changing their values to valid and invalid values. >> Three methods are used for that: the DynamicVMOption isValidValue and >> isInvalidValue methods, jcmd, and the attach method. Currently 3 >> writeable options with ranges are verified by this test. >> This test passes in JPRT. >> >> 3) >> hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >> >> This test verifies the output of jcmd when an out-of-range value is set to >> a writeable option or a value violates an option constraint. This test also >> verifies that jcmd does not write an error message to the target process. >> This test passes in JPRT. >> >> >> I did not write special tests for constraints for this JEP because >> tests for that already exist (e.g.
>> test/runtime/CompressedOops/ObjectAlignment.java for >> ObjectAlignmentInBytes or >> hotspot/test/gc/arguments/TestHeapFreeRatio.java for >> MinHeapFreeRatio/MaxHeapFreeRatio). >> >> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >> >> >> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >> >> Thanks, >> Dmitry >> > From bill.pittore at oracle.com Thu May 28 14:15:55 2015 From: bill.pittore at oracle.com (bill pittore) Date: Thu, 28 May 2015 10:15:55 -0400 Subject: RFR 8081202 C++11 requires a space between literal and identifier In-Reply-To: <5566B849.8040906@oracle.com> References: <556646DD.6040707@oracle.com> <55A4EA10-7625-467B-9B76-1EB3850ABA85@oracle.com> <55667A50.6060301@oracle.com> <5566B849.8040906@oracle.com> Message-ID: <5567231B.8090705@oracle.com> Thank you Stefan, your eyesight is extraordinary! I fixed those two lines. bill On 5/28/2015 2:40 AM, Stefan Karlsson wrote: > Hi Bill, > > On 2015-05-28 04:15, bill pittore wrote: >> Thanks Kim. I fixed most of the alignments. One or two lines were so >> far off that I just left them as outliers so I didn't have to move >> all the lines over. Updated the webrev: >> http://cr.openjdk.java.net/~bpittore/8081202/hotspot-webrev.01/ > > src/share/vm/gc/g1/g1BlockOffsetTable.cpp > > @@ -514,7 +514,7 @@ > void > G1BlockOffsetArrayContigSpace::print_on(outputStream* out) { > G1BlockOffsetArray::print_on(out); > - out->print_cr(" next offset threshold: "PTR_FORMAT, > p2i(_next_offset_threshold)); > - out->print_cr(" next offset index: "SIZE_FORMAT, > _next_offset_index); > + out->print_cr(" next offset threshold: " PTR_FORMAT, > p2i(_next_offset_threshold)); > + out->print_cr(" next offset index: " SIZE_FORMAT, > _next_offset_index); > } > #endif // !PRODUCT > > You did whitespace changes inside the string literals: " next offset > threshold: " and " next offset index: ". Was that intentional? > > Otherwise, looks good. 
> > Thanks, > StefanK > >> >> thanks, >> bill >> >> On 5/27/2015 8:18 PM, Kim Barrett wrote: >>> On May 27, 2015, at 6:36 PM, bill pittore >>> wrote: >>>> As part of some work I'm doing I had to fix this particular problem >>>> with string literals and macros. A few people mentioned to me that >>>> it would be a good idea to just get this pushed into JDK 9 so here >>>> is the webrev. I tested this with gcc 5.1.0 using -std=c++11 >>>> option as well as Visual Studio 2015 RC. Note that there are other >>>> issues WRT building using C++11 but this webrev only deals with the >>>> string literal issue. In my workspace, hg diff -w shows no files >>>> with diffs meaning that all the changes in this webrev are >>>> whitespace only. Ran through JPRT with no issues. This will most >>>> likely be pushed after the hs-gc repo effectively merges into hs-rt >>>> repo, sometime in the next week or so pending approval. >>>> >>>> http://cr.openjdk.java.net/~bpittore/8081202/ >>>> >>>> thanks, >>>> bill >>> ------------------------------------------------------------------------------ >>> >>> src/share/vm/gc/g1/concurrentMark.inline.hpp >>> 200 err_msg("Trying to access not available bitmap " >>> PTR_FORMAT \ >>> 201 " corresponding to " PTR_FORMAT " >>> (%u)", \ >>> >>> Line continuation backslashes no longer lined up with others in the >>> same macro expansion. >>> >>> ------------------------------------------------------------------------------ >>> >>> src/share/vm/prims/methodHandles.cpp >>> 1365 static JNINativeMethod MHN_methods[] = { >>> ... >>> >>> The FN_PTR parameters in the initializers used to be aligned. >>> >>> ------------------------------------------------------------------------------ >>> >>> src/share/vm/prims/perf.cpp >>> 301 static JNINativeMethod perfmethods[] = { >>> ... >>> >>> The FN_PTR parameters in the initializers used to be aligned. 
>>> >>> ------------------------------------------------------------------------------ >>> >>> src/share/vm/prims/unsafe.cpp >>> >>> Lots of FN_PTR initializers are no longer aligned. >>> >>> ------------------------------------------------------------------------------ >>> >>> >>> Otherwise, looks good. >>> >> > From vladimir.x.ivanov at oracle.com Thu May 28 14:16:49 2015 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 28 May 2015 17:16:49 +0300 Subject: [9] RFR (M): Backout JDK-8059340: ConstantPool::_resolved_references is missing in heap dump In-Reply-To: <55667FD5.3030600@oracle.com> References: <5565F3C9.2000200@oracle.com> <55667FD5.3030600@oracle.com> Message-ID: <55672351.6030906@oracle.com> Coleen, Serguei, thanks for review. I haven't pushed SA change. Best regards, Vladimir Ivanov On 5/28/15 5:39 AM, Coleen Phillimore wrote: > > This is unfortunate, but I've reviewed this change. Was there an SA > change also that goes with this that has to be backed out too? > Thanks, > Coleen > > On 5/27/15 12:41 PM, Vladimir Ivanov wrote: >> http://cr.openjdk.java.net/~vlivanov/8081320/webrev.00/ >> https://bugs.openjdk.java.net/browse/JDK-8081320 >> >> Backout JDK-8059340: "ConstantPool::_resolved_references is missing in >> heap dump." >> >> The fix breaks JVMTI and there's no feasible fix for the problem. >> >> Testing: jprt >> >> Thanks! >> >> Best regards, >> Vladimir Ivanov > From yekaterina.kantserova at oracle.com Thu May 28 14:45:04 2015 From: yekaterina.kantserova at oracle.com (Yekaterina Kantserova) Date: Thu, 28 May 2015 16:45:04 +0200 Subject: RFR(S): 8081037: serviceability/sa/ tests time out on Windows In-Reply-To: <5565C048.4050801@oracle.com> References: <5565C048.4050801@oracle.com> Message-ID: <556729F0.70703@oracle.com> Hi, due to https://bugs.openjdk.java.net/browse/JDK-8081381 I wasn't able to push this fix. 
The problem is that LingeredApp.java contains the JDK 9 feature java.lang.Process.getPid(), but the test library is compiled with JDK 8 today. This issue is not trivial to solve, so I suggest a temporary fix to test/lib/Makefile. webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.01 Thanks, Katja On 05/27/2015 03:02 PM, Yekaterina Kantserova wrote: > Hi, > > Could I please have a review of this fix. > > bug: https://bugs.openjdk.java.net/browse/JDK-8081037 > webrev root: http://cr.openjdk.java.net/~ykantser/8081037/webrev.00 > webrev jdk: http://cr.openjdk.java.net/~ykantser/8081037.jdk/webrev.00 > webrev hotspot: > http://cr.openjdk.java.net/~ykantser/8081037.hotspot/webrev.00 > > From the bug: > "The problem is most likely that SA will pause the target process > while it is running. In this case, the target process is the same as > the process that launched SA. That process is also handling the output > from SA over a pipe, but when that pipe fills up the process cannot > empty it and the SA process is blocked because it cannot write any > more output. Deadlock." > > The solution is to start a separate target process. Dmitry Samersoff > has already created a test application for such cases, so I've decided > to move it to the top-level library instead of duplicating it. The > test application will reside under > test/lib/share/classes/jdk/test/lib/apps and the test under > test/lib-test/jdk/test/lib/apps. > > Thanks, > Katja From gerard.ziemski at oracle.com Thu May 28 16:31:11 2015 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 28 May 2015 11:31:11 -0500 Subject: RFR 8059557: Test set for "Validate JVM Command-Line Flag Arguments" In-Reply-To: <55671906.10000@oracle.com> References: <555A09B8.7010402@oracle.com> <555E46A7.4020402@oracle.com> <55671906.10000@oracle.com> Message-ID: <556742CF.3020107@oracle.com> Looks good. Please consider it reviewed, with a small "r".
cheers On 5/28/2015 8:32 AM, Dmitry Dmitriev wrote: > Hello all, > > Here is version 3 of the tests, taking into account feedback from > Christian, David and Gerard. > > I limited the number of options in TestOptionsWithRanges.java to 15. This > includes options of different types (intx, uintx etc.) and with > different combinations of min/max range. > TestOptionsWithRangesDynamic.java is left as is, because the number of > manageable numeric options is very small and currently only 3 of them > have a range. Also, I improved the code for the double option. > > Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.03/ > > > Thanks, > Dmitry > > > On 21.05.2015 23:57, Dmitry Dmitriev wrote: >> Hello all, >> >> Recently I corrected several typos, so here is a new webrev for the tests: >> http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.02/ >> >> >> Thanks, >> Dmitry >> >> On 18.05.2015 18:48, Dmitry Dmitriev wrote: >>> Hello all, >>> >>> Please review the test set verifying the functionality implemented by >>> JEP 245 "Validate JVM Command-Line Flag Arguments" (JDK-8059557). >>> The review request for this JEP can be found here: >>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-May/018539.html >>> >>> I created 3 tests for verifying options with ranges. The tests mostly >>> rely on common/optionsvalidation/JVMOptionsUtils.java. The class in this >>> file contains functions to get the options with ranges as a list (by >>> parsing the output of the new option "-XX:+PrintFlagsRanges"), to run a >>> command line test for a list of options, and more. The actual test code >>> is contained in common/optionsvalidation/JVMOption.java - the testCommandLine(), >>> testDynamic(), testJcmd() and testAttach() methods. >>> common/optionsvalidation/IntJVMOption.java and >>> common/optionsvalidation/DoubleJVMOption.java source files contain >>> classes derived from the JVMOption class for integer and double JVM >>> options respectively. >>> >>> Here is a description of the tests: >>> 1) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java >>> >>> This test gets all options with ranges by parsing the output of the new >>> option "-XX:+PrintFlagsRanges" and verifies these options by starting >>> Java and passing the options on the command line with valid and invalid >>> values. Currently it verifies about 106 options which have ranges. >>> Invalid values are values which are out of range. The test uses the values >>> "min-1" and "max+1". In this case Java should always exit with code 1 >>> and print an error message about the out-of-range value (with one exception: >>> if the option is unsigned and a negative value is passed, the out-of-range >>> error message is not printed because the error occurred earlier). >>> Valid values are values in range, e.g. min & max, and also several >>> additional values. In this case Java should successfully exit (exit >>> code 0) or exit with error code 1 for other reasons (low memory with >>> a certain option value etc.). In any case, for values in range Java >>> should not print messages about an out-of-range value. >>> In any case Java should not crash. >>> This test is excluded from JPRT because it takes a long time to execute >>> and also fails - some options with a value in the valid range cause Java >>> to crash (bugs are submitted). >>> >>> 2) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestOptionsWithRangesDynamic.java >>> >>> This test gets all writeable options with ranges by parsing the output of >>> the new option "-XX:+PrintFlagsRanges" and verifies these options by >>> dynamically changing their values to valid and invalid values. >>> Three methods are used for that: the DynamicVMOption isValidValue and >>> isInvalidValue methods, jcmd, and the attach method. Currently 3 >>> writeable options with ranges are verified by this test. >>> This test passes in JPRT.
>>> >>> 3) >>> hotspot/test/runtime/CommandLine/OptionsValidation/TestJcmdOutput.java >>> >>> This test verified output of Jcmd when out-of-range value is set to >>> the writeable option or value violates option constraint. Also this >>> test verify that jcmd not write error message to the target process. >>> This test pass in JPRT. >>> >>> >>> I am not write special tests for constraints for this JEP because >>> there are exist test for that(e.g. >>> test/runtime/CompressedOops/ObjectAlignment.java for >>> ObjectAlignmentInBytes or >>> hotspot/test/gc/arguments/TestHeapFreeRatio.java for >>> MinHeapFreeRatio/MaxHeapFreeRatio). >>> >>> Webrev: http://cr.openjdk.java.net/~ddmitriev/8059557/webrev.00/ >>> >>> >>> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557 >>> >>> Thanks, >>> Dmitry >>> >> > > From bertrand.delsart at oracle.com Thu May 28 17:02:28 2015 From: bertrand.delsart at oracle.com (Bertrand Delsart) Date: Thu, 28 May 2015 19:02:28 +0200 Subject: RFR [S] 8081406: cleanup and minor extensions of the debugging facilities in CodeStrings Message-ID: <55674A24.4050501@oracle.com> Hi all, Small RFR to address minor issues in debug mode with CodeStrings https://bugs.openjdk.java.net/browse/JDK-8081406 http://cr.openjdk.java.net/~bdelsart/8081406/webrev.00/ The change does not impact the product mode. In non product mode, CodeStrings allows to associate a linked list of strings to a CodeBuffer, CodeBlob or Stub. In addition, with ASSERTS, it defines a boolean asserting whether the list of strings are valid. Here are the issues addressed by this CR: - The old code mentioned the fact that CodeStrings was not always correctly initialized. This is addressed by the fix, allowing check_valid to be added at a few locations where it could currently failed due to uninitialized values (like at the beginning of CodeStrings::free). This also makes the code more robust against future versions of CodeStrings. 
- As a minor extension, it is now possible for platform dependent code
to modify the comment separator used by print_block_comment, which was
hard coded to " ;; ".

- As another minor extension, related to the validity assertions,
freeing a code string no longer necessarily marks it (and hence its
Stub/CodeBlob/CodeBuffer) as invalid. If a CodeStrings contains only
comments, removing them does not change the validity of the CodeStrings.
For similar reasons, assignment over a non-null CodeStrings is now valid
when we can safely free the old string.

The modified code passes JPRT. It was also validated in fastdebug mode
with the vm.compiler.testlist to check that the validity assertions were
not triggered. One of our closed extensions also validated advanced use
of CodeStrings::assign (including cases where the target of the
assignment was not free).

Best regards,

Bertrand.

--
Bertrand Delsart, Grenoble Engineering Center
Oracle, 180 av. de l'Europe, ZIRST de Montbonnot
38330 Montbonnot Saint Martin, FRANCE
bertrand.delsart at oracle.com Phone : +33 4 76 18 81 23
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTICE: This email message is for the sole use of the intended
recipient(s) and may contain confidential and privileged information.
Any unauthorized review, use, disclosure or distribution is prohibited.
If you are not the intended recipient, please contact the sender by
reply email and destroy all copies of the original message.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From kim.barrett at oracle.com Thu May 28 22:10:15 2015 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 May 2015 18:10:15 -0400 Subject: Revision2: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments In-Reply-To: <55663700.5050606@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55663700.5050606@oracle.com> Message-ID: On May 27, 2015, at 5:28 PM, Gerard Ziemski wrote: > > Here is a revision 2 of the feature taking into account feedback from Dmitry, David, Kim and Alexander. > > [?] > References: > > Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev2 > note: due to "awk" limit of 50 pats the Frames diff is not available for "src/share/vm/runtime/arguments.cpp? This is only a partial review. I haven't taken a detailed look at arguments.[ch]pp or globals.cpp, and haven't looked at the new range and constraint checking or the tests. But I'm running out of steam for today, and won't be able to get back to this until Monday, so giving you what I have so far. ------------------------------------------------------------------------------ Many files are being changed to #include globals.hpp just to get access to Flag values. Perhaps globals.hpp ought to be split up. This probably ought to be addressed as a separate CR though. ------------------------------------------------------------------------------ src/share/vm/runtime/globals.hpp 362 void print_on(outputStream* st, bool printRanges = false, bool withComments = false ); 458 static void printFlags(outputStream* out, bool withComments, bool printRanges = false); Why are the withComments and printRanges arguments in different orders? Just in general, multiple bool arguments tend to be problematic for understanding the using code. Common alternative is to use enums. 
Another common alternative (used often in hotspot) is to use comments in the calls; that helps the reader, so long as the (not checked by compiler) order is actually as commented. Calls should at least be commented, for consistency with other hotspot usage. ------------------------------------------------------------------------------ src/share/vm/runtime/init.cpp 145 if (PrintFlagsFinal) { 146 CommandLineFlags::printFlags(tty, false); 147 } 148 if (PrintFlagsRanges) { 149 CommandLineFlags::printFlags(tty, false, true); 150 } [145-147 from original, 148-150 added] Really print the flags twice if both options are provided? I was expecting something like: if (PrintFlagsFinal || PrintFlagsRanges) { CommandLineFlags::printFlags(tty, false, PrintFlagsRanges); } ------------------------------------------------------------------------------ src/share/vm/runtime/os.hpp 167 // A strlcat like API for safe string concatenation of 2 NULL limited C strings 168 // strlcat is not guranteed to exist on all platforms, so we implement our own 169 static void strlcat(char *dst, const char *src, size_t size) { 170 register char *_dst = dst; 171 register char *_src = (char *)src; 172 register int _size = (int)size; 173 174 while ((_size-- != 0) && (*_dst != '\0')) { 175 _dst++; 176 } 177 while ((_size-- != 0) && (*_src != '\0')) { 178 *_dst = *_src; 179 _dst++; _src++; 180 } 181 *_dst = '\0'; 182 } Several problems here: 1. In the description comment: NULL is a pointer. The string terminator is NUL. 2. Use of "register" storage class specifiers: The "register" storage class is deprecated in C++11 and slated for removal in C++17. Compilers have for a long time been parsing it but otherwise ascribing no special meaning beyond specifying automatic storage allocation. 3. There's no reason for any of the casts. 4. Use of _ prefixed names is generally reserved for member variables in hotspot. 
There's no need for these additional variables anyway, after
elimination of the register storage class usage and the inappropriate
casts.

5. This will buffer overrun if strlen(dst) >= size. This differs from
BSD strlcat.

6. Unlike BSD strlcat, this doesn't provide a return value that allows
the caller to detect truncation. I would think most real uses ought to
care about that case, though I haven't audited uses in this change set
yet.

------------------------------------------------------------------------------
src/share/vm/services/classLoadingService.cpp

 184   bool succeed = (CommandLineFlags::boolAtPut((char*)"TraceClassLoading", &verbose, Flag::MANAGEMENT) == Flag::SUCCESS);
 185   assert(succeed, "Setting TraceClassLoading flag fails");

I'm surprised this doesn't produce an unused variable warning in a
product build.

Rather than converting the boolAtPut result to a bool success and
asserting it to be true, it would be better to capture the resulting
Flag::Error, test for success in the assert, and report the failure
reason in the assert message.

Pre-existing defect: Unnecessary cast.

Possible pre-existing defect: I don't understand exactly how these
service functions are used, but I wonder whether an assertion is really
the appropriate method to check for failure to perform the operation.

Similarly for line 195.

Similarly for src/share/vm/services/memoryService.cpp:521

------------------------------------------------------------------------------
src/share/vm/services/writeableFlags.cpp

  62   if (error != Flag::SUCCESS) {
 ...
  93   }

Rather than if != success and indenting the whole function, I think it
would be more readable if this were

  if (error == Flag::SUCCESS) {
    return;
  }
  ...

------------------------------------------------------------------------------
src/share/vm/services/writeableFlags.cpp

  88     buffer[79] = '\0';

What's this about?
------------------------------------------------------------------------------
src/share/vm/services/writeableFlags.cpp

 108   Flag::Error err = CommandLineFlags::boolAtPut((char*)name, &value, origin);
 125   Flag::Error err = CommandLineFlags::intxAtPut((char*)name, &value, origin);
 142   Flag::Error err = CommandLineFlags::uintxAtPut((char*)name, &value, origin);
... and so on ...

Pre-existing inappropriate casts.

From david.holmes at oracle.com Fri May 29 04:39:11 2015
From: david.holmes at oracle.com (David Holmes)
Date: Fri, 29 May 2015 14:39:11 +1000
Subject: Revision2: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments
In-Reply-To: <55663700.5050606@oracle.com>
References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55663700.5050606@oracle.com>
Message-ID: <5567ED6F.7000906@oracle.com>

Hi Gerard,

Meta-comment: I had expected to see constraint functions subsume range
checks - ie that the range check would be folded into the constraint
function so that all the constraints are clearly seen in one place. Not
sure splitting things across two checks aids in understandability. This
isn't a blocker but I'd be interested to hear other views on this.

A general comment, note that in cases like:

  "intx %s="INTX_FORMAT" is outside the allowed range [ "INTX_FORMAT" ... "INTX_FORMAT" ]\n",

we now need to add spaces between the macros like INTX_FORMAT and the
double-quotes - see 8081202. Not sure who will be pushing first but it's
an easy fix to make.
A few minor specific comments: src/share/vm/runtime/commandLineFlagConstraintsCompiler.hpp Has the comment: 32 * Here we have runtime arguments constraints functions, - should say 'compiler' --- src/share/vm/runtime/commandLineFlagConstraintsCompiler.cpp Not obvious all the includes are needed ie java.hpp and os.hpp --- src/share/vm/runtime/commandLineFlagConstraintsGC.hpp Has the comment: 32 * Here we have runtime arguments constraints functions, - should say 'GC' I'm a little surprised the #if INCLUDE_ALL_GCS only covers the G1 options. I guess we don't as aggressively exclude the code for the other GC's. --- src/share/vm/runtime/commandLineFlagConstraintsGC.cpp Not obvious all the includes are needed ie java.hpp and os.hpp, and c1/c2 globals? --- Nothing else jumped out at me, but I didn't verify the non-runtime constraints. Thanks, David On 28/05/2015 7:28 AM, Gerard Ziemski wrote: > hi all, > > Here is a revision 2 of the feature taking into account feedback from > Dmitry, David, Kim and Alexander. > > One significant change in this rev is the addition of > runtime/commandLineFlagConstraintsCompiler.hpp/.cpp with one simple > constraint needed by Dmitry's test framework. > > We introduce a new mechanism that allows specification of a valid range > per flag that is then used to automatically validate given flag's value > every time it changes. Ranges values must be constant and can not > change. Optionally, a constraint can also be specified and applied every > time a flag value changes for those flags whose valid value can not be > trivially checked by a simple min and max (ex. whether it's power of 2, > or bigger or smaller than some other flag that can also change) > > I have chosen to modify the table macros (ex. 
RUNTIME_FLAGS in
> globals.hpp) instead of using a more sophisticated solution, such as C++
> templates, because even though macros were unfriendly when initially
> developing, once a solution was arrived at, subsequent additions to the
> tables of new ranges or constraints are trivial from the developer's
> point of view. (The initial development unfriendliness of macros was
> mitigated by using a pre-processor, which for those using a modern IDE
> like Xcode, is easily available from a menu). Using macros also allowed
> for more minimal code changes.
>
> The presented solution is based on expansion of macros using variadic
> functions and can be readily seen in
> runtime/commandLineFlagConstraintList.cpp and
> runtime/commandLineFlagRangeList.cpp
>
> In commandLineFlagConstraintList.cpp or commandLineFlagRangesList.cpp,
> there is a bunch of classes and methods that seem to beg for C++
> templates to be used. I have tried, but when the compiler tries to
> generate code for both uintx and size_t, which happen to have the same
> underlying type (on BSD), it fails to compile overridden methods with
> the same type, but different name. If someone has a way of simplifying
> the new code via C++ templates, however, we can file a new enhancement
> request to address that.
>
> This webrev represents only the initial range checking framework and
> only 100 or so flags that were ported from existing ad hoc range
> checking code to this new mechanism. There are about 250 remaining flags
> that still need their ranges determined and ported over to this new
> mechanism and they are tracked by individual subtasks.
>
> I had to modify several existing tests to change the error message that
> they expected when the VM refuses to run, which was changed to provide
> uniform error messages.
>
> To help with testing and subtask efforts I have introduced a new
> runtime flag:
>
> PrintFlagsRanges: "Print VM flags and their ranges and exit VM"
>
> which in addition to the already existing flags: "PrintFlagsInitial" and
> "PrintFlagsFinal" allow for thorough examination of the flags values and
> their ranges.
>
> The code change builds and passes JPRT (-testset hotspot) and UTE
> (vm.quick.testlist)
>
> References:
>
> Webrev: http://cr.openjdk.java.net/~gziemski/8059557_rev2
> note: due to "awk" limit of 50 pats the Frames diff is not
> available for "src/share/vm/runtime/arguments.cpp"
>
> JEP: https://bugs.openjdk.java.net/browse/JDK-8059557
> Compiler subtask: https://bugs.openjdk.java.net/browse/JDK-8078554
> GC subtask: https://bugs.openjdk.java.net/browse/JDK-8078555
> Runtime subtask: https://bugs.openjdk.java.net/browse/JDK-8078556
>
> hgstat:
>
> src/cpu/ppc/vm/globals_ppc.hpp | 2 +-
> src/cpu/sparc/vm/globals_sparc.hpp | 2 +-
> src/cpu/x86/vm/globals_x86.hpp | 2 +-
> src/cpu/zero/vm/globals_zero.hpp | 3 +-
> src/os/aix/vm/globals_aix.hpp | 2 +-
> src/os/bsd/vm/globals_bsd.hpp | 29 +-
> src/os/linux/vm/globals_linux.hpp | 9 +-
> src/os/solaris/vm/globals_solaris.hpp | 4 +-
> src/os/windows/vm/globals_windows.hpp | 5 +-
> src/share/vm/c1/c1_globals.cpp | 4 +-
> src/share/vm/c1/c1_globals.hpp | 17 +-
> src/share/vm/gc_implementation/g1/g1_globals.cpp | 16 +-
> src/share/vm/gc_implementation/g1/g1_globals.hpp | 38 +-
> src/share/vm/opto/c2_globals.cpp | 12 +-
> src/share/vm/opto/c2_globals.hpp | 40 +-
> src/share/vm/prims/whitebox.cpp | 12 +-
> src/share/vm/runtime/arguments.cpp | 753 ++++++++++++++----------------
> src/share/vm/runtime/arguments.hpp | 24 +-
> src/share/vm/runtime/commandLineFlagConstraintList.cpp | 243 +++++++++++
> src/share/vm/runtime/commandLineFlagConstraintList.hpp | 73 +++
> src/share/vm/runtime/commandLineFlagConstraintsCompiler.cpp | 46 ++
> src/share/vm/runtime/commandLineFlagConstraintsCompiler.hpp | 39 +
> src/share/vm/runtime/commandLineFlagConstraintsGC.cpp | 251 ++++++++++++
> src/share/vm/runtime/commandLineFlagConstraintsGC.hpp | 59 ++
> src/share/vm/runtime/commandLineFlagConstraintsRuntime.cpp | 67 +++
> src/share/vm/runtime/commandLineFlagConstraintsRuntime.hpp | 41 ++
> src/share/vm/runtime/commandLineFlagRangeList.cpp | 304 ++++++++++++++
> src/share/vm/runtime/commandLineFlagRangeList.hpp | 67 +++
> src/share/vm/runtime/globals.cpp | 699 +++++++++++++++++++++++++++------
> src/share/vm/runtime/globals.hpp | 310 ++++++++++++--
> src/share/vm/runtime/globals_extension.hpp | 101 +++-
> src/share/vm/runtime/init.cpp | 6 +-
> src/share/vm/runtime/os.hpp | 17 +
> src/share/vm/runtime/os_ext.hpp | 7 +-
> src/share/vm/runtime/thread.cpp | 6 +
> src/share/vm/services/attachListener.cpp | 4 +-
> src/share/vm/services/classLoadingService.cpp | 6 +-
> src/share/vm/services/diagnosticCommand.cpp | 3 +-
> src/share/vm/services/management.cpp | 6 +-
> src/share/vm/services/memoryService.cpp | 2 +-
> src/share/vm/services/writeableFlags.cpp | 161 +++++--
> src/share/vm/services/writeableFlags.hpp | 52 +-
> test/compiler/c2/7200264/Test7200264.sh | 5 +-
> test/compiler/startup/NumCompilerThreadsCheck.java | 2 +-
> test/gc/arguments/TestHeapFreeRatio.java | 23 +-
> test/gc/arguments/TestSurvivorAlignmentInBytesOption.java | 6 +-
> test/gc/g1/TestStringDeduplicationTools.java | 6 +-
> test/runtime/CompressedOops/CompressedClassSpaceSize.java | 4 +-
> test/runtime/CompressedOops/ObjectAlignment.java | 9 +-
> test/runtime/contended/Options.java | 10 +-
> 50 files changed, 2730 insertions(+), 879 deletions(-)

From david.holmes at oracle.com Fri May 29 04:54:06 2015
From: david.holmes at oracle.com (David Holmes)
Date: Fri, 29 May 2015 14:54:06 +1000
Subject: Revision2: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments
In-Reply-To: <5567ED6F.7000906@oracle.com>
References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com>
<55663700.5050606@oracle.com> <5567ED6F.7000906@oracle.com> Message-ID: <5567F0EE.6030009@oracle.com> Missed the tests: test/runtime/CompressedOops/ObjectAlignment.java testObjectAlignment(-1) ! .shouldContain("must be power of 2") ! .shouldContain("outside the allowed range") I'm missing how it can fail for both reasons - won't the range error stop the constraint check from even being applied? --- test/runtime/contended/Options.java output.shouldContain("ContendedPaddingWidth"); ! output.shouldContain("outside the allowed range"); output.shouldContain("must be a multiple of 8"); Again with the new scheme the range check should prevent the constraint check, so I don't see how the error message can report both errors ?? David ----- On 29/05/2015 2:39 PM, David Holmes wrote: > Hi Gerard, > > Meta-comment: I had expected to see constraint functions subsume range > checks - ie that the range check would be folded into the constraints > function so that all the constraints are clearly seen in one place. Not > sure splitting things across two checks aids in understandability. This > isn't a blocker but I'd be interested to hear other views on this. > > > A general comment, note that in cases like: > > "intx %s="INTX_FORMAT" is outside the allowed range [ "INTX_FORMAT" > ... "INTX_FORMAT" ]\n", > > we now need to add spaces between the macros like INTX_FRORMAT and the > double-quotes - see 8081202. Not sure who will be pushing first but it's > an easy fix to make. 
> [...]

From bengt.rutisson at oracle.com Fri May 29 06:35:56 2015
From: bengt.rutisson at oracle.com (Bengt Rutisson)
Date: Fri, 29 May 2015 08:35:56 +0200
Subject: Revision2: Corrected: RFR 8059557 (XL): Validate JVM Command-Line Flag Arguments
In-Reply-To:
<55663700.5050606@oracle.com> References: <5554FE2D.6020405@oracle.com> <555E0028.8070108@oracle.com> <55663700.5050606@oracle.com> Message-ID: <556808CC.80203@oracle.com> Hi Gerard, Not a full review, but just a couple of questions. The new constraint methods added in commandLineFlagConstraintsGC.cpp are all just implementations of the existing constraints for the GC flags, right? Basically these checks are just moved from arguments.cpp to the constraint methods, or have any new ones been added? All methods in commandLineFlagConstraintsGC.cpp and commandLineFlagConstraintsCompiler.cpp start with this check: if ((CommandLineFlags::finishedInitializing() == true) Only the runtime flags, checked in commandLineFlagConstraintsRuntime.cpp, are checked even when initialization has not been completed. Why is it important to be able to check the constraints even before we have finished initializing? Wouldn't it be simpler to just call the constraint methods once after we've reached CommandLineFlags::finishedInitializing()? Then all the constraint methods wouldn't have to remember to check for that. (BTW, it is error prone to use == for boolean values. Better to just use "if (CommandLineFlags::finishedInitializing())" .) Thanks, Bengt On 2015-05-27 23:28, Gerard Ziemski wrote: > hi all, > > Here is a revision 2 of the feature taking into account feedback from > Dmitry, David, Kim and Alexander. > > One significant change in this rev is the addition of > runtime/commandLineFlagConstraintsCompiler.hpp/.cpp with one simple > constraint needed by Dmitry's test framework. > > We introduce a new mechanism that allows specification of a valid > range per flag that is then used to automatically validate given > flag's value every time it changes. Ranges values must be constant and > can not change. Optionally, a constraint can also be specified and > applied every time a flag value changes for those flags whose valid > value can not be trivially checked by a simple min and max (ex. 
> [...]

From staffan.larsen at oracle.com Fri May 29 08:53:49 2015
From: staffan.larsen at oracle.com (Staffan Larsen)
Date: Fri, 29 May 2015 10:53:49 +0200
Subject: [urgent] RFR: 8081470 com/sun/jdi tests are failing with "Error.
failed to clean up files after test" with jtreg 4.1 b12 Message-ID: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> Can I have a fast review of the following change that is currently blocking hotspot pushes in jprt. jtreg 4.1b12 adds stricter checking of @library tags. Some com/sun/jdi tests have @library clauses that are not needed. I do not intend to wait 24 hours before pushing this... webrev: http://cr.openjdk.java.net/~sla/8081470/webrev.00/ bug: https://bugs.openjdk.java.net/browse/JDK-8081470 Thanks, /Staffan From mikael.gerdin at oracle.com Fri May 29 08:54:47 2015 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Fri, 29 May 2015 10:54:47 +0200 Subject: [urgent] RFR: 8081470 com/sun/jdi tests are failing with "Error. failed to clean up files after test" with jtreg 4.1 b12 In-Reply-To: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> References: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> Message-ID: <55682957.3020405@oracle.com> Hi Staffan, On 2015-05-29 10:53, Staffan Larsen wrote: > Can I have a fast review of the following change that is currently blocking hotspot pushes in jprt. jtreg 4.1b12 adds stricter checking of @library tags. Some com/sun/jdi tests have @library clauses that are not needed. > > I do not intend to wait 24 hours before pushing this... > > webrev: http://cr.openjdk.java.net/~sla/8081470/webrev.00/ Looks good, Reviewed. /Mikael > bug: https://bugs.openjdk.java.net/browse/JDK-8081470 > > Thanks, > /Staffan > From bengt.rutisson at oracle.com Fri May 29 08:54:48 2015 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Fri, 29 May 2015 10:54:48 +0200 Subject: [urgent] RFR: 8081470 com/sun/jdi tests are failing with "Error. 
failed to clean up files after test" with jtreg 4.1 b12 In-Reply-To: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> References: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> Message-ID: <55682958.5060703@oracle.com> On 2015-05-29 10:53, Staffan Larsen wrote: > Can I have a fast review of the following change that is currently blocking hotspot pushes in jprt. jtreg 4.1b12 adds stricter checking of @library tags. Some com/sun/jdi tests have @library clauses that are not needed. > > I do not intend to wait 24 hours before pushing this... > > webrev: http://cr.openjdk.java.net/~sla/8081470/webrev.00/ > bug: https://bugs.openjdk.java.net/browse/JDK-8081470 Looks good to me too. Bengt > > Thanks, > /Staffan From igor.ignatyev at oracle.com Fri May 29 08:57:31 2015 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Fri, 29 May 2015 11:57:31 +0300 Subject: [urgent] RFR: 8081470 com/sun/jdi tests are failing with "Error. failed to clean up files after test" with jtreg 4.1 b12 In-Reply-To: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> References: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> Message-ID: <556829FB.2070402@oracle.com> Looks good to me. Igor On 05/29/2015 11:53 AM, Staffan Larsen wrote: > Can I have a fast review of the following change that is currently blocking hotspot pushes in jprt. jtreg 4.1b12 adds stricter checking of @library tags. Some com/sun/jdi tests have @library clauses that are not needed. > > I do not intend to wait 24 hours before pushing this... > > webrev: http://cr.openjdk.java.net/~sla/8081470/webrev.00/ > bug: https://bugs.openjdk.java.net/browse/JDK-8081470 > > Thanks, > /Staffan > From staffan.larsen at oracle.com Fri May 29 09:07:06 2015 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Fri, 29 May 2015 11:07:06 +0200 Subject: [urgent] RFR: 8081470 com/sun/jdi tests are failing with "Error. 
failed to clean up files after test" with jtreg 4.1 b12 In-Reply-To: <556829FB.2070402@oracle.com> References: <11FEF918-2B72-49D6-926D-A6CD90E4019D@oracle.com> <556829FB.2070402@oracle.com> Message-ID: Thanks all. Fix is in the queue to jdk9/hs/jdk and will then be pulled into the group repos. > On 29 May 2015, at 10:57, Igor Ignatyev wrote: > > Looks good to me. > > Igor > > On 05/29/2015 11:53 AM, Staffan Larsen wrote: >> Can I have a fast review of the following change that is currently blocking hotspot pushes in jprt. jtreg 4.1b12 adds stricter checking of @library tags. Some com/sun/jdi tests have @library clauses that are not needed. >> >> I do not intend to wait 24 hours before pushing this... >> >> webrev: http://cr.openjdk.java.net/~sla/8081470/webrev.00/ >> bug: https://bugs.openjdk.java.net/browse/JDK-8081470 >> >> Thanks, >> /Staffan >> From pujarimahesh_kumar at yahoo.com Fri May 29 10:09:04 2015 From: pujarimahesh_kumar at yahoo.com (Mahesh Pujari) Date: Fri, 29 May 2015 10:09:04 +0000 (UTC) Subject: Issues with dtrace enabled in OpenJdk 9 In-Reply-To: <55647835.2050401@oracle.com> References: <55647835.2050401@oracle.com> Message-ID: <1654770900.500152.1432894144653.JavaMail.yahoo@mail.yahoo.com> Hi, I am able to compile OpenJDK9 with dtrace on a Linux container with an Ubuntu distribution (but still having compilation issues on the host machine). Now as I went a step ahead, I tried to test the "method__entry" marker. Below is the script: probe process("JDK_PATH/lib/amd64/server/libjvm.so").mark("thread__start") { printf("Marker thread__start [%s] %d %d %d %d\n", user_string($arg1),$arg2,$arg3,$arg4,$arg5); } Note: Need to replace JDK_PATH with the actual path. The above script did run and I did get output as below: Marker thread__start [Reference Handler] 17 2 21151 1 .... .... But I am not able to use the hotspot provider, i.e.
the script below does not run: stap -e 'probe hotspot.thread_* { printf("%s: %s%s\n", ctime(gettimeofday_s()), name, thread_name) }' \ -c 'java Hello' And I end up with the below error: semantic error: while resolving probe point: identifier 'hotspot' at :1:7 source: probe hotspot.thread_* { printf("%s: %s%s\n", ctime(gettimeofday_s()), name, thread_name) } ^ semantic error: probe point mismatch (similar: oneshot, tcp, init, scsi, stap): identifier 'hotspot' at :1:7 source: probe hotspot.thread_* { printf("%s: %s%s\n", ctime(gettimeofday_s()), name, thread_name) } ^ Can someone point me to where I can get the hotspot provider, or am I missing something? thanks and regards, Mahesh Pujari On Tuesday, May 26, 2015 7:10 PM, Daniel D. Daugherty wrote: Adding the Serviceability alias. Dan On 5/25/15 9:52 AM, Mahesh Pujari wrote: > Hi, > > I am trying to build OpenJDK 9 with dtrace enabled on my Ubuntu machine (Linux 3.13.0-45-generic #74-Ubuntu). I have asked this question on build-dev at openjdk.java.net (http://mail.openjdk.java.net/pipermail/build-dev/2015-May/014969.html) and I was directed to this mailing list (including the distro-pkg mailing list, but had no luck there, so trying out here). > > If SDT headers are found then dtrace is enabled by default, this is what I understood. Now when I build, I end up with the below errors: > ... > ... > vmThread.o: In function `VMOperationQueue::add(VM_Operation*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:156: undefined reference to `__dtrace_hotspot___vmops__request' > vmThread.o: In function `VMThread::evaluate_operation(VM_Operation*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:354: undefined reference to `__dtrace_hotspot___vmops__begin' > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/runtime/vmThread.cpp:374: undefined reference to `__dtrace_hotspot___vmops__end' > ... > ..
> classLoadingService.o: In function `ClassLoadingService::notify_class_unloaded(InstanceKlass*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:119: undefined reference to `__dtrace_hotspot___class__unloaded' > classLoadingService.o: In function `ClassLoadingService::notify_class_loaded(InstanceKlass*, bool)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/services/classLoadingService.cpp:144: undefined reference to `__dtrace_hotspot___class__loaded' > compileBroker.o: In function `CompileBroker::invoke_compiler_on_method(CompileTask*)': > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:1927: undefined reference to `__dtrace_hotspot___method__compile__begin' > /mnt/ubuntu/dev/jdk9/hotspot/src/share/vm/compiler/compileBroker.cpp:2028: undefined reference to `__dtrace_hotspot___method__compile__end' > ... > ... > > Compilation succeeds but linkage fails. Can someone help me with this, any directions to what I am missing? > > thanks and regards, > Mahesh Pujari > From zoltan.majo at oracle.com Fri May 29 12:22:33 2015 From: zoltan.majo at oracle.com (=?UTF-8?B?Wm9sdMOhbiBNYWrDsw==?=) Date: Fri, 29 May 2015 14:22:33 +0200 Subject: [8u60] Bulk backport request: 8075798, 8068945, and 8080281 Message-ID: <55685A09.9020501@oracle.com> Hi, please review the following backports to 8u60: (1) 8068945: Use RBP register as proper frame pointer in JIT compiled code on x86 https://bugs.openjdk.java.net/browse/JDK-8068945 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/382e9e4b3b71 (2) 8075798: Allow ADLC register class to depend on runtime conditions also for cisc-spillable classes https://bugs.openjdk.java.net/browse/JDK-8075798 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ac291bc3ece2 (3) 8080281: 8068945 changes break building the zero JVM variant https://bugs.openjdk.java.net/browse/JDK-8080281 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/dee9ddf02864 Only (2) applies cleanly to the 8u tree, for (1)
and (3) minor "adjustments" were necessary around the declaration of the PreserveFramePointer flag: - for (1) in src/share/vm/runtime/globals.hpp, moreover, src/cpu/aarch64/vm/globals_aarch64.hpp was not patched as the file does not exist in 8u; - for (2) in src/cpu/zero/vm/globals_zero.hpp. Here is a webrev that shows *all* changes to be pushed (incl. the "adjustments"): http://cr.openjdk.java.net/~zmajo/8068945_8078798_8080281_8u/webrev.00/ The original changes were pushed to 9 more than a month ago (except (3), which is a minor change) and nightly testing showed no problems. In addition, the following testing was done: - full JPRT run, all tests pass; - JTREG testing with java/lang/invoke and with all hotspot tests, all tests pass that pass also with an unpatched VM. The Release Team has approved all three backports (as indicated in the corresponding JBS issues). Thank you and best regards, Zoltan From vladimir.kozlov at oracle.com Fri May 29 15:09:12 2015 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 29 May 2015 08:09:12 -0700 Subject: [8u60] Bulk backport request: 8075798, 8068945, and 8080281 In-Reply-To: <55685A09.9020501@oracle.com> References: <55685A09.9020501@oracle.com> Message-ID: <55688118.4000001@oracle.com> Looks good. Thanks, Vladimir On 5/29/15 5:22 AM, Zoltán Majó
wrote: > Hi, > > > please review the following backports to 8u60: > > (1) 8068945: Use RBP register as proper frame pointer in JIT compiled code on x86 > https://bugs.openjdk.java.net/browse/JDK-8068945 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/382e9e4b3b71 > > (2) 8075798: Allow ADLC register class to depend on runtime conditions also for cisc-spillable classes > https://bugs.openjdk.java.net/browse/JDK-8075798 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ac291bc3ece2 > > (3) 8080281: 8068945 changes break building the zero JVM variant > https://bugs.openjdk.java.net/browse/JDK-8080281 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/dee9ddf02864 > > Only (2) applies cleanly to the 8u tree, for (1) and (3) minor "adjustments" were necessary around the declaration of > the PreserveFramePointer flag: > - for (1) in src/share/vm/runtime/globals.hpp, moreover, src/cpu/aarch64/vm/globals_aarch64.hpp was not patched as the > file does not exist in 8u; > - for (2) in src/cpu/zero/vm/globals_zero.hpp. > > Here is a webrev that shows *all* changes to be pushed (incl. the "adjustments"): > http://cr.openjdk.java.net/~zmajo/8068945_8078798_8080281_8u/webrev.00/ > > The original changes were pushed to 9 more than a month ago (except (3), which is a minor change) and nightly testing > showed no problems. > > In addition, the following testing was done: > - full JPRT run, all tests pass; > - JTREG testing with java/lang/invoke and with all hotspot tests, all tests pass that pass also with an unpatched VM. > > The Release Team has approved all three backports (as indicated in the corresponding JBS issues). 
> > Thank you and best regards, > > > Zoltan > From zoltan.majo at oracle.com Fri May 29 15:40:09 2015 From: zoltan.majo at oracle.com (=?UTF-8?B?Wm9sdMOhbiBNYWrDsw==?=) Date: Fri, 29 May 2015 17:40:09 +0200 Subject: [8u60] Bulk backport request: 8075798, 8068945, and 8080281 In-Reply-To: <55688118.4000001@oracle.com> References: <55685A09.9020501@oracle.com> <55688118.4000001@oracle.com> Message-ID: <55688859.8010002@oracle.com> Thank you, Vladimir, for the review! Best regards, Zoltan On 05/29/2015 05:09 PM, Vladimir Kozlov wrote: > Looks good. > > Thanks, > Vladimir > > On 5/29/15 5:22 AM, Zoltán Majó wrote: >> Hi, >> >> >> please review the following backports to 8u60: >> >> (1) 8068945: Use RBP register as proper frame pointer in JIT compiled >> code on x86 >> https://bugs.openjdk.java.net/browse/JDK-8068945 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/382e9e4b3b71 >> >> (2) 8075798: Allow ADLC register class to depend on runtime >> conditions also for cisc-spillable classes >> https://bugs.openjdk.java.net/browse/JDK-8075798 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ac291bc3ece2 >> >> (3) 8080281: 8068945 changes break building the zero JVM variant >> https://bugs.openjdk.java.net/browse/JDK-8080281 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/dee9ddf02864 >> >> Only (2) applies cleanly to the 8u tree, for (1) and (3) minor >> "adjustments" were necessary around the declaration of >> the PreserveFramePointer flag: >> - for (1) in src/share/vm/runtime/globals.hpp, moreover, >> src/cpu/aarch64/vm/globals_aarch64.hpp was not patched as the >> file does not exist in 8u; >> - for (2) in src/cpu/zero/vm/globals_zero.hpp. >> >> Here is a webrev that shows *all* changes to be pushed (incl. the >> "adjustments"): >> http://cr.openjdk.java.net/~zmajo/8068945_8078798_8080281_8u/webrev.00/ >> >> The original changes were pushed to 9 more than a month ago (except >> (3), which is a minor change) and nightly testing >> showed no problems.
>> >> In addition, the following testing was done: >> - full JPRT run, all tests pass; >> - JTREG testing with java/lang/invoke and with all hotspot tests, all >> tests pass that pass also with an unpatched VM. >> >> The Release Team has approved all three backports (as indicated in >> the corresponding JBS issues). >> >> Thank you and best regards, >> >> >> Zoltan >> From mikael.vidstedt at oracle.com Fri May 29 17:31:40 2015 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Fri, 29 May 2015 10:31:40 -0700 Subject: Experiment: Merging hs-rt and hs-gc Message-ID: <5568A27C.9020205@oracle.com> All, As you all know the JDK 9 development of Hotspot is done in three different mercurial forests: hs-rt[1], hs-gc[2] and hs-comp[3]. This division has served as a way of isolating changes from each other in order to get more testing done on them before they are shared to other forests. However, as a side effect of this the propagation time of fixes is also impacted, and in some cases we have seen fixes stuck for several weeks waiting for the respective forests to stabilize. We would like to propose an experiment, which will merge the hs-rt and hs-gc forests, having hs-rt be the forest through which all the hs-rt and hs-gc changes are integrated. For the duration of the experiment the hs-gc forest will be locked down so that no accidental integrations are made to it. This change would mean that the combined hs-rt gets more testing faster, and that the fix propagation time goes to zero for changes between hs-rt and hs-gc. The hs-comp forest will not be affected. We suggest that the experiment starts June 4th and goes on for at least two weeks (giving us some time to adapt in case of issues). Monitoring and evaluation of the new structure will take place continuously, with an option to revert back if things do not work out. 
This will remain an experiment for at least a few months, after which we will evaluate it and depending on the results consider making it the new standard. If so, the hs-gc forest will eventually be retired, with an option of looking at further reduction of forests going forward. It's worth pointing out explicitly that if you have any changes based on the hs-gc forest those changes would have to be rebased on top of hs-rt instead once the hs-gc forest has been locked down. Please let us know if you have any feedback or questions! Cheers, Mikael [1] http://hg.openjdk.java.net/jdk9/hs-rt [2] http://hg.openjdk.java.net/jdk9/hs-gc [3] http://hg.openjdk.java.net/jdk9/hs-comp From volker.simonis at gmail.com Fri May 29 17:46:37 2015 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 29 May 2015 19:46:37 +0200 Subject: Experiment: Merging hs-rt and hs-gc In-Reply-To: <5568A27C.9020205@oracle.com> References: <5568A27C.9020205@oracle.com> Message-ID: +1 from a port maintainers perspective Regards, Volker On Fri, May 29, 2015 at 7:31 PM, Mikael Vidstedt wrote: > > All, > > As you all know the JDK 9 development of Hotspot is done in three different > mercurial forests: hs-rt[1], hs-gc[2] and hs-comp[3]. This division has > served as a way of isolating changes from each other in order to get more > testing done on them before they are shared to other forests. However, as a > side effect of this the propagation time of fixes is also impacted, and in > some cases we have seen fixes stuck for several weeks waiting for the > respective forests to stabilize. > > We would like to propose an experiment, which will merge the hs-rt and hs-gc > forests, having hs-rt be the forest through which all the hs-rt and hs-gc > changes are integrated. For the duration of the experiment the hs-gc forest > will be locked down so that no accidental integrations are made to it. 
This > change would mean that the combined hs-rt gets more testing faster, and that > the fix propagation time goes to zero for changes between hs-rt and hs-gc. > The hs-comp forest will not be affected. > > We suggest that the experiment starts June 4th and goes on for at least two > weeks (giving us some time to adapt in case of issues). Monitoring and > evaluation of the new structure will take place continuously, with an option > to revert back if things do not work out. This will remain an experiment for > at least a few months, after which we will evaluate it and depending on the > results consider making it the new standard. If so, the hs-gc forest will > eventually be retired, with an option of looking at further reduction of > forests going forward. > > It's worth pointing out explicitly that if you have any changes based on the > hs-gc forest those changes would have to be rebased on top of hs-rt instead > once the hs-gc forest has been locked down. > > Please let us know if you have any feedback or questions! > > Cheers, > Mikael > > [1] http://hg.openjdk.java.net/jdk9/hs-rt > [2] http://hg.openjdk.java.net/jdk9/hs-gc > [3] http://hg.openjdk.java.net/jdk9/hs-comp >